Stuck on a bug with auth, but pushing progress anyway.

This commit is contained in:
Your Name
2025-08-20 06:20:32 -04:00
parent b2b1240136
commit 8c3d2b1aac
18 changed files with 10443 additions and 151 deletions


@@ -102,31 +102,31 @@ This document outlines the implementation plan for ginxsom, a FastCGI-based Blos
- [x] Implement request logging
### 2.5 List Blobs Endpoint
- [ ] Implement `GET /list/<pubkey>` endpoint
- [ ] Extract pubkey from URL path
- [ ] Query database for blobs uploaded by specified pubkey
- [ ] Support `since` and `until` query parameters for date filtering
- [ ] Return JSON array of blob descriptors
- [ ] Handle empty results gracefully
- [ ] Implement optional authorization with kind 24242 event validation
- [ ] Validate `t` tag is set to "list"
- [ ] Check authorization expiration
- [ ] Verify event signature and structure
- [x] Implement `GET /list/<pubkey>` endpoint
- [x] Extract pubkey from URL path
- [x] Query database for blobs uploaded by specified pubkey
- [x] Support `since` and `until` query parameters for date filtering
- [x] Return JSON array of blob descriptors
- [x] Handle empty results gracefully
- [x] Implement optional authorization with kind 24242 event validation
- [x] Validate `t` tag is set to "list"
- [x] Check authorization expiration
- [x] Verify event signature and structure
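With the list endpoint checked off, a client query with `since`/`until` filtering might look like this — a minimal sketch assuming the local dev server on port 9001; the pubkey is a placeholder, not a real key.

```bash
#!/bin/bash
# Sketch: query GET /list/<pubkey> with date filtering.
# PUBKEY is a placeholder; SERVER_URL matches the local dev nginx config.
SERVER_URL="http://localhost:9001"
PUBKEY="0000000000000000000000000000000000000000000000000000000000000001"
SINCE=$(date -d '7 days ago' +%s)   # GNU date
UNTIL=$(date +%s)
LIST_URL="${SERVER_URL}/list/${PUBKEY}?since=${SINCE}&until=${UNTIL}"
echo "$LIST_URL"
# Against a running server:
# curl -s "$LIST_URL" | jq .
```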
### 2.6 Delete Blob Endpoint
- [ ] Implement `DELETE /<sha256>` endpoint
- [ ] Extract SHA-256 hash from URL path
- [ ] Require authorization with kind 24242 event validation
- [ ] Validate `t` tag is set to "delete"
- [ ] Verify at least one `x` tag matches the requested hash
- [ ] Check authorization expiration
- [ ] Verify event signature and structure
- [ ] Check blob exists in database
- [ ] Verify uploader_pubkey matches authorized pubkey (ownership check)
- [ ] Remove blob file from filesystem
- [ ] Remove blob metadata from database
- [ ] Handle file deletion errors gracefully
- [ ] Return appropriate success/error responses
- [x] Implement `DELETE /<sha256>` endpoint
- [x] Extract SHA-256 hash from URL path
- [x] Require authorization with kind 24242 event validation
- [x] Validate `t` tag is set to "delete"
- [x] Verify at least one `x` tag matches the requested hash
- [x] Check authorization expiration
- [x] Verify event signature and structure
- [x] Check blob exists in database
- [x] Verify uploader_pubkey matches authorized pubkey (ownership check)
- [x] Remove blob file from filesystem
- [x] Remove blob metadata from database
- [x] Handle file deletion errors gracefully
- [x] Return appropriate success/error responses
### 2.7 Testing & Validation
- [x] Test uploads without authentication
@@ -148,30 +148,179 @@ This document outlines the implementation plan for ginxsom, a FastCGI-based Blos
- [ ] Authentication requirements
- [ ] Rate limiting settings
- [ ] Storage quota limits
- [ ] Hash-based banning/filtering
### 3.2 HEAD /upload Endpoint
- [ ] Implement `HEAD /upload` endpoint
- [ ] Return upload requirements in headers
- [ ] Handle optional Authorization header
- [ ] Return proper status codes for policy checks
- [ ] Add custom headers for requirements
### 3.2 HEAD /upload Endpoint Implementation
- [ ] Implement `HEAD /upload` endpoint for pre-flight upload validation
- [ ] Parse client headers:
- [ ] `X-SHA-256`: blob's SHA-256 hash
- [ ] `X-Content-Length`: blob size in bytes
- [ ] `X-Content-Type`: blob's MIME type
- [ ] Handle optional Authorization header (same as PUT /upload)
- [ ] Perform validation checks without file transfer:
- [ ] Validate SHA-256 format
- [ ] Check file size against limits
- [ ] Validate MIME type restrictions
- [ ] Check authentication if required
- [ ] Check if hash already exists (duplicate detection)
- [ ] Check if hash is banned
- [ ] Return appropriate HTTP status codes:
- [ ] `200 OK` - upload can proceed
- [ ] `400 Bad Request` - invalid headers
- [ ] `401 Unauthorized` - auth required
- [ ] `403 Forbidden` - not permitted (banned hash, etc.)
- [ ] `411 Length Required` - missing content length
- [ ] `413 Content Too Large` - file too large
- [ ] `415 Unsupported Media Type` - invalid MIME type
- [ ] Add `X-Reason` header with human-readable error messages
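From the client side, the pre-flight check above would mean computing the three headers locally and issuing a HEAD request before any file data is sent — a sketch assuming the local dev server from this plan; the endpoint itself is still unimplemented.

```bash
#!/bin/bash
# Sketch of a BUD-06 pre-flight: compute X-SHA-256 / X-Content-Length /
# X-Content-Type locally, then HEAD /upload before transferring the blob.
set -e
BLOB=$(mktemp)
printf 'hello blossom' > "$BLOB"
X_SHA256=$(sha256sum "$BLOB" | cut -d' ' -f1)
X_LEN=$(wc -c < "$BLOB")
X_TYPE="text/plain"
echo "X-SHA-256: $X_SHA256"
echo "X-Content-Length: $X_LEN"
echo "X-Content-Type: $X_TYPE"
# Against a running server (endpoint planned, not yet live):
# curl -I http://localhost:9001/upload \
#   -H "X-SHA-256: $X_SHA256" \
#   -H "X-Content-Length: $X_LEN" \
#   -H "X-Content-Type: $X_TYPE"
rm -f "$BLOB"
```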
### 3.3 Upload Validation
- [ ] Implement pre-upload validation
- [ ] Check file size before processing
- [ ] Validate MIME types if restricted
- [ ] Check authentication requirements
- [ ] Verify user permissions/quotas
### 3.3 Upload Pre-validation Logic
- [ ] Create validation functions that can be shared between HEAD and PUT endpoints
- [ ] `validate_upload_headers()` - check required headers present and valid
- [ ] `check_file_size_limits()` - enforce maximum size restrictions
- [ ] `check_mime_type_allowed()` - validate against allowed types list
- [ ] `check_hash_restrictions()` - check banned hashes, duplicates
- [ ] `check_upload_permissions()` - user-specific upload rights
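The shared validation functions above are planned in C; a minimal shell sketch of the same checks is below for illustration — the size cap and MIME whitelist are assumptions, not configured values.

```bash
#!/bin/bash
# Illustrative sketch of the shared validation checks (the real
# implementation is C; limits and allowed types here are assumptions).
MAX_SIZE=$((100 * 1024 * 1024))            # hypothetical 100 MB cap
ALLOWED_TYPES="image/png image/jpeg text/plain"

validate_upload_headers() {  # hash, length, type must all be present and valid
  [[ -n "$1" && -n "$2" && -n "$3" ]] && [[ "$1" =~ ^[a-f0-9]{64}$ ]]
}
check_file_size_limits() {   # reject anything over MAX_SIZE
  [ "$1" -le "$MAX_SIZE" ]
}
check_mime_type_allowed() {  # simple whitelist match
  [[ " $ALLOWED_TYPES " == *" $1 "* ]]
}

HASH="b1674191a88ec5cdd733e4240a81803105dc412d6c6708d53ab94fc248f4f553"
validate_upload_headers "$HASH" 1024 text/plain && echo "headers ok"
check_file_size_limits 1024 && echo "size ok"
check_mime_type_allowed "application/x-msdownload" || echo "type rejected"
```

Sharing these checks between HEAD and PUT keeps the two endpoints from drifting apart.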
### 3.4 Testing & Validation
- [ ] Test upload requirements endpoint
- [ ] Test policy enforcement
- [ ] Test with various client scenarios
- [ ] Verify error responses match spec
### 3.4 DOS Protection Benefits
- [ ] Implement early rejection before file transfer:
- [ ] Authentication happens before any file data sent
- [ ] Size validation prevents large file uploads that would be rejected
- [ ] MIME type checking prevents unwanted file types
- [ ] Hash checking prevents duplicate uploads
- [ ] Update PUT /upload to use same validation functions for consistency
### 3.5 Client Integration Support
- [ ] Update nginx configuration to properly handle HEAD requests to /upload
- [ ] Ensure FastCGI handles HEAD method for /upload endpoint
- [ ] Add CORS headers for preflight requests
### 3.6 Testing & Validation
- [ ] Test HEAD /upload with valid headers
- [ ] Test various error scenarios (missing headers, invalid formats)
- [ ] Test authorization requirements
- [ ] Test policy enforcement (size limits, MIME types, banned hashes)
- [ ] Verify error responses match BUD-06 specification
- [ ] Test client workflow: HEAD check → PUT upload
- [ ] Verify DOS protection effectiveness
---
## Phase 4: Optional Features
## Phase 4: Advanced Authentication & Administration System
### 4.1 Flexible Authentication Rules System
#### 4.1.1 Database Schema Extension
- [ ] Create authentication rules tables
- [ ] `auth_rules` table: rule_type, rule_target, operation, rule_value, enabled, expires_at
- [ ] `auth_cache` table: performance caching for rule evaluation results
- [ ] Add indexes on rule_type, rule_target, operation for performance
#### 4.1.2 Authentication Rule Types Implementation
- [ ] Basic rule types:
- [ ] `pubkey_whitelist`: Only specific pubkeys allowed
- [ ] `pubkey_blacklist`: Specific pubkeys banned
- [ ] `hash_blacklist`: Specific file hashes cannot be uploaded
- [ ] `mime_type_whitelist`: Only specific content types allowed
- [ ] `mime_type_blacklist`: Specific content types banned
- [ ] Advanced rule types:
- [ ] `rate_limit`: Limit operations per pubkey/IP per time period
- [ ] `size_limit`: Per-pubkey or global size limits
- [ ] `conditional`: Complex JSON-based rules (time-based, size-based, etc.)
#### 4.1.3 Rule Evaluation Engine
- [ ] Core authentication functions:
- [ ] `evaluate_auth_rules()`: Main rule evaluation with caching
- [ ] `check_rule_cache()`: Performance optimization layer
- [ ] `process_rule_priority()`: Handle rule precedence and conflicts
- [ ] `update_auth_cache()`: Store evaluation results for reuse
- [ ] Integration points:
- [ ] Extend `handle_upload_request()` with rule evaluation
- [ ] Extend `handle_delete_request()` with rule evaluation
- [ ] Extend `handle_list_request()` with rule evaluation (optional)
#### 4.1.4 Rule Management Interface
- [ ] SQL-based rule management:
- [ ] `add_auth_rule()`: Add new authentication rules
- [ ] `remove_auth_rule()`: Remove rules by ID
- [ ] `list_auth_rules()`: Query existing rules with filters
- [ ] `update_auth_rule()`: Modify existing rule parameters
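Rule management via SQL could look like the following against a scratch database — a sketch using a subset of the planned `auth_rules` columns; the pubkey is a placeholder.

```bash
#!/bin/bash
# Sketch of SQL-based rule management against a throwaway database.
# Column subset follows the planned auth_rules schema in this repo.
set -e
DB=$(mktemp)
sqlite3 "$DB" <<'SQL'
CREATE TABLE auth_rules (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  rule_type TEXT NOT NULL,
  rule_target TEXT NOT NULL,
  operation TEXT NOT NULL DEFAULT '*',
  enabled INTEGER NOT NULL DEFAULT 1
);
-- add_auth_rule(): ban one pubkey from uploading (placeholder target)
INSERT INTO auth_rules (rule_type, rule_target, operation)
VALUES ('pubkey_blacklist', 'deadbeef...', 'upload');
SQL
# list_auth_rules(): query existing rules
RULES=$(sqlite3 "$DB" "SELECT rule_type FROM auth_rules WHERE enabled = 1;")
echo "$RULES"
rm -f "$DB"
```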
### 4.2 Nostr-Native Administrative Interface
#### 4.2.1 Server Identity Management
- [ ] Server keypair generation and storage:
- [ ] Generate server public/private keypair on first run
- [ ] Store server pubkey in `server_config` table
- [ ] Secure private key storage (encrypted file or environment)
- [ ] Key rotation capabilities for security
#### 4.2.2 Administrator Management System
- [ ] Administrator database schema:
- [ ] `administrators` table: pubkey, permissions, added_by, expires_at
- [ ] Permission levels: rules, config, users, stats, * (full access)
- [ ] Initial admin setup during server deployment
- [ ] Administrative functions:
- [ ] `check_admin_permissions()`: Verify admin authorization
- [ ] `add_administrator()`: Grant admin privileges
- [ ] `remove_administrator()`: Revoke admin privileges
- [ ] `list_administrators()`: Query admin list with permissions
#### 4.2.3 Administrative Event Types
- [ ] Event kind definitions:
- [ ] Kind 30242: Administrative commands (rule_add, rule_remove, config_set, etc.)
- [ ] Kind 30243: Administrative queries (stats_get, rule_list, audit_log, etc.)
- [ ] Kind 30244: Administrative responses (command results, query data)
- [ ] Command implementations:
- [ ] Rule management: `rule_add`, `rule_remove`, `rule_update`, `rule_list`
- [ ] System management: `config_set`, `config_get`, `admin_add`, `admin_remove`
- [ ] Query operations: `stats_get`, `blob_list`, `audit_log`, `storage_cleanup`
#### 4.2.4 Administrative Event Processing
- [ ] HTTP administrative endpoint:
- [ ] `POST /admin` with nostr event authorization
- [ ] JSON command interface with parameter validation
- [ ] Synchronous response with operation results
- [ ] Direct nostr relay integration (future enhancement):
- [ ] Subscribe to administrative events on configured relays
- [ ] Real-time event processing and response
- [ ] Publish response events back to relays
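An administrative command event for `POST /admin` might be shaped like this — the kind number (30242) comes from the plan, but the tag layout is an assumption, and the event is shown unsigned for illustration only.

```bash
#!/bin/bash
# Sketch of a kind 30242 administrative command event. Tag layout is an
# assumption; a real event would be signed (e.g. with nak) before POSTing.
EVENT=$(cat <<'JSON'
{
  "kind": 30242,
  "content": "",
  "tags": [
    ["command", "rule_add"],
    ["param", "rule_type", "pubkey_blacklist"],
    ["param", "rule_target", "deadbeef..."]
  ]
}
JSON
)
echo "$EVENT" | grep -o '"kind": 30242'
# Against a running server (signed event required):
# curl -X POST http://localhost:9001/admin \
#   -H "Content-Type: application/json" --data "$EVENT"
```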
#### 4.2.5 Administrative Audit Trail
- [ ] Administrative logging system:
- [ ] `admin_log` table: track all administrative actions
- [ ] Event ID references for nostr event traceability
- [ ] Success/failure tracking with detailed error messages
- [ ] Audit query capabilities for compliance
#### 4.2.6 Security & Permission Framework
- [ ] Multi-level permission system:
- [ ] Granular permissions: rules, config, users, stats
- [ ] Permission inheritance and delegation
- [ ] Time-limited administrative access (expires_at)
- [ ] Authentication security:
- [ ] Strong nostr signature validation
- [ ] Administrator authorization chain verification
- [ ] Command-specific permission checks
- [ ] Rate limiting for administrative operations
### 4.3 Integration & Testing
- [ ] Authentication system integration:
- [ ] Integrate rule evaluation into existing authentication flow
- [ ] Maintain backward compatibility with nostr-only authentication
- [ ] Performance testing with rule caching
- [ ] Administrative system testing:
- [ ] Test all administrative commands and queries
- [ ] Verify permission enforcement and security
- [ ] Test audit logging and compliance features
- [ ] Load testing for administrative operations
---
## Phase 5: Optional Features
### 5.1 User Server Lists (BUD-03) - Optional
- [ ] Implement server list advertisement
@@ -230,15 +379,21 @@ This document outlines the implementation plan for ginxsom, a FastCGI-based Blos
- [x] Authenticated uploads working (Nostr kind 24242 event validation)
- [x] Proper error handling for upload scenarios
- [x] Database metadata storage during upload (with uploader_pubkey and filename)
- [ ] List blobs endpoint implemented (GET /list/<pubkey>)
- [ ] Delete blob endpoint implemented (DELETE /<sha256>)
- [x] List blobs endpoint implemented (GET /list/<pubkey>)
- [x] Delete blob endpoint implemented (DELETE /<sha256>)
### Milestone 3: Policy Compliance (Phase 3 Pending)
- [ ] Upload requirements implemented
- [ ] Server policies configurable
- [ ] Spec compliance verified
### Milestone 4: Production Ready (Phase 4 Complete)
### Milestone 4: Advanced Authentication (Phase 4 Complete)
- [ ] Flexible authentication rules system operational
- [ ] Nostr-native administrative interface implemented
- [ ] Rule evaluation engine with caching performance
- [ ] Administrative audit trail and compliance features
### Milestone 5: Production Ready (Phase 5 Complete)
- [ ] Optional features implemented as needed
- [ ] Performance optimized
- [ ] Security hardened
@@ -274,6 +429,51 @@ This document outlines the implementation plan for ginxsom, a FastCGI-based Blos
---
## Future Improvements
### Upload Security & Performance Enhancements
**Current Issue**: The existing upload flow has a DOS vulnerability where large files are loaded entirely into memory before authentication occurs. This allows unauthenticated attackers to exhaust server memory.
**Current Flow**:
```
Client → nginx → FastCGI ginxsom
├─ reads entire file into memory (malloc + fread)
├─ validates auth (after file in memory)
└─ saves to blobs/ or errors
```
**Proposed Solution - nginx Upload Module**:
```
Client → nginx upload module → temp file → FastCGI ginxsom
         ├─ saves to /tmp/uploads/       ├─ validates auth quickly
         └─ passes metadata only         ├─ moves file to blobs/
                                         └─ or deletes temp file
```
**Benefits**:
- Eliminates DOS vulnerability - nginx handles large files efficiently
- Fast auth validation - no waiting for full upload
- Leverages nginx strengths - what it's designed for
- Better scalability - memory usage independent of file size
**Implementation Requirements**:
- nginx upload module configuration
- Temp file cleanup handling
- Modified FastCGI code to process file paths instead of stdin
- Proper error handling for temp file operations
**Alternative Enhancement - HTTP 100 Continue**:
Could propose new Blossom BUD for two-phase upload:
1. Client sends headers with `Expect: 100-continue` + auth event
2. Server validates early (signature, expiration, pubkey)
3. Server responds `100 Continue` or `401 Unauthorized`
4. Client only sends file data if authorized
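On the client side, the four steps above map onto curl's existing `Expect: 100-continue` handling; the server-side early validation is what the proposed BUD would add. A sketch that just assembles the command (no server needed):

```bash
#!/bin/bash
# Sketch of the proposed two-phase upload from the client side.
# curl already supports Expect: 100-continue; the auth header value and
# blob.bin path are placeholders.
CURL_ARGS=(
  -X PUT
  -H "Expect: 100-continue"
  -H "Authorization: Nostr <base64-encoded kind 24242 event>"  # placeholder
  --expect100-timeout 5   # wait up to 5s for the server's 100 Continue
  --data-binary "@blob.bin"
  http://localhost:9001/upload
)
printf '%s\n' "curl ${CURL_ARGS[*]}"
```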
**Priority**: Implement after core BUD compliance is complete.
---
## Security Considerations
- [ ] Input validation on all endpoints
@@ -283,6 +483,7 @@ This document outlines the implementation plan for ginxsom, a FastCGI-based Blos
- [ ] Memory safety in C implementation
- [ ] Proper error message sanitization
- [ ] Log security (no sensitive data)
- [ ] **Upload DOS vulnerability** - Current implementation vulnerable to memory exhaustion attacks
---

README.md

@@ -41,19 +41,52 @@ ginxsom is a Blossom protocol server implemented as a FastCGI application that i
ginxsom implements the following Blossom Upgrade Documents (BUDs):
- **BUD-01**: Server requirements and blob retrieval ✅
- **BUD-02**: Blob upload and management ✅
- **BUD-06**: Upload requirements
- **BUD-02**: Blob upload and management ✅ *(newly completed - includes DELETE endpoint)*
- **BUD-06**: Upload requirements *(planned - not yet implemented)*
### Supported Endpoints
| Endpoint | Method | Description | Handler |
|----------|---------|-------------|---------|
| `/<sha256>` | GET | Retrieve blob | nginx → disk |
| `/<sha256>` | HEAD | Check blob exists | nginx → disk |
| `/upload` | PUT | Upload new blob | nginx → FastCGI ginxsom |
| `/upload` | HEAD | Check upload requirements | nginx → FastCGI ginxsom |
| `/list/<pubkey>` | GET | List user's blobs | nginx → FastCGI ginxsom |
| `/<sha256>` | DELETE | Delete blob | nginx → FastCGI ginxsom |
| Endpoint | Method | Description | Handler | Status |
|----------|---------|-------------|---------|---------|
| `/<sha256>` | GET | Retrieve blob | nginx → disk | ✅ **Implemented** |
| `/<sha256>` | HEAD | Check blob exists | nginx → FastCGI ginxsom | ✅ **Implemented** |
| `/upload` | PUT | Upload new blob | nginx → FastCGI ginxsom | ✅ **Implemented** |
| `/upload` | HEAD | Check upload requirements | nginx → FastCGI ginxsom | **BUD-06 Planned** |
| `/list/<pubkey>` | GET | List user's blobs | nginx → FastCGI ginxsom | ✅ **Implemented** |
| `/<sha256>` | DELETE | Delete blob | nginx → FastCGI ginxsom | ✅ **Recently Added** |
## Recent Updates
### BUD-02 Completion: DELETE Endpoint Implementation
ginxsom now fully implements **BUD-02: Blob upload and management** with the recent addition of the DELETE endpoint. This completes the core blob management functionality:
**New DELETE Endpoint Features:**
- **Authenticated Deletion**: Requires valid nostr kind 24242 event with `t` tag set to `"delete"`
- **Hash Validation**: Must include `x` tag matching the blob's SHA-256 hash
- **Ownership Verification**: Only the original uploader can delete their blobs
- **Complete Cleanup**: Removes both file from disk and metadata from database
- **Error Handling**: Proper HTTP status codes for various failure scenarios
**Technical Implementation:**
```bash
# Delete a blob (requires nostr authorization)
curl -X DELETE http://localhost:9001/b1674191a88ec5cdd733e4240a81803105dc412d6c6708d53ab94fc248f4f553 \
  -H "Authorization: Nostr <base64-encoded kind 24242 delete event>"
# Successful deletion returns 200 OK
# Failed authorization returns 401 Unauthorized
# Blob not found returns 404 Not Found
# Wrong ownership returns 403 Forbidden
```
**Security Features:**
- Event signature validation using nostr cryptographic verification
- Expiration checking to prevent replay attacks
- Ownership validation via uploader_pubkey matching
- Atomic operations (both filesystem and database cleanup succeed or fail together)
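The cleanup ordering behind the "atomic operations" point can be sketched against a scratch database and file — an illustrative shape only (the server does this in C, and strict both-or-neither atomicity across a filesystem and SQLite is an approximation): remove the file first, and only delete the metadata row if the unlink succeeded.

```bash
#!/bin/bash
# Sketch of delete cleanup ordering with a throwaway DB and blob file.
set -e
DIR=$(mktemp -d)
DB="$DIR/ginxsom.db"
HASH="b1674191a88ec5cdd733e4240a81803105dc412d6c6708d53ab94fc248f4f553"
touch "$DIR/$HASH"   # stand-in for the stored blob
sqlite3 "$DB" "CREATE TABLE blobs (sha256 TEXT PRIMARY KEY);
               INSERT INTO blobs VALUES ('$HASH');"
# Unlink first; only drop metadata if the file removal succeeded.
if rm "$DIR/$HASH"; then
  sqlite3 "$DB" "DELETE FROM blobs WHERE sha256 = '$HASH';"
fi
REMAINING=$(sqlite3 "$DB" "SELECT COUNT(*) FROM blobs;")
echo "rows left: $REMAINING"
rm -rf "$DIR"
```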
This implementation makes ginxsom a fully functional Blossom server for core blob operations (upload, retrieve, list, delete) with the remaining BUD-06 (upload requirements) planned for the next development phase.
## Installation
@@ -111,6 +144,8 @@ rate_limit_uploads = 10 # per minute
### nginx Configuration
#### Production Configuration
Add to your nginx configuration:
```nginx
@@ -155,6 +190,72 @@ server {
}
```
#### Local Development Configuration
For local development, use the provided `config/local-nginx.conf`:
```nginx
# Local development server (runs on port 9001)
server {
listen 9001;
server_name localhost;
root blobs; # Relative to project directory
# FastCGI backend
upstream fastcgi_backend {
server unix:/tmp/ginxsom-fcgi.sock;
}
# DELETE endpoint - requires authentication
location ~ "^/([a-f0-9]{64}).*$" {
if ($request_method != DELETE) {
return 404;
}
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
fastcgi_pass fastcgi_backend;
}
# Static blob serving with extension fallback
location ~ "^/([a-f0-9]{64})(\.[a-zA-Z0-9]+)?$" {
limit_except HEAD GET { deny all; }
# HEAD requests go to FastCGI
if ($request_method = HEAD) {
rewrite ^/(.*)$ /fcgi-head/$1 last;
}
# GET requests served directly with extension fallback
try_files /$1.jpg /$1.jpeg /$1.png /$1.webp /$1.gif /$1.pdf /$1.mp4 /$1.mp3 /$1.txt /$1.md =404;
}
# Upload endpoint
location /upload {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
fastcgi_pass fastcgi_backend;
if ($request_method !~ ^(PUT)$ ) { return 405; }
}
# List blobs endpoint
location ~ "^/list/([a-f0-9]{64}).*$" {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
fastcgi_pass fastcgi_backend;
if ($request_method !~ ^(GET)$ ) { return 405; }
}
}
```
Start local development with:
```bash
# Start FastCGI daemon
./start-fcgi.sh
# Start nginx (uses local config)
./restart-nginx.sh
```
## Usage
### Starting the Server



@@ -46,6 +46,20 @@ http {
add_header X-Frame-Options DENY;
add_header X-XSS-Protection "1; mode=block";
# Delete blob endpoint - DELETE /<sha256> (must come first)
location ~ "^/([a-f0-9]{64}).*$" {
# Only handle DELETE method for this pattern
if ($request_method != DELETE) {
# Let other patterns handle non-DELETE requests for this path
return 404;
}
# Pass to FastCGI application for processing
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
fastcgi_pass fastcgi_backend;
}
# Old working regex pattern - testing rollback
location ~ "^/([a-f0-9]{64})(\.[a-zA-Z0-9]+)?$" {
limit_except HEAD GET {


@@ -1,47 +0,0 @@
[Unit]
Description=Ginxsom Blossom Server FastCGI Application
After=network.target
Wants=network-online.target
After=network-online.target
[Service]
Type=notify
User=ginxsom
Group=ginxsom
WorkingDirectory=/var/lib/ginxsom
ExecStart=/usr/local/bin/ginxsom --fastcgi --socket /run/ginxsom/ginxsom.sock
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartSec=5s
# Security settings
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/ginxsom /run/ginxsom /var/log/ginxsom
PrivateTmp=true
PrivateDevices=true
ProtectHostname=true
ProtectClock=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectKernelLogs=true
ProtectControlGroups=true
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
RestrictRealtime=true
RestrictSUIDSGID=true
LockPersonality=true
MemoryDenyWriteExecute=true
# Resource limits
LimitNOFILE=65536
LimitNPROC=4096
# Environment
Environment=GINXSOM_CONFIG=/etc/ginxsom/config.toml
Environment=GINXSOM_DATA_DIR=/var/lib/ginxsom
Environment=GINXSOM_LOG_LEVEL=info
[Install]
WantedBy=multi-user.target



@@ -65,3 +65,173 @@ SELECT
FROM blobs
WHERE uploaded_at > (strftime('%s', 'now') - 86400)
ORDER BY uploaded_at DESC;
-- ============================================================================
-- AUTHENTICATION RULES SYSTEM
-- ============================================================================
-- Authentication rules table for flexible access control
CREATE TABLE IF NOT EXISTS auth_rules (
id INTEGER PRIMARY KEY AUTOINCREMENT,
rule_type TEXT NOT NULL, -- 'whitelist', 'blacklist', 'hash_blacklist', 'rate_limit', etc.
rule_target TEXT NOT NULL, -- pubkey, hash, IP, MIME type, etc.
rule_value TEXT, -- JSON for complex rules (optional)
operation TEXT NOT NULL DEFAULT '*', -- 'upload', 'delete', 'list', '*' (all operations)
enabled INTEGER NOT NULL DEFAULT 1, -- 0 = disabled, 1 = enabled
priority INTEGER NOT NULL DEFAULT 100, -- Lower numbers = higher priority (for conflict resolution)
expires_at INTEGER, -- Optional expiration timestamp
created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
updated_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
created_by TEXT, -- Admin pubkey who created this rule (optional)
description TEXT, -- Human-readable rule description
CHECK (enabled IN (0, 1)), -- Boolean constraint
CHECK (priority >= 0), -- Priority must be non-negative
CHECK (expires_at IS NULL OR expires_at > created_at) -- Expiration, if set, must be after creation
);
-- Rule evaluation cache for performance optimization
CREATE TABLE IF NOT EXISTS auth_cache (
cache_key TEXT PRIMARY KEY, -- SHA-256 hash of request parameters
allowed INTEGER NOT NULL, -- 0 = denied, 1 = allowed
rule_id INTEGER, -- Which rule made the decision (optional)
rule_reason TEXT, -- Human-readable reason for decision
expires_at INTEGER NOT NULL, -- Cache entry expiration
created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
CHECK (allowed IN (0, 1)), -- Boolean constraint
FOREIGN KEY (rule_id) REFERENCES auth_rules(id) ON DELETE SET NULL
);
-- Indexes for authentication system performance
CREATE INDEX IF NOT EXISTS idx_auth_rules_type_target ON auth_rules(rule_type, rule_target);
CREATE INDEX IF NOT EXISTS idx_auth_rules_operation ON auth_rules(operation);
CREATE INDEX IF NOT EXISTS idx_auth_rules_enabled ON auth_rules(enabled);
CREATE INDEX IF NOT EXISTS idx_auth_rules_priority ON auth_rules(priority);
CREATE INDEX IF NOT EXISTS idx_auth_rules_expires ON auth_rules(expires_at);
CREATE INDEX IF NOT EXISTS idx_auth_cache_expires ON auth_cache(expires_at);
-- ============================================================================
-- ADMINISTRATIVE SYSTEM
-- ============================================================================
-- Administrators table for nostr-based server administration
CREATE TABLE IF NOT EXISTS administrators (
pubkey TEXT PRIMARY KEY NOT NULL, -- Nostr public key (64 hex chars)
permissions TEXT NOT NULL DEFAULT '[]', -- JSON array of permissions
added_by TEXT, -- Pubkey of admin who added this admin
added_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
expires_at INTEGER, -- Optional expiration timestamp
enabled INTEGER NOT NULL DEFAULT 1, -- 0 = disabled, 1 = enabled
description TEXT, -- Human-readable description
last_seen INTEGER, -- Last administrative action timestamp
CHECK (length(pubkey) = 64), -- Ensure valid pubkey length
CHECK (enabled IN (0, 1)), -- Boolean constraint
CHECK (expires_at IS NULL OR expires_at > added_at), -- Expiration, if set, must be after the grant
FOREIGN KEY (added_by) REFERENCES administrators(pubkey) ON DELETE SET NULL
);
-- Administrative actions audit log
CREATE TABLE IF NOT EXISTS admin_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
admin_pubkey TEXT NOT NULL, -- Which admin performed the action
command TEXT NOT NULL, -- Administrative command executed
parameters TEXT, -- JSON command parameters
result TEXT, -- Success/failure result and details
timestamp INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
event_id TEXT, -- Reference to nostr event (optional)
target_table TEXT, -- Which table was affected (optional)
target_id TEXT, -- Which record was affected (optional)
ip_address TEXT, -- Client IP address (optional)
user_agent TEXT, -- Client user agent (optional)
FOREIGN KEY (admin_pubkey) REFERENCES administrators(pubkey) ON DELETE CASCADE
);
-- Server identity and administrative configuration
INSERT OR IGNORE INTO server_config (key, value, description) VALUES
('server_pubkey', '', 'Server nostr public key (generated on first run)'),
('server_privkey_file', 'keys/server.key', 'Path to encrypted server private key file'),
('admin_relays', '[]', 'JSON array of relay URLs for administrative events'),
('admin_event_processing', 'true', 'Enable nostr-based administrative event processing'),
('require_admin_auth', 'true', 'Require admin authorization for sensitive operations'),
('auth_rules_enabled', 'true', 'Enable flexible authentication rules system'),
('auth_cache_ttl', '300', 'Authentication cache TTL in seconds (5 minutes)'),
('admin_session_timeout', '3600', 'Administrative session timeout in seconds (1 hour)'),
('max_admin_log_entries', '10000', 'Maximum administrative log entries to retain');
-- Indexes for administrative system performance
CREATE INDEX IF NOT EXISTS idx_administrators_enabled ON administrators(enabled);
CREATE INDEX IF NOT EXISTS idx_administrators_expires ON administrators(expires_at);
CREATE INDEX IF NOT EXISTS idx_admin_log_timestamp ON admin_log(timestamp);
CREATE INDEX IF NOT EXISTS idx_admin_log_admin_pubkey ON admin_log(admin_pubkey);
CREATE INDEX IF NOT EXISTS idx_admin_log_command ON admin_log(command);
-- ============================================================================
-- VIEWS FOR ADMINISTRATIVE QUERIES
-- ============================================================================
-- View for active authentication rules
CREATE VIEW IF NOT EXISTS active_auth_rules AS
SELECT
id,
rule_type,
rule_target,
rule_value,
operation,
priority,
expires_at,
created_at,
created_by,
description,
CASE
WHEN expires_at IS NULL THEN 'never'
WHEN expires_at > strftime('%s', 'now') THEN 'active'
ELSE 'expired'
END as status
FROM auth_rules
WHERE enabled = 1
ORDER BY priority ASC, created_at ASC;
-- View for active administrators
CREATE VIEW IF NOT EXISTS active_administrators AS
SELECT
pubkey,
permissions,
added_by,
added_at,
expires_at,
description,
last_seen,
CASE
WHEN expires_at IS NULL THEN 'never'
WHEN expires_at > strftime('%s', 'now') THEN 'active'
ELSE 'expired'
END as status,
datetime(added_at, 'unixepoch') as added_datetime,
datetime(last_seen, 'unixepoch') as last_seen_datetime
FROM administrators
WHERE enabled = 1;
-- View for recent administrative actions (last 7 days)
CREATE VIEW IF NOT EXISTS recent_admin_actions AS
SELECT
id,
admin_pubkey,
command,
parameters,
result,
timestamp,
event_id,
target_table,
target_id,
datetime(timestamp, 'unixepoch') as action_datetime
FROM admin_log
WHERE timestamp > (strftime('%s', 'now') - 604800) -- 7 days
ORDER BY timestamp DESC;
-- View for authentication statistics
CREATE VIEW IF NOT EXISTS auth_stats AS
SELECT
(SELECT COUNT(*) FROM auth_rules WHERE enabled = 1) as active_rules,
(SELECT COUNT(*) FROM auth_rules WHERE enabled = 1 AND expires_at > strftime('%s', 'now')) as non_expired_rules,
(SELECT COUNT(*) FROM auth_cache WHERE expires_at > strftime('%s', 'now')) as cached_decisions,
(SELECT COUNT(*) FROM administrators WHERE enabled = 1) as active_admins,
(SELECT COUNT(*) FROM admin_log WHERE timestamp > (strftime('%s', 'now') - 86400)) as daily_admin_actions;

file_put.sh Executable file

@@ -0,0 +1,250 @@
#!/bin/bash
# file_put.sh - Test script for Ginxsom Blossom server upload functionality
# This script simulates a user uploading a blob to ginxsom using proper Blossom authentication
set -e # Exit on any error
# Configuration
SERVER_URL="http://localhost:9001"
UPLOAD_ENDPOINT="${SERVER_URL}/upload"
TEST_FILE="test_blob_$(date +%s).txt"
CLEANUP_FILES=()
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Cleanup function
cleanup() {
echo -e "${YELLOW}Cleaning up temporary files...${NC}"
for file in "${CLEANUP_FILES[@]}"; do
if [[ -f "$file" ]]; then
rm -f "$file"
echo "Removed: $file"
fi
done
}
# Set up cleanup on exit
trap cleanup EXIT
# Helper functions
log_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
# Check prerequisites
check_prerequisites() {
log_info "Checking prerequisites..."
# Check if nak is installed
if ! command -v nak &> /dev/null; then
log_error "nak command not found. Please install nak first."
log_info "Install with: go install github.com/fiatjaf/nak@latest"
exit 1
fi
log_success "nak is installed"
# Check if curl is available
if ! command -v curl &> /dev/null; then
log_error "curl command not found. Please install curl."
exit 1
fi
log_success "curl is available"
# Check if sha256sum is available
if ! command -v sha256sum &> /dev/null; then
log_error "sha256sum command not found."
exit 1
fi
log_success "sha256sum is available"
# Check if base64 is available
if ! command -v base64 &> /dev/null; then
log_error "base64 command not found."
exit 1
fi
log_success "base64 is available"
}
# Check if server is running
check_server() {
log_info "Checking if server is running..."
if curl -s -f "${SERVER_URL}/health" > /dev/null 2>&1; then
log_success "Server is running at ${SERVER_URL}"
else
log_error "Server is not responding at ${SERVER_URL}"
log_info "Please start the server with: ./scripts/start-fcgi.sh && nginx -p . -c config/local-nginx.conf"
exit 1
fi
}
# Create test file
create_test_file() {
log_info "Creating test file: ${TEST_FILE}"
# Create test content with timestamp and random data
cat > "${TEST_FILE}" << EOF
Test blob content for Ginxsom Blossom server
Timestamp: $(date -Iseconds)
Random data: $(openssl rand -hex 32)
Test message: Hello from file_put.sh!
This file is used to test the upload functionality
of the Ginxsom Blossom server implementation.
EOF
CLEANUP_FILES+=("${TEST_FILE}")
log_success "Created test file with $(wc -c < "${TEST_FILE}") bytes"
}
# Calculate file hash
calculate_hash() {
log_info "Calculating SHA-256 hash..."
HASH=$(sha256sum "${TEST_FILE}" | cut -d' ' -f1)
log_success "File hash: ${HASH}"
}
# Generate nostr event
generate_nostr_event() {
log_info "Generating kind 24242 nostr event with nak..."
# Calculate expiration time (1 hour from now)
EXPIRATION=$(date -d '+1 hour' +%s)
# Generate the event using nak
EVENT_JSON=$(nak event -k 24242 -c "" \
-t "t=upload" \
-t "x=${HASH}" \
-t "expiration=${EXPIRATION}")
if [[ -z "$EVENT_JSON" ]]; then
log_error "Failed to generate nostr event"
exit 1
fi
log_success "Generated nostr event"
echo "Event JSON: $EVENT_JSON"
}
# Create authorization header
create_auth_header() {
log_info "Creating authorization header..."
# Base64 encode the event (without newlines)
AUTH_B64=$(echo -n "$EVENT_JSON" | base64 -w 0)
AUTH_HEADER="Nostr ${AUTH_B64}"
log_success "Created authorization header"
echo "Auth header length: ${#AUTH_HEADER} characters"
}
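Since the commit message notes being stuck on an auth bug, a small debug helper makes the 401 responses easier to diagnose. This is a sketch, not part of the original script; it assumes `AUTH_B64` has already been set by `create_auth_header` above:

```shell
# Debug helper (sketch): decode the base64 authorization event so the signed
# fields (kind, created_at, tags, sig) can be inspected when auth fails.
# Assumes AUTH_B64 was set by create_auth_header above.
debug_auth_header() {
    printf '%s' "${AUTH_B64}" | base64 -d
    echo
}
```

Pipe the output through `jq .` if it is installed for pretty-printed JSON.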
# Perform upload
perform_upload() {
log_info "Performing upload to ${UPLOAD_ENDPOINT}..."
# Create temporary file for response
RESPONSE_FILE=$(mktemp)
CLEANUP_FILES+=("${RESPONSE_FILE}")
# Perform the upload with verbose output
HTTP_STATUS=$(curl -s -w "%{http_code}" \
-X PUT \
-H "Authorization: ${AUTH_HEADER}" \
-H "Content-Type: text/plain" \
        -H "Content-Disposition: attachment; filename=\"$(basename "${TEST_FILE}")\"" \
--data-binary "@${TEST_FILE}" \
"${UPLOAD_ENDPOINT}" \
-o "${RESPONSE_FILE}")
echo "HTTP Status: ${HTTP_STATUS}"
echo "Response body:"
cat "${RESPONSE_FILE}"
echo
# Check response
case "${HTTP_STATUS}" in
200)
log_success "Upload successful!"
;;
201)
log_success "Upload successful (created)!"
;;
400)
log_error "Bad request - check the event format"
;;
401)
log_error "Unauthorized - authentication failed"
;;
405)
log_error "Method not allowed - check nginx configuration"
;;
413)
log_error "Payload too large"
;;
501)
log_warning "Upload endpoint not yet implemented (expected for now)"
;;
*)
log_error "Upload failed with HTTP status: ${HTTP_STATUS}"
;;
esac
}
# Test file retrieval
test_retrieval() {
log_info "Testing file retrieval..."
RETRIEVAL_URL="${SERVER_URL}/${HASH}"
if curl -s -f "${RETRIEVAL_URL}" > /dev/null 2>&1; then
log_success "File can be retrieved at: ${RETRIEVAL_URL}"
else
log_warning "File not yet available for retrieval (expected if upload processing not implemented)"
fi
}
# Main execution
main() {
echo "=== Ginxsom Blossom Upload Test ==="
echo "Timestamp: $(date -Iseconds)"
echo
# check_prerequisites
# check_server
create_test_file
calculate_hash
generate_nostr_event
create_auth_header
perform_upload
# test_retrieval
echo
log_info "Test completed!"
echo "Summary:"
echo " Test file: ${TEST_FILE}"
echo " File hash: ${HASH}"
echo " Server: ${SERVER_URL}"
echo " Upload endpoint: ${UPLOAD_ENDPOINT}"
}
# Run main function
main "$@"
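Given the run of 401s in the access log, one cheap local check before blaming the server is to confirm the auth event's `x` tag actually matches the file hash. A sketch with hardcoded stand-in values; a real run would use the `HASH` and `EVENT_JSON` variables set earlier in put_test.sh:

```shell
#!/bin/bash
# Sketch: verify the auth event's x tag matches the file hash before
# uploading. HASH and EVENT_JSON below are stand-in values, not real output.
HASH="abc123"
EVENT_JSON='{"kind":24242,"tags":[["t","upload"],["x","abc123"]]}'
# nak emits compact JSON, so the tag appears as "x","<hash>" with no spaces
if printf '%s' "$EVENT_JSON" | grep -q "\"x\",\"${HASH}\""; then
    echo "x tag matches file hash"
else
    echo "x tag mismatch: server will reject the upload" >&2
    exit 1
fi
```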


@@ -149,3 +149,33 @@
127.0.0.1 - - [19/Aug/2025:11:00:46 -0400] "GET /list/79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798 HTTP/1.1" 501 38 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:11:01:47 -0400] "GET /list/79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798 HTTP/1.1" 200 1984 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:11:02:33 -0400] "GET /list/79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798 HTTP/1.1" 200 1984 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:11:15:08 -0400] "DELETE /708d0e8226ec17b0585417c0ec9352ce5f52c3820c904b7066fe20b00f2d9cfe HTTP/1.1" 403 162 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:11:15:08 -0400] "DELETE /708d0e8226ec17b0585417c0ec9352ce5f52c3820c904b7066fe20b00f2d9cfe HTTP/1.1" 403 162 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:11:15:08 -0400] "DELETE /1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef HTTP/1.1" 403 162 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:11:15:08 -0400] "DELETE /708d0e8226ec17b0585417c0ec9352ce5f52c3820c904b7066fe20b00f2d9cfe HTTP/1.1" 403 162 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:11:16:37 -0400] "DELETE /708d0e8226ec17b0585417c0ec9352ce5f52c3820c904b7066fe20b00f2d9cfe HTTP/1.1" 401 266 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:11:16:38 -0400] "DELETE /708d0e8226ec17b0585417c0ec9352ce5f52c3820c904b7066fe20b00f2d9cfe HTTP/1.1" 401 272 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:11:16:38 -0400] "DELETE /1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef HTTP/1.1" 401 272 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:11:16:38 -0400] "DELETE /708d0e8226ec17b0585417c0ec9352ce5f52c3820c904b7066fe20b00f2d9cfe HTTP/1.1" 401 272 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:13:33:51 -0400] "PUT /upload HTTP/1.1" 401 269 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:13:33:51 -0400] "GET /ffadb0d45885063938584c0b2373bc81543ecc4c8703b376b6dea4cf7be41017 HTTP/1.1" 404 162 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:13:52:52 -0400] "PUT /upload HTTP/1.1" 401 269 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:13:52:52 -0400] "GET /878a63847120be9c2949845989144a0a1460b5f66a300fe04d0c7f9fa3906e75 HTTP/1.1" 404 162 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:13:55:05 -0400] "PUT /upload HTTP/1.1" 401 269 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:13:55:05 -0400] "GET /739bb1bc3f3c16eca0fcd336611a8e2166611bda0d477e6fc88f396758f3c4a6 HTTP/1.1" 404 162 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:13:56:03 -0400] "PUT /upload HTTP/1.1" 401 269 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:13:56:03 -0400] "GET /cec5ac288e7c2855df27d0907a7e05a6721f98837586a452dbc917d9494135cc HTTP/1.1" 404 162 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:13:59:03 -0400] "PUT /upload HTTP/1.1" 401 269 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:13:59:03 -0400] "GET /0c4351ce17bf759fb328d6db050d4b73e6d204e8a2af82971cf5e4e3d0e44680 HTTP/1.1" 404 162 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:14:00:27 -0400] "PUT /upload HTTP/1.1" 401 269 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:14:00:27 -0400] "GET /813cdff4112f980d8dd28a32eb8a6ce33795a27c01112c18f94ca83a9c4a6c20 HTTP/1.1" 404 162 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:14:12:19 -0400] "PUT /upload HTTP/1.1" 401 242 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:14:12:19 -0400] "GET /95830dab9844cb68fae20017af01a4b9fcfebaeec9194249a185114eac75c689 HTTP/1.1" 404 162 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:14:24:00 -0400] "PUT /upload HTTP/1.1" 502 166 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:14:24:00 -0400] "GET /2dd6a29f22fab6e9848f45cfd723a542909fbcc060218d546e9210b3436dca34 HTTP/1.1" 404 162 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:14:25:33 -0400] "PUT /upload HTTP/1.1" 502 166 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:14:25:33 -0400] "GET /044331c6a984e58b258343a093a0a5b961ce6c8fc27ad6c1535144814b17cf04 HTTP/1.1" 404 162 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:14:29:42 -0400] "PUT /upload HTTP/1.1" 502 166 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:14:32:12 -0400] "PUT /upload HTTP/1.1" 401 242 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:14:37:30 -0400] "PUT /upload HTTP/1.1" 401 242 "-" "curl/8.15.0"
127.0.0.1 - - [19/Aug/2025:14:47:34 -0400] "PUT /upload HTTP/1.1" 200 224 "-" "curl/8.15.0"

File diff suppressed because it is too large


@@ -1 +1 @@
390406
596584

229
restart-all.sh Executable file

@@ -0,0 +1,229 @@
#!/bin/bash
# Restart Ginxsom Development Environment
# Combines nginx and FastCGI restart operations for debugging
# Configuration
FCGI_BINARY="./build/ginxsom-fcgi"
SOCKET_PATH="/tmp/ginxsom-fcgi.sock"
PID_FILE="/tmp/ginxsom-fcgi.pid"
NGINX_CONFIG="config/local-nginx.conf"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
echo -e "${YELLOW}=== Ginxsom Development Environment Restart ===${NC}"
echo "Starting full restart sequence..."
# Function to check if a process is running
check_process() {
local pid=$1
kill -0 "$pid" 2>/dev/null
}
# Function to wait for process to stop
wait_for_stop() {
local pid=$1
local timeout=${2:-10}
local count=0
while check_process "$pid" && [ $count -lt $timeout ]; do
sleep 1
((count++))
done
if check_process "$pid"; then
echo -e "${RED}Warning: Process $pid still running after ${timeout}s${NC}"
return 1
fi
return 0
}
# Step 1: Stop nginx
echo -e "\n${YELLOW}1. Stopping nginx...${NC}"
if pgrep -f "nginx.*${NGINX_CONFIG}" > /dev/null; then
echo "Found running nginx processes, stopping..."
nginx -p . -c "${NGINX_CONFIG}" -s stop 2>/dev/null
sleep 2
# Force kill any remaining nginx processes
NGINX_PIDS=$(pgrep -f "nginx.*${NGINX_CONFIG}")
if [ ! -z "$NGINX_PIDS" ]; then
echo "Force killing remaining nginx processes: $NGINX_PIDS"
kill -9 $NGINX_PIDS 2>/dev/null
fi
echo -e "${GREEN}nginx stopped${NC}"
else
echo "nginx not running"
fi
# Step 2: Stop FastCGI
echo -e "\n${YELLOW}2. Stopping FastCGI application...${NC}"
# Method 1: Stop via PID file
if [ -f "$PID_FILE" ]; then
PID=$(cat "$PID_FILE")
echo "Found PID file with process $PID"
if check_process "$PID"; then
echo "Stopping FastCGI process $PID"
kill "$PID"
if wait_for_stop "$PID" 5; then
echo -e "${GREEN}FastCGI process stopped gracefully${NC}"
else
echo "Force killing FastCGI process $PID"
kill -9 "$PID" 2>/dev/null
fi
else
echo "PID $PID not running, cleaning up PID file"
fi
rm -f "$PID_FILE"
fi
# Method 2: Kill any remaining ginxsom-fcgi processes
FCGI_PIDS=$(pgrep -f "ginxsom-fcgi")
if [ ! -z "$FCGI_PIDS" ]; then
echo "Found additional FastCGI processes: $FCGI_PIDS"
kill $FCGI_PIDS 2>/dev/null
sleep 2
# Force kill if still running
FCGI_PIDS=$(pgrep -f "ginxsom-fcgi")
if [ ! -z "$FCGI_PIDS" ]; then
echo "Force killing FastCGI processes: $FCGI_PIDS"
kill -9 $FCGI_PIDS 2>/dev/null
fi
fi
# Method 3: Clean up socket
if [ -S "$SOCKET_PATH" ]; then
echo "Removing old socket: $SOCKET_PATH"
rm -f "$SOCKET_PATH"
fi
echo -e "${GREEN}FastCGI cleanup complete${NC}"
# Step 3: Check if binary exists and is up to date
echo -e "\n${YELLOW}3. Checking FastCGI binary...${NC}"
if [ ! -f "$FCGI_BINARY" ]; then
echo -e "${RED}Error: FastCGI binary not found at $FCGI_BINARY${NC}"
echo "Building application..."
make
if [ $? -ne 0 ]; then
echo -e "${RED}Build failed! Cannot continue.${NC}"
exit 1
fi
else
echo "FastCGI binary found: $FCGI_BINARY"
# Check if source is newer than binary
if [ "src/main.c" -nt "$FCGI_BINARY" ] || [ "Makefile" -nt "$FCGI_BINARY" ]; then
echo "Source files are newer than binary, rebuilding..."
make
if [ $? -ne 0 ]; then
echo -e "${RED}Build failed! Cannot continue.${NC}"
exit 1
fi
echo -e "${GREEN}Rebuild complete${NC}"
fi
fi
# Step 4: Start FastCGI
echo -e "\n${YELLOW}4. Starting FastCGI application...${NC}"
echo "Socket: $SOCKET_PATH"
echo "Binary: $FCGI_BINARY"
# Check if spawn-fcgi is available
if ! command -v spawn-fcgi &> /dev/null; then
echo -e "${RED}Error: spawn-fcgi not found. Please install it:${NC}"
echo " Ubuntu/Debian: sudo apt-get install spawn-fcgi"
echo " macOS: brew install spawn-fcgi"
exit 1
fi
# Start FastCGI application
spawn-fcgi -s "$SOCKET_PATH" -M 666 -u "$USER" -g "$USER" -f "$FCGI_BINARY" -P "$PID_FILE"
if [ $? -eq 0 ] && [ -f "$PID_FILE" ]; then
PID=$(cat "$PID_FILE")
echo -e "${GREEN}FastCGI application started successfully${NC}"
echo "PID: $PID"
# Verify it's actually running
if check_process "$PID"; then
echo -e "${GREEN}Process confirmed running${NC}"
else
echo -e "${RED}Warning: Process may have crashed immediately${NC}"
exit 1
fi
else
echo -e "${RED}Failed to start FastCGI application${NC}"
exit 1
fi
# Step 5: Start nginx
echo -e "\n${YELLOW}5. Starting nginx...${NC}"
if [ ! -f "$NGINX_CONFIG" ]; then
echo -e "${RED}Error: nginx config not found at $NGINX_CONFIG${NC}"
exit 1
fi
# Test nginx configuration first
nginx -p . -c "$NGINX_CONFIG" -t
if [ $? -ne 0 ]; then
echo -e "${RED}nginx configuration test failed!${NC}"
exit 1
fi
# Start nginx
nginx -p . -c "$NGINX_CONFIG"
if [ $? -eq 0 ]; then
echo -e "${GREEN}nginx started successfully${NC}"
# Verify nginx is running
sleep 1
if pgrep -f "nginx.*${NGINX_CONFIG}" > /dev/null; then
echo -e "${GREEN}nginx confirmed running${NC}"
else
echo -e "${RED}Warning: nginx may have crashed${NC}"
exit 1
fi
else
echo -e "${RED}Failed to start nginx${NC}"
exit 1
fi
# Step 6: Final status check
echo -e "\n${YELLOW}6. Final status check...${NC}"
# Check FastCGI
if [ -f "$PID_FILE" ]; then
PID=$(cat "$PID_FILE")
if check_process "$PID"; then
echo -e "${GREEN}✓ FastCGI running (PID: $PID)${NC}"
else
echo -e "${RED}✗ FastCGI not running${NC}"
fi
else
echo -e "${RED}✗ FastCGI PID file missing${NC}"
fi
# Check nginx
if pgrep -f "nginx.*${NGINX_CONFIG}" > /dev/null; then
NGINX_PIDS=$(pgrep -f "nginx.*${NGINX_CONFIG}" | tr '\n' ' ')
echo -e "${GREEN}✓ nginx running (PIDs: $NGINX_PIDS)${NC}"
else
echo -e "${RED}✗ nginx not running${NC}"
fi
# Check socket
if [ -S "$SOCKET_PATH" ]; then
echo -e "${GREEN}✓ FastCGI socket exists: $SOCKET_PATH${NC}"
else
echo -e "${RED}✗ FastCGI socket missing: $SOCKET_PATH${NC}"
fi
echo -e "\n${GREEN}=== Restart sequence complete ===${NC}"
echo -e "${YELLOW}Server should be available at: http://localhost:9001${NC}"
echo -e "${YELLOW}To stop all processes, run: nginx -p . -c $NGINX_CONFIG -s stop && kill \$(cat $PID_FILE 2>/dev/null)${NC}"
echo -e "${YELLOW}To monitor logs, check: logs/error.log and logs/access.log${NC}"
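The stop command echoed above could live in a companion script; a sketch of a hypothetical stop-all.sh (not part of this commit), reusing the same paths as restart-all.sh:

```shell
#!/bin/bash
# Hypothetical stop-all.sh sketch; paths copied from restart-all.sh above.
NGINX_CONFIG="config/local-nginx.conf"
PID_FILE="/tmp/ginxsom-fcgi.pid"
SOCKET_PATH="/tmp/ginxsom-fcgi.sock"
# Stop nginx if present; ignore errors when it is not running
nginx -p . -c "$NGINX_CONFIG" -s stop 2>/dev/null || true
# Stop the FastCGI process recorded in the PID file, then clean up
if [ -f "$PID_FILE" ]; then
    kill "$(cat "$PID_FILE")" 2>/dev/null || true
    rm -f "$PID_FILE"
fi
rm -f "$SOCKET_PATH"
echo "all ginxsom processes stopped"
```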

1064
src/main.c

File diff suppressed because it is too large