tests
4
.admin_keys
Normal file
@@ -0,0 +1,4 @@
ADMIN_PRIVKEY='31d3fd4bb38f4f6b60fb66e0a2e5063703bb3394579ce820d5aaf3773b96633f'
ADMIN_PUBKEY='bd109762a8185716ec0fe0f887e911c30d40e36cf7b6bb99f6eef3301e9f6f99'
SERVER_PRIVKEY='c4e0d2ed7d36277d6698650f68a6e9199f91f3abb476a67f07303e81309c48f1'
SERVER_PUBKEY='52e366edfa4e9cc6a6d4653828e51ccf828a2f5a05227d7a768f33b5a198681a'
16
.roo/rules-architect/AGENTS.md
Normal file
@@ -0,0 +1,16 @@
# AGENTS.md

This file provides guidance to agents when working with code in this repository.

## Critical Architecture Rules (Non-Obvious Only)

- **Hybrid Request Handling**: GET requests served directly by nginx from disk; HEAD/PUT/DELETE go through FastCGI
- **Database vs Filesystem**: Database is authoritative for blob existence - filesystem is just storage medium
- **Two-Phase Authentication**: Nostr event validation PLUS Blossom protocol validation (kind 24242 + method tags)
- **Config Architecture**: File-based signed events override database config - enables cryptographic config verification
- **Memory-Only Secrets**: Server private keys never persisted to database - stored in process memory only
- **Extension Decoupling**: File storage uses MIME-based extensions; URL serving accepts any extension via nginx wildcards
- **FastCGI Socket Communication**: nginx communicates with the C app via Unix socket, not TCP - affects deployment
- **Authentication Rules Engine**: Optional rules system with priority-based evaluation and caching layer
- **Blob Descriptor Format**: Returns NIP-94 compliant metadata with canonical URLs based on configured origin
- **Admin API Isolation**: Admin endpoints use separate authentication from blob operations - different event structures
16
.roo/rules-ask/AGENTS.md
Normal file
@@ -0,0 +1,16 @@
# AGENTS.md

This file provides guidance to agents when working with code in this repository.

## Critical Documentation Context (Non-Obvious Only)

- **"FastCGI App"**: This is NOT a web server - it's a FastCGI application that nginx calls for dynamic operations
- **Two Config Systems**: File-based config (XDG) is priority 1, database config is fallback - don't assume standard config locations
- **Blob Storage Strategy**: Files stored WITH extensions but URLs accept any extension - counterintuitive to typical web serving
- **Admin API Auth**: Uses Nostr cryptographic events (kind 24242), not standard bearer tokens or sessions
- **Database Schema**: `blobs` table stores metadata, physical files in `blobs/` directory - database is authoritative
- **Build Requirements**: Requires local SQLite build, nostr_core_lib submodule, and specific FastCGI libraries
- **Testing Setup**: Tests require the `nak` tool for Nostr event generation - not standard HTTP testing
- **Development Ports**: Local development uses port 9001; production typically uses nginx proxy on standard ports
- **Setup Wizard**: Interactive setup creates cryptographically signed config files - not typical config generation
- **Extension Handling**: nginx config uses wildcards to serve files regardless of URL extension - Blossom protocol compliance
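The two-config-system lookup above can be sketched in C. The directory and file names here (`ginxsom/config.json` under the XDG base) are illustrative assumptions, not the server's actual paths:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Resolve the config file path: XDG location first, database fallback.
 * Returns 1 if a file-based config was found (path written to `out`),
 * 0 to signal the caller to fall back to the database config. */
static int resolve_config_path(char *out, size_t out_len) {
    const char *xdg = getenv("XDG_CONFIG_HOME");
    if (xdg && *xdg) {
        snprintf(out, out_len, "%s/ginxsom/config.json", xdg);
        if (access(out, R_OK) == 0) return 1;
    }
    const char *home = getenv("HOME");
    if (home && *home) {
        /* default XDG base when XDG_CONFIG_HOME is unset or missing */
        snprintf(out, out_len, "%s/.config/ginxsom/config.json", home);
        if (access(out, R_OK) == 0) return 1;
    }
    out[0] = '\0';  /* no file config found: use database config */
    return 0;
}
```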
18
.roo/rules-code/AGENTS.md
Normal file
@@ -0,0 +1,18 @@
# AGENTS.md

This file provides guidance to agents when working with code in this repository.

## Critical Coding Rules (Non-Obvious Only)

- **nostr_core_lib Integration**: Must use `nostr_sha256()` and `nostr_bytes_to_hex()` from nostr_core, NOT standard crypto libs
- **Database Connection Pattern**: Always use `sqlite3_open_v2()` with `SQLITE_OPEN_READONLY` or `SQLITE_OPEN_READWRITE` flags
- **Memory Management**: File data buffers must be freed after use - common pattern is `malloc()` for upload data, `free()` on all paths
- **Error Handling**: FastCGI responses must use `printf("Status: XXX\r\n")` format, NOT standard HTTP response format
- **String Safety**: Always null-terminate strings from SQLite results - use `strncpy()` with size-1 and explicit null termination
- **Hash Validation**: SHA-256 hashes must be exactly 64 hex chars - validate with custom `validate_sha256_format()` function
- **MIME Type Mapping**: Use centralized `mime_to_extension()` function - never hardcode file extensions
- **Authentication**: Nostr event parsing uses cJSON - always call `cJSON_Delete()` after use to prevent memory leaks
- **Configuration Loading**: File config takes priority over database - check XDG paths first, fall back to database
- **Blob Metadata**: Database is single source of truth - use `get_blob_metadata()`, not filesystem checks
- **nostr_core_lib Build**: Uses `build.sh` script, NOT `make` - run `./build.sh` to compile the library
- **Server Testing**: Use `./restart-all.sh` to properly restart and test the ginxsom server, NOT direct binary execution
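The hash-validation rule above can be illustrated with a minimal sketch of what `validate_sha256_format()` checks - the repository's actual implementation may differ in details:

```c
#include <string.h>
#include <ctype.h>
#include <stddef.h>

/* Return 1 if `hash` is exactly 64 hex characters, else 0.
 * Sketch of the validate_sha256_format() rule described above. */
static int validate_sha256_format(const char *hash) {
    if (!hash || strlen(hash) != 64)
        return 0;
    for (size_t i = 0; i < 64; i++) {
        if (!isxdigit((unsigned char)hash[i]))
            return 0;  /* reject anything outside [0-9a-fA-F] */
    }
    return 1;
}
```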
16
.roo/rules-debug/AGENTS.md
Normal file
@@ -0,0 +1,16 @@
# AGENTS.md

This file provides guidance to agents when working with code in this repository.

## Critical Debug Rules (Non-Obvious Only)

- **FastCGI Socket Issues**: If socket `/tmp/ginxsom-fcgi.sock` exists but connection fails, remove it manually before restart
- **Local SQLite Binary**: Debug with `./sqlite3-build/sqlite3 db/ginxsom.db`, NOT system sqlite3
- **Authentication Debug**: Failed auth shows error codes in nostr_core format - use `nostr_strerror()` for meanings
- **Memory Leaks**: cJSON objects MUST be deleted after use - common leak source in auth parsing
- **File Permissions**: Blob files need 644 permissions or nginx can't serve them - check with `ls -la blobs/`
- **Database Locks**: SQLite connection must be closed on ALL code paths or database locks occur
- **Config Loading**: File config errors are silent - check stderr for "CONFIG:" messages during startup
- **Admin Key Mismatch**: A mismatch between the database `admin_pubkey` and the `.admin_keys` file is a common cause of auth failures
- **Nginx Port Conflicts**: Local nginx on 9001 conflicts with system nginx on 80 - check with `netstat -tlnp`
- **Hash Calculation**: File data buffer must be complete before the `nostr_sha256()` call or the hash is wrong
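The stale-socket fix above amounts to unlinking the path before the daemon re-binds. A minimal sketch (the socket path is the one named above; `ENOENT` is treated as already clean):

```c
#include <stdio.h>
#include <unistd.h>
#include <errno.h>

/* Remove a stale FastCGI socket before restart. Returns 0 if the path
 * is gone afterwards (removed, or never existed), -1 on failure. */
static int remove_stale_socket(const char *path) {
    if (unlink(path) == 0)
        return 0;              /* removed stale socket */
    if (errno == ENOENT)
        return 0;              /* nothing to clean up */
    perror("unlink");          /* e.g. permission denied */
    return -1;
}
```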
43
AGENTS.md
Normal file
@@ -0,0 +1,43 @@
# AGENTS.md

This file provides guidance to agents when working with code in this repository.

## Critical Project-Specific Rules

- **Database Path**: Always use `db/ginxsom.db` - this is hardcoded, not configurable
- **SQLite Build**: Uses local SQLite 3.37.2 in `sqlite3-build/` directory, NOT system SQLite
- **Local Development**: Everything runs locally on port 9001; never use system nginx on port 80
- **FastCGI Socket**: Uses `/tmp/ginxsom-fcgi.sock` for FastCGI communication
- **Config Priority**: File-based config (XDG locations) overrides database config
- **Admin Auth**: Uses Nostr events for admin API authentication (kind 24242 with "admin" tag)
- **Blob Storage**: Files stored as `blobs/<sha256>.<ext>` where the extension comes from the MIME type
- **Build Directory**: Must create `build/` directory before compilation
- **Test Files**: Pre-existing test files in `blobs/` with specific SHA-256 names
- **Server Private Key**: Stored in memory only, never in the database (security requirement)
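The blob-storage naming rule can be sketched as a small path builder. `mime_to_extension()` is the centralized mapping the coding rules require; the two mappings and the `bin` fallback shown here are illustrative assumptions, not the server's full table:

```c
#include <stdio.h>
#include <string.h>

/* Illustrative subset of the centralized MIME-to-extension mapping. */
static const char *mime_to_extension(const char *mime) {
    if (strcmp(mime, "image/png") == 0)  return "png";
    if (strcmp(mime, "image/jpeg") == 0) return "jpg";
    return "bin";  /* fallback for unknown types (assumption) */
}

/* Build blobs/<sha256>.<ext> as described above.
 * Returns 0 on success, -1 if the buffer was too small. */
static int build_blob_path(char *out, size_t out_len,
                           const char *sha256, const char *mime) {
    int n = snprintf(out, out_len, "blobs/%s.%s",
                     sha256, mime_to_extension(mime));
    return (n >= 0 && (size_t)n < out_len) ? 0 : -1;
}
```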

## Non-Standard Commands

```bash
# Restart nginx (local only)
./restart-all.sh

# Start FastCGI daemon
./scripts/start-fcgi.sh

# Test admin API with authentication
source .admin_keys && ./scripts/test_admin.sh

# Setup wizard (creates signed config events)
./scripts/setup.sh

# Local SQLite (not system)
./sqlite3-build/sqlite3 db/ginxsom.db
```

## Critical Architecture Notes

- FastCGI app handles HEAD/PUT/DELETE requests; nginx serves GET directly from disk
- Two-tier config: file config (signed Nostr events) + database config (key-value)
- Admin API requires Nostr event signatures with a specific tag structure
- Database is the single source of truth for blob existence (not the filesystem)
- Extension handling: URLs work with any extension; files are stored with the correct extension
491
AUTH_API.md
Normal file
@@ -0,0 +1,491 @@
# Authentication API Documentation

## Overview

The nostr_core_lib unified request validation system provides a comprehensive authentication and authorization framework for Nostr-based applications. It combines Nostr event validation with flexible rule-based authentication in a single API call.

## Authentication Flow and Order of Operations

### Authentication Flow Diagram

```
        ┌─────────────────────┐
        │  Request Received   │
        └──────────┬──────────┘
                   │
                   ▼
        ┌─────────────┐     ╔═══════════════════╗
        │ Input Valid?├─No─►║ REJECT: Invalid   ║
        └──────┬──────┘     ║ Input (~1μs)      ║
               │Yes         ╚═══════════════════╝
               ▼
        ┌─────────────┐     ╔═══════════════════╗
        │System Init? ├─No─►║ REJECT: Not       ║
        └──────┬──────┘     ║ Initialized       ║
               │Yes         ╚═══════════════════╝
               ▼
        ┌─────────────┐
        │Auth Header? │
        └──────┬──────┘
               │Yes
               ▼                    ┌─────────────────────┐
        ┌─────────────┐      No     │                     │
        │Parse Header ├─────────────┤    Skip Nostr       │
        └──────┬──────┘             │    Validation       │
               │                    │                     │
               ▼                    └──────────┬──────────┘
        ┌─────────────┐     ╔═══════════════════╗        │
        │Valid Base64?├─No─►║ REJECT: Malformed ║        │
        └──────┬──────┘     ║ Header (~10μs)    ║        │
               │Yes         ╚═══════════════════╝        │
               ▼                                         │
        ┌─────────────┐     ╔═══════════════════╗        │
        │Valid JSON?  ├─No─►║ REJECT: Invalid   ║        │
        └──────┬──────┘     ║ JSON (~50μs)      ║        │
               │Yes         ╚═══════════════════╝        │
               ▼                                         │
        ┌─────────────┐     ╔═══════════════════╗        │
        │Valid Struct?├─No─►║ REJECT: Invalid   ║        │
        └──────┬──────┘     ║ Structure (~100μs)║        │
               │Yes         ╚═══════════════════╝        │
               ▼                                         │
        ┌─────────────────┐   ╔═══════════════════╗      │
        │ ECDSA Signature │No ║ REJECT: Invalid   ║      │
        │ Verify (~2ms)   ├──►║ Signature (~2ms)  ║      │
        └─────────┬───────┘   ╚═══════════════════╝      │
                  │Yes                                   │
                  ▼                                      │
        ┌─────────────────┐   ╔═══════════════════╗      │
        │Operation Match? │No ║ REJECT: Unauth.   ║      │
        └─────────┬───────┘   ║ Operation (~200μs)║      │
                  │Yes        ╚═══════════════════╝      │
                  ▼                                      │
        ┌─────────────────┐   ╔═══════════════════╗      │
        │Event Expired?   │Yes║ REJECT: Expired   ║      │
        └─────────┬───────┘   ║ Event (~50μs)     ║      │
                  │No         ╚═══════════════════╝      │
                  ▼                                      │
        ┌─────────────────┐                              │
        │Extract Pubkey   │                              │
        └─────────┬───────┘                              │
                  │                                      │
                  ▼◄─────────────────────────────────────┘
        ┌─────────────────┐      ╔═══════════════════╗
        │Auth Rules       │  No  ║ ALLOW: Rules      ║
        │Enabled?         ├─────►║ Disabled          ║
        └─────────┬───────┘      ╚═══════════════════╝
                  │Yes
                  ▼
        ┌─────────────────┐
        │Generate Cache   │
        │Key (SHA-256)    │
        └─────────┬───────┘
                  │
                  ▼
        ┌─────────────────┐      ╔═══════════════════╗
        │Cache Hit?       │ Yes  ║ RETURN: Cached    ║
        │(~100μs lookup)  ├─────►║ Decision (~100μs) ║
        └─────────┬───────┘      ╚═══════════════════╝
                  │No
                  ▼
        ╔═══════════════════════════════════════╗
        ║         RULE EVALUATION ENGINE        ║
        ║            (Priority Order)           ║
        ╚═══════════════════════════════════════╝
                  │
                  ▼
        ┌─────────────────┐      ╔═══════════════════╗
        │1. Pubkey        │ Yes  ║ DENY: Pubkey      ║
        │   Blacklisted?  ├─────►║ Blocked           ║
        └─────────┬───────┘      ╚═══════════════════╝
                  │No
                  ▼
        ┌─────────────────┐      ╔═══════════════════╗
        │2. Hash          │ Yes  ║ DENY: Hash        ║
        │   Blacklisted?  ├─────►║ Blocked           ║
        └─────────┬───────┘      ╚═══════════════════╝
                  │No
                  ▼
        ┌─────────────────┐      ╔═══════════════════╗
        │3. MIME Type     │ Yes  ║ DENY: MIME        ║
        │   Blacklisted?  ├─────►║ Blocked           ║
        └─────────┬───────┘      ╚═══════════════════╝
                  │No
                  ▼
        ┌─────────────────┐      ╔═══════════════════╗
        │4. Size Limit    │ Yes  ║ DENY: File        ║
        │   Exceeded?     ├─────►║ Too Large         ║
        └─────────┬───────┘      ╚═══════════════════╝
                  │No
                  ▼
        ┌─────────────────┐      ╔═══════════════════╗
        │5. Pubkey        │ Yes  ║ ALLOW: Pubkey     ║
        │   Whitelisted?  ├─────►║ Whitelisted       ║
        └─────────┬───────┘      ╚═══════════════════╝
                  │No
                  ▼
        ┌─────────────────┐      ╔═══════════════════╗
        │6. MIME Type     │ Yes  ║ ALLOW: MIME       ║
        │   Whitelisted?  ├─────►║ Whitelisted       ║
        └─────────┬───────┘      ╚═══════════════════╝
                  │No
                  ▼
        ┌─────────────────┐      ╔═══════════════════╗
        │Whitelist Rules  │ Yes  ║ DENY: Not in      ║
        │Exist?           ├─────►║ Whitelist         ║
        └─────────┬───────┘      ╚═══════════════════╝
                  │No
                  ▼
        ╔═══════════════════╗
        ║ ALLOW: Default    ║
        ║ Allow Policy      ║
        ╚═══════════════════╝
                  │
                  ▼
        ┌─────────────────┐
        │ Cache Decision  │
        │ (5min TTL)      │
        └─────────┬───────┘
                  │
                  ▼
        ┌─────────────────┐
        │ Return Result   │
        │ to Application  │
        └─────────────────┘
```

### Performance Timeline (ASCII)

```
Fast Path (Cache Hit) - Total: ~101μs
┌─────┬─────────────────────────────────────────────────────────────────┬──────┐
│ 1μs │                       100μs Cache Lookup                        │ 1μs  │
└─────┴─────────────────────────────────────────────────────────────────┴──────┘
Input │                                                                 │ Return
Valid │                         SQLite SELECT                           │ Result

Typical Path (Valid Request) - Total: ~2.4ms
┌──┬───┬────┬─────────────────────────┬────────┬────┬──┐
│1μ│50μ│100μ│         2000μs          │ 200μs  │100μ│1μ│
└──┴───┴────┴─────────────────────────┴────────┴────┴──┘
 │  │   │               │                  │      │   │
 │  │   │      ECDSA Signature          Rule  Cache  Return
 │  │   │      Verification             Eval  Store  Result
 │  │   │      (Most Expensive)
 │  │  JSON Parse
 │ Header Parse
Input Validation

Worst Case (Full Validation) - Total: ~2.7ms
┌──┬───┬────┬─────────────────────────┬─────────┬────┬──┐
│1μ│50μ│100μ│         2000μs          │  500μs  │100μ│1μ│
└──┴───┴────┴─────────────────────────┴─────────┴────┴──┘
                                           │
                                  All 6 Rule Checks
                                 (Multiple DB Queries)
```

### Request Processing Flow (DDoS-Optimized)

The authentication system is designed with **performance and DDoS protection** as primary concerns. Here is the exact order of operations:

#### Phase 1: Input Validation (Immediate Rejection)
1. **Null Pointer Checks** - Reject malformed requests instantly (lines 122-128)
2. **Initialization Check** - Verify system is properly initialized
3. **Basic Structure Validation** - Ensure required fields are present

#### Phase 2: Nostr Event Validation (CPU Intensive)
4. **Authorization Header Parsing** (lines 139-148)
   - Extract base64-encoded Nostr event from `Authorization: Nostr <base64>` header
   - Decode base64 to JSON (memory allocation + decoding)
   - **Early exit**: Invalid base64 or malformed header rejected immediately

5. **JSON Parsing** (lines 150-156)
   - Parse Nostr event JSON using cJSON
   - **Early exit**: Invalid JSON rejected before signature verification

6. **Nostr Event Structure Validation** (lines 159-166)
   - Validate event has required fields (kind, pubkey, sig, etc.)
   - Check event kind is 24242 for Blossom operations
   - **Early exit**: Invalid structure rejected before expensive crypto operations

7. **Cryptographic Signature Verification** (lines 159-166)
   - **Most CPU-intensive operation** - ECDSA signature verification
   - Validates event authenticity using secp256k1
   - **Early exit**: Invalid signatures rejected before database queries

8. **Operation-Specific Validation** (lines 169-178)
   - Verify event authorizes the requested operation (upload/delete/list)
   - Check required tags (t=operation, x=hash, expiration)
   - Validate timestamp and expiration
   - **Early exit**: Expired or mismatched events rejected

9. **Public Key Extraction** (lines 181-184)
   - Extract validated public key from event for rule evaluation

#### Phase 3: Authentication Rules (Database Queries)
10. **Rules System Check** (line 191)
    - Quick config check whether authentication rules are enabled
    - **Early exit**: If disabled, allow request immediately

11. **Cache Lookup** (lines 1051-1054)
    - Generate SHA-256 cache key from request parameters
    - Check SQLite cache for previous decision
    - **Early exit**: Cache hit returns cached decision (5-minute TTL)

12. **Rule Evaluation** (Priority Order - lines 1061-1094):
    - **a. Pubkey Blacklist** (highest priority) - Immediate denial if matched
    - **b. Hash Blacklist** - Block specific content hashes
    - **c. MIME Type Blacklist** - Block dangerous file types
    - **d. File Size Limits** - Enforce upload size restrictions
    - **e. Pubkey Whitelist** - Allow specific users (only if not denied above)
    - **f. MIME Type Whitelist** - Allow specific file types

13. **Whitelist Default Denial** (lines 1097-1121)
    - If whitelist rules exist but none matched, deny the request
    - Prevents whitelist bypass attacks

14. **Cache Storage** (line 1124)
    - Store decision in cache for future requests (5-minute TTL)

### DDoS Protection Features

#### **Fail-Fast Design**
- **Input validation** happens before any expensive operations
- **Authorization header parsing** fails fast on malformed data
- **JSON parsing** rejects invalid data before signature verification
- **Structure validation** happens before cryptographic operations

#### **Expensive Operations Last**
- **Signature verification** only after structure validation
- **Database queries** only after successful Nostr validation
- **Cache prioritized** over database queries

#### **Caching Strategy**
- **SHA-256 cache keys** prevent cache pollution attacks
- **5-minute TTL** balances performance with rule changes
- **LRU eviction** prevents memory exhaustion
- **Per-request caching** includes all parameters (pubkey, operation, hash, MIME, size)

#### **Resource Limits**
- **JSON parsing** limited to a 4KB buffer size
- **Cache entries** limited to prevent memory exhaustion
- **Database connection pooling** (single connection with proper cleanup)
- **String length limits** on all inputs

#### **Attack Mitigation**
- **Base64 bombs** - Limited decode buffer size (4KB)
- **JSON bombs** - cJSON library handles malformed JSON safely
- **Cache poisoning** - Cryptographic cache keys prevent collisions
- **Rule bypass** - Whitelist default denial prevents unauthorized access
- **Replay attacks** - Timestamp and expiration validation
- **Hash collision attacks** - Full SHA-256 verification

### Performance Characteristics

#### **Best Case** (Cached Decision):
1. Input validation: ~1μs
2. Cache lookup: ~100μs (SQLite SELECT)
3. **Total: ~101μs**

#### **Worst Case** (Full Validation + Rule Evaluation):
1. Input validation: ~1μs
2. Base64 decoding: ~50μs
3. JSON parsing: ~100μs
4. Signature verification: ~2000μs (ECDSA)
5. Database queries: ~500μs (6 rule checks)
6. Cache storage: ~100μs
7. **Total: ~2751μs (~2.7ms)**

#### **Typical Case** (Valid Request, Rules Enabled):
1. Full validation: ~2200μs
2. Cache miss, 2-3 rule checks: ~200μs
3. **Total: ~2400μs (~2.4ms)**

### Security Order Rationale

The rule evaluation order is specifically designed for security:

1. **Blacklists First** - Immediate denial of known bad actors
2. **Resource Limits** - Prevent resource exhaustion attacks
3. **Whitelists Last** - Only allow after passing all security checks
4. **Default Deny** - If whitelists exist but don't match, deny

This ensures that even if an attacker bypasses one layer, subsequent layers will catch the attack.

## Core API

### Primary Function

```c
int nostr_validate_request(nostr_request_t* request, nostr_request_result_t* result);
```

This single function handles:
- Nostr event signature validation
- Event structure validation (required fields, timestamps)
- Authentication rule evaluation
- Public key extraction and validation

### Request Structure

```c
typedef struct {
    const char* event_json;      // Raw Nostr event JSON
    const char* app_id;          // Application identifier ("ginxsom", "c-relay")
    const char* operation;       // Operation type ("upload", "delete", "list")
    const char* content_hash;    // SHA-256 hash for file operations (optional)
    const char* mime_type;       // MIME type for upload operations (optional)
    size_t content_size;         // File size for upload operations (0 if N/A)
} nostr_request_t;
```

### Result Structure

```c
typedef struct {
    int is_valid;                // 1 if request is valid, 0 otherwise
    int error_code;              // Specific error code (see Error Codes)
    char error_message[512];     // Human-readable error description
    char pubkey[65];             // Extracted public key (hex, null-terminated)
    time_t timestamp;            // Event timestamp
    char event_id[65];           // Event ID (hex, null-terminated)
} nostr_request_result_t;
```

## Authentication Rules System

The system supports priority-based authentication rules that are evaluated in order:

### Rule Types

1. **NOSTR_AUTH_RULE_PUBKEY_WHITELIST** - Allow specific public keys
2. **NOSTR_AUTH_RULE_PUBKEY_BLACKLIST** - Block specific public keys
3. **NOSTR_AUTH_RULE_HASH_BLACKLIST** - Block specific content hashes
4. **NOSTR_AUTH_RULE_MIME_RESTRICTION** - Restrict allowed MIME types
5. **NOSTR_AUTH_RULE_SIZE_LIMIT** - Enforce maximum file sizes

### Rule Evaluation

- Rules are processed by priority (lower numbers = higher priority)
- The first matching rule determines the outcome
- ALLOW rules permit the request
- DENY rules reject the request
- If no rules match, the default action is ALLOW (unless whitelist rules exist, in which case an unmatched request is denied - see the whitelist default denial step above)

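The first-match, priority-ordered evaluation described above can be sketched as follows. The struct and constants are simplified stand-ins for the library's real types, and the whitelist default denial is omitted for brevity:

```c
#include <stddef.h>

/* Simplified stand-ins for the library's rule types (illustrative only). */
enum { ACTION_ALLOW = 0, ACTION_DENY = 1 };

typedef struct {
    int priority;   /* lower number = higher priority */
    int action;     /* ACTION_ALLOW or ACTION_DENY */
    int matched;    /* 1 if the rule's pattern matched this request */
} auth_rule_t;

/* First-match evaluation over rules already sorted by ascending
 * priority; if no rule matches, the default action is ALLOW. */
static int evaluate_rules(const auth_rule_t *rules, size_t count) {
    for (size_t i = 0; i < count; i++) {
        if (rules[i].matched)
            return rules[i].action;   /* first matching rule wins */
    }
    return ACTION_ALLOW;              /* default-allow policy */
}
```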
### Rule Caching

The system includes an intelligent caching mechanism:
- LRU (Least Recently Used) eviction policy
- Configurable cache size (default: 1000 entries)
- Cache keys based on pubkey + operation + content hash
- Automatic cache invalidation when rules change

## Database Backend

### Pluggable Architecture

The system uses a pluggable database backend interface:

```c
typedef struct {
    int (*init)(const char* connection_string, void** context);
    int (*get_rules)(void* context, const char* app_id,
                     nostr_auth_rule_t** rules, int* count);
    int (*cleanup)(void* context);
} nostr_db_backend_t;
```

### SQLite Implementation

The default implementation uses SQLite with the following schema:

```sql
-- Authentication rules table (per application)
CREATE TABLE auth_rules_[APP_ID] (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    priority INTEGER NOT NULL,
    rule_type INTEGER NOT NULL,
    action INTEGER NOT NULL,
    pattern TEXT,
    value_int INTEGER,
    created_at INTEGER DEFAULT (strftime('%s', 'now')),
    updated_at INTEGER DEFAULT (strftime('%s', 'now'))
);
```

## Error Codes

The system uses specific error codes for different failure scenarios:

- **-50**: `NOSTR_AUTH_ERROR_INVALID_EVENT` - Malformed Nostr event
- **-51**: `NOSTR_AUTH_ERROR_INVALID_SIGNATURE` - Invalid event signature
- **-52**: `NOSTR_AUTH_ERROR_PUBKEY_BLOCKED` - Public key is blacklisted
- **-53**: `NOSTR_AUTH_ERROR_HASH_BLOCKED` - Content hash is blacklisted
- **-54**: `NOSTR_AUTH_ERROR_MIME_RESTRICTED` - MIME type not allowed
- **-55**: `NOSTR_AUTH_ERROR_SIZE_EXCEEDED` - File size limit exceeded

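For quick diagnostics, the table above can be mirrored in a small standalone lookup. This is a sketch; the library's own `nostr_strerror()` should be preferred where available:

```c
#include <string.h>

/* Mirror of the error-code table above; returns a static string. */
static const char *auth_error_string(int code) {
    switch (code) {
    case -50: return "NOSTR_AUTH_ERROR_INVALID_EVENT: malformed Nostr event";
    case -51: return "NOSTR_AUTH_ERROR_INVALID_SIGNATURE: invalid event signature";
    case -52: return "NOSTR_AUTH_ERROR_PUBKEY_BLOCKED: public key is blacklisted";
    case -53: return "NOSTR_AUTH_ERROR_HASH_BLOCKED: content hash is blacklisted";
    case -54: return "NOSTR_AUTH_ERROR_MIME_RESTRICTED: MIME type not allowed";
    case -55: return "NOSTR_AUTH_ERROR_SIZE_EXCEEDED: file size limit exceeded";
    default:  return "unknown authentication error";
    }
}
```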
## Usage Examples

### Basic Validation

```c
#include "nostr_core/request_validator.h"

// Initialize the system (once per application)
int init_status = nostr_request_validator_init("db/myapp.db", "myapp");
if (init_status != 0) {
    fprintf(stderr, "Failed to initialize validator: %d\n", init_status);
    return -1;
}

// Validate a request
nostr_request_t request = {
    .event_json = "{\"kind\":24242,\"pubkey\":\"abc123...\",\"sig\":\"def456...\"}",
    .app_id = "myapp",
    .operation = "upload",
    .content_hash = "sha256hash...",
    .mime_type = "text/plain",
    .content_size = 1024
};

nostr_request_result_t result;
int status = nostr_validate_request(&request, &result);

if (status == 0 && result.is_valid) {
    printf("Request authorized for pubkey: %s\n", result.pubkey);
} else {
    printf("Request denied: %s (code: %d)\n", result.error_message, result.error_code);
}
```

### Ginxsom Integration

The ginxsom application has been updated to use this system:

```c
// Replace old authenticate_request_with_rules() calls with:
nostr_request_t auth_request = {
    .event_json = event_json,
    .app_id = "ginxsom",
    .operation = "upload",   // or "list", "delete"
    .content_hash = calculated_hash,
    .mime_type = detected_mime_type,
    .content_size = file_size
};

nostr_request_result_t auth_result;
int auth_status = nostr_validate_request(&auth_request, &auth_result);

if (!auth_result.is_valid) {
    printf("Status: 403\r\n");
    printf("Content-Type: application/json\r\n\r\n");
    printf("{\"error\":\"Authentication failed\",\"message\":\"%s\"}\n",
           auth_result.error_message);
    return;
}

// Use auth_result.pubkey for the authenticated public key
```

2
Makefile
@@ -8,7 +8,7 @@ BUILDDIR = build
 TARGET = $(BUILDDIR)/ginxsom-fcgi
 
 # Source files
-SOURCES = $(SRCDIR)/main.c
+SOURCES = $(SRCDIR)/main.c $(SRCDIR)/admin_api.c
 OBJECTS = $(SOURCES:$(SRCDIR)/%.c=$(BUILDDIR)/%.o)
 
 # Default target
219
README_ADMIN_API.md
Normal file
@@ -0,0 +1,219 @@
# Ginxsom Admin API

A Nostr-compliant admin interface for the Ginxsom Blossom server that provides programmatic access to server statistics, configuration, and file management operations.

## Overview

The admin API allows server administrators to:
- View server statistics (file counts, storage usage, user metrics)
- Retrieve and update server configuration settings
- Browse recently uploaded files with pagination
- Monitor server health and disk usage

All operations require Nostr authentication using admin-authorized public keys.

## Quick Start

### 1. Configure Admin Access

Add your admin pubkey to the server configuration:

```bash
# Generate admin keys (keep private key secure!)
ADMIN_PRIVKEY=$(nak key generate)
ADMIN_PUBKEY=$(echo "$ADMIN_PRIVKEY" | nak key public)

# Configure server
sqlite3 db/ginxsom.db << EOF
INSERT OR REPLACE INTO server_config (key, value, description) VALUES
('admin_pubkey', '$ADMIN_PUBKEY', 'Nostr public key authorized for admin operations'),
('admin_enabled', 'true', 'Enable admin interface');
EOF
```

### 2. Build and Start Server

```bash
make clean && make
spawn-fcgi -s /tmp/ginxsom-fcgi.sock -n ./build/ginxsom-fcgi
nginx -c $(pwd)/config/local-nginx.conf
```

### 3. Test the API

```bash
# Run the complete test suite
./tests/admin_test.sh

# Or run the suite with a specific admin key
export ADMIN_PRIVKEY="your_private_key_here"
./tests/admin_test.sh
```

## API Endpoints

### GET /api/health
System health check (no authentication required).
```bash
curl http://localhost:9001/api/health
```

### GET /api/stats
Server statistics and metrics.
```json
{
  "status": "success",
  "data": {
    "total_files": 1234,
    "total_bytes": 104857600,
    "total_size_mb": 100.0,
    "unique_uploaders": 56,
    "avg_file_size": 85049,
    "file_types": {
      "image/png": 45,
      "image/jpeg": 32
    }
  }
}
```

### GET /api/config
Current server configuration.
```json
{
  "status": "success",
  "data": {
    "max_file_size": "104857600",
    "require_auth": "false",
    "server_name": "ginxsom",
    "nip94_enabled": "true"
  }
}
```

### PUT /api/config
Update server configuration.
```json
{
  "max_file_size": "209715200",
  "require_auth": "true",
  "nip94_enabled": "true"
}
```

### GET /api/files
Recently uploaded files with pagination.
```json
{
  "status": "success",
  "data": {
    "files": [
      {
        "sha256": "abc123...",
        "size": 184292,
        "type": "application/pdf",
        "uploaded_at": 1725105921,
        "uploader_pubkey": "def456...",
        "url": "http://localhost:9001/abc123.pdf"
      }
    ],
    "total": 1234,
    "limit": 50,
    "offset": 0
  }
}
```

## Manual API Usage with nak and curl

### Generate Admin Authentication Event

```bash
# Create an authenticated event
EVENT=$(nak event -k 24242 -c "admin_request" \
  --tag t="GET" \
  --tag expiration="$(date -d '+1 hour' +%s)" \
  --sec "$ADMIN_PRIVKEY")

# Send authenticated request
AUTH_HEADER="Nostr $(echo "$EVENT" | base64 -w 0)"
curl -H "Authorization: $AUTH_HEADER" http://localhost:9001/api/stats
```

### Update Configuration

```bash
# Create PUT event (method in tag)
EVENT=$(nak event -k 24242 -c "admin_request" \
  --tag t="PUT" \
  --tag expiration="$(date -d '+1 hour' +%s)" \
  --sec "$ADMIN_PRIVKEY")

AUTH_HEADER="Nostr $(echo "$EVENT" | base64 -w 0)"

curl -X PUT -H "Authorization: $AUTH_HEADER" \
  -H "Content-Type: application/json" \
  -d '{"max_file_size": "209715200", "require_auth": "true"}' \
  http://localhost:9001/api/config
```

## Security Features

- **Nostr Authentication**: All admin operations require valid Nostr kind 24242 events
- **Pubkey Verification**: Only events signed by configured admin pubkeys are accepted
- **Event Expiration**: Admin events must include expiration timestamps, so a captured event cannot be replayed indefinitely
- **Access Control**: Separate enable/disable flag for the admin interface

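The expiration rule boils down to a timestamp comparison on the server side. Sketched in shell with made-up values (the real check runs in C against the event's `expiration` tag):

```bash
# Reject any event whose expiration tag is at or before the current time
now=$(date +%s)
expiration=$(( now - 60 ))   # pretend the event expired a minute ago

if [ "$now" -ge "$expiration" ]; then
  verdict="reject: expired"
else
  verdict="accept"
fi
echo "$verdict"
```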
## Development and Testing

### Prerequisites
- nak (https://github.com/fiatjaf/nak)
- curl, jq
- sqlite3

### Run Tests
```bash
# Make test script executable
chmod +x tests/admin_test.sh

# Run complete test suite
./tests/admin_test.sh

# Run with specific admin key
export ADMIN_PRIVKEY="your_private_key"
./tests/admin_test.sh
```

### Build System
```bash
# Clean build
make clean

# Build with debug info
make debug

# Run FastCGI process
make run
```

## Files Added/Modified

- `src/admin_api.h` - Admin API function declarations
- `src/admin_api.c` - Complete admin API implementation
- `src/main.c` - Updated with admin API routing
- `config/local-nginx.conf` - Updated with admin API routes
- `tests/admin_test.sh` - Complete test suite
- `Makefile` - Updated to compile admin_api.c
- `README_ADMIN_API.md` - This documentation

## Future Enhancements

- **Nostr Relay Integration**: Automatic relay subscription for remote admin control
- **Admin Pubkey Rotation**: Support for multiple admin keys and key rotation
- **Audit Logging**: Detailed logging of admin operations
- **Rate Limiting**: Prevent abuse of admin endpoints
- **Web Dashboard**: Optional HTML/CSS/JavaScript frontend

---

The admin API provides a secure, Nostr-compliant interface for server management through command-line tools while maintaining full compatibility with the existing Blossom protocol implementation.
266 Trash/auth_test_working.sh Executable file
@@ -0,0 +1,266 @@
#!/bin/bash

# Working Authentication System Test Suite
# Tests the unified nostr_core_lib authentication system integrated into ginxsom

# Configuration
SERVER_URL="http://localhost:9001"
UPLOAD_ENDPOINT="${SERVER_URL}/upload"
LIST_ENDPOINT="${SERVER_URL}/list"
DELETE_ENDPOINT="${SERVER_URL}/delete"
DB_PATH="db/ginxsom.db"
TEST_DIR="tests/auth_test_tmp"

# Test keys for different scenarios
TEST_ADMIN_PRIVKEY="993bf9c54fc00bd32a5a1ce64b6d384a5fce109df1e9aee9be1052c1e5cd8120"
TEST_ADMIN_PUBKEY="2ef05348f28d24e0f0ed0751278442c27b62c823c37af8d8d89d8592c6ee84e7"

TEST_USER1_PRIVKEY="5c0c523f52a5b6fad39ed2403092df8cebc36318b39383bca6c00808626fab3a"
TEST_USER1_PUBKEY="79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798"

TEST_USER2_PRIVKEY="182c3a5e3b7a1b7e4f5c6b7c8b4a5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2"
TEST_USER2_PUBKEY="c95195e5e7de1ad8c4d3c0ac4e8b5c0c4e0c4d3c1e5c8d4c2e7e9f4a5b6c7d8e"

# Test counters
TESTS_PASSED=0
TESTS_FAILED=0
TESTS_TOTAL=0

echo "=== Ginxsom Authentication System Test Suite ==="
echo "Testing unified nostr_core_lib authentication integration"
echo "Timestamp: $(date -Iseconds)"
echo
# Helper functions
test_pass() {
    local test_name="$1"
    ((TESTS_PASSED++))
    ((TESTS_TOTAL++))
    echo "✓ $test_name"
}

test_fail() {
    local test_name="$1"
    local reason="$2"
    ((TESTS_FAILED++))
    ((TESTS_TOTAL++))
    echo "✗ $test_name: $reason"
}

# Check prerequisites
echo "[INFO] Checking prerequisites..."
for cmd in nak curl jq sqlite3; do
    if ! command -v "$cmd" &> /dev/null; then
        echo "[ERROR] $cmd command not found"
        exit 1
    fi
done
# Check if server is running
if ! curl -s -f "${SERVER_URL}/" > /dev/null 2>&1; then
    echo "[ERROR] Server not running at $SERVER_URL"
    echo "[INFO] Start with: ./restart-all.sh"
    exit 1
fi

# Check if database exists
if [[ ! -f "$DB_PATH" ]]; then
    echo "[ERROR] Database not found at $DB_PATH"
    exit 1
fi

echo "[SUCCESS] All prerequisites met"

# Setup test environment
echo "[INFO] Setting up test environment..."
mkdir -p "$TEST_DIR"

# Enable authentication rules in database
sqlite3 "$DB_PATH" "INSERT OR REPLACE INTO auth_config (key, value) VALUES ('auth_rules_enabled', 'true');"

# Clear any existing test rules (escape the underscore so LIKE matches it literally)
sqlite3 "$DB_PATH" "DELETE FROM auth_rules WHERE description LIKE 'TEST\_%' ESCAPE '\';" 2>/dev/null || true
sqlite3 "$DB_PATH" "DELETE FROM auth_cache;" 2>/dev/null || true

echo "[SUCCESS] Test environment ready"
# Generate test file
create_test_file() {
    local filename="$1"
    local size="${2:-1024}"
    local filepath="$TEST_DIR/$filename"

    if [[ $size -lt 100 ]]; then
        echo "Small test file $(date)" > "$filepath"
    else
        echo "Test file: $filename" > "$filepath"
        echo "Created: $(date -Iseconds)" >> "$filepath"
        echo "Size target: $size bytes" >> "$filepath"
        dd if=/dev/urandom bs=1 count=$((size - 200)) 2>/dev/null | base64 >> "$filepath"
    fi

    echo "$filepath"
}
# Generate nostr event for authentication
create_auth_event() {
    local privkey="$1"
    local operation="$2"
    local hash="$3"
    local expiration_offset="${4:-3600}"  # 1 hour default

    local expiration=$(date -d "+${expiration_offset} seconds" +%s)

    local event_args=(-k 24242 -c "" --tag "t=$operation" --tag "expiration=$expiration" --sec "$privkey")

    if [[ -n "$hash" ]]; then
        event_args+=(--tag "x=$hash")
    fi

    nak event "${event_args[@]}"
}
# Test authenticated upload
test_authenticated_upload() {
    local privkey="$1"
    local operation="$2"
    local file_path="$3"
    local expected_status="$4"
    local test_description="$5"

    echo "[TEST] $test_description"

    local file_hash=$(sha256sum "$file_path" | cut -d' ' -f1)
    local event=$(create_auth_event "$privkey" "$operation" "$file_hash")
    local auth_header="Nostr $(echo "$event" | base64 -w 0)"

    local mime_type=$(file -b --mime-type "$file_path" 2>/dev/null || echo "application/octet-stream")

    local response_file=$(mktemp)
    local http_status=$(curl -s -w "%{http_code}" \
        -H "Authorization: $auth_header" \
        -H "Content-Type: $mime_type" \
        --data-binary "@$file_path" \
        -X PUT "$UPLOAD_ENDPOINT" \
        -o "$response_file")

    local response_body=$(cat "$response_file")
    rm -f "$response_file"

    if [[ "$http_status" == "$expected_status" ]]; then
        test_pass "$test_description (HTTP $http_status)"
        return 0
    else
        test_fail "$test_description" "Expected HTTP $expected_status, got $http_status. Response: $response_body"
        return 1
    fi
}
# Add auth rule to database
add_auth_rule() {
    local rule_type="$1"
    local target="$2"
    local operation="${3:-*}"
    local priority="${4:-100}"
    local description="${5:-TEST_RULE}"

    sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, enabled, description)
                        VALUES ('$rule_type', '$target', '$operation', $priority, 1, '$description');"
}

# Clear test rules (escape the underscore so LIKE matches it literally)
clear_auth_rules() {
    sqlite3 "$DB_PATH" "DELETE FROM auth_rules WHERE description LIKE 'TEST\_%' ESCAPE '\';" 2>/dev/null || true
    sqlite3 "$DB_PATH" "DELETE FROM auth_cache;" 2>/dev/null || true
}
echo
echo "=== Test 1: Basic Authentication (Disabled) ==="

# Disable auth rules temporarily
sqlite3 "$DB_PATH" "INSERT OR REPLACE INTO auth_config (key, value) VALUES ('auth_rules_enabled', 'false');"

test_file1=$(create_test_file "basic_test.txt" 500)
test_authenticated_upload "$TEST_USER1_PRIVKEY" "upload" "$test_file1" "200" "Upload without auth rules (disabled)"

# Re-enable auth rules for other tests
sqlite3 "$DB_PATH" "INSERT OR REPLACE INTO auth_config (key, value) VALUES ('auth_rules_enabled', 'true');"

echo
echo "=== Test 2: Pubkey Whitelist Rules ==="

clear_auth_rules
add_auth_rule "pubkey_whitelist" "$TEST_USER1_PUBKEY" "upload" 10 "TEST_WHITELIST_USER1"

test_file2=$(create_test_file "whitelist_test.txt" 500)
test_authenticated_upload "$TEST_USER1_PRIVKEY" "upload" "$test_file2" "200" "Whitelisted user upload"
test_authenticated_upload "$TEST_USER2_PRIVKEY" "upload" "$test_file2" "403" "Non-whitelisted user upload"

echo
echo "=== Test 3: Pubkey Blacklist Rules ==="

clear_auth_rules
add_auth_rule "pubkey_blacklist" "$TEST_USER2_PUBKEY" "upload" 5 "TEST_BLACKLIST_USER2"

test_file3=$(create_test_file "blacklist_test.txt" 500)
test_authenticated_upload "$TEST_USER1_PRIVKEY" "upload" "$test_file3" "200" "Non-blacklisted user upload"
test_authenticated_upload "$TEST_USER2_PRIVKEY" "upload" "$test_file3" "403" "Blacklisted user upload"

echo
echo "=== Test 4: Hash Blacklist Rules ==="

clear_auth_rules
test_file4=$(create_test_file "hash_blacklist_test.txt" 500)
file_hash4=$(sha256sum "$test_file4" | cut -d' ' -f1)

# Add hash to blacklist
add_auth_rule "hash_blacklist" "$file_hash4" "upload" 5 "TEST_HASH_BLACKLIST"

test_authenticated_upload "$TEST_USER1_PRIVKEY" "upload" "$test_file4" "403" "Blacklisted hash upload"

# Upload of a different hash should succeed
test_file4b=$(create_test_file "hash_allowed_test.txt" 600)
test_authenticated_upload "$TEST_USER1_PRIVKEY" "upload" "$test_file4b" "200" "Non-blacklisted hash upload"
echo
echo "=== Test 5: Rule Priority Ordering ==="

clear_auth_rules

# Add conflicting rules with different priorities
add_auth_rule "pubkey_blacklist" "$TEST_USER1_PUBKEY" "upload" 5 "TEST_PRIORITY_BLACKLIST"    # Higher priority (lower number)
add_auth_rule "pubkey_whitelist" "$TEST_USER1_PUBKEY" "upload" 10 "TEST_PRIORITY_WHITELIST"   # Lower priority

test_file5=$(create_test_file "priority_test.txt" 500)
test_authenticated_upload "$TEST_USER1_PRIVKEY" "upload" "$test_file5" "403" "Priority test (blacklist > whitelist)"

# Reverse priorities
clear_auth_rules
add_auth_rule "pubkey_whitelist" "$TEST_USER1_PUBKEY" "upload" 5 "TEST_PRIORITY_WHITELIST_HIGH"
add_auth_rule "pubkey_blacklist" "$TEST_USER1_PUBKEY" "upload" 10 "TEST_PRIORITY_BLACKLIST_LOW"

test_authenticated_upload "$TEST_USER1_PRIVKEY" "upload" "$test_file5" "200" "Priority test (whitelist > blacklist)"

echo
echo "=== Cleanup ==="

# Remove temporary files
rm -rf "$TEST_DIR"

# Clean up test auth rules
clear_auth_rules

echo
echo "=== Test Results Summary ==="
echo "Tests Passed: $TESTS_PASSED"
echo "Tests Failed: $TESTS_FAILED"
echo "Total Tests: $TESTS_TOTAL"
echo

if [[ $TESTS_FAILED -eq 0 ]]; then
    echo "[SUCCESS] All tests passed! ✓"
    exit 0
else
    echo "[ERROR] $TESTS_FAILED tests failed! ✗"
    exit 1
fi
1088 WEB_ADMIN_SPECIFICATION.md Normal file
File diff suppressed because it is too large
387 admin_specification.md Normal file
@@ -0,0 +1,387 @@
# Ginxsom Admin System - Comprehensive Specification

## Overview

The Ginxsom admin system provides both programmatic (API-based) and interactive (web-based) administration capabilities for the Ginxsom Blossom server. The system is designed around Nostr-based authentication and supports multiple administration workflows, including first-run setup, ongoing configuration management, and operational monitoring.

## Architecture Components

### 1. Configuration System
- **File-based configuration**: Signed Nostr events stored as JSON files following the XDG Base Directory specification
- **Database configuration**: Key-value pairs stored in SQLite for runtime configuration
- **Interactive setup**: Command-line wizard for initial server configuration
- **Manual setup**: Scripts for generating signed configuration events

### 2. Authentication & Authorization
- **Nostr-based auth**: All admin operations require valid Nostr event signatures
- **Admin pubkey verification**: Only configured admin public keys can perform admin operations
- **Event validation**: Full cryptographic verification of Nostr events, including structure, signature, and expiration
- **Method-specific authorization**: Different event types for different operations (upload, admin, delete, etc.)

### 3. API System
- **RESTful endpoints**: `/api/*` routes for programmatic administration
- **Command-line testing**: Complete test suite using `nak` and `curl`
- **JSON responses**: Structured data for all admin operations
- **CORS support**: Cross-origin requests for the web admin interface

### 4. Web Interface (Future)
- **Single-page application**: Self-contained HTML file with inline CSS/JS
- **Real-time monitoring**: Statistics and system health dashboards
- **Configuration management**: GUI for server settings
- **File management**: Browse and manage uploaded blobs

## Configuration System Architecture

### File-based Configuration (Priority 1)

**Location**: Follows the XDG Base Directory Specification
- `$XDG_CONFIG_HOME/ginxsom/ginxsom_config_event.json`
- Falls back to `$HOME/.config/ginxsom/ginxsom_config_event.json`

**Format**: Signed Nostr event containing server configuration
```json
{
  "kind": 33333,
  "created_at": 1704067200,
  "tags": [
    ["server_privkey", "server_private_key_hex"],
    ["cdn_origin", "https://cdn.example.com"],
    ["max_file_size", "104857600"],
    ["nip94_enabled", "true"]
  ],
  "content": "Ginxsom server configuration",
  "pubkey": "admin_public_key_hex",
  "id": "event_id_hash",
  "sig": "event_signature"
}
```

**Loading Process**:
1. Check for file-based config at the XDG location
2. Validate the Nostr event structure and signature
3. Extract configuration from event tags
4. Apply settings to the server (database storage)
5. Fall back to database-only config if the file is missing or invalid

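Since the config file is an ordinary JSON document, its shape can be checked with `jq` before the server ever loads it (signature verification itself is the server's job). A sketch of such a pre-flight check; the file here is a throwaway stand-in so it runs anywhere, and the `kind` value 33333 is taken from this spec:

```bash
# Real location per this spec:
#   CONFIG="${XDG_CONFIG_HOME:-$HOME/.config}/ginxsom/ginxsom_config_event.json"
# Stand-in file so the sketch is self-contained:
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
{"kind":33333,"tags":[["cdn_origin","https://cdn.example.com"],["max_file_size","104857600"]],"sig":"event_signature"}
EOF

# Shape checks: the expected kind, and the tag we need is present
kind=$(jq -r '.kind' "$CONFIG")
origin=$(jq -r '.tags[] | select(.[0] == "cdn_origin") | .[1]' "$CONFIG")
echo "kind=$kind cdn_origin=$origin"
rm -f "$CONFIG"
```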
### Database Configuration (Priority 2)

**Table**: `server_config`
```sql
CREATE TABLE server_config (
    key TEXT PRIMARY KEY,
    value TEXT NOT NULL,
    description TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```

**Key Configuration Items**:
- `admin_pubkey`: Authorized admin public key
- `admin_enabled`: Enable/disable admin interface
- `cdn_origin`: Base URL for blob access
- `max_file_size`: Maximum upload size in bytes
- `nip94_enabled`: Enable NIP-94 metadata emission
- `auth_rules_enabled`: Enable authentication rules system

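Because database config is a plain key-value table, reading or changing a setting from the shell is a single `sqlite3` call. A sketch against a throwaway database (table definition copied from above; the real database lives at `db/ginxsom.db`):

```bash
# Throwaway database so the sketch is self-contained
DB=$(mktemp)
sqlite3 "$DB" "CREATE TABLE server_config (key TEXT PRIMARY KEY, value TEXT NOT NULL, description TEXT);"

# Write one setting, then read it back
sqlite3 "$DB" "INSERT OR REPLACE INTO server_config (key, value) VALUES ('max_file_size', '104857600');"
size=$(sqlite3 "$DB" "SELECT value FROM server_config WHERE key = 'max_file_size';")
echo "max_file_size=$size"
rm -f "$DB"
```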
### Setup Workflows

#### Interactive Setup (Command Line)
```bash
# First-run detection
if [[ ! -f "$XDG_CONFIG_HOME/ginxsom/ginxsom_config_event.json" ]]; then
    echo "=== Ginxsom First-Time Setup Required ==="
    echo "1. Run interactive setup wizard"
    echo "2. Exit and create config manually"
    read -p "Choice (1/2): " choice

    if [[ "$choice" == "1" ]]; then
        ./scripts/setup.sh
    else
        echo "Manual setup: Run ./scripts/generate_config.sh"
        exit 1
    fi
fi
```

#### Manual Setup (Script-based)
```bash
# Generate configuration event
./scripts/generate_config.sh --admin-key <admin_pubkey> \
    --server-key <server_privkey> \
    --cdn-origin "https://cdn.example.com" \
    --output "$XDG_CONFIG_HOME/ginxsom/ginxsom_config_event.json"
```

### C Implementation Functions

#### Configuration Loading
```c
// Get XDG-compliant config file path
int get_config_file_path(char* path, size_t path_size);

// Load and validate config event from file
int load_server_config(const char* config_path);

// Extract config from validated event and apply to server
int apply_config_from_event(cJSON* event);

// Interactive setup runner for first-run
int run_interactive_setup(const char* config_path);
```

#### Security Features
- Server private key stored only in memory (never in the database)
- Config file must be a signed Nostr event
- Full cryptographic validation of config events
- Admin pubkey verification for all operations

## Admin API Specification

### Authentication Model

All admin API endpoints (except `/api/health`) require Nostr authentication:

**Authorization Header Format**:
```
Authorization: Nostr <base64-encoded-event>
```

**Required Event Structure**:
```json
{
  "kind": 24242,
  "created_at": 1704067200,
  "tags": [
    ["t", "GET"],
    ["expiration", "1704070800"]
  ],
  "content": "admin_request",
  "pubkey": "admin_public_key",
  "id": "event_id",
  "sig": "event_signature"
}
```

### API Endpoints

#### GET /api/health
**Purpose**: System health check (no authentication required)
**Response**:
```json
{
  "status": "success",
  "data": {
    "database": "connected",
    "blob_directory": "accessible",
    "server_time": 1704067200,
    "uptime": 3600,
    "disk_usage": {
      "total_bytes": 1073741824,
      "used_bytes": 536870912,
      "available_bytes": 536870912,
      "usage_percent": 50.0
    }
  }
}
```

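Because `/api/health` is unauthenticated, it is the natural probe for monitoring. A sketch that evaluates a canned response of the shape above; a live check would swap the literal for `curl -sf http://localhost:9001/api/health`, and the 90% disk threshold is an arbitrary choice for illustration:

```bash
# Canned health response shaped like the example above
HEALTH='{"status":"success","data":{"database":"connected","disk_usage":{"usage_percent":50.0}}}'

db_state=$(echo "$HEALTH" | jq -r '.data.database')
disk_pct=$(echo "$HEALTH" | jq -r '.data.disk_usage.usage_percent')

# Alert when the database is down or the disk is above 90%
# (integer compare on the whole part of the percentage)
if [ "$db_state" != "connected" ]; then
  state="alert: database $db_state"
elif [ "${disk_pct%.*}" -gt 90 ]; then
  state="alert: disk ${disk_pct}% full"
else
  state="healthy"
fi
echo "$state"
```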
#### GET /api/stats
**Purpose**: Server statistics and metrics
**Authentication**: Required (admin pubkey)
**Response**:
```json
{
  "status": "success",
  "data": {
    "total_files": 1234,
    "total_bytes": 104857600,
    "total_size_mb": 100.0,
    "unique_uploaders": 56,
    "first_upload": 1693929600,
    "last_upload": 1704067200,
    "avg_file_size": 85049,
    "file_types": {
      "image/png": 45,
      "image/jpeg": 32,
      "application/pdf": 12,
      "other": 8
    }
  }
}
```

#### GET /api/config
**Purpose**: Retrieve current server configuration
**Authentication**: Required (admin pubkey)
**Response**:
```json
{
  "status": "success",
  "data": {
    "cdn_origin": "http://localhost:9001",
    "max_file_size": "104857600",
    "nip94_enabled": "true",
    "auth_rules_enabled": "false",
    "auth_cache_ttl": "300"
  }
}
```

#### PUT /api/config
**Purpose**: Update server configuration
**Authentication**: Required (admin pubkey)
**Request Body**:
```json
{
  "max_file_size": "209715200",
  "nip94_enabled": "true",
  "cdn_origin": "https://cdn.example.com"
}
```
**Response**:
```json
{
  "status": "success",
  "message": "Configuration updated successfully",
  "updated_keys": ["max_file_size", "cdn_origin"]
}
```

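Note that in the example above `nip94_enabled` was submitted but does not appear in `updated_keys`, so the array seemingly reports only the keys whose stored value actually changed. A sketch that checks a response of this shape for one key:

```bash
# Response shaped like the example above
RESP='{"status":"success","message":"Configuration updated successfully","updated_keys":["max_file_size","cdn_origin"]}'

# Did max_file_size actually change? jq -e exits 0 when index() finds the key
if echo "$RESP" | jq -e '.updated_keys | index("max_file_size")' > /dev/null; then
  changed="yes"
else
  changed="no"
fi
echo "max_file_size changed: $changed"
```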
#### GET /api/files
**Purpose**: List recent files with pagination
**Authentication**: Required (admin pubkey)
**Parameters**:
- `limit` (default: 50): Number of files to return
- `offset` (default: 0): Pagination offset

**Response**:
```json
{
  "status": "success",
  "data": {
    "files": [
      {
        "sha256": "b1674191a88ec5cdd733e4240a81803105dc412d6c6708d53ab94fc248f4f553",
        "size": 184292,
        "type": "application/pdf",
        "uploaded_at": 1725105921,
        "uploader_pubkey": "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798",
        "filename": "document.pdf",
        "url": "http://localhost:9001/b1674191a88ec5cdd733e4240a81803105dc412d6c6708d53ab94fc248f4f553.pdf"
      }
    ],
    "total": 1234,
    "limit": 50,
    "offset": 0
  }
}
```

## Implementation Status

### ✅ Completed Components
1. **Database-based configuration loading** - Implemented in main.c
2. **Admin API authentication system** - Implemented in admin_api.c
3. **Nostr event validation** - Full cryptographic verification
4. **Admin pubkey verification** - Database-backed authorization
5. **Basic API endpoints** - Health, stats, config, files

### ✅ Recently Completed Components
1. **File-based configuration system** - Fully implemented in main.c with XDG compliance
2. **Interactive setup wizard** - Complete shell script with guided setup process (`scripts/setup.sh`)
3. **Manual config generation** - Full-featured command-line config generator (`scripts/generate_config.sh`)
4. **Testing infrastructure** - Comprehensive admin API test suite (`scripts/test_admin.sh`)
5. **Documentation system** - Complete setup and usage documentation (`scripts/README.md`)

### 📋 Planned Components
1. **Web admin interface** - Single-page HTML application
2. **Enhanced monitoring** - Real-time statistics dashboard
3. **Bulk operations** - Multi-file management APIs
4. **Configuration validation** - Advanced config checking
5. **Audit logging** - Admin action tracking

## Setup Instructions

### 1. Enable Admin Interface
```bash
# Configure admin pubkey and enable interface
sqlite3 db/ginxsom.db << EOF
INSERT OR REPLACE INTO server_config (key, value, description) VALUES
('admin_pubkey', 'your_admin_public_key_here', 'Authorized admin public key'),
('admin_enabled', 'true', 'Enable admin interface');
EOF
```

### 2. Test API Access
```bash
# Generate admin authentication event
ADMIN_PRIVKEY="your_admin_private_key"
EVENT=$(nak event -k 24242 -c "admin_request" \
  --tag t="GET" \
  --tag expiration="$(date -d '+1 hour' +%s)" \
  --sec "$ADMIN_PRIVKEY")

# Test admin API
AUTH_HEADER="Nostr $(echo "$EVENT" | base64 -w 0)"
curl -H "Authorization: $AUTH_HEADER" http://localhost:9001/api/stats
```

### 3. Configure File-based Setup (Future)
```bash
# Create XDG config directory
mkdir -p "$XDG_CONFIG_HOME/ginxsom"

# Generate signed config event
./scripts/generate_config.sh \
  --admin-key "your_admin_pubkey" \
  --server-key "generated_server_privkey" \
  --output "$XDG_CONFIG_HOME/ginxsom/ginxsom_config_event.json"
```

## Security Considerations

### Authentication Security
- **Event expiration**: All admin events must include expiration timestamps
- **Signature validation**: Full secp256k1 cryptographic verification
- **Replay protection**: Event IDs tracked to prevent reuse
- **Admin key rotation**: Support for updating admin pubkeys

### Configuration Security
- **File permissions**: Config files should be readable only by the server user
- **Private key handling**: Server private keys never stored in the database
- **Config validation**: All configuration changes validated before application
- **Backup verification**: Config events are cryptographically verifiable

### Operational Security
- **Access logging**: All admin operations logged with timestamps
- **Rate limiting**: API endpoints protected against abuse
- **Input validation**: All user input sanitized and validated
- **Database security**: Prepared statements prevent SQL injection

## Future Enhancements

### 1. Web Admin Interface
- Self-contained HTML file with inline CSS/JavaScript
- Real-time monitoring dashboards
- Visual configuration management
- File upload/management interface

### 2. Advanced Monitoring
- Performance metrics collection
- Alert system for critical events
- Historical data trending
- Resource usage tracking

### 3. Multi-admin Support
- Multiple authorized admin pubkeys
- Role-based permissions (read-only vs full admin)
- Admin action audit trails
- Delegation capabilities

### 4. Integration Features
- Nostr relay integration for admin events
- Webhook notifications for admin actions
- External authentication providers
- API key management for programmatic access

This specification represents the current understanding and planned development of the Ginxsom admin system, focusing on security, usability, and maintainability.
BIN build/admin_api.o Normal file
Binary file not shown.
BIN build/main.o
Binary file not shown.
@@ -188,6 +188,54 @@ http {
            fastcgi_pass fastcgi_backend;
        }

        # Admin API endpoints (/api/*)
        location /api/ {
            # Handle preflight OPTIONS requests for CORS
            if ($request_method = OPTIONS) {
                add_header Access-Control-Allow-Origin *;
                add_header Access-Control-Allow-Methods "GET, PUT, OPTIONS";
                add_header Access-Control-Allow-Headers "Content-Type, Authorization";
                add_header Access-Control-Max-Age 86400;
                return 204;
            }

            if ($request_method !~ ^(GET|PUT)$) {
                return 405;
            }

            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            fastcgi_param REQUEST_URI $request_uri;
            fastcgi_param DOCUMENT_URI $document_uri;
            fastcgi_param DOCUMENT_ROOT $document_root;
            fastcgi_param SERVER_PROTOCOL $server_protocol;
            fastcgi_param REQUEST_SCHEME $scheme;
            fastcgi_param HTTPS $https if_not_empty;
            fastcgi_param GATEWAY_INTERFACE CGI/1.1;
            fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
            fastcgi_param REMOTE_ADDR $remote_addr;
            fastcgi_param REMOTE_PORT $remote_port;
            fastcgi_param SERVER_ADDR $server_addr;
            fastcgi_param SERVER_PORT $server_port;
            fastcgi_param SERVER_NAME $server_name;
            fastcgi_param REDIRECT_STATUS 200;
            fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;

            # Support for Authorization header
            fastcgi_param HTTP_AUTHORIZATION $http_authorization;

            fastcgi_pass fastcgi_backend;

            # CORS headers for API access
            add_header Access-Control-Allow-Origin *;
            add_header Access-Control-Allow-Methods "GET, PUT, OPTIONS";
            add_header Access-Control-Allow-Headers "Content-Type, Authorization";
            add_header Access-Control-Max-Age 86400;
        }

        # 2. BLOB OPERATIONS (SHA256 patterns)

        # GET/HEAD/DELETE /<sha256> (BUD-01) - Blob operations with optional file extensions
BIN db/ginxsom.db
Binary file not shown.
BIN db/ginxsom.db.backup.1756994126 Normal file
Binary file not shown.
132 logs/access.log
@@ -258,3 +258,135 @@
127.0.0.1 - - [03/Sep/2025:15:17:48 -0400] "PUT /report HTTP/1.1" 415 150 "-" "curl/8.15.0"
127.0.0.1 - - [03/Sep/2025:15:17:48 -0400] "PUT /report HTTP/1.1" 200 93 "-" "curl/8.15.0"
127.0.0.1 - - [03/Sep/2025:15:17:48 -0400] "PUT /report HTTP/1.1" 200 93 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:09:50:20 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:09:50:48 -0400] "PUT /upload HTTP/1.1" 500 41 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:09:51:51 -0400] "PUT /upload HTTP/1.1" 500 41 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:09:52:08 -0400] "PUT /upload HTTP/1.1" 500 41 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:09:55:49 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:09:56:34 -0400] "GET /api/health HTTP/1.1" 503 95 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:09:56:34 -0400] "GET /api/stats HTTP/1.1" 503 95 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:09:57:29 -0400] "GET /api/health HTTP/1.1" 200 264 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:09:57:56 -0400] "GET /api/health HTTP/1.1" 200 266 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:09:57:56 -0400] "GET /api/stats HTTP/1.1" 401 108 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:09:58:20 -0400] "GET /api/stats HTTP/1.1" 401 108 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:00:29 -0400] "GET /api/stats HTTP/1.1" 401 108 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:00:44 -0400] "GET /api/health HTTP/1.1" 200 266 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:01:20 -0400] "GET /api/stats HTTP/1.1" 200 233 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:01:57 -0400] "GET /api/config HTTP/1.1" 200 221 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:02:31 -0400] "GET /api/health HTTP/1.1" 200 266 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:02:31 -0400] "GET /api/stats HTTP/1.1" 401 108 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:09:14 -0400] "GET /api/health HTTP/1.1" 200 264 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:09:15 -0400] "GET /api/stats HTTP/1.1" 200 233 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:09:15 -0400] "GET /api/config HTTP/1.1" 200 221 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:09:15 -0400] "PUT /api/config HTTP/1.1" 200 143 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:09:16 -0400] "GET /api/config HTTP/1.1" 200 243 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:09:16 -0400] "GET /api/files?limit=10&offset=0 HTTP/1.1" 200 440 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:17:36 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:17:42 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:17:50 -0400] "GET /api/health HTTP/1.1" 200 266 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:17:51 -0400] "GET /api/stats HTTP/1.1" 200 233 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:17:51 -0400] "GET /api/config HTTP/1.1" 200 243 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:17:51 -0400] "PUT /api/config HTTP/1.1" 200 143 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:17:52 -0400] "GET /api/config HTTP/1.1" 200 243 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:17:52 -0400] "GET /api/files?limit=10&offset=0 HTTP/1.1" 200 1152 "-" "curl/8.15.0"
127.0.0.1 - - [04/Sep/2025:10:24:48 -0400] "GET /api/health HTTP/1.1" 200 266 "-" "curl/8.15.0"
127.0.0.1 - - [07/Sep/2025:09:39:54 -0400] "HEAD / HTTP/1.1" 200 0 "-" "curl/8.15.0"
127.0.0.1 - - [07/Sep/2025:09:40:01 -0400] "HEAD /upload HTTP/1.1" 400 0 "-" "curl/8.15.0"
127.0.0.1 - - [07/Sep/2025:09:40:52 -0400] "GET /list/79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798 HTTP/1.1" 200 945 "-" "curl/8.15.0"
127.0.0.1 - - [07/Sep/2025:09:41:11 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
127.0.0.1 - - [07/Sep/2025:10:16:43 -0400] "HEAD / HTTP/1.1" 200 0 "-" "curl/8.15.0"
127.0.0.1 - - [07/Sep/2025:10:16:51 -0400] "GET / HTTP/1.1" 200 101 "-" "curl/8.15.0"
127.0.0.1 - - [07/Sep/2025:10:16:51 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
127.0.0.1 - - [07/Sep/2025:10:17:07 -0400] "GET / HTTP/1.1" 200 101 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:17:44 -0400] "PUT /upload HTTP/1.1" 200 510 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:18:53 -0400] "GET / HTTP/1.1" 200 101 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:18:53 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:19:32 -0400] "GET / HTTP/1.1" 200 101 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:19:33 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:19:50 -0400] "GET / HTTP/1.1" 200 101 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:19:51 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:21:29 -0400] "PUT /upload HTTP/1.1" 200 510 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:21:59 -0400] "GET / HTTP/1.1" 200 101 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:22:00 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:22:41 -0400] "PUT /upload HTTP/1.1" 200 510 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:22:42 -0400] "PUT /upload HTTP/1.1" 200 510 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:25:01 -0400] "GET / HTTP/1.1" 200 101 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:25:01 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:27:17 -0400] "GET / HTTP/1.1" 200 101 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:27:17 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:27:17 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:27:18 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:27:18 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:27:18 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:27:19 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:27:19 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:27:19 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:27:20 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:29:14 -0400] "GET / HTTP/1.1" 200 101 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:29:15 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:29:15 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:29:15 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:29:16 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:29:16 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:29:16 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:29:16 -0400] "PUT /upload HTTP/1.1" 200 512 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:29:17 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:29:17 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:37:34 -0400] "GET / HTTP/1.1" 200 101 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:37:34 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:37:35 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:37:35 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:37:36 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:37:36 -0400] "PUT /upload HTTP/1.1" 200 510 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:42:53 -0400] "GET / HTTP/1.1" 200 101 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:42:53 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:42:54 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:42:54 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:42:55 -0400] "PUT /upload HTTP/1.1" 401 176 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:42:55 -0400] "PUT /upload HTTP/1.1" 200 510 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:47:53 -0400] "GET / HTTP/1.1" 200 101 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:48:51 -0400] "GET / HTTP/1.1" 200 101 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:48:52 -0400] "PUT /upload HTTP/1.1" 401 168 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:48:52 -0400] "PUT /upload HTTP/1.1" 401 168 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:48:53 -0400] "PUT /upload HTTP/1.1" 401 149 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:48:54 -0400] "PUT /upload HTTP/1.1" 401 168 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:48:54 -0400] "PUT /upload HTTP/1.1" 200 510 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:53:56 -0400] "GET / HTTP/1.1" 200 101 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:16 -0400] "GET / HTTP/1.1" 200 101 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:16 -0400] "PUT /upload HTTP/1.1" 401 168 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:17 -0400] "PUT /upload HTTP/1.1" 401 168 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:17 -0400] "PUT /upload HTTP/1.1" 401 149 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:18 -0400] "PUT /upload HTTP/1.1" 401 168 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:18 -0400] "PUT /upload HTTP/1.1" 200 510 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:18 -0400] "PUT /upload HTTP/1.1" 200 510 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:18 -0400] "PUT /upload HTTP/1.1" 401 141 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:19 -0400] "PUT /upload HTTP/1.1" 401 141 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:19 -0400] "PUT /upload HTTP/1.1" 401 134 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:19 -0400] "PUT /upload HTTP/1.1" 401 134 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:19 -0400] "PUT /upload HTTP/1.1" 401 134 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:19 -0400] "PUT /upload HTTP/1.1" 401 134 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:19 -0400] "PUT /upload HTTP/1.1" 401 144 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:19 -0400] "PUT /upload HTTP/1.1" 401 144 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:20 -0400] "PUT /upload HTTP/1.1" 401 144 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:20 -0400] "PUT /upload HTTP/1.1" 401 144 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:20 -0400] "PUT /upload HTTP/1.1" 401 144 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:54:21 -0400] "PUT /upload HTTP/1.1" 401 134 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:31 -0400] "GET / HTTP/1.1" 200 101 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:32 -0400] "PUT /upload HTTP/1.1" 401 168 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:32 -0400] "PUT /upload HTTP/1.1" 401 168 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:33 -0400] "PUT /upload HTTP/1.1" 401 149 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:34 -0400] "PUT /upload HTTP/1.1" 401 168 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:34 -0400] "PUT /upload HTTP/1.1" 200 510 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:34 -0400] "PUT /upload HTTP/1.1" 200 510 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:34 -0400] "PUT /upload HTTP/1.1" 401 141 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:34 -0400] "PUT /upload HTTP/1.1" 401 141 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:34 -0400] "PUT /upload HTTP/1.1" 401 134 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:35 -0400] "PUT /upload HTTP/1.1" 401 134 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:35 -0400] "PUT /upload HTTP/1.1" 401 134 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:35 -0400] "PUT /upload HTTP/1.1" 401 134 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:35 -0400] "PUT /upload HTTP/1.1" 401 144 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:35 -0400] "PUT /upload HTTP/1.1" 401 144 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:36 -0400] "PUT /upload HTTP/1.1" 401 144 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:36 -0400] "PUT /upload HTTP/1.1" 401 144 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:36 -0400] "PUT /upload HTTP/1.1" 401 144 "-" "curl/8.15.0"
|
||||
127.0.0.1 - - [07/Sep/2025:10:55:37 -0400] "PUT /upload HTTP/1.1" 401 134 "-" "curl/8.15.0"
|
||||
40136 logs/error.log
File diff suppressed because it is too large

@@ -1 +1 @@
-FastCGI starting at Wed Sep 3 03:09:53 PM EDT 2025
+FastCGI starting at Sun Sep 7 10:47:32 AM EDT 2025

@@ -1 +1 @@
-253420
+1241997
260 scripts/README.md (Normal file)
@@ -0,0 +1,260 @@
# Ginxsom Admin Scripts

This directory contains scripts for managing and testing the Ginxsom admin system.

## Scripts Overview

### 1. setup.sh - Interactive Setup Wizard
**Purpose**: Complete first-time setup with guided configuration

**Usage**:
```bash
./scripts/setup.sh [config_path]
```

**Features**:
- Generates admin and server key pairs
- Collects server configuration interactively
- Creates signed Nostr configuration event
- Sets up database configuration
- Provides setup completion summary

**Dependencies**: `nak`, `jq`, `sqlite3`

### 2. generate_config.sh - Manual Configuration Generator
**Purpose**: Create signed configuration events with command-line options

**Usage**:
```bash
./scripts/generate_config.sh [OPTIONS]
```

**Common Examples**:
```bash
# Generate new keys and create config
./scripts/generate_config.sh --generate-keys --output config.json

# Use existing keys
./scripts/generate_config.sh \
  --admin-key abc123... \
  --server-key def456... \
  --output config.json

# Production server configuration
./scripts/generate_config.sh \
  --admin-key abc123... \
  --server-key def456... \
  --cdn-origin https://cdn.example.com \
  --max-size 209715200 \
  --enable-auth-rules \
  --output /etc/ginxsom/config.json
```

**Options**:
- `--admin-key KEY`: Admin private key (64 hex chars)
- `--server-key KEY`: Server private key (64 hex chars)
- `--cdn-origin URL`: CDN origin URL
- `--max-size BYTES`: Maximum file size
- `--enable-nip94` / `--disable-nip94`: NIP-94 metadata
- `--enable-auth-rules` / `--disable-auth-rules`: Authentication rules
- `--cache-ttl SECONDS`: Auth cache TTL
- `--output FILE`: Output file path
- `--generate-keys`: Generate new key pairs
- `--help`: Show detailed help

**Dependencies**: `nak`, `jq`

### 3. test_admin.sh - Admin API Test Suite
**Purpose**: Test admin API endpoints with Nostr authentication

**Usage**:
```bash
# Load keys from .admin_keys file (created by setup.sh)
./scripts/test_admin.sh

# Use environment variable
export ADMIN_PRIVKEY="your_admin_private_key"
./scripts/test_admin.sh
```

**Tests**:
- Health endpoint (no auth)
- Statistics endpoint
- Configuration get/put
- Files listing
- Authentication verification
- Database configuration check

**Dependencies**: `nak`, `curl`, `jq`, `sqlite3`

## Quick Start Workflows

### First-Time Setup (Recommended)
```bash
# Run interactive setup wizard
./scripts/setup.sh

# Test the configuration
./scripts/test_admin.sh
```

### Manual Setup for Advanced Users
```bash
# Generate configuration with specific settings
./scripts/generate_config.sh \
  --generate-keys \
  --cdn-origin "https://myserver.com" \
  --max-size 209715200 \
  --enable-auth-rules

# Configure database manually
sqlite3 db/ginxsom.db << EOF
INSERT OR REPLACE INTO server_config (key, value, description) VALUES
('admin_pubkey', 'your_admin_public_key', 'Admin authorized pubkey'),
('admin_enabled', 'true', 'Enable admin interface');
EOF

# Test configuration
./scripts/test_admin.sh
```

### Production Deployment
```bash
# Generate production config with existing keys
./scripts/generate_config.sh \
  --admin-key "$ADMIN_PRIVATE_KEY" \
  --server-key "$SERVER_PRIVATE_KEY" \
  --cdn-origin "https://cdn.production.com" \
  --max-size 536870912 \
  --enable-auth-rules \
  --cache-ttl 1800 \
  --output "/etc/ginxsom/config.json"

# Verify configuration
./scripts/test_admin.sh
```

## Configuration Files

### Generated Files
- **Config file**: `$XDG_CONFIG_HOME/ginxsom/ginxsom_config_event.json` (or `~/.config/ginxsom/`)
- **Admin keys**: `.admin_keys` (created by setup.sh)

### Config File Format
The configuration file is a signed Nostr event (kind 33333) containing server settings:
```json
{
  "kind": 33333,
  "created_at": 1704067200,
  "tags": [
    ["server_privkey", "server_private_key_hex"],
    ["cdn_origin", "https://cdn.example.com"],
    ["max_file_size", "104857600"],
    ["nip94_enabled", "true"],
    ["auth_rules_enabled", "false"],
    ["auth_cache_ttl", "300"]
  ],
  "content": "Ginxsom server configuration",
  "pubkey": "admin_public_key_hex",
  "id": "event_id_hash",
  "sig": "event_signature"
}
```
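Each tag is a `[name, value]` pair, so a single setting can be pulled out of a saved config event with `jq` (already a dependency of these scripts). The inline event below is a hypothetical, truncated example written just for illustration, not a real signed config:

```bash
# Hypothetical minimal config event (unsigned, truncated) for demonstration
cat > /tmp/demo_config.json << 'EOF'
{"kind":33333,"tags":[["cdn_origin","https://cdn.example.com"],["max_file_size","104857600"]]}
EOF

# Select the tag whose name matches, print its value
jq -r '.tags[] | select(.[0] == "cdn_origin") | .[1]' /tmp/demo_config.json
# prints: https://cdn.example.com
```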

## Security Notes

### Key Management
- **Admin private key**: Required for all admin operations
- **Server private key**: Used for server identity, stored in memory only
- **Key storage**: Keep `.admin_keys` file secure (600 permissions)
- **Key rotation**: Generate new keys periodically
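The 600-permission advice can be applied and checked from the shell; this sketch uses a demo filename and GNU coreutils `stat` rather than touching a real `.admin_keys` file:

```bash
# Restrict a key file to owner read/write only, then verify the mode
touch .admin_keys_demo          # demo filename; setup.sh writes .admin_keys
chmod 600 .admin_keys_demo
stat -c '%a' .admin_keys_demo   # prints: 600
```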

### Configuration Security
- Config files contain server private keys - protect with appropriate permissions
- Configuration events are cryptographically signed and verified
- Event expiration prevents replay of old configurations
- Database admin settings override file settings
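The expiration check amounts to comparing the event's `expiration` tag with the current Unix time; a minimal sketch using the generator script's one-year default:

```bash
# Sketch: decide whether a config event is still valid
expiration=$(( $(date +%s) + 8760 * 3600 ))   # example value: now + 1 year
now=$(date +%s)
if [ "$now" -lt "$expiration" ]; then
  echo "config valid"
else
  echo "config expired"
fi
# prints: config valid
```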

### Production Considerations
- Use strong, randomly generated keys
- Set appropriate file permissions (600) on config files
- Use HTTPS for CDN origins
- Enable authentication rules for production deployments
- Rotate keys and update configurations regularly

## Troubleshooting

### Common Issues

**"nak command not found"**
```bash
# Install nak from GitHub
go install github.com/fiatjaf/nak@latest
```

**"Admin authentication failed"**
```bash
# Check admin pubkey in database
sqlite3 db/ginxsom.db "SELECT * FROM server_config WHERE key = 'admin_pubkey';"

# Verify keys match
echo "$ADMIN_PRIVKEY" | nak key public
```

**"Server not responding"**
```bash
# Check if server is running
curl http://localhost:9001/api/health

# Start server if needed
make run
```

**"Invalid configuration event"**
```bash
# Verify event signature
nak verify < config.json

# Check event expiration (each tag is a [name, value] array)
jq -r '.tags[] | select(.[0] == "expiration") | .[1]' config.json
```

### Debug Mode
Most scripts support verbose output for debugging:
```bash
# Enable bash debug mode
bash -x ./scripts/setup.sh

# Check individual functions
source ./scripts/test_admin.sh
check_dependencies
load_admin_keys
```

## Integration with Ginxsom

### Server Integration
The server automatically:
1. Checks for file-based config on startup
2. Falls back to database config if the file is missing
3. Validates Nostr events cryptographically
4. Stores the server private key in memory only
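The file-then-database lookup in steps 1-2 can be sketched in shell; the path below mirrors generate_config.sh's default output location and is an assumption about where the server looks:

```bash
# Sketch: resolve the config source in the order described above
CONFIG_FILE="${XDG_CONFIG_HOME:-$HOME/.config}/ginxsom/ginxsom_config_event.json"
if [ -f "$CONFIG_FILE" ]; then
  echo "using file-based config: $CONFIG_FILE"
else
  echo "falling back to database config"
fi
```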

### API Integration
Admin API endpoints require:
- Valid Nostr event authorization (kind 24242)
- Admin pubkey verification against database
- Event expiration checking
- Method-specific tags ('t' tag with HTTP method)

### Development Integration
For development and testing:
```bash
# Generate development config
./scripts/setup.sh

# Run server
make run

# Test admin API
./scripts/test_admin.sh
```

This script collection provides a complete admin management system for Ginxsom, from initial setup through ongoing administration and testing.
373 scripts/generate_config.sh (Executable file)
@@ -0,0 +1,373 @@
#!/bin/bash

# Ginxsom Manual Configuration Generator
# Creates signed configuration events for server setup

set -e

# Default values
ADMIN_KEY=""
SERVER_KEY=""
CDN_ORIGIN="http://localhost:9001"
MAX_FILE_SIZE="104857600"  # 100MB
NIP94_ENABLED="true"
AUTH_RULES_ENABLED="false"
AUTH_CACHE_TTL="300"
OUTPUT_FILE=""
EXPIRATION_HOURS="8760"  # 1 year

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Helper functions
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[SUCCESS]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_warning() { echo -e "${YELLOW}[WARNING]${NC} $1"; }

# Display usage
usage() {
    cat << EOF
Usage: $0 [OPTIONS]

Generate a signed Ginxsom configuration event.

OPTIONS:
    --admin-key KEY       Admin private key (hex, 64 chars)
    --server-key KEY      Server private key (hex, 64 chars)
    --cdn-origin URL      CDN origin URL (default: http://localhost:9001)
    --max-size BYTES      Maximum file size in bytes (default: 104857600)
    --enable-nip94        Enable NIP-94 metadata (default: true)
    --disable-nip94       Disable NIP-94 metadata
    --enable-auth-rules   Enable authentication rules system
    --disable-auth-rules  Disable authentication rules system (default)
    --cache-ttl SECONDS   Auth cache TTL in seconds (default: 300)
    --expiration HOURS    Config expiration in hours (default: 8760)
    --output FILE         Output file path
    --generate-keys       Generate new admin and server keys
    --help                Show this help

EXAMPLES:
    # Generate keys and create config interactively
    $0 --generate-keys --output config.json

    # Create config with existing keys
    $0 --admin-key abc123... --server-key def456... --output config.json

    # Create config for production server
    $0 --admin-key abc123... --server-key def456... \\
       --cdn-origin https://cdn.example.com \\
       --max-size 209715200 \\
       --enable-auth-rules \\
       --output /etc/ginxsom/config.json

EOF
}

# Parse command line arguments
parse_args() {
    while [[ $# -gt 0 ]]; do
        case $1 in
            --admin-key)
                ADMIN_KEY="$2"
                shift 2
                ;;
            --server-key)
                SERVER_KEY="$2"
                shift 2
                ;;
            --cdn-origin)
                CDN_ORIGIN="$2"
                shift 2
                ;;
            --max-size)
                MAX_FILE_SIZE="$2"
                shift 2
                ;;
            --enable-nip94)
                NIP94_ENABLED="true"
                shift
                ;;
            --disable-nip94)
                NIP94_ENABLED="false"
                shift
                ;;
            --enable-auth-rules)
                AUTH_RULES_ENABLED="true"
                shift
                ;;
            --disable-auth-rules)
                AUTH_RULES_ENABLED="false"
                shift
                ;;
            --cache-ttl)
                AUTH_CACHE_TTL="$2"
                shift 2
                ;;
            --expiration)
                EXPIRATION_HOURS="$2"
                shift 2
                ;;
            --output)
                OUTPUT_FILE="$2"
                shift 2
                ;;
            --generate-keys)
                GENERATE_KEYS="true"
                shift
                ;;
            --help)
                usage
                exit 0
                ;;
            *)
                log_error "Unknown option: $1"
                usage
                exit 1
                ;;
        esac
    done
}

# Check dependencies
check_dependencies() {
    log_info "Checking dependencies..."
    local missing_deps=()

    for cmd in nak jq; do
        if ! command -v "$cmd" &> /dev/null; then
            missing_deps+=("$cmd")
        fi
    done

    if [ ${#missing_deps[@]} -ne 0 ]; then
        log_error "Missing dependencies: ${missing_deps[*]}"
        echo ""
        echo "Please install the missing dependencies:"
        echo "- nak: https://github.com/fiatjaf/nak"
        echo "- jq: sudo apt install jq (or equivalent for your system)"
        exit 1
    fi

    log_success "All dependencies found"
}

# Generate new key pairs
generate_keys() {
    log_info "Generating new key pairs..."

    ADMIN_KEY=$(nak key generate)
    SERVER_KEY=$(nak key generate)

    local admin_pubkey=$(echo "$ADMIN_KEY" | nak key public)
    local server_pubkey=$(echo "$SERVER_KEY" | nak key public)

    log_success "New keys generated:"
    echo "  Admin Private Key:  $ADMIN_KEY"
    echo "  Admin Public Key:   $admin_pubkey"
    echo "  Server Private Key: $SERVER_KEY"
    echo "  Server Public Key:  $server_pubkey"
    echo ""
    log_warning "Save these keys securely!"
}

# Validate keys
validate_keys() {
    log_info "Validating keys..."

    if [ -z "$ADMIN_KEY" ]; then
        log_error "Admin key is required (use --admin-key or --generate-keys)"
        exit 1
    fi

    if [ -z "$SERVER_KEY" ]; then
        log_error "Server key is required (use --server-key or --generate-keys)"
        exit 1
    fi

    # Validate key format (64 hex characters)
    if [[ ! "$ADMIN_KEY" =~ ^[a-fA-F0-9]{64}$ ]]; then
        log_error "Invalid admin key format (must be 64 hex characters)"
        exit 1
    fi

    if [[ ! "$SERVER_KEY" =~ ^[a-fA-F0-9]{64}$ ]]; then
        log_error "Invalid server key format (must be 64 hex characters)"
        exit 1
    fi

    # Test key validity with nak
    local admin_pubkey=$(echo "$ADMIN_KEY" | nak key public 2>/dev/null)
    local server_pubkey=$(echo "$SERVER_KEY" | nak key public 2>/dev/null)

    if [ -z "$admin_pubkey" ]; then
        log_error "Invalid admin private key"
        exit 1
    fi

    if [ -z "$server_pubkey" ]; then
        log_error "Invalid server private key"
        exit 1
    fi

    log_success "Keys validated"
}

# Validate configuration values
validate_config() {
    log_info "Validating configuration..."

    # Validate URL format
    if [[ ! "$CDN_ORIGIN" =~ ^https?:// ]]; then
        log_error "CDN origin must be a valid HTTP/HTTPS URL"
        exit 1
    fi

    # Validate max file size
    if [[ ! "$MAX_FILE_SIZE" =~ ^[0-9]+$ ]]; then
        log_error "Max file size must be a number"
        exit 1
    fi

    if [ "$MAX_FILE_SIZE" -lt 1024 ]; then
        log_error "Max file size must be at least 1024 bytes"
        exit 1
    fi

    # Validate boolean values
    if [[ ! "$NIP94_ENABLED" =~ ^(true|false)$ ]]; then
        log_error "NIP94 enabled must be 'true' or 'false'"
        exit 1
    fi

    if [[ ! "$AUTH_RULES_ENABLED" =~ ^(true|false)$ ]]; then
        log_error "Auth rules enabled must be 'true' or 'false'"
        exit 1
    fi

    # Validate cache TTL
    if [[ ! "$AUTH_CACHE_TTL" =~ ^[0-9]+$ ]]; then
        log_error "Cache TTL must be a number"
        exit 1
    fi

    if [ "$AUTH_CACHE_TTL" -lt 60 ]; then
        log_error "Cache TTL must be at least 60 seconds"
        exit 1
    fi

    log_success "Configuration validated"
}

# Create configuration event
create_config_event() {
    log_info "Creating signed configuration event..."

    local expiration=$(($(date +%s) + (EXPIRATION_HOURS * 3600)))

    # Create configuration event with all settings
    CONFIG_EVENT=$(nak event -k 33333 -c "Ginxsom server configuration" \
        --tag server_privkey="$SERVER_KEY" \
        --tag cdn_origin="$CDN_ORIGIN" \
        --tag max_file_size="$MAX_FILE_SIZE" \
        --tag nip94_enabled="$NIP94_ENABLED" \
        --tag auth_rules_enabled="$AUTH_RULES_ENABLED" \
        --tag auth_cache_ttl="$AUTH_CACHE_TTL" \
        --tag expiration="$expiration" \
        --sec "$ADMIN_KEY")

    if [ $? -ne 0 ]; then
        log_error "Failed to create configuration event"
        exit 1
    fi

    log_success "Configuration event created and signed"
}

# Save configuration
save_config() {
    if [ -z "$OUTPUT_FILE" ]; then
        # Default output location
        if [ -n "$XDG_CONFIG_HOME" ]; then
            OUTPUT_FILE="$XDG_CONFIG_HOME/ginxsom/ginxsom_config_event.json"
        else
            OUTPUT_FILE="$HOME/.config/ginxsom/ginxsom_config_event.json"
        fi
    fi

    log_info "Saving configuration to $OUTPUT_FILE"

    # Create directory if needed
    local output_dir=$(dirname "$OUTPUT_FILE")
    mkdir -p "$output_dir"

    # Save formatted JSON
    echo "$CONFIG_EVENT" | jq . > "$OUTPUT_FILE"

    if [ $? -ne 0 ]; then
        log_error "Failed to save configuration file"
        exit 1
    fi

    chmod 600 "$OUTPUT_FILE"
    log_success "Configuration saved to $OUTPUT_FILE"
}

# Display summary
show_summary() {
    local admin_pubkey=$(echo "$ADMIN_KEY" | nak key public)
    local server_pubkey=$(echo "$SERVER_KEY" | nak key public)

    echo ""
    echo "================================================================="
    echo "                GINXSOM CONFIGURATION GENERATED"
    echo "================================================================="
    echo ""
    log_success "Configuration file: $OUTPUT_FILE"
    echo ""
    echo "Configuration summary:"
    echo "  Admin Public Key:   $admin_pubkey"
    echo "  Server Public Key:  $server_pubkey"
    echo "  CDN Origin:         $CDN_ORIGIN"
    echo "  Max File Size:      $(( MAX_FILE_SIZE / 1024 / 1024 ))MB"
    echo "  NIP-94 Enabled:     $NIP94_ENABLED"
    echo "  Auth Rules Enabled: $AUTH_RULES_ENABLED"
    echo "  Cache TTL:          ${AUTH_CACHE_TTL}s"
    echo "  Expires:            $(date -d @$(($(date +%s) + (EXPIRATION_HOURS * 3600))))"
    echo ""
    echo "Next steps:"
    echo "1. Place config file in server's config directory"
    echo "2. Set admin_pubkey in server database:"
    echo "   sqlite3 db/ginxsom.db << EOF"
    echo "   INSERT OR REPLACE INTO server_config (key, value) VALUES"
    echo "   ('admin_pubkey', '$admin_pubkey'),"
    echo "   ('admin_enabled', 'true');"
    echo "   EOF"
    echo "3. Start ginxsom server"
    echo ""
}

# Main execution
main() {
    parse_args "$@"

    if [ "$GENERATE_KEYS" = "true" ]; then
        check_dependencies
        generate_keys
    fi

    validate_keys
    validate_config
    create_config_event
    save_config
    show_summary
}

# Run main function if script is executed directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    main "$@"
fi
342 scripts/setup.sh (Executable file)
@@ -0,0 +1,342 @@
#!/bin/bash

# Ginxsom Interactive Setup Wizard
# Creates signed configuration events for first-run server setup

set -e

# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
CONFIG_PATH="${1:-}"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Helper functions
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[SUCCESS]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_warning() { echo -e "${YELLOW}[WARNING]${NC} $1"; }

# Check dependencies
check_dependencies() {
    log_info "Checking dependencies..."
    local missing_deps=()

    for cmd in nak jq; do
        if ! command -v "$cmd" &> /dev/null; then
            missing_deps+=("$cmd")
        fi
    done

    if [ ${#missing_deps[@]} -ne 0 ]; then
        log_error "Missing dependencies: ${missing_deps[*]}"
        echo ""
        echo "Please install the missing dependencies:"
        echo "- nak: https://github.com/fiatjaf/nak"
        echo "- jq: sudo apt install jq (or equivalent for your system)"
        exit 1
    fi

    log_success "All dependencies found"
}

# Validate private key format
validate_private_key() {
    local key="$1"
    if [[ ! "$key" =~ ^[a-fA-F0-9]{64}$ ]]; then
        return 1
    fi
    return 0
}

# Setup key pairs with user choice
setup_keys() {
    log_info "Setting up cryptographic key pairs..."
    echo ""
    echo "=== Admin Key Setup ==="
    echo "Choose an option for your admin private key:"
    echo "1. Generate a new random admin key"
    echo "2. Use an existing admin private key"
    echo ""

    while true; do
        echo -n "Choice (1/2): "
        read -r ADMIN_KEY_CHOICE
        case "$ADMIN_KEY_CHOICE" in
            1)
                log_info "Generating new admin key pair..."
                ADMIN_PRIVKEY=$(nak key generate)
                ADMIN_PUBKEY=$(echo "$ADMIN_PRIVKEY" | nak key public)
                log_success "New admin key pair generated"
                break
                ;;
            2)
                echo -n "Enter your admin private key (64 hex characters): "
                read -r ADMIN_PRIVKEY
                if validate_private_key "$ADMIN_PRIVKEY"; then
                    ADMIN_PUBKEY=$(echo "$ADMIN_PRIVKEY" | nak key public)
                    if [ $? -eq 0 ]; then
                        log_success "Admin private key validated"
                        break
                    else
                        log_error "Invalid private key format or nak error"
                    fi
                else
                    log_error "Invalid private key format (must be 64 hex characters)"
                fi
                ;;
            *)
                log_error "Please choose 1 or 2"
                ;;
esac
|
||||
done
|
||||
|
||||
echo ""
|
||||
echo "=== Server Key Setup ==="
|
||||
echo "Choose an option for your Ginxsom server private key:"
|
||||
echo "1. Generate a new random server key"
|
||||
echo "2. Use an existing server private key"
|
||||
echo ""
|
||||
|
||||
while true; do
|
||||
echo -n "Choice (1/2): "
|
||||
read -r SERVER_KEY_CHOICE
|
||||
case "$SERVER_KEY_CHOICE" in
|
||||
1)
|
||||
log_info "Generating new server key pair..."
|
||||
SERVER_PRIVKEY=$(nak key generate)
|
||||
SERVER_PUBKEY=$(echo "$SERVER_PRIVKEY" | nak key public)
|
||||
log_success "New server key pair generated"
|
||||
break
|
||||
;;
|
||||
2)
|
||||
echo -n "Enter your server private key (64 hex characters): "
|
||||
read -r SERVER_PRIVKEY
|
||||
if validate_private_key "$SERVER_PRIVKEY"; then
|
||||
SERVER_PUBKEY=$(echo "$SERVER_PRIVKEY" | nak key public)
|
||||
if [ $? -eq 0 ]; then
|
||||
log_success "Server private key validated"
|
||||
break
|
||||
else
|
||||
log_error "Invalid private key format or nak error"
|
||||
fi
|
||||
else
|
||||
log_error "Invalid private key format (must be 64 hex characters)"
|
||||
fi
|
||||
;;
|
||||
*)
|
||||
log_error "Please choose 1 or 2"
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
echo ""
|
||||
log_success "Key pairs configured:"
|
||||
echo " Admin Public Key: $ADMIN_PUBKEY"
|
||||
echo " Server Public Key: $SERVER_PUBKEY"
|
||||
|
||||
# Save keys securely
|
||||
echo "ADMIN_PRIVKEY='$ADMIN_PRIVKEY'" > "$PROJECT_ROOT/.admin_keys"
|
||||
echo "ADMIN_PUBKEY='$ADMIN_PUBKEY'" >> "$PROJECT_ROOT/.admin_keys"
|
||||
echo "SERVER_PRIVKEY='$SERVER_PRIVKEY'" >> "$PROJECT_ROOT/.admin_keys"
|
||||
echo "SERVER_PUBKEY='$SERVER_PUBKEY'" >> "$PROJECT_ROOT/.admin_keys"
|
||||
chmod 600 "$PROJECT_ROOT/.admin_keys"
|
||||
|
||||
log_warning "Keys saved to $PROJECT_ROOT/.admin_keys (keep this file secure!)"
|
||||
}
|
||||
|
||||
# Collect server configuration
|
||||
collect_configuration() {
|
||||
log_info "Collecting server configuration..."
|
||||
|
||||
echo ""
|
||||
echo "=== Server Configuration Setup ==="
|
||||
|
||||
# CDN Origin
|
||||
echo -n "CDN Origin URL (default: http://localhost:9001): "
|
||||
read -r CDN_ORIGIN
|
||||
CDN_ORIGIN="${CDN_ORIGIN:-http://localhost:9001}"
|
||||
|
||||
# Max file size
|
||||
echo -n "Maximum file size in MB (default: 100): "
|
||||
read -r MAX_SIZE_MB
|
||||
MAX_SIZE_MB="${MAX_SIZE_MB:-100}"
|
||||
MAX_FILE_SIZE=$((MAX_SIZE_MB * 1024 * 1024))
|
||||
|
||||
# NIP-94 support
|
||||
echo -n "Enable NIP-94 metadata (y/n, default: y): "
|
||||
read -r ENABLE_NIP94
|
||||
case "$ENABLE_NIP94" in
|
||||
[Nn]*) NIP94_ENABLED="false" ;;
|
||||
*) NIP94_ENABLED="true" ;;
|
||||
esac
|
||||
|
||||
# Authentication rules
|
||||
echo -n "Enable authentication rules system (y/n, default: n): "
|
||||
read -r ENABLE_AUTH_RULES
|
||||
case "$ENABLE_AUTH_RULES" in
|
||||
[Yy]*) AUTH_RULES_ENABLED="true" ;;
|
||||
*) AUTH_RULES_ENABLED="false" ;;
|
||||
esac
|
||||
|
||||
# Cache TTL
|
||||
echo -n "Authentication cache TTL in seconds (default: 300): "
|
||||
read -r CACHE_TTL
|
||||
CACHE_TTL="${CACHE_TTL:-300}"
|
||||
|
||||
echo ""
|
||||
log_success "Configuration collected:"
|
||||
echo " CDN Origin: $CDN_ORIGIN"
|
||||
echo " Max File Size: ${MAX_SIZE_MB}MB"
|
||||
echo " NIP-94 Enabled: $NIP94_ENABLED"
|
||||
echo " Auth Rules Enabled: $AUTH_RULES_ENABLED"
|
||||
echo " Cache TTL: ${CACHE_TTL}s"
|
||||
}
|
||||
|
||||
# Create configuration event
|
||||
create_config_event() {
|
||||
log_info "Creating signed configuration event..."
|
||||
|
||||
local expiration=$(($(date +%s) + 31536000)) # 1 year from now
|
||||
|
||||
# Create configuration event with all settings
|
||||
CONFIG_EVENT=$(nak event -k 33333 -c "Ginxsom server configuration" \
|
||||
--tag server_privkey="$SERVER_PRIVKEY" \
|
||||
--tag cdn_origin="$CDN_ORIGIN" \
|
||||
--tag max_file_size="$MAX_FILE_SIZE" \
|
||||
--tag nip94_enabled="$NIP94_ENABLED" \
|
||||
--tag auth_rules_enabled="$AUTH_RULES_ENABLED" \
|
||||
--tag auth_cache_ttl="$CACHE_TTL" \
|
||||
--tag expiration="$expiration" \
|
||||
--sec "$ADMIN_PRIVKEY")
|
||||
|
||||
if [ $? -ne 0 ]; then
|
||||
log_error "Failed to create configuration event"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
log_success "Configuration event created and signed"
|
||||
}
|
||||
|
||||
# Save configuration file
|
||||
save_config_file() {
|
||||
local config_file="$1"
|
||||
|
||||
log_info "Saving configuration to $config_file"
|
||||
|
||||
# Create directory if it doesn't exist
|
||||
local config_dir=$(dirname "$config_file")
|
||||
mkdir -p "$config_dir"
|
||||
|
||||
# Save configuration event to file
|
||||
echo "$CONFIG_EVENT" | jq . > "$config_file"
|
||||
|
||||
if [ $? -ne 0 ]; then
|
||||
log_error "Failed to save configuration file"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
chmod 600 "$config_file"
|
||||
log_success "Configuration saved to $config_file"
|
||||
}
|
||||
|
||||
# Setup database
|
||||
setup_database() {
|
||||
log_info "Setting up database configuration..."
|
||||
|
||||
local db_path="$PROJECT_ROOT/db/ginxsom.db"
|
||||
|
||||
if [ ! -f "$db_path" ]; then
|
||||
log_warning "Database not found at $db_path"
|
||||
log_warning "Please ensure the database is initialized before starting the server"
|
||||
return
|
||||
fi
|
||||
|
||||
# Insert admin configuration into database
|
||||
sqlite3 "$db_path" << EOF
|
||||
INSERT OR REPLACE INTO server_config (key, value, description) VALUES
|
||||
('admin_pubkey', '$ADMIN_PUBKEY', 'Admin public key from setup wizard'),
|
||||
('admin_enabled', 'true', 'Enable admin interface');
|
||||
EOF
|
||||
|
||||
if [ $? -eq 0 ]; then
|
||||
log_success "Database configuration updated"
|
||||
else
|
||||
log_warning "Failed to update database (this is OK if database doesn't exist yet)"
|
||||
fi
|
||||
}
|
||||
|
||||
# Display setup summary
|
||||
show_setup_summary() {
|
||||
echo ""
|
||||
echo "================================================================="
|
||||
echo " GINXSOM SETUP COMPLETE"
|
||||
echo "================================================================="
|
||||
echo ""
|
||||
log_success "Configuration file created: $CONFIG_PATH"
|
||||
log_success "Admin keys saved: $PROJECT_ROOT/.admin_keys"
|
||||
echo ""
|
||||
echo "Next steps:"
|
||||
echo "1. Start the Ginxsom server:"
|
||||
echo " cd $PROJECT_ROOT"
|
||||
echo " make run"
|
||||
echo ""
|
||||
echo "2. Test admin API access:"
|
||||
echo " source .admin_keys"
|
||||
echo " ./scripts/test_admin.sh"
|
||||
echo ""
|
||||
echo "3. Access web admin (when implemented):"
|
||||
echo " http://localhost:9001/admin"
|
||||
echo ""
|
||||
log_warning "Keep the .admin_keys file secure - it contains your admin private key!"
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Main setup workflow
|
||||
main() {
|
||||
echo "=== Ginxsom Interactive Setup Wizard ==="
|
||||
echo ""
|
||||
|
||||
# Validate config path
|
||||
if [ -z "$CONFIG_PATH" ]; then
|
||||
# Determine default config path
|
||||
if [ -n "$XDG_CONFIG_HOME" ]; then
|
||||
CONFIG_PATH="$XDG_CONFIG_HOME/ginxsom/ginxsom_config_event.json"
|
||||
else
|
||||
CONFIG_PATH="$HOME/.config/ginxsom/ginxsom_config_event.json"
|
||||
fi
|
||||
fi
|
||||
|
||||
log_info "Configuration will be saved to: $CONFIG_PATH"
|
||||
|
||||
# Check if config already exists
|
||||
if [ -f "$CONFIG_PATH" ]; then
|
||||
log_warning "Configuration file already exists at $CONFIG_PATH"
|
||||
echo -n "Overwrite existing configuration? (y/n): "
|
||||
read -r OVERWRITE
|
||||
case "$OVERWRITE" in
|
||||
[Yy]*) ;;
|
||||
*) log_info "Setup cancelled"; exit 0 ;;
|
||||
esac
|
||||
fi
|
||||
|
||||
# Run setup steps
|
||||
check_dependencies
|
||||
setup_keys
|
||||
collect_configuration
|
||||
create_config_event
|
||||
save_config_file "$CONFIG_PATH"
|
||||
setup_database
|
||||
show_setup_summary
|
||||
}
|
||||
|
||||
# Allow sourcing for testing individual functions
|
||||
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
|
||||
main "$@"
|
||||
fi
|
||||
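The `create_config_event` step in setup.sh stamps the signed event with a one-year expiration computed in shell arithmetic. A minimal standalone sketch of that calculation (no nak required; the 31536000 constant is 365 * 24 * 3600 seconds):

```shell
#!/bin/bash
# One-year expiration, as in create_config_event: current unix time
# plus 365 days expressed in seconds.
now=$(date +%s)
expiration=$((now + 31536000))
echo $((expiration - now))   # → 31536000
```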
296
scripts/test_admin.sh
Executable file
@@ -0,0 +1,296 @@
#!/bin/bash

# Ginxsom Admin API Test Script
# Tests admin API endpoints using nak (for Nostr events) and curl

set -e

# Configuration
GINXSOM_URL="http://localhost:9001"
ADMIN_PRIVKEY="${ADMIN_PRIVKEY:-}"
ADMIN_PUBKEY=""

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Helper functions
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[SUCCESS]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_warning() { echo -e "${YELLOW}[WARNING]${NC} $1"; }

check_dependencies() {
    log_info "Checking dependencies..."
    for cmd in nak curl jq; do
        if ! command -v "$cmd" &> /dev/null; then
            log_error "$cmd is not installed"
            echo ""
            echo "Please install missing dependencies:"
            echo "- nak: https://github.com/fiatjaf/nak"
            echo "- curl: standard HTTP client"
            echo "- jq: JSON processor (sudo apt install jq)"
            exit 1
        fi
    done
    log_success "All dependencies found"
}

load_admin_keys() {
    if [ -f ".admin_keys" ]; then
        log_info "Loading admin keys from .admin_keys file"
        source .admin_keys
    fi

    if [ -z "$ADMIN_PRIVKEY" ]; then
        log_error "Admin private key not found"
        echo ""
        echo "Please set ADMIN_PRIVKEY environment variable or create .admin_keys file:"
        echo "  export ADMIN_PRIVKEY='your_admin_private_key_here'"
        echo ""
        echo "Or run the setup wizard to generate keys:"
        echo "  ./scripts/setup.sh"
        exit 1
    fi

    ADMIN_PUBKEY=$(echo "$ADMIN_PRIVKEY" | nak key public)
    log_info "Admin public key: $ADMIN_PUBKEY"
}

create_admin_event() {
    local method="$1"
    local content="admin_request"
    local expiration=$(($(date +%s) + 3600)) # 1 hour from now

    # Create Nostr event with nak
    local event
    event=$(nak event -k 24242 -c "$content" \
        --tag t="$method" \
        --tag expiration="$expiration" \
        --sec "$ADMIN_PRIVKEY")

    echo "$event"
}

send_admin_request() {
    local method="$1"
    local endpoint="$2"
    local data="$3"

    log_info "Testing $method $endpoint"

    # Create authenticated Nostr event
    local event
    event=$(create_admin_event "$method")
    local auth_header="Nostr $(echo "$event" | base64 -w 0)"

    # Send request with curl
    local curl_args=(-s -w "%{http_code}" -H "Authorization: $auth_header")

    if [[ "$method" == "PUT" && -n "$data" ]]; then
        curl_args+=(-H "Content-Type: application/json" -d "$data")
    fi

    local response
    response=$(curl "${curl_args[@]}" -X "$method" "$GINXSOM_URL$endpoint")
    local http_code="${response: -3}"
    local body="${response%???}"

    if [[ "$http_code" =~ ^2 ]]; then
        log_success "$method $endpoint - HTTP $http_code"
        if [[ -n "$body" ]]; then
            echo "$body" | jq . 2>/dev/null || echo "$body"
        fi
        return 0
    else
        log_error "$method $endpoint - HTTP $http_code"
        echo "$body" | jq . 2>/dev/null || echo "$body"
        return 1
    fi
}

test_health_endpoint() {
    echo "================================================================="
    log_info "Testing Health Endpoint (no auth required)"
    echo "================================================================="
    local response
    response=$(curl -s -w "%{http_code}" "$GINXSOM_URL/api/health")
    local http_code="${response: -3}"
    local body="${response%???}"

    if [[ "$http_code" =~ ^2 ]]; then
        log_success "GET /api/health - HTTP $http_code"
        echo "$body" | jq .
        return 0
    else
        log_error "GET /api/health - HTTP $http_code"
        echo "$body"
        return 1
    fi
}

test_stats_endpoint() {
    echo ""
    echo "================================================================="
    log_info "Testing Statistics Endpoint"
    echo "================================================================="
    send_admin_request "GET" "/api/stats"
}

test_config_endpoints() {
    echo ""
    echo "================================================================="
    log_info "Testing Configuration Endpoints"
    echo "================================================================="

    # Get current config
    log_info "Getting current configuration..."
    send_admin_request "GET" "/api/config"

    echo ""

    # Update config
    log_info "Updating configuration..."
    local config_update='{
        "max_file_size": "209715200",
        "nip94_enabled": "true",
        "auth_cache_ttl": "600"
    }'

    if send_admin_request "PUT" "/api/config" "$config_update"; then
        echo ""
        log_info "Verifying configuration update..."
        send_admin_request "GET" "/api/config"
    fi
}

test_files_endpoint() {
    echo ""
    echo "================================================================="
    log_info "Testing Files Endpoint"
    echo "================================================================="
    send_admin_request "GET" "/api/files?limit=10&offset=0"
}

verify_server_status() {
    log_info "Checking if Ginxsom server is running..."

    if curl -s --connect-timeout 5 "$GINXSOM_URL/api/health" >/dev/null 2>&1; then
        log_success "Server is responding"
        return 0
    else
        log_error "Server is not responding at $GINXSOM_URL"
        echo ""
        echo "Please ensure the Ginxsom server is running:"
        echo "  make run"
        echo ""
        echo "Or check if the server is running on a different port."
        return 1
    fi
}

verify_admin_config() {
    log_info "Verifying admin configuration in database..."

    if [ ! -f "db/ginxsom.db" ]; then
        log_warning "Database not found at db/ginxsom.db"
        return 1
    fi

    local db_admin_pubkey
    db_admin_pubkey=$(sqlite3 db/ginxsom.db "SELECT value FROM server_config WHERE key = 'admin_pubkey';" 2>/dev/null || echo "")
    local admin_enabled
    admin_enabled=$(sqlite3 db/ginxsom.db "SELECT value FROM server_config WHERE key = 'admin_enabled';" 2>/dev/null || echo "")

    if [ -z "$db_admin_pubkey" ]; then
        log_warning "No admin_pubkey found in database"
        echo ""
        echo "Configure admin access with:"
        echo "sqlite3 db/ginxsom.db << EOF"
        echo "INSERT OR REPLACE INTO server_config (key, value, description) VALUES"
        echo "  ('admin_pubkey', '$ADMIN_PUBKEY', 'Admin authorized pubkey'),"
        echo "  ('admin_enabled', 'true', 'Enable admin interface');"
        echo "EOF"
        return 1
    fi

    if [ "$db_admin_pubkey" != "$ADMIN_PUBKEY" ]; then
        log_warning "Admin pubkey mismatch!"
        echo "  Database: $db_admin_pubkey"
        echo "  Current:  $ADMIN_PUBKEY"
        return 1
    fi

    if [ "$admin_enabled" != "true" ]; then
        log_warning "Admin interface is disabled in database"
        return 1
    fi

    log_success "Admin configuration verified"
    return 0
}

show_test_summary() {
    echo ""
    echo "================================================================="
    log_success "Admin API testing complete!"
    echo "================================================================="
    echo ""
    echo "Admin credentials:"
    echo "  Private Key: $ADMIN_PRIVKEY"
    echo "  Public Key:  $ADMIN_PUBKEY"
    echo ""
    echo "Next steps:"
    echo "1. Implement web admin interface"
    echo "2. Set up monitoring dashboards"
    echo "3. Configure additional admin features"
    echo ""
}

main() {
    echo "=== Ginxsom Admin API Test Suite ==="
    echo ""

    check_dependencies
    load_admin_keys

    if ! verify_server_status; then
        exit 1
    fi

    if ! verify_admin_config; then
        log_warning "Admin configuration issues detected - some tests may fail"
        echo ""
    fi

    # Run API tests. Note: use explicit arithmetic assignment rather than
    # ((failed_tests++)), whose exit status is 1 when the pre-increment
    # value is 0 and would abort the script under `set -e`.
    local failed_tests=0

    if ! test_health_endpoint; then
        failed_tests=$((failed_tests + 1))
    fi

    if ! test_stats_endpoint; then
        failed_tests=$((failed_tests + 1))
    fi

    if ! test_config_endpoints; then
        failed_tests=$((failed_tests + 1))
    fi

    if ! test_files_endpoint; then
        failed_tests=$((failed_tests + 1))
    fi

    show_test_summary

    if [ $failed_tests -gt 0 ]; then
        log_warning "$failed_tests tests failed"
        exit 1
    else
        log_success "All tests passed!"
        exit 0
    fi
}

# Allow sourcing for individual function testing
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    main "$@"
fi
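`send_admin_request` in test_admin.sh splits curl's combined output by peeling off the last three characters, which are the status code appended by `-w "%{http_code}"`. The substring trick can be checked in isolation with a hypothetical captured response:

```shell
#!/bin/bash
# Hypothetical response string: JSON body with the HTTP status code
# appended by curl's -w "%{http_code}" format string.
response='{"status":"success"}200'
http_code="${response: -3}"   # last three characters
body="${response%???}"        # everything except the last three characters
echo "$http_code"             # → 200
echo "$body"                  # → {"status":"success"}
```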
688
src/admin_api.c
Normal file
@@ -0,0 +1,688 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <sys/stat.h>
#ifdef __linux__
#include <sys/statvfs.h>
#else
#include <sys/mount.h>
#endif
#include <unistd.h>
#include "admin_api.h"
#include "ginxsom.h"

// Database path (consistent with main.c)
#define DB_PATH "db/ginxsom.db"

// Forward declarations for local utility functions
static int admin_nip94_get_origin(char* out, size_t out_size);
static void admin_nip94_build_blob_url(const char* origin, const char* sha256, const char* mime_type, char* out, size_t out_size);
static const char* admin_mime_to_extension(const char* mime_type);

// Local utility functions (from main.c but implemented here for admin API)
static int admin_nip94_get_origin(char* out, size_t out_size) {
    if (!out || out_size == 0) {
        return 0;
    }

    sqlite3* db;
    sqlite3_stmt* stmt;
    int rc;

    rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
    if (rc) {
        // Default on DB error
        strncpy(out, "http://localhost:9001", out_size - 1);
        out[out_size - 1] = '\0';
        return 1;
    }

    const char* sql = "SELECT value FROM server_config WHERE key = 'cdn_origin'";
    rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
        rc = sqlite3_step(stmt);
        if (rc == SQLITE_ROW) {
            const char* value = (const char*)sqlite3_column_text(stmt, 0);
            if (value) {
                strncpy(out, value, out_size - 1);
                out[out_size - 1] = '\0';
                sqlite3_finalize(stmt);
                sqlite3_close(db);
                return 1;
            }
        }
        sqlite3_finalize(stmt);
    }

    sqlite3_close(db);

    // Default fallback
    strncpy(out, "http://localhost:9001", out_size - 1);
    out[out_size - 1] = '\0';
    return 1;
}

static void admin_nip94_build_blob_url(const char* origin, const char* sha256,
                                       const char* mime_type, char* out, size_t out_size) {
    if (!origin || !sha256 || !out || out_size == 0) {
        return;
    }

    // Use local admin implementation for extension mapping
    const char* extension = admin_mime_to_extension(mime_type);
    snprintf(out, out_size, "%s/%s%s", origin, sha256, extension);
}

// Centralized MIME type to file extension mapping (from main.c)
static const char* admin_mime_to_extension(const char* mime_type) {
    if (!mime_type) {
        return ".bin";
    }

    if (strstr(mime_type, "image/jpeg")) {
        return ".jpg";
    } else if (strstr(mime_type, "image/webp")) {
        return ".webp";
    } else if (strstr(mime_type, "image/png")) {
        return ".png";
    } else if (strstr(mime_type, "image/gif")) {
        return ".gif";
    } else if (strstr(mime_type, "video/mp4")) {
        return ".mp4";
    } else if (strstr(mime_type, "video/webm")) {
        return ".webm";
    } else if (strstr(mime_type, "audio/mpeg")) {
        return ".mp3";
    } else if (strstr(mime_type, "audio/ogg")) {
        return ".ogg";
    } else if (strstr(mime_type, "text/plain")) {
        return ".txt";
    } else if (strstr(mime_type, "application/pdf")) {
        return ".pdf";
    } else {
        return ".bin";
    }
}

// Main API request handler
void handle_admin_api_request(const char* method, const char* uri) {
    const char* path = uri + 4; // Skip "/api"

    // Check if admin interface is enabled
    if (!is_admin_enabled()) {
        send_json_error(503, "admin_disabled", "Admin interface is disabled");
        return;
    }

    // Authentication required for all admin operations except health check
    if (strcmp(path, "/health") != 0) {
        const char* auth_header = getenv("HTTP_AUTHORIZATION");
        if (!authenticate_admin_request(auth_header)) {
            send_json_error(401, "admin_auth_required", "Valid admin authentication required");
            return;
        }
    }

    // Route to appropriate handler
    if (strcmp(method, "GET") == 0) {
        if (strcmp(path, "/stats") == 0) {
            handle_stats_api();
        } else if (strcmp(path, "/config") == 0) {
            handle_config_get_api();
        } else if (strncmp(path, "/files", 6) == 0) {
            handle_files_api();
        } else if (strcmp(path, "/health") == 0) {
            handle_health_api();
        } else {
            send_json_error(404, "not_found", "API endpoint not found");
        }
    } else if (strcmp(method, "PUT") == 0) {
        if (strcmp(path, "/config") == 0) {
            handle_config_put_api();
        } else {
            send_json_error(405, "method_not_allowed", "Method not allowed");
        }
    } else {
        send_json_error(405, "method_not_allowed", "Method not allowed");
    }
}

// Admin authentication functions
int authenticate_admin_request(const char* auth_header) {
    if (!auth_header) {
        return 0; // No auth header
    }

    // Use existing authentication system with "admin" method
    int auth_result = authenticate_request(auth_header, "admin", NULL);
    if (auth_result != NOSTR_SUCCESS) {
        return 0; // Invalid Nostr event
    }

    // Extract pubkey from validated event using existing parser
    char event_json[4096];
    int parse_result = parse_authorization_header(auth_header, event_json, sizeof(event_json));
    if (parse_result != NOSTR_SUCCESS) {
        return 0;
    }

    cJSON* event = cJSON_Parse(event_json);
    if (!event) {
        return 0;
    }

    cJSON* pubkey_json = cJSON_GetObjectItem(event, "pubkey");
    if (!pubkey_json || !cJSON_IsString(pubkey_json)) {
        cJSON_Delete(event);
        return 0;
    }

    const char* event_pubkey = cJSON_GetStringValue(pubkey_json);
    int is_admin = verify_admin_pubkey(event_pubkey);

    cJSON_Delete(event);
    return is_admin;
}

int verify_admin_pubkey(const char* event_pubkey) {
    if (!event_pubkey) {
        return 0;
    }

    sqlite3* db;
    sqlite3_stmt* stmt;
    int rc, is_admin = 0;

    rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
    if (rc) {
        return 0;
    }

    const char* sql = "SELECT value FROM server_config WHERE key = 'admin_pubkey'";
    rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
        rc = sqlite3_step(stmt);
        if (rc == SQLITE_ROW) {
            const char* admin_pubkey = (const char*)sqlite3_column_text(stmt, 0);
            if (admin_pubkey && strcmp(event_pubkey, admin_pubkey) == 0) {
                is_admin = 1;
            }
        }
        sqlite3_finalize(stmt);
    }
    sqlite3_close(db);

    return is_admin;
}

int is_admin_enabled(void) {
    sqlite3* db;
    sqlite3_stmt* stmt;
    int rc, enabled = 0;

    rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
    if (rc) {
        return 0; // Default disabled if can't access DB
    }

    const char* sql = "SELECT value FROM server_config WHERE key = 'admin_enabled'";
    rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
        rc = sqlite3_step(stmt);
        if (rc == SQLITE_ROW) {
            const char* value = (const char*)sqlite3_column_text(stmt, 0);
            enabled = (value && strcmp(value, "true") == 0) ? 1 : 0;
        }
        sqlite3_finalize(stmt);
    }
    sqlite3_close(db);

    return enabled;
}

// Individual endpoint handlers
void handle_stats_api(void) {
    sqlite3* db;
    sqlite3_stmt* stmt;
    int rc;

    rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
    if (rc) {
        send_json_error(500, "database_error", "Failed to open database");
        return;
    }

    // Create consolidated statistics view if it doesn't exist
    const char* create_view =
        "CREATE VIEW IF NOT EXISTS storage_stats AS "
        "SELECT "
        "  COUNT(*) as total_blobs, "
        "  SUM(size) as total_bytes, "
        "  AVG(size) as avg_blob_size, "
        "  COUNT(DISTINCT uploader_pubkey) as unique_uploaders, "
        "  MIN(uploaded_at) as first_upload, "
        "  MAX(uploaded_at) as last_upload "
        "FROM blobs";

    rc = sqlite3_exec(db, create_view, NULL, NULL, NULL);
    if (rc != SQLITE_OK) {
        sqlite3_close(db);
        send_json_error(500, "database_error", "Failed to create stats view");
        return;
    }

    // Query storage_stats view
    const char* sql = "SELECT total_blobs, total_bytes, avg_blob_size, "
                      "unique_uploaders, first_upload, last_upload FROM storage_stats";

    rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
    if (rc != SQLITE_OK) {
        sqlite3_close(db);
        send_json_error(500, "database_error", "Failed to prepare query");
        return;
    }

    cJSON* response = cJSON_CreateObject();
    cJSON* data = cJSON_CreateObject();
    cJSON_AddStringToObject(response, "status", "success");
    cJSON_AddItemToObject(response, "data", data);

    rc = sqlite3_step(stmt);
    if (rc == SQLITE_ROW) {
        int total_files = sqlite3_column_int(stmt, 0);
        long long total_bytes = sqlite3_column_int64(stmt, 1);
        double avg_size = sqlite3_column_double(stmt, 2);
        int unique_uploaders = sqlite3_column_int(stmt, 3);

        cJSON_AddNumberToObject(data, "total_files", total_files);
        cJSON_AddNumberToObject(data, "total_bytes", (double)total_bytes);
        cJSON_AddNumberToObject(data, "total_size_mb", (double)total_bytes / (1024 * 1024));
        cJSON_AddNumberToObject(data, "unique_uploaders", unique_uploaders);
        cJSON_AddNumberToObject(data, "first_upload", sqlite3_column_int64(stmt, 4));
        cJSON_AddNumberToObject(data, "last_upload", sqlite3_column_int64(stmt, 5));
        cJSON_AddNumberToObject(data, "avg_file_size", avg_size);

        // Get file type distribution
        sqlite3_stmt* type_stmt;
        const char* type_sql = "SELECT type, COUNT(*) FROM blobs GROUP BY type ORDER BY COUNT(*) DESC LIMIT 5";
        cJSON* file_types = cJSON_CreateObject();

        rc = sqlite3_prepare_v2(db, type_sql, -1, &type_stmt, NULL);
        if (rc == SQLITE_OK) {
            while (sqlite3_step(type_stmt) == SQLITE_ROW) {
                const char* type_name = (const char*)sqlite3_column_text(type_stmt, 0);
                int count = sqlite3_column_int(type_stmt, 1);
                cJSON_AddNumberToObject(file_types, type_name ? type_name : "unknown", count);
            }
            sqlite3_finalize(type_stmt);
        }
        cJSON_AddItemToObject(data, "file_types", file_types);
    } else {
        // No data - return zeros
        cJSON_AddNumberToObject(data, "total_files", 0);
        cJSON_AddNumberToObject(data, "total_bytes", 0);
        cJSON_AddNumberToObject(data, "total_size_mb", 0.0);
        cJSON_AddNumberToObject(data, "unique_uploaders", 0);
        cJSON_AddNumberToObject(data, "avg_file_size", 0);
        cJSON_AddItemToObject(data, "file_types", cJSON_CreateObject());
    }

    sqlite3_finalize(stmt);
    sqlite3_close(db);

    char* response_str = cJSON_PrintUnformatted(response);
    send_json_response(200, response_str);
    free(response_str);
    cJSON_Delete(response);
}

void handle_config_get_api(void) {
    sqlite3* db;
    sqlite3_stmt* stmt;
    int rc;

    rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
    if (rc) {
        send_json_error(500, "database_error", "Failed to open database");
        return;
    }

    cJSON* response = cJSON_CreateObject();
    cJSON* data = cJSON_CreateObject();
    cJSON_AddStringToObject(response, "status", "success");
    cJSON_AddItemToObject(response, "data", data);

    // Query all server config settings
    const char* sql = "SELECT key, value FROM server_config ORDER BY key";
    rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            const char* key = (const char*)sqlite3_column_text(stmt, 0);
            const char* value = (const char*)sqlite3_column_text(stmt, 1);
            if (key && value) {
                cJSON_AddStringToObject(data, key, value);
            }
        }
        sqlite3_finalize(stmt);
    }

    sqlite3_close(db);

    char* response_str = cJSON_PrintUnformatted(response);
    send_json_response(200, response_str);
    free(response_str);
    cJSON_Delete(response);
}

void handle_config_put_api(void) {
    // Read request body
    const char* content_length_str = getenv("CONTENT_LENGTH");
    if (!content_length_str) {
        send_json_error(411, "length_required", "Content-Length header required");
        return;
    }

    long content_length = atol(content_length_str);
    if (content_length <= 0 || content_length > 4096) {
        send_json_error(400, "invalid_content_length", "Invalid content length");
        return;
    }

    char* json_body = malloc(content_length + 1);
    if (!json_body) {
        send_json_error(500, "memory_error", "Failed to allocate memory");
        return;
    }

    size_t bytes_read = fread(json_body, 1, content_length, stdin);
    if (bytes_read != (size_t)content_length) {
        free(json_body);
        send_json_error(400, "incomplete_body", "Failed to read complete request body");
        return;
    }
    json_body[content_length] = '\0';

    // Parse JSON
    cJSON* config_data = cJSON_Parse(json_body);
    if (!config_data) {
        free(json_body);
        send_json_error(400, "invalid_json", "Invalid JSON in request body");
        return;
    }

    // Update database
    sqlite3* db;
    sqlite3_stmt* stmt;
    int rc;

    rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READWRITE, NULL);
    if (rc) {
        free(json_body);
|
||||
cJSON_Delete(config_data);
|
||||
send_json_error(500, "database_error", "Failed to open database");
|
||||
return;
|
||||
}
|
||||
|
||||
// Collect updated keys for response
|
||||
cJSON* updated_keys = cJSON_CreateArray();
|
||||
|
||||
// Update each config value
|
||||
const char* update_sql = "INSERT OR REPLACE INTO server_config (key, value) VALUES (?, ?)";
|
||||
|
||||
cJSON* item = NULL;
|
||||
cJSON_ArrayForEach(item, config_data) {
|
||||
if (cJSON_IsString(item) && item->string) {
|
||||
rc = sqlite3_prepare_v2(db, update_sql, -1, &stmt, NULL);
|
||||
if (rc == SQLITE_OK) {
|
||||
sqlite3_bind_text(stmt, 1, item->string, -1, SQLITE_STATIC);
|
||||
sqlite3_bind_text(stmt, 2, cJSON_GetStringValue(item), -1, SQLITE_STATIC);
|
||||
|
||||
rc = sqlite3_step(stmt);
|
||||
if (rc == SQLITE_DONE) {
|
||||
cJSON_AddItemToArray(updated_keys, cJSON_CreateString(item->string));
|
||||
}
|
||||
sqlite3_finalize(stmt);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
free(json_body);
|
||||
cJSON_Delete(config_data);
|
||||
sqlite3_close(db);
|
||||
|
||||
// Send response
|
||||
cJSON* response = cJSON_CreateObject();
|
||||
cJSON_AddStringToObject(response, "status", "success");
|
||||
cJSON_AddStringToObject(response, "message", "Configuration updated successfully");
|
||||
cJSON_AddItemToObject(response, "updated_keys", updated_keys);
|
||||
|
||||
char* response_str = cJSON_PrintUnformatted(response);
|
||||
send_json_response(200, response_str);
|
||||
free(response_str);
|
||||
cJSON_Delete(response);
|
||||
}
|
||||
|
||||
void handle_files_api(void) {
|
||||
// Parse query parameters for pagination
|
||||
const char* query_string = getenv("QUERY_STRING");
|
||||
int limit = 50;
|
||||
int offset = 0;
|
||||
|
||||
if (query_string) {
|
||||
char params[10][256];
|
||||
int param_count = parse_query_params(query_string, params, 10);
|
||||
|
||||
for (int i = 0; i < param_count; i++) {
|
||||
char* key = params[i];
|
||||
char* value = strchr(key, '=');
|
||||
if (value) {
|
||||
*value++ = '\0';
|
||||
if (strcmp(key, "limit") == 0) {
|
||||
limit = atoi(value);
|
||||
if (limit <= 0 || limit > 200) limit = 50;
|
||||
} else if (strcmp(key, "offset") == 0) {
|
||||
offset = atoi(value);
|
||||
if (offset < 0) offset = 0;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
sqlite3* db;
|
||||
sqlite3_stmt* stmt;
|
||||
int rc;
|
||||
|
||||
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
|
||||
if (rc) {
|
||||
send_json_error(500, "database_error", "Failed to open database");
|
||||
return;
|
||||
}
|
||||
|
||||
// Query recent files with pagination
|
||||
const char* sql = "SELECT sha256, size, type, uploaded_at, uploader_pubkey, filename "
|
||||
"FROM blobs ORDER BY uploaded_at DESC LIMIT ? OFFSET ?";
|
||||
|
||||
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
|
||||
if (rc != SQLITE_OK) {
|
||||
sqlite3_close(db);
|
||||
send_json_error(500, "database_error", "Failed to prepare query");
|
||||
return;
|
||||
}
|
||||
|
||||
sqlite3_bind_int(stmt, 1, limit);
|
||||
sqlite3_bind_int(stmt, 2, offset);
|
||||
|
||||
cJSON* response = cJSON_CreateObject();
|
||||
cJSON* data = cJSON_CreateObject();
|
||||
cJSON* files_array = cJSON_CreateArray();
|
||||
cJSON_AddStringToObject(response, "status", "success");
|
||||
cJSON_AddItemToObject(response, "data", data);
|
||||
cJSON_AddItemToObject(data, "files", files_array);
|
||||
cJSON_AddNumberToObject(data, "limit", limit);
|
||||
cJSON_AddNumberToObject(data, "offset", offset);
|
||||
|
||||
int total_count = 0;
|
||||
while (sqlite3_step(stmt) == SQLITE_ROW) {
|
||||
total_count++;
|
||||
|
||||
cJSON* file_obj = cJSON_CreateObject();
|
||||
cJSON_AddItemToArray(files_array, file_obj);
|
||||
|
||||
const char* sha256 = (const char*)sqlite3_column_text(stmt, 0);
|
||||
const char* type = (const char*)sqlite3_column_text(stmt, 2);
|
||||
const char* filename = (const char*)sqlite3_column_text(stmt, 5);
|
||||
|
||||
cJSON_AddStringToObject(file_obj, "sha256", sha256 ? sha256 : "");
|
||||
cJSON_AddNumberToObject(file_obj, "size", sqlite3_column_int64(stmt, 1));
|
||||
cJSON_AddStringToObject(file_obj, "type", type ? type : "");
|
||||
cJSON_AddNumberToObject(file_obj, "uploaded_at", sqlite3_column_int64(stmt, 3));
|
||||
|
||||
const char* uploader = (const char*)sqlite3_column_text(stmt, 4);
|
||||
cJSON_AddStringToObject(file_obj, "uploader_pubkey",
|
||||
uploader ? uploader : "");
|
||||
|
||||
cJSON_AddStringToObject(file_obj, "filename",
|
||||
filename ? filename : "");
|
||||
|
||||
// Build URL for file
|
||||
char url[1024];
|
||||
if (type && sha256) {
|
||||
// Use local admin implementation for URL building
|
||||
char origin_url[256];
|
||||
admin_nip94_get_origin(origin_url, sizeof(origin_url));
|
||||
admin_nip94_build_blob_url(origin_url, sha256, type, url, sizeof(url));
|
||||
cJSON_AddStringToObject(file_obj, "url", url);
|
||||
}
|
||||
}
|
||||
|
||||
// Get total count for pagination info
|
||||
const char* count_sql = "SELECT COUNT(*) FROM blobs";
|
||||
sqlite3_stmt* count_stmt;
|
||||
|
||||
rc = sqlite3_prepare_v2(db, count_sql, -1, &count_stmt, NULL);
|
||||
if (rc == SQLITE_OK) {
|
||||
rc = sqlite3_step(count_stmt);
|
||||
if (rc == SQLITE_ROW) {
|
||||
int total = sqlite3_column_int(count_stmt, 0);
|
||||
cJSON_AddNumberToObject(data, "total", total);
|
||||
}
|
||||
sqlite3_finalize(count_stmt);
|
||||
}
|
||||
|
||||
sqlite3_finalize(stmt);
|
||||
sqlite3_close(db);
|
||||
|
||||
char* response_str = cJSON_PrintUnformatted(response);
|
||||
send_json_response(200, response_str);
|
||||
free(response_str);
|
||||
cJSON_Delete(response);
|
||||
}
|
||||
|
||||
void handle_health_api(void) {
|
||||
cJSON* response = cJSON_CreateObject();
|
||||
cJSON* data = cJSON_CreateObject();
|
||||
cJSON_AddStringToObject(response, "status", "success");
|
||||
cJSON_AddItemToObject(response, "data", data);
|
||||
|
||||
// Check database connection
|
||||
sqlite3* db;
|
||||
int rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
|
||||
if (rc == SQLITE_OK) {
|
||||
cJSON_AddStringToObject(data, "database", "connected");
|
||||
sqlite3_close(db);
|
||||
} else {
|
||||
cJSON_AddStringToObject(data, "database", "disconnected");
|
||||
}
|
||||
|
||||
// Check blob directory
|
||||
struct stat st;
|
||||
if (stat("blobs", &st) == 0 && S_ISDIR(st.st_mode)) {
|
||||
cJSON_AddStringToObject(data, "blob_directory", "accessible");
|
||||
} else {
|
||||
cJSON_AddStringToObject(data, "blob_directory", "inaccessible");
|
||||
}
|
||||
|
||||
// Get disk usage
|
||||
cJSON* disk_usage = cJSON_CreateObject();
|
||||
struct statvfs vfs;
|
||||
if (statvfs(".", &vfs) == 0) {
|
||||
unsigned long long total_bytes = (unsigned long long)vfs.f_blocks * vfs.f_frsize;
|
||||
unsigned long long free_bytes = (unsigned long long)vfs.f_bavail * vfs.f_frsize;
|
||||
unsigned long long used_bytes = total_bytes - free_bytes;
|
||||
double usage_percent = (double)used_bytes / (double)total_bytes * 100.0;
|
||||
|
||||
cJSON_AddNumberToObject(disk_usage, "total_bytes", (double)total_bytes);
|
||||
cJSON_AddNumberToObject(disk_usage, "used_bytes", (double)used_bytes);
|
||||
cJSON_AddNumberToObject(disk_usage, "available_bytes", (double)free_bytes);
|
||||
cJSON_AddNumberToObject(disk_usage, "usage_percent", usage_percent);
|
||||
}
|
||||
cJSON_AddItemToObject(data, "disk_usage", disk_usage);
|
||||
|
||||
// Add server info
|
||||
cJSON_AddNumberToObject(data, "server_time", (double)time(NULL));
|
||||
cJSON_AddNumberToObject(data, "uptime", 0); // Would need to track process start time
|
||||
|
||||
char* response_str = cJSON_PrintUnformatted(response);
|
||||
send_json_response(200, response_str);
|
||||
free(response_str);
|
||||
cJSON_Delete(response);
|
||||
}
|
||||
|
||||
// Utility functions
|
||||
void send_json_response(int status, const char* json_content) {
|
||||
printf("Status: %d OK\r\n", status);
|
||||
printf("Content-Type: application/json\r\n");
|
||||
printf("Cache-Control: no-cache\r\n");
|
||||
printf("\r\n");
|
||||
printf("%s\n", json_content);
|
||||
}
|
||||
|
||||
void send_json_error(int status, const char* error, const char* message) {
|
||||
cJSON* response = cJSON_CreateObject();
|
||||
cJSON_AddStringToObject(response, "status", "error");
|
||||
cJSON_AddStringToObject(response, "error", error);
|
||||
cJSON_AddStringToObject(response, "message", message);
|
||||
|
||||
char* response_str = cJSON_PrintUnformatted(response);
|
||||
printf("Status: %d %s\r\n",
|
||||
status,
|
||||
status == 400 ? "Bad Request" :
|
||||
status == 401 ? "Unauthorized" :
|
||||
status == 403 ? "Forbidden" :
|
||||
status == 404 ? "Not Found" :
|
||||
status == 500 ? "Internal Server Error" :
|
||||
status == 503 ? "Service Unavailable" :
|
||||
"Error");
|
||||
printf("Content-Type: application/json\r\n");
|
||||
printf("Cache-Control: no-cache\r\n");
|
||||
printf("\r\n");
|
||||
printf("%s\n", response_str);
|
||||
|
||||
free(response_str);
|
||||
cJSON_Delete(response);
|
||||
}
|
||||
|
||||
int parse_query_params(const char* query_string, char params[][256], int max_params) {
|
||||
if (!query_string || !params) return 0;
|
||||
|
||||
size_t query_len = strlen(query_string);
|
||||
char* query_copy = malloc(query_len + 1);
|
||||
if (!query_copy) return 0;
|
||||
memcpy(query_copy, query_string, query_len + 1);
|
||||
|
||||
int count = 0;
|
||||
char* token = strtok(query_copy, "&");
|
||||
|
||||
while (token && count < max_params) {
|
||||
if (strlen(token) >= sizeof(params[0])) {
|
||||
token[sizeof(params[0]) - 1] = '\0';
|
||||
}
|
||||
strcpy(params[count], token);
|
||||
count++;
|
||||
token = strtok(NULL, "&");
|
||||
}
|
||||
|
||||
free(query_copy);
|
||||
return count;
|
||||
}
|
||||
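The pagination code above splits `QUERY_STRING` on `&` and then takes the text after the first `=` of each pair. The same contract can be sketched in shell (variable names here are illustrative, not part of the server):

```shell
QUERY="limit=10&offset=20"

# First pair: everything before the '&'; second pair: everything after it
FIRST="${QUERY%%&*}"     # limit=10
SECOND="${QUERY#*&}"     # offset=20

# The value is the text after the first '=' in each pair
LIMIT="${FIRST#*=}"
OFFSET="${SECOND#*=}"

echo "limit=$LIMIT offset=$OFFSET"    # prints: limit=10 offset=20
```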
26
src/admin_api.h
Normal file
@@ -0,0 +1,26 @@
#ifndef ADMIN_API_H
#define ADMIN_API_H

#include "ginxsom.h"

// Main API request handler
void handle_admin_api_request(const char* method, const char* uri);

// Individual endpoint handlers
void handle_stats_api(void);
void handle_config_get_api(void);
void handle_config_put_api(void);
void handle_files_api(void);
void handle_health_api(void);

// Admin authentication functions
int authenticate_admin_request(const char* auth_header);
int is_admin_enabled(void);
int verify_admin_pubkey(const char* event_pubkey);

// Utility functions
void send_json_response(int status, const char* json_content);
void send_json_error(int status, const char* error, const char* message);
int parse_query_params(const char* query_string, char params[][256], int max_params);

#endif
135
src/main.c
@@ -1528,7 +1528,26 @@ void handle_head_upload_request(void) {
    int auth_result = nostr_validate_request(&request, &result);

    if (auth_result != NOSTR_SUCCESS || !result.valid) {
        send_upload_error_response(401, "authentication_failed", "Invalid or expired authentication", XREASON_AUTH_INVALID);
        const char* error_type = "authentication_failed";
        const char* message = "Invalid or expired authentication";
        const char* details = result.reason[0] ? result.reason : "Authentication validation failed";

        // Provide more specific error messages based on the reason
        if (strstr(result.reason, "whitelist")) {
            error_type = "pubkey_not_whitelisted";
            message = "Public key not authorized";
            details = result.reason;
        } else if (strstr(result.reason, "blacklist")) {
            error_type = "access_denied";
            message = "Access denied by policy";
            details = result.reason;
        } else if (strstr(result.reason, "size")) {
            error_type = "file_too_large";
            message = "File size exceeds policy limits";
            details = result.reason;
        }

        send_upload_error_response(401, error_type, message, details);
        log_request("HEAD", "/upload", "auth_failed", 401);
        return;
    }
@@ -1915,8 +1934,20 @@ void handle_list_request(const char* pubkey) {
    int auth_result = nostr_validate_request(&request, &result);

    if (auth_result != NOSTR_SUCCESS || !result.valid) {
        send_error_response(401, "authentication_failed", "Invalid or expired authentication",
                            "The provided Nostr event is invalid, expired, or does not authorize this operation");
        const char* error_type = "authentication_failed";
        const char* message = "Invalid or expired authentication";
        const char* details = result.reason[0] ? result.reason : "The provided Nostr event is invalid, expired, or does not authorize this operation";

        // Provide more specific error messages based on the reason
        if (strstr(result.reason, "whitelist")) {
            error_type = "pubkey_not_whitelisted";
            message = "Public key not authorized";
        } else if (strstr(result.reason, "blacklist")) {
            error_type = "access_denied";
            message = "Access denied by policy";
        }

        send_error_response(401, error_type, message, details);
        log_request("GET", "/list", "failed", 401);
        return;
    }
@@ -2382,8 +2413,20 @@ void handle_delete_request(const char* sha256) {
    int auth_result = nostr_validate_request(&request, &result);

    if (auth_result != NOSTR_SUCCESS || !result.valid) {
        send_error_response(401, "authentication_failed", "Invalid or expired authentication",
                            "The provided Nostr event is invalid, expired, or does not authorize this operation");
        const char* error_type = "authentication_failed";
        const char* message = "Invalid or expired authentication";
        const char* details = result.reason[0] ? result.reason : "The provided Nostr event is invalid, expired, or does not authorize this operation";

        // Provide more specific error messages based on the reason
        if (strstr(result.reason, "whitelist")) {
            error_type = "pubkey_not_whitelisted";
            message = "Public key not authorized";
        } else if (strstr(result.reason, "blacklist")) {
            error_type = "access_denied";
            message = "Access denied by policy";
        }

        send_error_response(401, error_type, message, details);
        log_request("DELETE", "/delete", "failed", 401);
        return;
    }
@@ -2669,57 +2712,43 @@ void handle_upload_request(void) {
                 auth_result, result.valid, result.reason);

    if (auth_result == NOSTR_SUCCESS && !result.valid) {
        auth_result = result.error_code;
    if (auth_result != NOSTR_SUCCESS) {
        free(file_data);
        free(file_data);

        // Provide specific error messages based on the authentication failure type
        const char* error_type = "authentication_failed";
        const char* message = "Authentication failed";
        const char* details = "The request failed nostr authentication";
        // Use the detailed reason from the authentication system
        const char* error_type = "authentication_failed";
        const char* message = "Authentication failed";
        const char* details = result.reason[0] ? result.reason : "The request failed authentication";

        switch (auth_result) {
            case NOSTR_ERROR_EVENT_INVALID_CONTENT:
                error_type = "event_expired";
                message = "Authentication event expired";
                details = "The provided nostr event has expired and is no longer valid";
                break;
            case NOSTR_ERROR_EVENT_INVALID_SIGNATURE:
                error_type = "invalid_signature";
                message = "Invalid cryptographic signature";
                details = "The event signature verification failed";
                break;
            case NOSTR_ERROR_EVENT_INVALID_PUBKEY:
                error_type = "invalid_pubkey";
                message = "Invalid public key";
                details = "The event contains an invalid or malformed public key";
                break;
            case NOSTR_ERROR_EVENT_INVALID_ID:
                error_type = "invalid_event_id";
                message = "Invalid event ID";
                details = "The event ID does not match the calculated hash";
                break;
            case NOSTR_ERROR_INVALID_INPUT:
                error_type = "invalid_format";
                message = "Invalid authorization format";
                details = "The authorization header format is invalid or malformed";
                break;
            default:
                error_type = "authentication_failed";
                message = "Authentication failed";
                // Use C-style string formatting for error details
                static char error_details_buffer[256];
                snprintf(error_details_buffer, sizeof(error_details_buffer),
                         "The request failed nostr authentication (error code: %d - %s)",
                         auth_result, nostr_strerror(auth_result));
                details = error_details_buffer;
                break;
        }

        send_error_response(401, error_type, message, details);
        log_request("PUT", "/upload", "auth_failed", 401);
        return;
        // Provide more specific error types based on the reason content
        if (strstr(result.reason, "whitelist")) {
            error_type = "pubkey_not_whitelisted";
            message = "Public key not authorized";
        } else if (strstr(result.reason, "blacklist")) {
            error_type = "access_denied";
            message = "Access denied by policy";
        } else if (strstr(result.reason, "expired")) {
            error_type = "event_expired";
            message = "Authentication event expired";
        } else if (strstr(result.reason, "signature")) {
            error_type = "invalid_signature";
            message = "Invalid cryptographic signature";
        } else if (strstr(result.reason, "size")) {
            error_type = "file_too_large";
            message = "File size exceeds policy limits";
        } else if (strstr(result.reason, "MIME") || strstr(result.reason, "mime")) {
            error_type = "unsupported_type";
            message = "File type not allowed by policy";
        } else if (strstr(result.reason, "hash")) {
            error_type = "hash_blocked";
            message = "File hash blocked by policy";
        } else if (strstr(result.reason, "format") || strstr(result.reason, "invalid")) {
            error_type = "invalid_format";
            message = "Invalid authorization format";
        }

        send_error_response(401, error_type, message, details);
        log_request("PUT", "/upload", "auth_failed", 401);
        return;
    }

    // Extract uploader pubkey from validation result if auth was provided
234
tests/admin_test.sh
Executable file
@@ -0,0 +1,234 @@
#!/bin/bash

# Ginxsom Admin API Test Script
# Tests admin API endpoints using nak (for Nostr events) and curl
#
# Prerequisites:
# - nak: https://github.com/fiatjaf/nak
# - curl
# - jq (for JSON parsing)
# - Admin pubkey configured in ginxsom server_config

set -e

# Configuration
GINXSOM_URL="http://localhost:9001"

# Test admin keys (for development/testing only - DO NOT USE IN PRODUCTION)
TEST_ADMIN_PRIVKEY="993bf9c54fc00bd32a5a1ce64b6d384a5fce109df1e9aee9be1052c1e5cd8120"
TEST_ADMIN_PUBKEY="2ef05348f28d24e0f0ed0751278442c27b62c823c37af8d8d89d8592c6ee84e7"

ADMIN_PRIVKEY="${ADMIN_PRIVKEY:-${TEST_ADMIN_PRIVKEY}}"
ADMIN_PUBKEY=""

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Helper functions
log_info() {
    echo -e "${BLUE}[INFO]${NC} $1"
}

log_success() {
    echo -e "${GREEN}[SUCCESS]${NC} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

log_warning() {
    echo -e "${YELLOW}[WARNING]${NC} $1"
}

check_dependencies() {
    log_info "Checking dependencies..."

    for cmd in nak curl jq; do
        if ! command -v $cmd &> /dev/null; then
            log_error "$cmd is not installed"
            case $cmd in
                nak)
                    echo "Install from: https://github.com/fiatjaf/nak"
                    ;;
                jq)
                    echo "Install jq for JSON processing"
                    ;;
                curl)
                    echo "curl should be available in most systems"
                    ;;
            esac
            exit 1
        fi
    done

    log_success "All dependencies found"
}

generate_admin_keys() {
    if [[ -z "$ADMIN_PRIVKEY" ]]; then
        log_info "Generating new admin key pair..."
        ADMIN_PRIVKEY=$(nak key generate)
        log_warning "Generated new admin private key: $ADMIN_PRIVKEY"
        log_warning "Save this key for future use: export ADMIN_PRIVKEY='$ADMIN_PRIVKEY'"
    fi

    ADMIN_PUBKEY=$(echo "$ADMIN_PRIVKEY" | nak key public)
    log_info "Admin public key: $ADMIN_PUBKEY"
}

create_admin_event() {
    local method="$1"
    local content="admin_request"
    local expiration=$(($(date +%s) + 3600)) # 1 hour from now

    # Create Nostr event with nak - always use "admin" tag for admin operations
    local event=$(nak event -k 24242 -c "$content" \
        --tag t="admin" \
        --tag expiration="$expiration" \
        --sec "$ADMIN_PRIVKEY")

    echo "$event"
}

send_admin_request() {
    local method="$1"
    local endpoint="$2"
    local data="$3"

    log_info "Testing $method $endpoint"

    # Create authenticated Nostr event
    local event=$(create_admin_event "$method")
    local auth_header="Nostr $(echo "$event" | base64 -w 0)"

    # Send request with curl
    local curl_args=(-s -w "%{http_code}" -H "Authorization: $auth_header")

    if [[ "$method" == "PUT" && -n "$data" ]]; then
        curl_args+=(-H "Content-Type: application/json" -d "$data")
    fi

    local response=$(curl "${curl_args[@]}" -X "$method" "$GINXSOM_URL$endpoint")
    local http_code="${response: -3}"
    local body="${response%???}"

    if [[ "$http_code" =~ ^2 ]]; then
        log_success "$method $endpoint - HTTP $http_code"
        if [[ -n "$body" ]]; then
            echo "$body" | jq . 2>/dev/null || echo "$body"
        fi
    else
        log_error "$method $endpoint - HTTP $http_code"
        echo "$body" | jq . 2>/dev/null || echo "$body"
    fi

    # Function's exit status reflects whether the HTTP status was 2xx
    [[ "$http_code" =~ ^2 ]]
}
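The `Authorization: Nostr <base64(event)>` header built by `send_admin_request` is just the event JSON, base64-encoded behind a `Nostr ` prefix. A minimal encode/decode round trip, using a placeholder JSON blob in place of a real signed kind 24242 event:

```shell
# Placeholder event JSON - a real request would carry a signed kind 24242 event
EVENT='{"kind":24242,"content":"admin_request"}'

# Client side: base64-encode the event and prepend the "Nostr " scheme
AUTH_HEADER="Nostr $(printf '%s' "$EVENT" | base64 -w 0)"

# Server side: strip the scheme prefix and decode back to the event JSON
DECODED=$(printf '%s' "${AUTH_HEADER#Nostr }" | base64 -d)
echo "$DECODED"
```

Decoding recovers the original JSON byte-for-byte, which is what the C side relies on before validating the event itself.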
test_health_endpoint() {
    log_info "=== Testing Health Endpoint (no auth required) ==="

    local response=$(curl -s -w "%{http_code}" "$GINXSOM_URL/api/health")
    local http_code="${response: -3}"
    local body="${response%???}"

    if [[ "$http_code" =~ ^2 ]]; then
        log_success "GET /api/health - HTTP $http_code"
        echo "$body" | jq .
    else
        log_error "GET /api/health - HTTP $http_code"
        echo "$body"
    fi
}

test_stats_endpoint() {
    log_info "=== Testing Statistics Endpoint ==="
    send_admin_request "GET" "/api/stats"
}

test_config_endpoints() {
    log_info "=== Testing Configuration Endpoints ==="

    # Get current config
    send_admin_request "GET" "/api/config"

    # Update config
    local config_update='{
        "max_file_size": "209715200",
        "require_auth": "true",
        "nip94_enabled": "true"
    }'

    send_admin_request "PUT" "/api/config" "$config_update"

    # Get config again to verify
    send_admin_request "GET" "/api/config"
}

test_files_endpoint() {
    log_info "=== Testing Files Endpoint ==="
    send_admin_request "GET" "/api/files?limit=10&offset=0"
}

configure_server_admin() {
    log_warning "=== Server Configuration Required ==="
    log_warning "To use this admin interface, add the following to your ginxsom database:"
    log_warning ""
    log_warning "sqlite3 db/ginxsom.db << EOF"
    log_warning "INSERT OR REPLACE INTO server_config (key, value, description) VALUES"
    log_warning "  ('admin_pubkey', '$ADMIN_PUBKEY', 'Nostr public key authorized for admin operations'),"
    log_warning "  ('admin_enabled', 'true', 'Enable admin interface');"
    log_warning "EOF"
    log_warning ""
    log_warning "Then restart ginxsom server."

    echo ""
    log_warning "Or use the nak utility to interact with the API:"
    echo ""
    log_warning "  # Create an event"
    echo "  EVENT=\$(nak event -k 24242 -c 'admin_request' --tag t='GET' --tag expiration=\$(date -d '+1 hour' +%s) --sec '$ADMIN_PRIVKEY')"
    echo ""
    log_warning "  # Send authenticated request"
    echo "  curl -H \"Authorization: Nostr \$(echo \"\$EVENT\" | base64 -w 0)\" http://localhost:9001/api/stats"
    echo ""
}

main() {
    echo "=== Ginxsom Admin API Test Suite ==="
    echo ""

    check_dependencies
    generate_admin_keys

    # Setup admin configuration automatically
    echo ""
    log_info "Setting up admin configuration..."
    ./tests/init_admin.sh
    echo ""

    # Test endpoints
    test_health_endpoint
    echo ""

    test_stats_endpoint
    echo ""

    test_config_endpoints
    echo ""

    test_files_endpoint
    echo ""

    log_success "Admin API testing complete!"
    log_info "Admin pubkey for server config: $ADMIN_PUBKEY"
}

# Allow sourcing for individual function testing
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    main "$@"
fi
360
tests/auth_test.sh
Executable file
@@ -0,0 +1,360 @@
#!/bin/bash

# auth_test.sh - Authentication System Test Suite
# Tests the unified nostr_core_lib authentication system integrated into ginxsom

# Configuration
SERVER_URL="http://localhost:9001"
UPLOAD_ENDPOINT="${SERVER_URL}/upload"
DB_PATH="db/ginxsom.db"
TEST_DIR="tests/auth_test_tmp"

# Test keys for different scenarios
TEST_USER1_PRIVKEY="5c0c523f52a5b6fad39ed2403092df8cebc36318b39383bca6c00808626fab3a"
TEST_USER1_PUBKEY="79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798"

TEST_USER2_PRIVKEY="182c3a5e3b7a1b7e4f5c6b7c8b4a5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2"
TEST_USER2_PUBKEY="c95195e5e7de1ad8c4d3c0ac4e8b5c0c4e0c4d3c1e5c8d4c2e7e9f4a5b6c7d8e"

echo "=== Ginxsom Authentication System Test Suite ==="
echo "Testing unified nostr_core_lib authentication integration"
echo "Timestamp: $(date -Iseconds)"
echo

# Check prerequisites
echo "[INFO] Checking prerequisites..."
for cmd in nak curl jq sqlite3; do
    if ! command -v $cmd &> /dev/null; then
        echo "[ERROR] $cmd command not found"
        exit 1
    fi
done

# Check if server is running
if ! curl -s -f "${SERVER_URL}/" > /dev/null 2>&1; then
    echo "[ERROR] Server not running at $SERVER_URL"
    echo "[INFO] Start with: ./restart-all.sh"
    exit 1
fi

# Check if database exists
if [[ ! -f "$DB_PATH" ]]; then
    echo "[ERROR] Database not found at $DB_PATH"
    exit 1
fi

echo "[SUCCESS] All prerequisites met"
echo

# Setup test environment and auth rules ONCE at the beginning
echo "=== Setting up authentication rules ==="
mkdir -p "$TEST_DIR"

# Enable authentication rules
sqlite3 "$DB_PATH" "INSERT OR REPLACE INTO auth_config (key, value) VALUES ('auth_rules_enabled', 'true');"

# Delete ALL existing auth rules and cache (clean slate)
echo "Deleting all existing auth rules..."
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_cache;"

# Set up all test rules at once
echo "Creating test auth rules..."

# 1. Whitelist for TEST_USER1 for upload operations (priority 10)
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, enabled, description)
    VALUES ('pubkey_whitelist', '$TEST_USER1_PUBKEY', 'upload', 10, 1, 'TEST_WHITELIST_USER1');"

# 2. Blacklist for TEST_USER2 for upload operations (priority 5 - higher priority)
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, enabled, description)
    VALUES ('pubkey_blacklist', '$TEST_USER2_PUBKEY', 'upload', 5, 1, 'TEST_BLACKLIST_USER2');"

# 3. Hash blacklist (will be set after we create a test file)
echo "test content for hash blacklist" > "$TEST_DIR/blacklisted_file.txt"
BLACKLISTED_HASH=$(sha256sum "$TEST_DIR/blacklisted_file.txt" | cut -d' ' -f1)
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, enabled, description)
    VALUES ('hash_blacklist', '$BLACKLISTED_HASH', 'upload', 5, 1, 'TEST_HASH_BLACKLIST');"

echo "Hash blacklisted: $BLACKLISTED_HASH"

# Display the rules we created
echo
echo "Auth rules created:"
sqlite3 "$DB_PATH" -header -column "SELECT rule_type, rule_target, operation, priority, enabled, description FROM auth_rules WHERE description LIKE 'TEST_%' ORDER BY priority;"
echo
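The rules above rely on priority-based evaluation: judging by the comments, a lower priority number is considered first, which is how the priority-5 blacklist can override the priority-10 whitelist. A pure-shell sketch of that ordering (the `evaluate_rules` helper is illustrative only; real rules live in the `auth_rules` table):

```shell
# Pick the winning rule: sort "priority:type" pairs numerically and
# take the rule type of the lowest-numbered (highest-priority) entry
evaluate_rules() {
    printf '%s\n' "$@" | sort -t: -k1,1n | head -n1 | cut -d: -f2
}

# Blacklist at priority 5 beats whitelist at priority 10
RESULT=$(evaluate_rules "10:pubkey_whitelist" "5:pubkey_blacklist")
echo "$RESULT"    # prints: pubkey_blacklist
```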
# Helper functions
create_test_file() {
    local filename="$1"
    local content="${2:-test content for $filename}"
    local filepath="$TEST_DIR/$filename"
    echo "$content" > "$filepath"
    echo "$filepath"
}

create_auth_event() {
    local privkey="$1"
    local operation="$2"
    local hash="$3"
    local expiration_offset="${4:-3600}" # 1 hour default

    local expiration=$(date -d "+${expiration_offset} seconds" +%s)

    local event_args=(-k 24242 -c "" --tag "t=$operation" --tag "expiration=$expiration" --sec "$privkey")

    if [[ -n "$hash" ]]; then
        event_args+=(--tag "x=$hash")
    fi

    nak event "${event_args[@]}"
}
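create_auth_event produces the signed JSON event that travels base64-encoded inside the `Nostr <base64>` Authorization header. A quick way to see what the server actually receives is to strip the scheme and decode. This is a sketch: the header value here is hand-made base64 of the fragment `{"kind":24242}`, not a real signed event.

```shell
# Decode a Blossom Authorization header back to its JSON event (sketch).
auth_header='Nostr eyJraW5kIjoyNDI0Mn0='   # base64 of {"kind":24242}
payload=$(printf '%s' "$auth_header" | cut -d' ' -f2 | base64 -d)
echo "$payload"

# A crude kind check without a full JSON parser:
case "$payload" in
  *'"kind":24242'*) echo "kind ok" ;;
  *)                echo "wrong kind" ;;
esac
```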
test_upload() {
    local test_name="$1"
    local privkey="$2"
    local file_path="$3"
    local expected_status="${4:-ANY}"

    echo "=== $test_name ==="

    local file_hash=$(sha256sum "$file_path" | cut -d' ' -f1)
    echo "File: $(basename "$file_path")"
    echo "Hash: $file_hash"
    echo "User pubkey: $(echo "$privkey" | nak key public)"

    # Create auth event
    local event=$(create_auth_event "$privkey" "upload" "$file_hash")
    local auth_header="Nostr $(echo "$event" | base64 -w 0)"

    # Make upload request
    local response_file=$(mktemp)
    local http_status=$(curl -s -w "%{http_code}" \
        -H "Authorization: $auth_header" \
        -H "Content-Type: text/plain" \
        --data-binary "@$file_path" \
        -X PUT "$UPLOAD_ENDPOINT" \
        -o "$response_file")

    echo "HTTP Status: $http_status"
    echo "Server Response:"
    jq . "$response_file" 2>/dev/null || cat "$response_file"
    echo

    rm -f "$response_file"

    if [[ "$expected_status" != "ANY" ]]; then
        if [[ "$http_status" == "$expected_status" ]]; then
            echo "✓ Expected HTTP $expected_status - PASSED"
        else
            echo "✗ Expected HTTP $expected_status, got $http_status - FAILED"
        fi
    fi
    echo
}

# Run the tests
echo "=== Running Authentication Tests ==="
echo
# Test 1: Whitelisted user (should succeed)
test_file1=$(create_test_file "whitelisted_upload.txt" "Content from whitelisted user")
test_upload "Test 1: Whitelisted User Upload" "$TEST_USER1_PRIVKEY" "$test_file1" "200"

# Test 2: Blacklisted user (should fail)
test_file2=$(create_test_file "blacklisted_upload.txt" "Content from blacklisted user")
test_upload "Test 2: Blacklisted User Upload" "$TEST_USER2_PRIVKEY" "$test_file2" "403"

# Test 3: Whitelisted user uploading a blacklisted hash (the blacklist wins: priority 5 beats priority 10)
test_upload "Test 3: Whitelisted User + Blacklisted Hash" "$TEST_USER1_PRIVKEY" "$TEST_DIR/blacklisted_file.txt" "403"

# Test 4: Random user with no specific rules (allowed, since no restrictive whitelist applies to all users)
test_file4=$(create_test_file "random_upload.txt" "Content from random user")
# Use a private key that appears in no rules
RANDOM_PRIVKEY="abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234"
test_upload "Test 4: Random User (No Rules)" "$RANDOM_PRIVKEY" "$test_file4" "ANY"

# Test 5: Upload with authentication disabled
echo "=== Test 5: Authentication Disabled ==="
echo "Disabling authentication rules..."
sqlite3 "$DB_PATH" "INSERT OR REPLACE INTO auth_config (key, value) VALUES ('auth_rules_enabled', 'false');"

test_file5=$(create_test_file "auth_disabled.txt" "Upload with auth disabled")
test_upload "Test 5: Upload with Authentication Disabled" "$TEST_USER2_PRIVKEY" "$test_file5" "200"

# Re-enable authentication
echo "Re-enabling authentication rules..."
sqlite3 "$DB_PATH" "INSERT OR REPLACE INTO auth_config (key, value) VALUES ('auth_rules_enabled', 'true');"
echo
# Test failure modes - comprehensive edge case testing
echo "=== Test 6: Invalid Authorization Header Formats ==="

# Helper function for failure mode tests
test_failure_mode() {
    local test_name="$1"
    local auth_header="$2"
    local file_content="${3:-failure_test_content}"
    local expected_status="${4:-401}"

    echo "=== $test_name ==="

    local test_file=$(mktemp)
    echo "$file_content" > "$test_file"

    local response_file=$(mktemp)
    local http_status=$(curl -s -w "%{http_code}" \
        ${auth_header:+-H "Authorization: $auth_header"} \
        -H "Content-Type: text/plain" \
        --data-binary "@$test_file" \
        -X PUT "$UPLOAD_ENDPOINT" \
        -o "$response_file")

    echo "HTTP Status: $http_status"
    echo "Server Response:"
    jq . "$response_file" 2>/dev/null || cat "$response_file"
    echo

    rm -f "$test_file" "$response_file"

    if [[ "$http_status" == "$expected_status" ]]; then
        echo "✓ Expected HTTP $expected_status - PASSED"
    else
        echo "✗ Expected HTTP $expected_status, got $http_status - FAILED"
    fi
    echo
}
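test_failure_mode leans on the `${var:+word}` parameter expansion so that an empty auth header contributes no `-H` flag at all to curl (Test 6a's "missing header" case), while a non-empty one yields both the flag and its value. The idiom in isolation:

```shell
# ${var:+word} expands to word only when var is set and non-empty, so one
# curl invocation can cover both "no header" and "bad header" test cases.
count_args() { echo $#; }

auth_header=""
empty_count=$(count_args ${auth_header:+-H x})

auth_header="Bearer token123"
set_count=$(count_args ${auth_header:+-H x})

echo "empty: $empty_count args, set: $set_count args"
```

When the variable is empty the expansion vanishes entirely (0 arguments); when it is set, the unquoted expansion word-splits into the flag and its value (2 arguments).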
# Test 6a: Missing Authorization Header
test_failure_mode "Test 6a: Missing Authorization Header" ""

# Test 6b: Invalid Authorization Prefix
test_failure_mode "Test 6b: Invalid Authorization Prefix" "Bearer invalidtoken123"

# Test 6c: Invalid Base64 in Authorization
test_failure_mode "Test 6c: Invalid Base64 in Authorization" "Nostr invalid!@#base64"

echo "=== Test 7: Malformed JSON Events ==="

# Test 7a: Invalid JSON Structure
malformed_json='{"kind":24242,"content":"","created_at":' # Incomplete JSON
malformed_b64=$(echo -n "$malformed_json" | base64 -w 0)
test_failure_mode "Test 7a: Invalid JSON Structure" "Nostr $malformed_b64"

# Test 7b: Missing Required Fields
missing_fields_json='{"kind":24242,"content":"","created_at":1234567890,"tags":[]}'
missing_fields_b64=$(echo -n "$missing_fields_json" | base64 -w 0)
test_failure_mode "Test 7b: Missing Required Fields (no pubkey)" "Nostr $missing_fields_b64"

echo "=== Test 8: Invalid Key Formats ==="
# Test 8a: Short Public Key
echo "Test 8a: Short Public Key (32 chars instead of 64)"
echo "short_key_test" > "$TEST_DIR/short_key.txt"
file_hash=$(sha256sum "$TEST_DIR/short_key.txt" | cut -d' ' -f1)
short_pubkey="1234567890abcdef1234567890abcdef" # 32 chars instead of 64
short_key_event=$(cat << EOF
{
  "kind": 24242,
  "content": "",
  "created_at": $(date +%s),
  "pubkey": "$short_pubkey",
  "tags": [["t", "upload"], ["x", "$file_hash"]],
  "id": "invalid_id",
  "sig": "invalid_signature"
}
EOF
)
short_key_b64=$(echo -n "$short_key_event" | base64 -w 0)
test_failure_mode "Test 8a: Short Public Key" "Nostr $short_key_b64"

# Test 8b: Non-hex Public Key
echo "Test 8b: Non-hex Public Key"
echo "nonhex_key_test" > "$TEST_DIR/nonhex_key.txt"
file_hash=$(sha256sum "$TEST_DIR/nonhex_key.txt" | cut -d' ' -f1)
nonhex_pubkey="gggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg" # 64 chars, but not valid hex
nonhex_key_event=$(cat << EOF
{
  "kind": 24242,
  "content": "",
  "created_at": $(date +%s),
  "pubkey": "$nonhex_pubkey",
  "tags": [["t", "upload"], ["x", "$file_hash"]],
  "id": "invalid_id",
  "sig": "invalid_signature"
}
EOF
)
nonhex_key_b64=$(echo -n "$nonhex_key_event" | base64 -w 0)
test_failure_mode "Test 8b: Non-hex Public Key" "Nostr $nonhex_key_b64"
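Tests 8a and 8b probe key-format validation. The check they exercise presumably reduces to "a pubkey must be exactly 64 lowercase hex characters"; a portable sketch of that validation (the function name is ours, not the server's):

```shell
# Sketch of the 64-hex-char pubkey format check that Tests 8a/8b exercise.
is_valid_pubkey() {
    printf '%s' "$1" | grep -Eq '^[0-9a-f]{64}$'
}

# Short key (32 chars) and non-hex key (64 g's) should both be rejected;
# a well-formed 64-hex key should pass.
is_valid_pubkey "1234567890abcdef1234567890abcdef" && echo "short: valid" || echo "short: rejected"
is_valid_pubkey "gggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg" && echo "nonhex: valid" || echo "nonhex: rejected"
is_valid_pubkey "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798" && echo "full: valid" || echo "full: rejected"
```

Note the format check is necessary but not sufficient: a well-formed key still has to carry a valid signature, which Test 12 covers.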
echo "=== Test 9: Wrong Event Kind ==="

# Test 9a: Wrong Kind (1 instead of 24242)
echo "Test 9a: Wrong Kind (kind 1 instead of 24242)"
echo "wrong_kind_test" > "$TEST_DIR/wrong_kind.txt"
file_hash=$(sha256sum "$TEST_DIR/wrong_kind.txt" | cut -d' ' -f1)
wrong_kind_event=$(nak event -k 1 -c "wrong kind test" --tag "t=upload" --tag "x=$file_hash" --sec "$TEST_USER1_PRIVKEY")
wrong_kind_b64=$(echo -n "$wrong_kind_event" | base64 -w 0)
test_failure_mode "Test 9a: Wrong Event Kind" "Nostr $wrong_kind_b64"

echo "=== Test 10: Missing or Invalid Tags ==="

# Test 10a: Missing 't' tag
echo "Test 10a: Missing 't' (method) tag"
echo "missing_t_tag_test" > "$TEST_DIR/missing_t_tag.txt"
file_hash=$(sha256sum "$TEST_DIR/missing_t_tag.txt" | cut -d' ' -f1)
missing_t_event=$(nak event -k 24242 -c "" --tag "x=$file_hash" --sec "$TEST_USER1_PRIVKEY")
missing_t_b64=$(echo -n "$missing_t_event" | base64 -w 0)
test_failure_mode "Test 10a: Missing 't' tag" "Nostr $missing_t_b64"

# Test 10b: Missing 'x' tag
echo "Test 10b: Missing 'x' (hash) tag"
echo "missing_x_tag_test" > "$TEST_DIR/missing_x_tag.txt"
missing_x_event=$(nak event -k 24242 -c "" --tag "t=upload" --sec "$TEST_USER1_PRIVKEY")
missing_x_b64=$(echo -n "$missing_x_event" | base64 -w 0)
test_failure_mode "Test 10b: Missing 'x' tag" "Nostr $missing_x_b64"

# Test 10c: Hash mismatch in 'x' tag
echo "Test 10c: Hash mismatch in 'x' tag"
echo "hash_mismatch_test" > "$TEST_DIR/hash_mismatch.txt"
wrong_hash="0000000000000000000000000000000000000000000000000000000000000000"
hash_mismatch_event=$(nak event -k 24242 -c "" --tag "t=upload" --tag "x=$wrong_hash" --sec "$TEST_USER1_PRIVKEY")
hash_mismatch_b64=$(echo -n "$hash_mismatch_event" | base64 -w 0)
test_failure_mode "Test 10c: Hash mismatch" "Nostr $hash_mismatch_b64"
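Test 10c exploits the core Blossom invariant: the event's `x` tag must equal the SHA-256 of the uploaded body. The server-side comparison presumably boils down to the following sketch, here with a deliberately wrong claimed hash:

```shell
# Recompute the body's hash and compare it to the event's claimed 'x' value.
body=$(mktemp)
echo "hash_mismatch_test" > "$body"
actual_hash=$(sha256sum "$body" | cut -d' ' -f1)
claimed_hash="0000000000000000000000000000000000000000000000000000000000000000"

if [ "$actual_hash" = "$claimed_hash" ]; then
    echo "hash ok"
else
    echo "hash mismatch: got $actual_hash"
fi
rm -f "$body"
```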
echo "=== Test 11: Expired Events ==="

# Test 11a: Event with past expiration
echo "Test 11a: Event with past expiration"
echo "expired_event_test" > "$TEST_DIR/expired_event.txt"
file_hash=$(sha256sum "$TEST_DIR/expired_event.txt" | cut -d' ' -f1)
past_time=$(($(date +%s) - 3600)) # 1 hour ago
expired_event=$(nak event -k 24242 -c "" --tag "t=upload" --tag "x=$file_hash" --tag "expiration=$past_time" --sec "$TEST_USER1_PRIVKEY")
expired_b64=$(echo -n "$expired_event" | base64 -w 0)
test_failure_mode "Test 11a: Expired Event" "Nostr $expired_b64"
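The auth event's `expiration` tag must lie in the future, so Test 11a's rejection reduces to a Unix-timestamp comparison. A sketch of that check:

```shell
# Reject an auth event whose expiration timestamp is already in the past.
now=$(date +%s)
expiration=$((now - 3600))   # one hour ago, as in Test 11a

if [ "$expiration" -le "$now" ]; then
    echo "rejected: event expired"
else
    echo "accepted"
fi
```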
echo "=== Test 12: Invalid Signatures ==="

# Test 12a: Corrupted signature
echo "Test 12a: Corrupted signature"
echo "corrupted_sig_test" > "$TEST_DIR/corrupted_sig.txt"
file_hash=$(sha256sum "$TEST_DIR/corrupted_sig.txt" | cut -d' ' -f1)
valid_event=$(nak event -k 24242 -c "" --tag "t=upload" --tag "x=$file_hash" --sec "$TEST_USER1_PRIVKEY")
# Corrupt the signature itself (replacing the event's last character would
# only break the JSON, not the sig): flip the final character of the sig field
corrupted_event=$(echo "$valid_event" | jq -c '.sig = (.sig[:-1] + "x")')
corrupted_b64=$(echo -n "$corrupted_event" | base64 -w 0)
test_failure_mode "Test 12a: Corrupted Signature" "Nostr $corrupted_b64"
# Show final state
echo "=== Final Database State ==="
echo "Authentication rules left in database:"
sqlite3 "$DB_PATH" -header -column "SELECT rule_type, rule_target, operation, priority, enabled, description FROM auth_rules WHERE description LIKE 'TEST_%' ORDER BY priority;"
echo
echo "Auth config:"
sqlite3 "$DB_PATH" -header -column "SELECT key, value FROM auth_config WHERE key = 'auth_rules_enabled';"
echo

echo "=== Test Suite Completed ==="
echo "Comprehensive authentication and failure mode testing completed."
echo "Auth rules have been left in the database for inspection."
echo "To clean up, run: sqlite3 $DB_PATH \"DELETE FROM auth_rules WHERE description LIKE 'TEST_%';\""
1 tests/auth_test_tmp/auth_disabled.txt Normal file
@@ -0,0 +1 @@
Upload with auth disabled

1 tests/auth_test_tmp/blacklisted_file.txt Normal file
@@ -0,0 +1 @@
test content for hash blacklist

1 tests/auth_test_tmp/blacklisted_upload.txt Normal file
@@ -0,0 +1 @@
Content from blacklisted user

1 tests/auth_test_tmp/corrupted_sig.txt Normal file
@@ -0,0 +1 @@
corrupted_sig_test

1 tests/auth_test_tmp/expired_event.txt Normal file
@@ -0,0 +1 @@
expired_event_test

1 tests/auth_test_tmp/hash_mismatch.txt Normal file
@@ -0,0 +1 @@
hash_mismatch_test

1 tests/auth_test_tmp/missing_t_tag.txt Normal file
@@ -0,0 +1 @@
missing_t_tag_test

1 tests/auth_test_tmp/missing_x_tag.txt Normal file
@@ -0,0 +1 @@
missing_x_tag_test

1 tests/auth_test_tmp/nonhex_key.txt Normal file
@@ -0,0 +1 @@
nonhex_key_test

1 tests/auth_test_tmp/random_upload.txt Normal file
@@ -0,0 +1 @@
Content from random user

1 tests/auth_test_tmp/short_key.txt Normal file
@@ -0,0 +1 @@
short_key_test

1 tests/auth_test_tmp/whitelisted_upload.txt Normal file
@@ -0,0 +1 @@
Content from whitelisted user

1 tests/auth_test_tmp/wrong_kind.txt Normal file
@@ -0,0 +1 @@
wrong_kind_test
33 tests/init_admin.sh Executable file
@@ -0,0 +1,33 @@
#!/bin/bash

# Admin Initialization Script for Ginxsom Testing
# Sets up the test admin key in the database

set -e

# Test admin public key (must match TEST_ADMIN_PUBKEY from admin_test.sh)
TEST_ADMIN_PUBKEY="2ef05348f28d24e0f0ed0751278442c27b62c823c37af8d8d89d8592c6ee84e7"

echo "Initializing admin access for testing..."

# Check if database exists
if [ ! -f "db/ginxsom.db" ]; then
    echo "Error: Database db/ginxsom.db not found. Run ./db/init.sh first."
    exit 1
fi

# Configure admin settings
sqlite3 db/ginxsom.db << EOF
INSERT OR REPLACE INTO server_config (key, value, description) VALUES
('admin_pubkey', '$TEST_ADMIN_PUBKEY', 'Nostr public key authorized for admin operations (test key)'),
('admin_enabled', 'true', 'Enable admin interface');
EOF

echo "Admin access configured successfully!"
echo "Test admin public key: $TEST_ADMIN_PUBKEY"
echo "Use the private key from admin_test.sh to generate authentication tokens"

# Verify configuration
echo ""
echo "Current admin configuration:"
sqlite3 db/ginxsom.db "SELECT key, value FROM server_config WHERE key IN ('admin_pubkey', 'admin_enabled');"
49 tests/simple_auth_test.sh Executable file
@@ -0,0 +1,49 @@
#!/bin/bash

# Simple authentication test
set -e

SERVER_URL="http://localhost:9001"
UPLOAD_ENDPOINT="${SERVER_URL}/upload"
TEST_USER1_PRIVKEY="5c0c523f52a5b6fad39ed2403092df8cebc36318b39383bca6c00808626fab3a"

echo "=== Simple Authentication Test ==="

# Create a small test file
echo "Test file content $(date)" > /tmp/simple_test.txt
FILE_HASH=$(sha256sum /tmp/simple_test.txt | cut -d' ' -f1)

echo "Test file hash: $FILE_HASH"

# Create auth event
EVENT=$(nak event -k 24242 -c "" \
    --tag "t=upload" \
    --tag "x=${FILE_HASH}" \
    --tag "expiration=$(date -d '+1 hour' +%s)" \
    --sec "$TEST_USER1_PRIVKEY")

echo "Generated event: $EVENT"

# Create auth header
AUTH_HEADER="Nostr $(echo "$EVENT" | base64 -w 0)"

echo "Auth header length: ${#AUTH_HEADER}"

# Test upload
echo "Testing upload..."
HTTP_STATUS=$(curl -s -w "%{http_code}" \
    -H "Authorization: $AUTH_HEADER" \
    -H "Content-Type: text/plain" \
    --data-binary "@/tmp/simple_test.txt" \
    -X PUT "$UPLOAD_ENDPOINT" \
    -o /tmp/upload_response.txt)

echo "HTTP Status: $HTTP_STATUS"
echo "Response:"
cat /tmp/upload_response.txt
echo

# Cleanup
rm -f /tmp/simple_test.txt /tmp/upload_response.txt

echo "Test completed with status: $HTTP_STATUS"
62 tests/simple_comprehensive_test.sh Executable file
@@ -0,0 +1,62 @@
#!/bin/bash

# Simple comprehensive auth test
SERVER_URL="http://localhost:9001"
UPLOAD_ENDPOINT="${SERVER_URL}/upload"
DB_PATH="../db/ginxsom.db"

# Test keys
TEST_USER1_PRIVKEY="5c0c523f52a5b6fad39ed2403092df8cebc36318b39383bca6c00808626fab3a"
TEST_USER1_PUBKEY="79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798"

echo "=== Simple Authentication Test ==="

# Test 1: Basic upload
echo "Test 1: Basic upload"
echo "test content" > test1.txt
file_hash=$(sha256sum test1.txt | cut -d" " -f1)

# Create auth event
event=$(nak event -k 24242 -c "" --tag "t=upload" --tag "expiration=$(date -d "+1 hour" +%s)" --tag "x=$file_hash" --sec "$TEST_USER1_PRIVKEY")
auth_header="Nostr $(echo "$event" | base64 -w 0)"

# Make upload request
response=$(curl -s -w "%{http_code}" -H "Authorization: $auth_header" -H "Content-Type: text/plain" --data-binary "@test1.txt" -X PUT "$UPLOAD_ENDPOINT" -o response1.json)

if [ "$response" = "200" ]; then
    echo "✓ Basic upload test PASSED (HTTP $response)"
else
    echo "✗ Basic upload test FAILED (HTTP $response)"
    cat response1.json
fi

# Test 2: Whitelist rule
echo
echo "Test 2: Pubkey whitelist"

# Clear test rules and add a whitelist entry for TEST_USER1
sqlite3 "$DB_PATH" "DELETE FROM auth_rules WHERE description LIKE 'TEST_%';"
sqlite3 "$DB_PATH" "DELETE FROM auth_cache;"
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, enabled, description) VALUES ('pubkey_whitelist', '$TEST_USER1_PUBKEY', 'upload', 10, 1, 'TEST_WHITELIST');"

echo "test content 2" > test2.txt
file_hash2=$(sha256sum test2.txt | cut -d" " -f1)

event2=$(nak event -k 24242 -c "" --tag "t=upload" --tag "expiration=$(date -d "+1 hour" +%s)" --tag "x=$file_hash2" --sec "$TEST_USER1_PRIVKEY")
auth_header2="Nostr $(echo "$event2" | base64 -w 0)"

response2=$(curl -s -w "%{http_code}" -H "Authorization: $auth_header2" -H "Content-Type: text/plain" --data-binary "@test2.txt" -X PUT "$UPLOAD_ENDPOINT" -o response2.json)

if [ "$response2" = "200" ]; then
    echo "✓ Whitelist test PASSED (HTTP $response2)"
else
    echo "✗ Whitelist test FAILED (HTTP $response2)"
    cat response2.json
fi

# Cleanup
rm -f test1.txt test2.txt response1.json response2.json
sqlite3 "$DB_PATH" "DELETE FROM auth_rules WHERE description LIKE 'TEST_%';"
sqlite3 "$DB_PATH" "DELETE FROM auth_cache;"

echo "=== Tests completed ==="
1 tests/tests/auth_test_tmp/auth_disabled.txt Normal file
@@ -0,0 +1 @@
Upload with auth disabled

1 tests/tests/auth_test_tmp/blacklisted_file.txt Normal file
@@ -0,0 +1 @@
test content for hash blacklist

1 tests/tests/auth_test_tmp/blacklisted_upload.txt Normal file
@@ -0,0 +1 @@
Content from blacklisted user

1 tests/tests/auth_test_tmp/corrupted_sig.txt Normal file
@@ -0,0 +1 @@
corrupted_sig_test

1 tests/tests/auth_test_tmp/expired_event.txt Normal file
@@ -0,0 +1 @@
expired_event_test

1 tests/tests/auth_test_tmp/hash_mismatch.txt Normal file
@@ -0,0 +1 @@
hash_mismatch_test

1 tests/tests/auth_test_tmp/missing_t_tag.txt Normal file
@@ -0,0 +1 @@
missing_t_tag_test

1 tests/tests/auth_test_tmp/missing_x_tag.txt Normal file
@@ -0,0 +1 @@
missing_x_tag_test

1 tests/tests/auth_test_tmp/nonhex_key.txt Normal file
@@ -0,0 +1 @@
nonhex_key_test

1 tests/tests/auth_test_tmp/random_upload.txt Normal file
@@ -0,0 +1 @@
Content from random user

1 tests/tests/auth_test_tmp/short_key.txt Normal file
@@ -0,0 +1 @@
short_key_test

1 tests/tests/auth_test_tmp/whitelisted_upload.txt Normal file
@@ -0,0 +1 @@
Content from whitelisted user

1 tests/tests/auth_test_tmp/wrong_kind.txt Normal file
@@ -0,0 +1 @@
wrong_kind_test