Compare commits


19 Commits

Author SHA1 Message Date
Your Name
80b15e16e2 v0.4.3 - feat: Implement dynamic configuration updates without restart
- Add cache refresh mechanism for config updates
- Implement selective re-initialization for NIP-11 relay info changes
- Categorize configs as dynamic vs restart-required using requires_restart field
- Enhance admin API responses with restart requirement information
- Add comprehensive test for dynamic config updates
- Update documentation for dynamic configuration capabilities

Most relay settings can now be updated via admin API without requiring restart, improving operational flexibility while maintaining stability for critical changes.
2025-10-02 15:53:26 -04:00
Your Name
cfacedbb1a v0.4.2 - Implement NIP-11 Relay Information Document with event-based configuration - make relay info dynamically configurable via admin API 2025-10-02 11:38:28 -04:00
Your Name
c3bab033ed v0.4.1 - Fixed startup bug 2025-10-01 17:23:50 -04:00
Your Name
524f9bd84f Last push before major bug fixes 2025-10-01 14:53:20 -04:00
Your Name
4658ede9d6 feat: Implement auth rules enforcement and fix subscription filtering issues
- **Auth Rules Implementation**: Added blacklist/whitelist enforcement in websockets.c
  - Events are now checked against auth_rules table before acceptance
  - Blacklist blocks specific pubkeys, whitelist enables allow-only mode
  - Made check_database_auth_rules() public for cross-module access

- **Subscription Filtering Fixes**:
  - Added missing 'ids' filter support in SQL query building
  - Fixed test expectations to not require exact event counts for kind filters
  - Improved filter validation and error handling

- **Ephemeral Events Compliance**:
  - Modified SQL queries to exclude kinds 20000-29999 from historical queries
  - Maintains broadcasting to active subscribers while preventing storage/retrieval
  - Ensures NIP-01 compliance for ephemeral event handling

- **Comprehensive Testing**:
  - Created white_black_test.sh with full blacklist/whitelist functionality testing
  - Tests verify blocked posting for blacklisted users
  - Tests verify whitelist-only mode when whitelist rules exist
  - Includes proper auth rule clearing between test phases

- **Code Quality**:
  - Added proper function declarations to websockets.h
  - Improved error handling and logging throughout
  - Enhanced test script with clear pass/fail reporting
2025-09-30 15:17:59 -04:00
Your Name
f7b463aca1 Fixing whitelist and blacklist functionality 2025-09-30 15:02:49 -04:00
Your Name
c1a6e92b1d v0.3.19 - last save before major refactoring 2025-09-30 10:47:11 -04:00
Your Name
eefb0e427e v0.3.18 - index.html improvements 2025-09-30 07:51:23 -04:00
Your Name
c23d81b740 v0.3.17 - Embedded login button 2025-09-30 06:47:09 -04:00
Your Name
6dac231040 v0.3.16 - Admin system getting better 2025-09-30 05:32:23 -04:00
Your Name
6fd3e531c3 v0.3.15 - How can administration take so long 2025-09-27 15:50:42 -04:00
Your Name
c1c05991cf v0.3.14 - I think the admin api is finally working 2025-09-27 14:08:45 -04:00
Your Name
ab378e14d1 v0.3.13 - Working on admin system 2025-09-27 13:32:21 -04:00
Your Name
c0f9bf9ef5 v0.3.12 - Working through auth still 2025-09-25 17:33:38 -04:00
Your Name
bc6a7b3f20 Working on API 2025-09-25 16:35:16 -04:00
Your Name
036b0823b9 v0.3.11 - Working on admin api 2025-09-25 11:25:50 -04:00
Your Name
be99595bde v0.3.10 - . 2025-09-24 10:49:48 -04:00
Your Name
01836a4b4c v0.3.9 - API work 2025-09-21 15:53:03 -04:00
Your Name
9f3b3dd773 v0.3.8 - safety push 2025-09-18 10:18:15 -04:00
33 changed files with 16509 additions and 4665 deletions

3
.gitignore vendored

@@ -8,4 +8,5 @@ src/version.h
dev-config/
db/
copy_executable_local.sh
nostr_login_lite/
style_guide/

32
07.md

@@ -1,32 +0,0 @@
NIP-07
======
`window.nostr` capability for web browsers
------------------------------------------
`draft` `optional`
The `window.nostr` object may be made available by web browsers or extensions and websites or web-apps may make use of it after checking its availability.
That object must define the following methods:
```
async window.nostr.getPublicKey(): string // returns a public key as hex
async window.nostr.signEvent(event: { created_at: number, kind: number, tags: string[][], content: string }): Event // takes an event object, adds `id`, `pubkey` and `sig` and returns it
```
Aside from the two basic methods above, the following functions can also be implemented optionally:
```
async window.nostr.nip04.encrypt(pubkey, plaintext): string // returns ciphertext and iv as specified in nip-04 (deprecated)
async window.nostr.nip04.decrypt(pubkey, ciphertext): string // takes ciphertext and iv as specified in nip-04 (deprecated)
async window.nostr.nip44.encrypt(pubkey, plaintext): string // returns ciphertext as specified in nip-44
async window.nostr.nip44.decrypt(pubkey, ciphertext): string // takes ciphertext as specified in nip-44
```
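For reference, a minimal usage sketch of the two required methods described above (the kind 1 note and its content are arbitrary example values):
```javascript
// Detect a NIP-07 provider and use it to fetch the public key and sign a note.
(async () => {
  if (!window.nostr) {
    console.warn('No NIP-07 provider (browser extension) detected');
    return;
  }
  const pubkey = await window.nostr.getPublicKey(); // hex public key
  const signed = await window.nostr.signEvent({
    created_at: Math.floor(Date.now() / 1000),
    kind: 1,
    tags: [],
    content: 'hello from a NIP-07 signer',
  });
  console.log(pubkey, signed.id, signed.sig); // id, pubkey and sig are added by the signer
})();
```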
### Recommendation to Extension Authors
To make sure that the `window.nostr` is available to nostr clients on page load, the authors who create Chromium and Firefox extensions should load their scripts by specifying `"run_at": "document_end"` in the extension's manifest.
### Implementation
See https://github.com/aljazceru/awesome-nostr#nip-07-browser-extensions.


@@ -27,7 +27,7 @@
## Critical Integration Issues
### Event-Based Configuration System
- **No traditional config files** - all configuration stored as kind 33334 Nostr events
- **No traditional config files** - all configuration stored in config table
- Admin private key shown **only once** on first startup
- Configuration changes require cryptographically signed events
- Database path determined by generated relay pubkey
@@ -35,7 +35,7 @@
### First-Time Startup Sequence
1. Relay generates admin keypair and relay keypair
2. Creates database file with relay pubkey as filename
3. Stores default configuration as kind 33334 event
3. Stores default configuration in config table
4. **CRITICAL**: Admin private key displayed once and never stored on disk
### Port Management
@@ -48,20 +48,30 @@
- Schema version 4 with JSON tag storage
- **Critical**: Event expiration filtering done at application level, not SQL level
### Configuration Event Structure
### Admin API Event Structure
```json
{
"kind": 33334,
"content": "C Nostr Relay Configuration",
"kind": 23456,
"content": "base64_nip44_encrypted_command_array",
"tags": [
["d", "<relay_pubkey>"],
["relay_description", "value"],
["max_subscriptions_per_client", "25"],
["pow_min_difficulty", "16"]
["p", "<relay_pubkey>"]
]
}
```
**Configuration Commands** (encrypted in content):
- `["relay_description", "My Relay"]`
- `["max_subscriptions_per_client", "25"]`
- `["pow_min_difficulty", "16"]`
**Auth Rule Commands** (encrypted in content):
- `["blacklist", "pubkey", "hex_pubkey_value"]`
- `["whitelist", "pubkey", "hex_pubkey_value"]`
**Query Commands** (encrypted in content):
- `["auth_query", "all"]`
- `["system_command", "system_status"]`
### Process Management
```bash
# Kill existing relay processes

120
Makefile

@@ -9,7 +9,7 @@ LIBS = -lsqlite3 -lwebsockets -lz -ldl -lpthread -lm -L/usr/local/lib -lsecp256k
BUILD_DIR = build
# Source files
MAIN_SRC = src/main.c src/config.c src/request_validator.c
MAIN_SRC = src/main.c src/config.c src/request_validator.c src/nip009.c src/nip011.c src/nip013.c src/nip040.c src/nip042.c src/websockets.c src/subscriptions.c
NOSTR_CORE_LIB = nostr_core_lib/libnostr_core_x64.a
# Architecture detection
@@ -36,10 +36,10 @@ $(NOSTR_CORE_LIB):
@echo "Building nostr_core_lib..."
cd nostr_core_lib && ./build.sh
# Generate version.h from git tags
src/version.h:
# Generate main.h from git tags
src/main.h:
@if [ -d .git ]; then \
echo "Generating version.h from git tags..."; \
echo "Generating main.h from git tags..."; \
RAW_VERSION=$$(git describe --tags --always 2>/dev/null || echo "unknown"); \
if echo "$$RAW_VERSION" | grep -q "^v[0-9]"; then \
CLEAN_VERSION=$$(echo "$$RAW_VERSION" | sed 's/^v//' | cut -d- -f1); \
@@ -51,54 +51,98 @@ src/version.h:
VERSION="v0.0.0"; \
MAJOR=0; MINOR=0; PATCH=0; \
fi; \
echo "/* Auto-generated version information */" > src/version.h; \
echo "#ifndef VERSION_H" >> src/version.h; \
echo "#define VERSION_H" >> src/version.h; \
echo "" >> src/version.h; \
echo "#define VERSION \"$$VERSION\"" >> src/version.h; \
echo "#define VERSION_MAJOR $$MAJOR" >> src/version.h; \
echo "#define VERSION_MINOR $$MINOR" >> src/version.h; \
echo "#define VERSION_PATCH $$PATCH" >> src/version.h; \
echo "" >> src/version.h; \
echo "#endif /* VERSION_H */" >> src/version.h; \
echo "Generated version.h with clean version: $$VERSION"; \
elif [ ! -f src/version.h ]; then \
echo "Git not available and version.h missing, creating fallback version.h..."; \
echo "/*" > src/main.h; \
echo " * C-Relay Main Header - Version and Metadata Information" >> src/main.h; \
echo " *" >> src/main.h; \
echo " * This header contains version information and relay metadata that is" >> src/main.h; \
echo " * automatically updated by the build system (build_and_push.sh)." >> src/main.h; \
echo " *" >> src/main.h; \
echo " * The build_and_push.sh script updates VERSION and related macros when" >> src/main.h; \
echo " * creating new releases." >> src/main.h; \
echo " */" >> src/main.h; \
echo "" >> src/main.h; \
echo "#ifndef MAIN_H" >> src/main.h; \
echo "#define MAIN_H" >> src/main.h; \
echo "" >> src/main.h; \
echo "// Version information (auto-updated by build_and_push.sh)" >> src/main.h; \
echo "#define VERSION \"$$VERSION\"" >> src/main.h; \
echo "#define VERSION_MAJOR $$MAJOR" >> src/main.h; \
echo "#define VERSION_MINOR $$MINOR" >> src/main.h; \
echo "#define VERSION_PATCH $$PATCH" >> src/main.h; \
echo "" >> src/main.h; \
echo "// Relay metadata (authoritative source for NIP-11 information)" >> src/main.h; \
echo "#define RELAY_NAME \"C-Relay\"" >> src/main.h; \
echo "#define RELAY_DESCRIPTION \"High-performance C Nostr relay with SQLite storage\"" >> src/main.h; \
echo "#define RELAY_CONTACT \"\"" >> src/main.h; \
echo "#define RELAY_SOFTWARE \"https://git.laantungir.net/laantungir/c-relay.git\"" >> src/main.h; \
echo "#define RELAY_VERSION VERSION // Use the same version as the build" >> src/main.h; \
echo "#define SUPPORTED_NIPS \"1,2,4,9,11,12,13,15,16,20,22,33,40,42\"" >> src/main.h; \
echo "#define LANGUAGE_TAGS \"\"" >> src/main.h; \
echo "#define RELAY_COUNTRIES \"\"" >> src/main.h; \
echo "#define POSTING_POLICY \"\"" >> src/main.h; \
echo "#define PAYMENTS_URL \"\"" >> src/main.h; \
echo "" >> src/main.h; \
echo "#endif /* MAIN_H */" >> src/main.h; \
echo "Generated main.h with clean version: $$VERSION"; \
elif [ ! -f src/main.h ]; then \
echo "Git not available and main.h missing, creating fallback main.h..."; \
VERSION="v0.0.0"; \
echo "/* Auto-generated version information */" > src/version.h; \
echo "#ifndef VERSION_H" >> src/version.h; \
echo "#define VERSION_H" >> src/version.h; \
echo "" >> src/version.h; \
echo "#define VERSION \"$$VERSION\"" >> src/version.h; \
echo "#define VERSION_MAJOR 0" >> src/version.h; \
echo "#define VERSION_MINOR 0" >> src/version.h; \
echo "#define VERSION_PATCH 0" >> src/version.h; \
echo "" >> src/version.h; \
echo "#endif /* VERSION_H */" >> src/version.h; \
echo "Created fallback version.h with version: $$VERSION"; \
echo "/*" > src/main.h; \
echo " * C-Relay Main Header - Version and Metadata Information" >> src/main.h; \
echo " *" >> src/main.h; \
echo " * This header contains version information and relay metadata that is" >> src/main.h; \
echo " * automatically updated by the build system (build_and_push.sh)." >> src/main.h; \
echo " *" >> src/main.h; \
echo " * The build_and_push.sh script updates VERSION and related macros when" >> src/main.h; \
echo " * creating new releases." >> src/main.h; \
echo " */" >> src/main.h; \
echo "" >> src/main.h; \
echo "#ifndef MAIN_H" >> src/main.h; \
echo "#define MAIN_H" >> src/main.h; \
echo "" >> src/main.h; \
echo "// Version information (auto-updated by build_and_push.sh)" >> src/main.h; \
echo "#define VERSION \"$$VERSION\"" >> src/main.h; \
echo "#define VERSION_MAJOR 0" >> src/main.h; \
echo "#define VERSION_MINOR 0" >> src/main.h; \
echo "#define VERSION_PATCH 0" >> src/main.h; \
echo "" >> src/main.h; \
echo "// Relay metadata (authoritative source for NIP-11 information)" >> src/main.h; \
echo "#define RELAY_NAME \"C-Relay\"" >> src/main.h; \
echo "#define RELAY_DESCRIPTION \"High-performance C Nostr relay with SQLite storage\"" >> src/main.h; \
echo "#define RELAY_CONTACT \"\"" >> src/main.h; \
echo "#define RELAY_SOFTWARE \"https://git.laantungir.net/laantungir/c-relay.git\"" >> src/main.h; \
echo "#define RELAY_VERSION VERSION // Use the same version as the build" >> src/main.h; \
echo "#define SUPPORTED_NIPS \"1,2,4,9,11,12,13,15,16,20,22,33,40,42\"" >> src/main.h; \
echo "#define LANGUAGE_TAGS \"\"" >> src/main.h; \
echo "#define RELAY_COUNTRIES \"\"" >> src/main.h; \
echo "#define POSTING_POLICY \"\"" >> src/main.h; \
echo "#define PAYMENTS_URL \"\"" >> src/main.h; \
echo "" >> src/main.h; \
echo "#endif /* MAIN_H */" >> src/main.h; \
echo "Created fallback main.h with version: $$VERSION"; \
else \
echo "Git not available, preserving existing version.h"; \
echo "Git not available, preserving existing main.h"; \
fi
# Force version.h regeneration (useful for development)
# Force main.h regeneration (useful for development)
force-version:
@echo "Force regenerating version.h..."
@rm -f src/version.h
@$(MAKE) src/version.h
@echo "Force regenerating main.h..."
@rm -f src/main.h
@$(MAKE) src/main.h
# Build the relay
$(TARGET): $(BUILD_DIR) src/version.h src/sql_schema.h $(MAIN_SRC) $(NOSTR_CORE_LIB)
$(TARGET): $(BUILD_DIR) src/main.h src/sql_schema.h $(MAIN_SRC) $(NOSTR_CORE_LIB)
@echo "Compiling C-Relay for architecture: $(ARCH)"
$(CC) $(CFLAGS) $(INCLUDES) $(MAIN_SRC) -o $(TARGET) $(NOSTR_CORE_LIB) $(LIBS)
@echo "Build complete: $(TARGET)"
# Build for specific architectures
x86: $(BUILD_DIR) src/version.h src/sql_schema.h $(MAIN_SRC) $(NOSTR_CORE_LIB)
x86: $(BUILD_DIR) src/main.h src/sql_schema.h $(MAIN_SRC) $(NOSTR_CORE_LIB)
@echo "Building C-Relay for x86_64..."
$(CC) $(CFLAGS) $(INCLUDES) $(MAIN_SRC) -o $(BUILD_DIR)/c_relay_x86 $(NOSTR_CORE_LIB) $(LIBS)
@echo "Build complete: $(BUILD_DIR)/c_relay_x86"
arm64: $(BUILD_DIR) src/version.h src/sql_schema.h $(MAIN_SRC) $(NOSTR_CORE_LIB)
arm64: $(BUILD_DIR) src/main.h src/sql_schema.h $(MAIN_SRC) $(NOSTR_CORE_LIB)
@echo "Cross-compiling C-Relay for ARM64..."
@if ! command -v aarch64-linux-gnu-gcc >/dev/null 2>&1; then \
echo "ERROR: ARM64 cross-compiler not found."; \
@@ -171,7 +215,7 @@ init-db:
# Clean build artifacts
clean:
rm -rf $(BUILD_DIR)
rm -f src/version.h
rm -f src/main.h
@echo "Clean complete"
# Clean everything including nostr_core_lib
@@ -210,6 +254,6 @@ help:
@echo " make check-toolchain # Check what compilers are available"
@echo " make test # Run tests"
@echo " make init-db # Set up database"
@echo " make force-version # Force regenerate version.h from git"
@echo " make force-version # Force regenerate main.h from git"
.PHONY: all x86 arm64 test init-db clean clean-all install-deps install-cross-tools install-arm64-deps check-toolchain help force-version

206
README.md

@@ -22,4 +22,210 @@ Do NOT modify the formatting, add emojis, or change the text. Keep the simple fo
- [ ] NIP-50: Keywords filter
- [ ] NIP-70: Protected Events
## 🔧 Administrator API
C-Relay uses an **event-based administration system**: all configuration and management commands are sent as Nostr events signed with the admin private key generated during first startup. All admin commands use **NIP-44 encrypted command arrays** for security and compatibility.
### Authentication
All admin commands require signing with the admin private key displayed during first-time startup. **Save this key securely** - it cannot be recovered and is needed for all administrative operations.
### Event Structure
All admin commands use the same unified event structure with NIP-44 encrypted content:
**Admin Command Event:**
```json
{
"id": "event_id",
"pubkey": "admin_public_key",
"created_at": 1234567890,
"kind": 23456,
"content": "AqHBUgcM7dXFYLQuDVzGwMST1G8jtWYyVvYxXhVGEu4nAb4LVw...",
"tags": [
["p", "relay_public_key"]
],
"sig": "event_signature"
}
```
The `content` field contains a NIP-44 encrypted JSON array representing the command.
**Admin Response Event:**
```json
["EVENT", "temp_sub_id", {
"id": "response_event_id",
"pubkey": "relay_public_key",
"created_at": 1234567890,
"kind": 23457,
"content": "BpKCVhfN8eYtRmPqSvWxZnMkL2gHjUiOp3rTyEwQaS5dFg...",
"tags": [
["p", "admin_public_key"]
],
"sig": "response_event_signature"
}]
```
The `content` field contains a NIP-44 encrypted JSON response object.
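For concreteness, the sketch below shows one way an admin client could build and send such a command event from a browser, assuming the nostr-tools bundle (`window.NostrTools`) is loaded and mirroring the `nip44`/`finalizeEvent` usage found elsewhere in this repository's JavaScript. The keys, relay URL, and command are placeholders, and the hex-to-bytes helper is an illustrative assumption rather than part of the relay's API.
```javascript
// Minimal admin-client sketch (browser, window.NostrTools bundle). Replace the
// placeholder values with your relay's actual keys and URL.
const { nip44, finalizeEvent } = window.NostrTools;

const ADMIN_SECKEY_HEX = '<64-char admin private key shown at first startup>';
const RELAY_PUBKEY_HEX = '<64-char relay public key>';

// finalizeEvent() expects the secret key as bytes, so convert the hex string.
const adminSecKey = Uint8Array.from(
  ADMIN_SECKEY_HEX.match(/.{2}/g).map((byte) => parseInt(byte, 16))
);

// Plaintext command array, e.g. query all auth rules.
const command = ['auth_query', 'all'];

// NIP-44 conversation key derived from the admin private key and relay public key.
const convKey = nip44.getConversationKey(adminSecKey, RELAY_PUBKEY_HEX);

// Kind 23456 admin command: encrypted command array, tagged to the relay, signed by the admin key.
const adminEvent = finalizeEvent(
  {
    kind: 23456,
    created_at: Math.floor(Date.now() / 1000),
    tags: [['p', RELAY_PUBKEY_HEX]],
    content: nip44.encrypt(JSON.stringify(command), convKey),
  },
  adminSecKey
);

// Send it to the relay over a plain WebSocket connection.
const ws = new WebSocket('ws://localhost:8888');
ws.onopen = () => ws.send(JSON.stringify(['EVENT', adminEvent]));
```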
### Admin Commands
All commands are sent as NIP-44 encrypted JSON arrays in the event content. The following table lists all available commands:
| Command Type | Command Format | Description |
|--------------|----------------|-------------|
| **Configuration Management** |
| `config_update` | `["config_update", [{"key": "auth_enabled", "value": "true", "data_type": "boolean", "category": "auth"}, {"key": "relay_description", "value": "My Relay", "data_type": "string", "category": "relay"}, ...]]` | Update relay configuration parameters (supports multiple updates) |
| `config_query` | `["config_query", "all"]` | Query all configuration parameters |
| **Auth Rules Management** |
| `auth_add_blacklist` | `["blacklist", "pubkey", "abc123..."]` | Add pubkey to blacklist |
| `auth_add_whitelist` | `["whitelist", "pubkey", "def456..."]` | Add pubkey to whitelist |
| `auth_delete_rule` | `["delete_auth_rule", "blacklist", "pubkey", "abc123..."]` | Delete specific auth rule |
| `auth_query_all` | `["auth_query", "all"]` | Query all auth rules |
| `auth_query_type` | `["auth_query", "whitelist"]` | Query specific rule type |
| `auth_query_pattern` | `["auth_query", "pattern", "abc123..."]` | Query specific pattern |
| **System Commands** |
| `system_clear_auth` | `["system_command", "clear_all_auth_rules"]` | Clear all auth rules |
| `system_status` | `["system_command", "system_status"]` | Get system status |
### Available Configuration Keys
**Basic Relay Settings:**
- `relay_name`: Relay name (displayed in NIP-11)
- `relay_description`: Relay description text
- `relay_contact`: Contact information
- `relay_software`: Software URL
- `relay_version`: Software version
- `supported_nips`: Comma-separated list of supported NIP numbers (e.g., "1,2,4,9,11,12,13,15,16,20,22,33,40,42")
- `language_tags`: Comma-separated list of supported language tags (e.g., "en,es,fr" or "*" for all)
- `relay_countries`: Comma-separated list of supported country codes (e.g., "US,CA,MX" or "*" for all)
- `posting_policy`: Posting policy URL or text
- `payments_url`: Payment URL for premium features
- `max_connections`: Maximum concurrent connections
- `max_subscriptions_per_client`: Max subscriptions per client
- `max_event_tags`: Maximum tags per event
- `max_content_length`: Maximum event content length
**Authentication & Access Control:**
- `auth_enabled`: Enable whitelist/blacklist auth rules (`true`/`false`)
- `nip42_auth_required`: Enable NIP-42 cryptographic authentication (`true`/`false`)
- `nip42_auth_required_kinds`: Event kinds requiring NIP-42 auth (comma-separated)
- `nip42_challenge_timeout`: NIP-42 challenge expiration seconds
**Proof of Work & Validation:**
- `pow_min_difficulty`: Minimum proof-of-work difficulty
- `nip40_expiration_enabled`: Enable event expiration (`true`/`false`)
### Dynamic Configuration Updates
C-Relay supports **dynamic configuration updates** without requiring a restart for most settings. Configuration parameters are categorized as either **dynamic** (can be updated immediately) or **restart-required** (require relay restart to take effect).
**Dynamic Configuration Parameters (No Restart Required):**
- All relay information (NIP-11) settings: `relay_name`, `relay_description`, `relay_contact`, `relay_software`, `relay_version`, `supported_nips`, `language_tags`, `relay_countries`, `posting_policy`, `payments_url`
- Authentication settings: `auth_enabled`, `nip42_auth_required`, `nip42_auth_required_kinds`, `nip42_challenge_timeout`
- Subscription limits: `max_subscriptions_per_client`, `max_total_subscriptions`
- Event validation limits: `max_event_tags`, `max_content_length`, `max_message_length`
- Proof of Work settings: `pow_min_difficulty`, `pow_mode`
- Event expiration settings: `nip40_expiration_enabled`, `nip40_expiration_strict`, `nip40_expiration_filter`, `nip40_expiration_grace_period`
**Restart-Required Configuration Parameters:**
- Connection settings: `max_connections`, `relay_port`
- Database and core system settings
When updating configuration, the admin API response indicates whether a restart is required for each parameter. Dynamic updates take effect immediately and are reflected in the NIP-11 relay information document without a restart.
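As an illustration (values are placeholders), a dynamic update could be expressed with the plaintext command array below; it is NIP-44 encrypted into the `content` of a kind 23456 event exactly as sketched earlier.
```javascript
// Plaintext `config_update` command for dynamic parameters; both keys below take
// effect immediately, without a restart.
const configUpdate = [
  'config_update',
  [
    { key: 'relay_description', value: 'My Updated Relay', data_type: 'string', category: 'relay' },
    { key: 'auth_enabled', value: 'true', data_type: 'boolean', category: 'auth' },
  ],
];
// Restart-required keys (e.g. max_connections) can be included in the same array;
// the relay's response flags each entry that still needs a restart.
```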
### Response Format
All admin commands return **signed EVENT responses** (kind 23457) via WebSocket, following the standard Nostr protocol. The response content is a NIP-44 encrypted JSON object with structured data.
#### Response Examples
**Success Response:**
```json
["EVENT", "temp_sub_id", {
"id": "response_event_id",
"pubkey": "relay_public_key",
"created_at": 1234567890,
"kind": 23457,
"content": "nip44 encrypted:{\"query_type\": \"config_update\", \"status\": \"success\", \"message\": \"Operation completed successfully\", \"timestamp\": 1234567890}",
"tags": [
["p", "admin_public_key"]
],
"sig": "response_event_signature"
}]
```
**Error Response:**
```json
["EVENT", "temp_sub_id", {
"id": "response_event_id",
"pubkey": "relay_public_key",
"created_at": 1234567890,
"kind": 23457,
"content": "nip44 encrypted:{\"query_type\": \"config_update\", \"status\": \"error\", \"error\": \"invalid configuration value\", \"timestamp\": 1234567890}",
"tags": [
["p", "admin_public_key"]
],
"sig": "response_event_signature"
}]
```
**Auth Rules Query Response:**
```json
["EVENT", "temp_sub_id", {
"id": "response_event_id",
"pubkey": "relay_public_key",
"created_at": 1234567890,
"kind": 23457,
"content": "nip44 encrypted:{\"query_type\": \"auth_rules_all\", \"total_results\": 2, \"timestamp\": 1234567890, \"data\": [{\"rule_type\": \"blacklist\", \"pattern_type\": \"pubkey\", \"pattern_value\": \"abc123...\", \"action\": \"allow\"}]}",
"tags": [
["p", "admin_public_key"]
],
"sig": "response_event_signature"
}]
```
**Configuration Query Response:**
```json
["EVENT", "temp_sub_id", {
"id": "response_event_id",
"pubkey": "relay_public_key",
"created_at": 1234567890,
"kind": 23457,
"content": "nip44 encrypted:{\"query_type\": \"config_all\", \"total_results\": 27, \"timestamp\": 1234567890, \"data\": [{\"key\": \"auth_enabled\", \"value\": \"false\", \"data_type\": \"boolean\", \"category\": \"auth\", \"description\": \"Enable NIP-42 authentication\"}, {\"key\": \"relay_description\", \"value\": \"My Relay\", \"data_type\": \"string\", \"category\": \"relay\", \"description\": \"Relay description text\"}]}",
"tags": [
["p", "admin_public_key"]
],
"sig": "response_event_signature"
}]
```
**Configuration Update Success Response:**
```json
["EVENT", "temp_sub_id", {
"id": "response_event_id",
"pubkey": "relay_public_key",
"created_at": 1234567890,
"kind": 23457,
"content": "nip44 encrypted:{\"query_type\": \"config_update\", \"total_results\": 2, \"timestamp\": 1234567890, \"status\": \"success\", \"data\": [{\"key\": \"auth_enabled\", \"value\": \"true\", \"status\": \"updated\"}, {\"key\": \"relay_description\", \"value\": \"My Updated Relay\", \"status\": \"updated\"}]}",
"tags": [
["p", "admin_public_key"]
],
"sig": "response_event_signature"
}]
```
**Configuration Update Error Response:**
```json
["EVENT", "temp_sub_id", {
"id": "response_event_id",
"pubkey": "relay_public_key",
"created_at": 1234567890,
"kind": 23457,
"content": "nip44 encrypted:{\"query_type\": \"config_update\", \"status\": \"error\", \"error\": \"field validation failed: invalid port number '99999' (must be 1-65535)\", \"timestamp\": 1234567890}",
"tags": [
["p", "admin_public_key"]
],
"sig": "response_event_signature"
}]
```
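Continuing the client sketch from the event-structure section above, the handler below shows how an admin client could decrypt such a response; `convKey` and `ws` are the conversation key and WebSocket from that sketch, and the field names mirror the examples above.
```javascript
// Decrypt kind 23457 admin responses with the same conversation key used for the command.
ws.onmessage = (msg) => {
  const data = JSON.parse(msg.data);
  if (data[0] !== 'EVENT') return;

  const responseEvent = data[2];
  if (responseEvent.kind !== 23457) return;

  const response = JSON.parse(nip44.decrypt(responseEvent.content, convKey));
  console.log(response.query_type, response.status, response.data);
};
```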

File diff suppressed because it is too large

@@ -8,7 +8,7 @@
* Two-file architecture:
* 1. Load nostr.bundle.js (official nostr-tools bundle)
* 2. Load nostr-lite.js (this file - NOSTR_LOGIN_LITE library with CSS-only themes)
* Generated on: 2025-09-16T15:52:30.145Z
* Generated on: 2025-09-16T22:12:00.192Z
*/
// Verify dependencies are loaded
@@ -20,509 +20,10 @@ if (typeof window !== 'undefined') {
console.log('NOSTR_LOGIN_LITE: Dependencies verified ✓');
console.log('NOSTR_LOGIN_LITE: NostrTools available with keys:', Object.keys(window.NostrTools));
console.log('NOSTR_LOGIN_LITE: NIP-06 available:', !!window.NostrTools.nip06);
console.log('NOSTR_LOGIN_LITE: NIP-46 available:', !!window.NostrTools.nip46);
}
// ===== NIP-46 Extension Integration =====
// Add NIP-46 functionality to NostrTools if not already present
if (typeof window.NostrTools !== 'undefined' && !window.NostrTools.nip46) {
console.log('NOSTR_LOGIN_LITE: Adding NIP-46 extension to NostrTools');
const { nip44, generateSecretKey, getPublicKey, finalizeEvent, verifyEvent, utils } = window.NostrTools;
// NIP-05 regex for parsing
const NIP05_REGEX = /^(?:([\w.+-]+)@)?([\w_-]+(\.[\w_-]+)+)$/;
const BUNKER_REGEX = /^bunker:\/\/([0-9a-f]{64})\??([?\/\w:.=&%-]*)$/;
const EMAIL_REGEX = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
// Event kinds
const NostrConnect = 24133;
const ClientAuth = 22242;
const Handlerinformation = 31990;
// Fetch implementation
let _fetch;
try {
_fetch = fetch;
} catch {
_fetch = null;
}
function useFetchImplementation(fetchImplementation) {
_fetch = fetchImplementation;
}
// Simple Pool implementation for NIP-46
class SimplePool {
constructor() {
this.relays = new Map();
this.subscriptions = new Map();
}
async ensureRelay(url) {
if (!this.relays.has(url)) {
console.log(`NIP-46: Connecting to relay ${url}`);
const ws = new WebSocket(url);
const relay = {
ws,
connected: false,
subscriptions: new Map()
};
this.relays.set(url, relay);
// Wait for connection with proper event handlers
await new Promise((resolve, reject) => {
const timeout = setTimeout(() => {
console.error(`NIP-46: Connection timeout for ${url}`);
reject(new Error(`Connection timeout to ${url}`));
}, 10000); // 10 second timeout
ws.onopen = () => {
console.log(`NIP-46: Successfully connected to relay ${url}, WebSocket state: ${ws.readyState}`);
relay.connected = true;
clearTimeout(timeout);
resolve();
};
ws.onerror = (error) => {
console.error(`NIP-46: Failed to connect to ${url}:`, error);
clearTimeout(timeout);
reject(new Error(`Failed to connect to ${url}: ${error.message || 'Connection failed'}`));
};
ws.onclose = (event) => {
console.log(`NIP-46: Disconnected from relay ${url}:`, event.code, event.reason);
relay.connected = false;
if (this.relays.has(url)) {
this.relays.delete(url);
}
clearTimeout(timeout);
reject(new Error(`Connection closed during setup: ${event.reason || 'Unknown reason'}`));
};
});
} else {
const relay = this.relays.get(url);
// Verify the existing connection is still open
if (!relay.connected || relay.ws.readyState !== WebSocket.OPEN) {
console.log(`NIP-46: Reconnecting to relay ${url}`);
this.relays.delete(url);
return await this.ensureRelay(url); // Recursively reconnect
}
}
const relay = this.relays.get(url);
console.log(`NIP-46: Relay ${url} ready, WebSocket state: ${relay.ws.readyState}`);
return relay;
}
subscribe(relays, filters, params = {}) {
const subId = Math.random().toString(36).substring(7);
relays.forEach(async (url) => {
try {
const relay = await this.ensureRelay(url);
relay.ws.onmessage = (event) => {
try {
const data = JSON.parse(event.data);
if (data[0] === 'EVENT' && data[1] === subId) {
params.onevent?.(data[2]);
} else if (data[0] === 'EOSE' && data[1] === subId) {
params.oneose?.();
}
} catch (err) {
console.warn('Failed to parse message:', err);
}
};
// Ensure filters is an array
const filtersArray = Array.isArray(filters) ? filters : [filters];
const reqMsg = JSON.stringify(['REQ', subId, ...filtersArray]);
relay.ws.send(reqMsg);
} catch (err) {
console.warn('Failed to connect to relay:', url, err);
}
});
return {
close: () => {
relays.forEach(async (url) => {
const relay = this.relays.get(url);
if (relay?.connected) {
relay.ws.send(JSON.stringify(['CLOSE', subId]));
}
});
}
};
}
async publish(relays, event) {
console.log(`NIP-46: Publishing event to ${relays.length} relays:`, event);
const promises = relays.map(async (url) => {
try {
console.log(`NIP-46: Attempting to publish to ${url}`);
const relay = await this.ensureRelay(url);
return new Promise((resolve, reject) => {
const timeout = setTimeout(() => {
console.error(`NIP-46: Publish timeout to ${url}`);
reject(new Error(`Publish timeout to ${url}`));
}, 10000); // Increased timeout to 10 seconds
// Set up message handler for this specific event
const messageHandler = (msg) => {
try {
const data = JSON.parse(msg.data);
if (data[0] === 'OK' && data[1] === event.id) {
clearTimeout(timeout);
relay.ws.removeEventListener('message', messageHandler);
if (data[2]) {
console.log(`NIP-46: Publish success to ${url}:`, data[3]);
resolve(data[3]);
} else {
console.error(`NIP-46: Publish rejected by ${url}:`, data[3]);
reject(new Error(`Publish rejected: ${data[3]}`));
}
}
} catch (err) {
console.error(`NIP-46: Error parsing message from ${url}:`, err);
clearTimeout(timeout);
relay.ws.removeEventListener('message', messageHandler);
reject(err);
}
};
relay.ws.addEventListener('message', messageHandler);
// Double-check WebSocket state before sending
console.log(`NIP-46: About to publish to ${url}, WebSocket state: ${relay.ws.readyState} (0=CONNECTING, 1=OPEN, 2=CLOSING, 3=CLOSED)`);
if (relay.ws.readyState === WebSocket.OPEN) {
console.log(`NIP-46: Sending event to ${url}`);
relay.ws.send(JSON.stringify(['EVENT', event]));
} else {
console.error(`NIP-46: WebSocket not ready for ${url}, state: ${relay.ws.readyState}`);
clearTimeout(timeout);
relay.ws.removeEventListener('message', messageHandler);
reject(new Error(`WebSocket not ready for ${url}, state: ${relay.ws.readyState}`));
}
});
} catch (err) {
console.error(`NIP-46: Failed to publish to ${url}:`, err);
return Promise.reject(new Error(`Failed to publish to ${url}: ${err.message}`));
}
});
const results = await Promise.allSettled(promises);
console.log(`NIP-46: Publish results:`, results);
return results;
}
async querySync(relays, filter, params = {}) {
return new Promise((resolve) => {
const events = [];
this.subscribe(relays, [filter], {
...params,
onevent: (event) => events.push(event),
oneose: () => resolve(events)
});
});
}
}
// Bunker URL utilities
function toBunkerURL(bunkerPointer) {
let bunkerURL = new URL(`bunker://${bunkerPointer.pubkey}`);
bunkerPointer.relays.forEach((relay) => {
bunkerURL.searchParams.append('relay', relay);
});
if (bunkerPointer.secret) {
bunkerURL.searchParams.set('secret', bunkerPointer.secret);
}
return bunkerURL.toString();
}
async function parseBunkerInput(input) {
let match = input.match(BUNKER_REGEX);
if (match) {
try {
const pubkey = match[1];
const qs = new URLSearchParams(match[2]);
return {
pubkey,
relays: qs.getAll('relay'),
secret: qs.get('secret')
};
} catch (_err) {
// Continue to NIP-05 parsing
}
}
return queryBunkerProfile(input);
}
async function queryBunkerProfile(nip05) {
if (!_fetch) {
throw new Error('Fetch implementation not available');
}
const match = nip05.match(NIP05_REGEX);
if (!match) return null;
const [_, name = '_', domain] = match;
try {
const url = `https://${domain}/.well-known/nostr.json?name=${name}`;
const res = await (await _fetch(url, { redirect: 'error' })).json();
let pubkey = res.names[name];
let relays = res.nip46[pubkey] || [];
return { pubkey, relays, secret: null };
} catch (_err) {
return null;
}
}
// BunkerSigner class
class BunkerSigner {
constructor(clientSecretKey, bp, params = {}) {
if (bp.relays.length === 0) {
throw new Error('no relays are specified for this bunker');
}
this.params = params;
this.pool = params.pool || new SimplePool();
this.secretKey = clientSecretKey;
this.conversationKey = nip44.getConversationKey(clientSecretKey, bp.pubkey);
this.bp = bp;
this.isOpen = false;
this.idPrefix = Math.random().toString(36).substring(7);
this.serial = 0;
this.listeners = {};
this.waitingForAuth = {};
this.ready = false;
this.readyPromise = this.setupSubscription(params);
}
async setupSubscription(params) {
console.log('NIP-46: Setting up subscription to relays:', this.bp.relays);
const listeners = this.listeners;
const waitingForAuth = this.waitingForAuth;
const convKey = this.conversationKey;
// Ensure all relays are connected first
await Promise.all(this.bp.relays.map(url => this.pool.ensureRelay(url)));
console.log('NIP-46: All relays connected, setting up subscription');
this.subCloser = this.pool.subscribe(
this.bp.relays,
[{ kinds: [NostrConnect], authors: [this.bp.pubkey], '#p': [getPublicKey(this.secretKey)] }],
{
onevent: async (event) => {
const o = JSON.parse(nip44.decrypt(event.content, convKey));
const { id, result, error } = o;
if (result === 'auth_url' && waitingForAuth[id]) {
delete waitingForAuth[id];
if (params.onauth) {
params.onauth(error);
} else {
console.warn(
`NIP-46: remote signer ${this.bp.pubkey} tried to send an "auth_url"='${error}' but there was no onauth() callback configured.`
);
}
return;
}
let handler = listeners[id];
if (handler) {
if (error) handler.reject(error);
else if (result) handler.resolve(result);
delete listeners[id];
}
},
onclose: () => {
this.subCloser = undefined;
}
}
);
this.isOpen = true;
this.ready = true;
console.log('NIP-46: BunkerSigner setup complete and ready');
}
async ensureReady() {
if (!this.ready) {
console.log('NIP-46: Waiting for BunkerSigner to be ready...');
await this.readyPromise;
}
}
async close() {
this.isOpen = false;
this.subCloser?.close();
}
async sendRequest(method, params) {
return new Promise(async (resolve, reject) => {
try {
await this.ensureReady(); // Wait for BunkerSigner to be ready
if (!this.isOpen) {
throw new Error('this signer is not open anymore, create a new one');
}
if (!this.subCloser) {
await this.setupSubscription(this.params);
}
this.serial++;
const id = `${this.idPrefix}-${this.serial}`;
const encryptedContent = nip44.encrypt(JSON.stringify({ id, method, params }), this.conversationKey);
const verifiedEvent = finalizeEvent(
{
kind: NostrConnect,
tags: [['p', this.bp.pubkey]],
content: encryptedContent,
created_at: Math.floor(Date.now() / 1000)
},
this.secretKey
);
this.listeners[id] = { resolve, reject };
this.waitingForAuth[id] = true;
console.log(`NIP-46: Sending ${method} request with id ${id}`);
const publishResults = await this.pool.publish(this.bp.relays, verifiedEvent);
// Check if at least one publish succeeded
const hasSuccess = publishResults.some(result => result.status === 'fulfilled');
if (!hasSuccess) {
throw new Error('Failed to publish to any relay');
}
console.log(`NIP-46: ${method} request sent successfully`);
} catch (err) {
console.error(`NIP-46: sendRequest ${method} failed:`, err);
reject(err);
}
});
}
async ping() {
let resp = await this.sendRequest('ping', []);
if (resp !== 'pong') {
throw new Error(`result is not pong: ${resp}`);
}
}
async connect() {
await this.sendRequest('connect', [this.bp.pubkey, this.bp.secret || '']);
}
async getPublicKey() {
if (!this.cachedPubKey) {
this.cachedPubKey = await this.sendRequest('get_public_key', []);
}
return this.cachedPubKey;
}
async signEvent(event) {
let resp = await this.sendRequest('sign_event', [JSON.stringify(event)]);
let signed = JSON.parse(resp);
if (verifyEvent(signed)) {
return signed;
} else {
throw new Error(`event returned from bunker is improperly signed: ${JSON.stringify(signed)}`);
}
}
async nip04Encrypt(thirdPartyPubkey, plaintext) {
return await this.sendRequest('nip04_encrypt', [thirdPartyPubkey, plaintext]);
}
async nip04Decrypt(thirdPartyPubkey, ciphertext) {
return await this.sendRequest('nip04_decrypt', [thirdPartyPubkey, ciphertext]);
}
async nip44Encrypt(thirdPartyPubkey, plaintext) {
return await this.sendRequest('nip44_encrypt', [thirdPartyPubkey, plaintext]);
}
async nip44Decrypt(thirdPartyPubkey, ciphertext) {
return await this.sendRequest('nip44_decrypt', [thirdPartyPubkey, ciphertext]);
}
}
async function createAccount(bunker, params, username, domain, email, localSecretKey = generateSecretKey()) {
if (email && !EMAIL_REGEX.test(email)) {
throw new Error('Invalid email');
}
let rpc = new BunkerSigner(localSecretKey, bunker.bunkerPointer, params);
let pubkey = await rpc.sendRequest('create_account', [username, domain, email || '']);
rpc.bp.pubkey = pubkey;
await rpc.connect();
return rpc;
}
async function fetchBunkerProviders(pool, relays) {
const events = await pool.querySync(relays, {
kinds: [Handlerinformation],
'#k': [NostrConnect.toString()]
});
events.sort((a, b) => b.created_at - a.created_at);
const validatedBunkers = await Promise.all(
events.map(async (event, i) => {
try {
const content = JSON.parse(event.content);
try {
if (events.findIndex((ev) => JSON.parse(ev.content).nip05 === content.nip05) !== i) {
return undefined;
}
} catch (err) {
// Continue processing
}
const bp = await queryBunkerProfile(content.nip05);
if (bp && bp.pubkey === event.pubkey && bp.relays.length) {
return {
bunkerPointer: bp,
nip05: content.nip05,
domain: content.nip05.split('@')[1],
name: content.name || content.display_name,
picture: content.picture,
about: content.about,
website: content.website,
local: false
};
}
} catch (err) {
return undefined;
}
})
);
return validatedBunkers.filter((b) => b !== undefined);
}
// Extend NostrTools with NIP-46 functionality
window.NostrTools.nip46 = {
BunkerSigner,
parseBunkerInput,
toBunkerURL,
queryBunkerProfile,
createAccount,
fetchBunkerProviders,
useFetchImplementation,
BUNKER_REGEX,
SimplePool
};
console.log('NIP-46 extension loaded successfully');
console.log('Available: NostrTools.nip46');
}
// ======================================
// NOSTR_LOGIN_LITE Components
// ======================================
@@ -854,7 +355,7 @@ class Modal {
overflow: hidden;
`;
} else {
// Modal content: centered with margin
// Modal content: centered with margin, no fixed height
modalContent.style.cssText = `
position: relative;
background: var(--nl-secondary-color);
@@ -864,7 +365,6 @@ class Modal {
margin: 50px auto;
border-radius: var(--nl-border-radius, 15px);
border: var(--nl-border-width) solid var(--nl-primary-color);
max-height: 600px;
overflow: hidden;
`;
}
@@ -929,8 +429,6 @@ class Modal {
this.modalBody = document.createElement('div');
this.modalBody.style.cssText = `
padding: 24px;
overflow-y: auto;
max-height: 500px;
background: transparent;
font-family: var(--nl-font-family, 'Courier New', monospace);
`;
@@ -1019,6 +517,16 @@ class Modal {
});
}
// Seed Phrase option - only show if explicitly enabled
if (this.options?.methods?.seedphrase === true) {
options.push({
type: 'seedphrase',
title: 'Seed Phrase',
description: 'Import from mnemonic seed phrase',
icon: '🌱'
});
}
// Nostr Connect option (check both 'connect' and 'remote' for compatibility)
if (this.options?.methods?.connect !== false && this.options?.methods?.remote !== false) {
options.push({
@@ -1076,6 +584,27 @@ class Modal {
button.style.background = 'var(--nl-secondary-color)';
};
const iconDiv = document.createElement('div');
// Replace emoji icons with text-based ones
const iconMap = {
'🔌': '[EXT]',
'🔑': '[KEY]',
'🌱': '[SEED]',
'🌐': '[NET]',
'👁️': '[VIEW]',
'📱': '[SMS]'
};
iconDiv.textContent = iconMap[option.icon] || option.icon;
iconDiv.style.cssText = `
font-size: 16px;
font-weight: bold;
margin-right: 16px;
width: 50px;
text-align: center;
color: var(--nl-primary-color);
font-family: var(--nl-font-family, 'Courier New', monospace);
`;
const contentDiv = document.createElement('div');
contentDiv.style.cssText = 'flex: 1; text-align: left;';
@@ -1099,6 +628,7 @@ class Modal {
contentDiv.appendChild(titleDiv);
contentDiv.appendChild(descDiv);
button.appendChild(iconDiv);
button.appendChild(contentDiv);
this.modalBody.appendChild(button);
});
@@ -1115,6 +645,9 @@ class Modal {
case 'local':
this._showLocalKeyScreen();
break;
case 'seedphrase':
this._showSeedPhraseScreen();
break;
case 'connect':
this._showConnectScreen();
break;
@@ -2159,6 +1692,287 @@ class Modal {
this._setAuthMethod('readonly');
}
_showSeedPhraseScreen() {
this.modalBody.innerHTML = '';
const title = document.createElement('h3');
title.textContent = 'Import from Seed Phrase';
title.style.cssText = 'margin: 0 0 16px 0; font-size: 18px; font-weight: 600;';
const description = document.createElement('p');
description.textContent = 'Enter your 12 or 24-word mnemonic seed phrase to derive Nostr accounts:';
description.style.cssText = 'margin-bottom: 12px; color: #6b7280; font-size: 14px;';
const textarea = document.createElement('textarea');
// Remove default placeholder text as requested
textarea.placeholder = '';
textarea.style.cssText = `
width: 100%;
height: 100px;
padding: 12px;
border: 1px solid #d1d5db;
border-radius: 6px;
margin-bottom: 12px;
resize: none;
font-family: monospace;
font-size: 14px;
box-sizing: border-box;
`;
// Add real-time mnemonic validation
const formatHint = document.createElement('div');
formatHint.style.cssText = 'margin-bottom: 16px; font-size: 12px; color: #6b7280; min-height: 16px;';
textarea.oninput = () => {
const value = textarea.value.trim();
if (!value) {
formatHint.textContent = '';
return;
}
const isValid = this._validateMnemonic(value);
if (isValid) {
const wordCount = value.split(/\s+/).length;
formatHint.textContent = `✅ Valid ${wordCount}-word mnemonic detected`;
formatHint.style.color = '#059669';
} else {
formatHint.textContent = '❌ Invalid mnemonic - must be 12 or 24 valid BIP-39 words';
formatHint.style.color = '#dc2626';
}
};
// Generate new seed phrase button
const generateButton = document.createElement('button');
generateButton.textContent = 'Generate New Seed Phrase';
generateButton.onclick = () => this._generateNewSeedPhrase(textarea, formatHint);
generateButton.style.cssText = this._getButtonStyle() + 'margin-bottom: 12px;';
const importButton = document.createElement('button');
importButton.textContent = 'Import Accounts';
importButton.onclick = () => this._importFromSeedPhrase(textarea.value);
importButton.style.cssText = this._getButtonStyle();
const backButton = document.createElement('button');
backButton.textContent = 'Back';
backButton.onclick = () => this._renderLoginOptions();
backButton.style.cssText = this._getButtonStyle('secondary') + 'margin-top: 12px;';
this.modalBody.appendChild(title);
this.modalBody.appendChild(description);
this.modalBody.appendChild(textarea);
this.modalBody.appendChild(formatHint);
this.modalBody.appendChild(generateButton);
this.modalBody.appendChild(importButton);
this.modalBody.appendChild(backButton);
}
_generateNewSeedPhrase(textarea, formatHint) {
try {
// Check if NIP-06 is available
if (!window.NostrTools?.nip06) {
throw new Error('NIP-06 not available in bundle');
}
// Generate a random 12-word mnemonic using NostrTools
const mnemonic = window.NostrTools.nip06.generateSeedWords();
// Set the generated mnemonic in the textarea
textarea.value = mnemonic;
// Trigger validation to show it's valid
const wordCount = mnemonic.split(/\s+/).length;
formatHint.textContent = `✅ Generated valid ${wordCount}-word mnemonic`;
formatHint.style.color = '#059669';
console.log('Generated new seed phrase:', wordCount, 'words');
} catch (error) {
console.error('Failed to generate seed phrase:', error);
formatHint.textContent = '❌ Failed to generate seed phrase - NIP-06 not available';
formatHint.style.color = '#dc2626';
}
}
_validateMnemonic(mnemonic) {
try {
// Check if NIP-06 is available
if (!window.NostrTools?.nip06) {
console.error('NIP-06 not available in bundle');
return false;
}
const words = mnemonic.trim().split(/\s+/);
// Must be 12 or 24 words
if (words.length !== 12 && words.length !== 24) {
return false;
}
// Try to validate using NostrTools nip06 - this will throw if invalid
window.NostrTools.nip06.privateKeyFromSeedWords(mnemonic, '', 0);
return true;
} catch (error) {
console.log('Mnemonic validation failed:', error.message);
return false;
}
}
_importFromSeedPhrase(mnemonic) {
try {
const trimmed = mnemonic.trim();
if (!trimmed) {
throw new Error('Please enter a mnemonic seed phrase');
}
// Validate the mnemonic
if (!this._validateMnemonic(trimmed)) {
throw new Error('Invalid mnemonic. Please enter a valid 12 or 24-word BIP-39 seed phrase');
}
// Generate accounts 0-5 using NIP-06
const accounts = [];
for (let i = 0; i < 6; i++) {
try {
const privateKey = window.NostrTools.nip06.privateKeyFromSeedWords(trimmed, '', i);
const publicKey = window.NostrTools.getPublicKey(privateKey);
const nsec = window.NostrTools.nip19.nsecEncode(privateKey);
const npub = window.NostrTools.nip19.npubEncode(publicKey);
accounts.push({
index: i,
privateKey,
publicKey,
nsec,
npub
});
} catch (error) {
console.error(`Failed to derive account ${i}:`, error);
}
}
if (accounts.length === 0) {
throw new Error('Failed to derive any accounts from seed phrase');
}
console.log(`Successfully derived ${accounts.length} accounts from seed phrase`);
this._showAccountSelection(accounts);
} catch (error) {
console.error('Seed phrase import failed:', error);
this._showError('Seed phrase import failed: ' + error.message);
}
}
_showAccountSelection(accounts) {
this.modalBody.innerHTML = '';
const title = document.createElement('h3');
title.textContent = 'Select Account';
title.style.cssText = 'margin: 0 0 16px 0; font-size: 18px; font-weight: 600;';
const description = document.createElement('p');
description.textContent = `Select which account to use (${accounts.length} accounts derived from seed phrase):`;
description.style.cssText = 'margin-bottom: 20px; color: #6b7280; font-size: 14px;';
this.modalBody.appendChild(title);
this.modalBody.appendChild(description);
// Create table for account selection
const table = document.createElement('table');
table.style.cssText = `
width: 100%;
border-collapse: collapse;
margin-bottom: 20px;
font-family: var(--nl-font-family, 'Courier New', monospace);
font-size: 12px;
`;
// Table header
const thead = document.createElement('thead');
thead.innerHTML = `
<tr style="background: #f3f4f6;">
<th style="padding: 8px; text-align: center; border: 1px solid #d1d5db; font-weight: bold;">#</th>
<th style="padding: 8px; text-align: left; border: 1px solid #d1d5db; font-weight: bold;">Public Key (npub)</th>
<th style="padding: 8px; text-align: center; border: 1px solid #d1d5db; font-weight: bold;">Action</th>
</tr>
`;
table.appendChild(thead);
// Table body
const tbody = document.createElement('tbody');
accounts.forEach(account => {
const row = document.createElement('tr');
row.style.cssText = 'border: 1px solid #d1d5db;';
const indexCell = document.createElement('td');
indexCell.textContent = account.index;
indexCell.style.cssText = 'padding: 8px; text-align: center; border: 1px solid #d1d5db; font-weight: bold;';
const pubkeyCell = document.createElement('td');
pubkeyCell.style.cssText = 'padding: 8px; border: 1px solid #d1d5db; font-family: monospace; word-break: break-all;';
// Show truncated npub for readability
const truncatedNpub = `${account.npub.slice(0, 12)}...${account.npub.slice(-8)}`;
pubkeyCell.innerHTML = `
<code style="background: #f3f4f6; padding: 2px 4px; border-radius: 2px;">${truncatedNpub}</code><br>
<small style="color: #6b7280;">Full: ${account.npub}</small>
`;
const actionCell = document.createElement('td');
actionCell.style.cssText = 'padding: 8px; text-align: center; border: 1px solid #d1d5db;';
const selectButton = document.createElement('button');
selectButton.textContent = 'Use';
selectButton.onclick = () => this._selectAccount(account);
selectButton.style.cssText = `
padding: 4px 12px;
font-size: 11px;
background: var(--nl-secondary-color);
color: var(--nl-primary-color);
border: 1px solid var(--nl-primary-color);
border-radius: 4px;
cursor: pointer;
font-family: var(--nl-font-family, 'Courier New', monospace);
`;
selectButton.onmouseover = () => {
selectButton.style.borderColor = 'var(--nl-accent-color)';
};
selectButton.onmouseout = () => {
selectButton.style.borderColor = 'var(--nl-primary-color)';
};
actionCell.appendChild(selectButton);
row.appendChild(indexCell);
row.appendChild(pubkeyCell);
row.appendChild(actionCell);
tbody.appendChild(row);
});
table.appendChild(tbody);
this.modalBody.appendChild(table);
// Back button
const backButton = document.createElement('button');
backButton.textContent = 'Back to Seed Phrase';
backButton.onclick = () => this._showSeedPhraseScreen();
backButton.style.cssText = this._getButtonStyle('secondary');
this.modalBody.appendChild(backButton);
}
_selectAccount(account) {
console.log('Selected account:', account.index, account.npub);
// Use the same auth method as local keys, but with seedphrase identifier
this._setAuthMethod('local', {
secret: account.nsec,
pubkey: account.publicKey,
source: 'seedphrase',
accountIndex: account.index
});
}
_showOtpScreen() {
// Placeholder for OTP functionality
this._showError('OTP/DM not yet implemented - coming soon!');
@@ -2503,13 +2317,13 @@ class FloatingTab {
// Determine which relays to use
const relays = this.options.getUserRelay.length > 0
? this.options.getUserRelay
: (this.modal?.options?.relays || ['wss://relay.damus.io', 'wss://nos.lol']);
: ['wss://relay.damus.io', 'wss://nos.lol'];
console.log('FloatingTab: Fetching profile from relays:', relays);
try {
// Create a SimplePool instance for querying
const pool = new window.NostrTools.nip46.SimplePool();
const pool = new window.NostrTools.SimplePool();
// Query for kind 0 (user metadata) events
const events = await pool.querySync(relays, {
@@ -2532,9 +2346,27 @@ class FloatingTab {
const profile = JSON.parse(latestEvent.content);
console.log('FloatingTab: Parsed profile:', profile);
// Return relevant profile fields
// Find the best name from any key containing "name" (case-insensitive)
let bestName = null;
const nameKeys = Object.keys(profile).filter(key =>
key.toLowerCase().includes('name') &&
typeof profile[key] === 'string' &&
profile[key].trim().length > 0
);
if (nameKeys.length > 0) {
// Find the shortest name value
bestName = nameKeys
.map(key => profile[key].trim())
.reduce((shortest, current) =>
current.length < shortest.length ? current : shortest
);
console.log('FloatingTab: Found name keys:', nameKeys, 'selected:', bestName);
}
// Return relevant profile fields with the best name
return {
name: profile.name || null,
name: bestName,
display_name: profile.display_name || null,
about: profile.about || null,
picture: profile.picture || null,
@@ -2695,10 +2527,10 @@ class NostrLite {
this.options = {
theme: 'default',
relays: ['wss://relay.damus.io', 'wss://nos.lol'],
methods: {
extension: true,
local: true,
seedphrase: false,
readonly: true,
connect: false,
otp: false
@@ -3127,8 +2959,8 @@ class WindowNostr {
}
async getRelays() {
// Return configured relays from nostr-lite options
return this.nostrLite.options?.relays || ['wss://relay.damus.io'];
// Return default relays since we removed the relays configuration
return ['wss://relay.damus.io', 'wss://nos.lol'];
}
get nip04() {

File diff suppressed because it is too large

@@ -139,11 +139,11 @@ compile_project() {
print_warning "Clean failed or no Makefile found"
fi
# Force regenerate version.h to pick up new tags
# Force regenerate main.h to pick up new tags
if make force-version > /dev/null 2>&1; then
print_success "Regenerated version.h"
print_success "Regenerated main.h"
else
print_warning "Failed to regenerate version.h"
print_warning "Failed to regenerate main.h"
fi
# Compile the project

460
docs/admin_api_plan.md Normal file

@@ -0,0 +1,460 @@
# C-Relay Administrator API Implementation Plan
## Problem Analysis
### Current Issues Identified:
1. **Schema Mismatch**: The storage system (config.c) and the validation system (request_validator.c) use different column names and values
2. **Missing API Endpoint**: No way to clear the auth_rules table for testing
3. **Configuration Gap**: Auth rules enforcement may not be properly enabled
4. **Documentation Gap**: Admin API commands not documented
### Root Cause: Auth Rules Schema Inconsistency
**Current Schema (sql_schema.h lines 140-150):**
```sql
CREATE TABLE auth_rules (
rule_type TEXT CHECK (rule_type IN ('whitelist', 'blacklist')),
pattern_type TEXT CHECK (pattern_type IN ('pubkey', 'hash')),
pattern_value TEXT,
action TEXT CHECK (action IN ('allow', 'deny')),
active INTEGER DEFAULT 1
);
```
**Storage Implementation (config.c):**
- Stores: `rule_type='blacklist'`, `pattern_type='pubkey'`, `pattern_value='hex'`, `action='allow'`
**Validation Implementation (request_validator.c):**
- Queries: `rule_type='pubkey_blacklist'`, `rule_target='hex'`, `operation='event'`, `enabled=1`
**MISMATCH**: Validator looks for non-existent columns and wrong rule_type values!
## Proposed Solution Architecture
### Phase 1: API Documentation & Standardization
#### Admin API Commands (via WebSocket with admin private key)
**Kind 23456: Unified Admin API (Ephemeral)**
- Configuration management: Update relay settings, limits, authentication policies
- Auth rules: Add/remove/query whitelist/blacklist rules
- System commands: clear rules, status, cache management
- **Unified Format**: All commands use NIP-44 encrypted content with `["p", "relay_pubkey"]` tags
- **Command Types** (see the sketch after this list):
- Configuration: `["config_key", "config_value"]`
- Auth rules: `["rule_type", "pattern_type", "pattern_value"]`
- Queries: `["auth_query", "filter"]` or `["system_command", "command_name"]`
- **Security**: All admin commands use NIP-44 encryption for privacy and security
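To make the three command shapes concrete, the arrays below (illustrative placeholder values) show the plaintext that gets NIP-44 encrypted into the `content` field:
```javascript
// Illustrative plaintext command arrays; each is NIP-44 encrypted into the
// `content` of a kind 23456 event before it is sent to the relay.
const configCommand = ['relay_description', 'My Relay'];          // configuration
const authRuleCommand = ['blacklist', 'pubkey', '<hex_pubkey>'];  // auth rule
const queryCommand = ['system_command', 'system_status'];         // query / system
```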
#### Configuration Commands (using Kind 23456)
1. **Update Configuration**:
```json
{
"kind": 23456,
"content": "base64_nip44_encrypted_command_array",
"tags": [["p", "relay_pubkey"]]
}
```
*Encrypted content contains:* `["relay_description", "My Relay"]`
2. **Query System Status**:
```json
{
"kind": 23456,
"content": "base64_nip44_encrypted_command_array",
"tags": [["p", "relay_pubkey"]]
}
```
*Encrypted content contains:* `["system_command", "system_status"]`
#### Auth Rules and System Commands (using Kind 23456)
1. **Clear All Auth Rules**:
```json
{
"kind": 23456,
"content": "base64_nip44_encrypted_command_array",
"tags": [["p", "relay_pubkey"]]
}
```
*Encrypted content contains:* `["system_command", "clear_all_auth_rules"]`
2. **Query All Auth Rules**:
```json
{
"kind": 23456,
"content": "base64_nip44_encrypted_command_array",
"tags": [["p", "relay_pubkey"]]
}
```
*Encrypted content contains:* `["auth_query", "all"]`
3. **Add Blacklist Rule**:
```json
{
"kind": 23456,
"content": "base64_nip44_encrypted_command_array",
"tags": [["p", "relay_pubkey"]]
}
```
*Encrypted content contains:* `["blacklist", "pubkey", "deadbeef1234abcd..."]`
### Phase 2: Auth Rules Schema Alignment
#### Option A: Fix Validator to Match Schema (RECOMMENDED)
**Update request_validator.c:**
```sql
-- OLD (broken):
WHERE rule_type = 'pubkey_blacklist' AND rule_target = ? AND operation = ? AND enabled = 1
-- NEW (correct):
WHERE rule_type = 'blacklist' AND pattern_type = 'pubkey' AND pattern_value = ? AND active = 1
```
**Benefits:**
- Matches actual database schema
- Simpler rule_type values ('blacklist' vs 'pubkey_blacklist')
- Uses existing columns (pattern_value vs rule_target)
- Consistent with storage implementation
#### Option B: Update Schema to Match Validator (NOT RECOMMENDED)
Would require changing schema, migration scripts, and storage logic.
### Phase 3: Implementation Priority
#### High Priority (Critical for blacklist functionality):
1. Fix request_validator.c schema mismatch
2. Ensure auth_required configuration is enabled
3. Update tests to use unified ephemeral event kind (23456)
4. Test blacklist enforcement
#### Medium Priority (Enhanced Admin Features):
1. **Implement NIP-44 Encryption Support**:
- Detect NIP-44 encrypted content for Kind 23456 events
- Parse `encrypted_tags` field from content JSON
- Decrypt using admin privkey and relay pubkey
- Process decrypted tags as normal commands
2. Add clear_all_auth_rules system command
3. Add auth rule query functionality (both standard and encrypted modes)
4. Add configuration discovery (list available config keys)
5. Enhanced error reporting in admin API
6. Conflict resolution (same pubkey in whitelist + blacklist)
#### Security Priority (NIP-44 Implementation):
1. **Encryption Detection Logic**: Check for empty tags + encrypted_tags field
2. **Key Pair Management**: Use admin private key + relay public key for NIP-44
3. **Backward Compatibility**: Support both standard and encrypted modes
4. **Error Handling**: Graceful fallback if decryption fails
5. **Performance**: Cache decrypted results to avoid repeated decryption
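For the caching point above, one possible shape is a small fixed-size cache keyed by event id, so a repeated delivery of the same command is only decrypted once. This is a minimal sketch with illustrative names and sizes, not the relay's actual cache design:
```c
#include <string.h>
#include <cjson/cJSON.h>

#define DECRYPT_CACHE_SLOTS 16

/* Tiny direct-mapped cache: event id -> decrypted tag array.
 * Not thread-safe on its own; in practice the relay's cache lock would guard it. */
static struct {
    char event_id[65];
    cJSON *tags; /* owned by the cache */
} decrypt_cache[DECRYPT_CACHE_SLOTS];

static unsigned slot_for(const char *event_id) {
    unsigned h = 5381;
    for (const char *p = event_id; *p; p++) h = h * 33 + (unsigned char)*p;
    return h % DECRYPT_CACHE_SLOTS;
}

/* Returns a caller-owned duplicate of the cached tags, or NULL on miss. */
cJSON *decrypt_cache_get(const char *event_id) {
    unsigned i = slot_for(event_id);
    if (decrypt_cache[i].tags && strcmp(decrypt_cache[i].event_id, event_id) == 0)
        return cJSON_Duplicate(decrypt_cache[i].tags, 1);
    return NULL;
}

void decrypt_cache_put(const char *event_id, const cJSON *tags) {
    unsigned i = slot_for(event_id);
    if (decrypt_cache[i].tags) cJSON_Delete(decrypt_cache[i].tags);
    strncpy(decrypt_cache[i].event_id, event_id, sizeof(decrypt_cache[i].event_id) - 1);
    decrypt_cache[i].event_id[sizeof(decrypt_cache[i].event_id) - 1] = '\0';
    decrypt_cache[i].tags = cJSON_Duplicate(tags, 1);
}
```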
#### Low Priority (Documentation & Polish):
1. Complete README.md API documentation
2. Example usage scripts
3. Admin client tools
### Phase 4: Expected API Structure
#### README.md Documentation Format:
```markdown
# C-Relay Administrator API
## Authentication
All admin commands require signing with the admin private key generated during first startup.
## Unified Admin API (Kind 23456 - Ephemeral)
Update relay configuration parameters or query available settings.
**Configuration Update Event:**
```json
{
"kind": 23456,
"content": "base64_nip44_encrypted_command_array",
"tags": [["p", "relay_pubkey"]]
}
```
*Encrypted content contains:* `["relay_description", "My Relay Description"]`
**Auth Rules Management:**
**Add Rule Event:**
```json
{
"kind": 23456,
"content": "{\"action\":\"add\",\"description\":\"Block malicious user\"}",
"tags": [
["blacklist", "pubkey", "deadbeef1234..."]
]
}
```
**Remove Rule Event:**
```json
{
"kind": 23456,
"content": "{\"action\":\"remove\",\"description\":\"Unblock user\"}",
"tags": [
["blacklist", "pubkey", "deadbeef1234..."]
]
}
```
**Query All Auth Rules:**
```json
{
"kind": 23456,
"content": "{\"query\":\"list_auth_rules\",\"description\":\"Get all rules\"}",
"tags": [
["auth_query", "all"]
]
}
```
**Query Whitelist Rules Only:**
```json
{
"kind": 23456,
"content": "{\"query\":\"list_auth_rules\",\"description\":\"Get whitelist\"}",
"tags": [
["auth_query", "whitelist"]
]
}
```
**Check Specific Pattern:**
```json
{
"kind": 23456,
"content": "{\"query\":\"check_pattern\",\"description\":\"Check if pattern exists\"}",
"tags": [
["auth_query", "pattern", "deadbeef1234..."]
]
}
```
## System Management (Kind 23456 - Ephemeral)
System administration commands using the same kind as auth rules.
**Clear All Auth Rules:**
```json
{
"kind": 23456,
"content": "{\"action\":\"clear_all\",\"description\":\"Clear all auth rules\"}",
"tags": [
["system_command", "clear_all_auth_rules"]
]
}
```
**System Status:**
```json
{
"kind": 23456,
"content": "{\"action\":\"system_status\",\"description\":\"Get system status\"}",
"tags": [
["system_command", "system_status"]
]
}
```
## Response Format
All admin commands return JSON responses via WebSocket:
**Success Response:**
```json
["OK", "event_id", true, "success_message"]
```
**Error Response:**
```json
["OK", "event_id", false, "error_message"]
```
## Configuration Keys
- `relay_description`: Relay description text
- `relay_contact`: Contact information
- `auth_enabled`: Enable authentication system
- `max_connections`: Maximum concurrent connections
- `pow_min_difficulty`: Minimum proof-of-work difficulty
- ... (full list of config keys)
## Examples
### Enable Authentication & Add Blacklist
```bash
# 1. Enable auth system
nak event -k 23456 --content "base64_nip44_encrypted_command" \
-t "auth_enabled=true" \
--sec $ADMIN_PRIVKEY | nak event ws://localhost:8888
# 2. Add user to blacklist
nak event -k 23456 --content '{"action":"add","description":"Spam user"}' \
-t "blacklist=pubkey;$SPAM_USER_PUBKEY" \
--sec $ADMIN_PRIVKEY | nak event ws://localhost:8888
# 3. Query all auth rules
nak event -k 23456 --content '{"query":"list_auth_rules","description":"Get all rules"}' \
-t "auth_query=all" \
--sec $ADMIN_PRIVKEY | nak event ws://localhost:8888
# 4. Clear all rules for testing
nak event -k 23456 --content '{"action":"clear_all","description":"Clear all rules"}' \
-t "system_command=clear_all_auth_rules" \
--sec $ADMIN_PRIVKEY | nak event ws://localhost:8888
```
## Expected Response Formats
### Configuration Query Response
```json
["EVENT", "subscription_id", {
"kind": 23457,
"content": "base64_nip44_encrypted_response",
"tags": [["p", "admin_pubkey"]]
}]
```
### Current Config Response
```json
["EVENT", "subscription_id", {
"kind": 23457,
"content": "base64_nip44_encrypted_response",
"tags": [["p", "admin_pubkey"]]
}]
```
### Auth Rules Query Response
```json
["EVENT", "subscription_id", {
"kind": 23456,
"content": "{\"auth_rules\": [{\"rule_type\": \"blacklist\", \"pattern_type\": \"pubkey\", \"pattern_value\": \"deadbeef...\"}, {\"rule_type\": \"whitelist\", \"pattern_type\": \"pubkey\", \"pattern_value\": \"cafebabe...\"}]}",
"tags": [["response_type", "auth_rules_list"], ["query_type", "all"]]
}]
```
### Pattern Check Response
```json
["EVENT", "subscription_id", {
"kind": 23456,
"content": "{\"pattern_exists\": true, \"rule_type\": \"blacklist\", \"pattern_value\": \"deadbeef...\"}",
"tags": [["response_type", "pattern_check"], ["pattern", "deadbeef..."]]
}]
```
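On the admin-client side, the `auth_rules` payload in the response above can be walked with cJSON. A brief sketch, with field names following the example response and error handling kept minimal:
```c
#include <stdio.h>
#include <cjson/cJSON.h>

/* Walk the "auth_rules" array from an auth_query response and print each rule. */
void print_auth_rules(const char *response_content) {
    cJSON *root = cJSON_Parse(response_content);
    if (!root) return;
    cJSON *rules = cJSON_GetObjectItem(root, "auth_rules");
    cJSON *rule = NULL;
    cJSON_ArrayForEach(rule, rules) {
        const char *type    = cJSON_GetStringValue(cJSON_GetObjectItem(rule, "rule_type"));
        const char *ptype   = cJSON_GetStringValue(cJSON_GetObjectItem(rule, "pattern_type"));
        const char *pattern = cJSON_GetStringValue(cJSON_GetObjectItem(rule, "pattern_value"));
        if (type && ptype && pattern)
            printf("%s %s %s\n", type, ptype, pattern);
    }
    cJSON_Delete(root);
}
```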
## Implementation Steps
1. **Document API** (this file) ✅
2. **Update to ephemeral event kinds** ✅
3. **Fix request_validator.c** schema mismatch
4. **Update tests** to use unified Kind 23456
5. **Add auth rule query functionality**
6. **Add configuration discovery feature**
7. **Test blacklist functionality**
8. **Add remaining system commands**
## Testing Plan
1. Fix schema mismatch and test basic blacklist
2. Add clear_all_auth_rules and test table cleanup
3. Test whitelist/blacklist conflict scenarios
4. Test all admin API commands end-to-end
5. Update integration tests
This plan addresses the immediate blacklist issue while establishing a comprehensive admin API framework for future expansion.
## NIP-44 Encryption Implementation Details
### Server-Side Detection Logic
```c
// In admin event processing function
bool is_encrypted_command(struct nostr_event *event) {
// Check if Kind 23456 with NIP-44 encrypted content
if (event->kind == 23456 &&
event->tags_count == 0) {
return true;
}
return false;
}
cJSON *decrypt_admin_tags(struct nostr_event *event) {
cJSON *content_json = cJSON_Parse(event->content);
if (!content_json) return NULL;
cJSON *encrypted_tags = cJSON_GetObjectItem(content_json, "encrypted_tags");
if (!cJSON_IsString(encrypted_tags)) {
cJSON_Delete(content_json);
return NULL;
}
// Decrypt using NIP-44: the relay is the recipient (its private key),
// the admin is the sender (their public key)
char *decrypted = nip44_decrypt(
cJSON_GetStringValue(encrypted_tags),
relay_private_key, // Our private key (recipient)
admin_pubkey // Admin public key (sender)
);
cJSON_Delete(content_json);
if (!decrypted) return NULL; // Graceful fallback if decryption fails
cJSON *decrypted_tags = cJSON_Parse(decrypted);
free(decrypted);
return decrypted_tags; // Returns tag array: [["key1", "val1"], ["key2", "val2"]]
}
```
### Admin Event Processing Flow
1. **Receive Event**: Kind 23456 with admin signature
2. **Check Mode**: Empty tags = encrypted, populated tags = standard
3. **Decrypt if Needed**: Extract and decrypt `encrypted_tags` from content
4. **Process Commands**: Use decrypted/standard tags for command processing
5. **Execute**: Same logic for both modes after tag extraction
6. **Respond**: Standard response format (optionally encrypt response)
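A condensed sketch of this flow, reusing `is_encrypted_command()` and `decrypt_admin_tags()` from the detection logic above; `process_admin_tags()` is a hypothetical stand-in for the existing per-command handlers:
```c
#include <stdio.h>
#include <stdbool.h>
#include <cjson/cJSON.h>

struct nostr_event; /* defined elsewhere in the relay */

// From the detection sketch above.
bool is_encrypted_command(struct nostr_event *event);
cJSON *decrypt_admin_tags(struct nostr_event *event);
// Hypothetical stand-in for the existing per-command handlers.
int process_admin_tags(cJSON *tags, char *err, size_t err_size);

int process_admin_event(struct nostr_event *event, cJSON *event_json,
                        char *err, size_t err_size) {
    cJSON *tags;
    int owns_tags = 0;

    if (is_encrypted_command(event)) {            // step 2: empty tags => encrypted
        tags = decrypt_admin_tags(event);         // step 3: decrypt encrypted_tags
        if (!tags) {
            snprintf(err, err_size, "error: could not decrypt admin command");
            return -1;                            // graceful fallback
        }
        owns_tags = 1;
    } else {
        tags = cJSON_GetObjectItem(event_json, "tags"); // standard mode
    }

    int rc = process_admin_tags(tags, err, err_size);   // steps 4-5: same logic either way
    if (owns_tags) cJSON_Delete(tags);
    return rc;                                    // step 6: caller sends the OK/response
}
```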
### Security Benefits
- **Command Privacy**: Admin operations invisible in event tags
- **Replay Protection**: NIP-44 payloads carry a fresh random nonce per message, and the signed event's `created_at` gives an additional freshness check
- **Key Management**: Uses existing admin/relay key pair
- **Backward Compatible**: Standard mode still works
- **Performance**: Only decrypt when needed (empty tags detection)
### NIP-44 Library Integration
The relay will need to integrate a NIP-44 encryption/decryption library:
```c
// Required NIP-44 functions
char* nip44_encrypt(const char* plaintext, const char* sender_privkey, const char* recipient_pubkey);
char* nip44_decrypt(const char* ciphertext, const char* recipient_privkey, const char* sender_pubkey);
```
### Implementation Priority (Updated)
#### Phase 1: Core Infrastructure (Complete)
- [x] Event-based admin authentication system
- [x] Kind 23456 (Unified Admin API) processing
- [x] Basic configuration parameter updates
- [x] Auth rule add/remove/clear functionality
- [x] Updated to ephemeral event kinds
- [x] Designed NIP-44 encryption support
#### Phase 2: NIP-44 Encryption Support (Next Priority)
- [ ] **Add NIP-44 library dependency** to project
- [ ] **Implement encryption detection logic** (`is_encrypted_command()`)
- [ ] **Add decrypt_admin_tags() function** with NIP-44 support
- [ ] **Update admin command processing** to handle both modes
- [ ] **Test encrypted admin commands** end-to-end
#### Phase 3: Enhanced Features
- [ ] **Auth rule query functionality** (both standard and encrypted modes)
- [ ] **Configuration discovery API** (list available config keys)
- [ ] **Enhanced error messages** with encryption status
- [ ] **Performance optimization** (caching, async decrypt)
#### Phase 4: Schema Fixes (Critical)
- [ ] **Fix request_validator.c** schema mismatch
- [ ] **Enable blacklist enforcement** with encrypted commands
- [ ] **Update tests** to use both standard and encrypted modes
This enhanced admin API provides enterprise-grade security while maintaining ease of use for basic operations.

View File

@@ -63,7 +63,7 @@ while [[ $# -gt 0 ]]; do
shift 2
fi
;;
--preserve-database)
-d|--preserve-database)
PRESERVE_DATABASE=true
shift
;;
@@ -198,25 +198,54 @@ fi
echo "Build successful. Proceeding with relay restart..."
# Kill existing relay if running
# Kill existing relay if running - start aggressive immediately
echo "Stopping any existing relay servers..."
pkill -f "c_relay_" 2>/dev/null
sleep 2 # Give time for shutdown
# Check if port is still bound
if lsof -i :8888 >/dev/null 2>&1; then
echo "Port 8888 still in use, force killing..."
fuser -k 8888/tcp 2>/dev/null || echo "No process on port 8888"
# Get all relay processes and kill them immediately with -9
RELAY_PIDS=$(pgrep -f "c_relay_" || echo "")
if [ -n "$RELAY_PIDS" ]; then
echo "Force killing relay processes immediately: $RELAY_PIDS"
kill -9 $RELAY_PIDS 2>/dev/null
else
echo "No existing relay processes found"
fi
# Get any remaining processes
REMAINING_PIDS=$(pgrep -f "c_relay_" || echo "")
if [ -n "$REMAINING_PIDS" ]; then
echo "Force killing remaining processes: $REMAINING_PIDS"
kill -9 $REMAINING_PIDS 2>/dev/null
# Ensure port 8888 is completely free with retry loop
echo "Ensuring port 8888 is available..."
for attempt in {1..15}; do
if ! lsof -i :8888 >/dev/null 2>&1; then
echo "Port 8888 is now free"
break
fi
echo "Attempt $attempt: Port 8888 still in use, force killing..."
# Kill anything using port 8888
fuser -k 8888/tcp 2>/dev/null || true
# Double-check for any remaining relay processes
REMAINING_PIDS=$(pgrep -f "c_relay_" || echo "")
if [ -n "$REMAINING_PIDS" ]; then
echo "Killing remaining relay processes: $REMAINING_PIDS"
kill -9 $REMAINING_PIDS 2>/dev/null || true
fi
sleep 2
if [ $attempt -eq 15 ]; then
echo "ERROR: Could not free port 8888 after 15 attempts"
echo "Current processes using port:"
lsof -i :8888 2>/dev/null || echo "No process details available"
echo "You may need to manually kill processes or reboot"
exit 1
fi
done
# Final safety check - ensure no relay processes remain
FINAL_PIDS=$(pgrep -f "c_relay_" || echo "")
if [ -n "$FINAL_PIDS" ]; then
echo "Final cleanup: killing processes $FINAL_PIDS"
kill -9 $FINAL_PIDS 2>/dev/null || true
sleep 1
else
echo "No existing relay found"
fi
# Clean up PID file
@@ -253,14 +282,14 @@ cd build
# Start relay in background and capture its PID
if [ "$USE_TEST_KEYS" = true ]; then
echo "Using deterministic test keys for development..."
./$(basename $BINARY_PATH) -a aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa -r 1111111111111111111111111111111111111111111111111111111111111111 > ../relay.log 2>&1 &
./$(basename $BINARY_PATH) -a 6a04ab98d9e4774ad806e302dddeb63bea16b5cb5f223ee77478e861bb583eb3 -r 1111111111111111111111111111111111111111111111111111111111111111 --strict-port > ../relay.log 2>&1 &
elif [ -n "$RELAY_ARGS" ]; then
echo "Starting relay with custom configuration..."
./$(basename $BINARY_PATH) $RELAY_ARGS > ../relay.log 2>&1 &
./$(basename $BINARY_PATH) $RELAY_ARGS --strict-port > ../relay.log 2>&1 &
else
# No command line arguments needed for random key generation
echo "Starting relay with random key generation..."
./$(basename $BINARY_PATH) > ../relay.log 2>&1 &
./$(basename $BINARY_PATH) --strict-port > ../relay.log 2>&1 &
fi
RELAY_PID=$!
# Change back to original directory

View File

@@ -1 +1 @@
134045
2356846

File diff suppressed because it is too large

View File

@@ -4,6 +4,10 @@
#include <sqlite3.h>
#include <cjson/cJSON.h>
#include <time.h>
#include <pthread.h>
// Forward declaration for WebSocket support
struct lws;
// Configuration constants
#define CONFIG_VALUE_MAX_LENGTH 1024
@@ -23,24 +27,87 @@
// Database path for event-based config
extern char g_database_path[512];
// Configuration manager structure
// Unified configuration cache structure (consolidates all caching systems)
typedef struct {
sqlite3* db;
char relay_pubkey[65];
// Critical keys (frequently accessed)
char admin_pubkey[65];
time_t last_config_check;
char config_file_path[512]; // Temporary for compatibility
} config_manager_t;
char relay_pubkey[65];
// Auth config (from request_validator)
int auth_required;
long max_file_size;
int admin_enabled;
int nip42_mode;
int nip42_challenge_timeout;
int nip42_time_tolerance;
// Static buffer for config values (replaces static buffers in get_config_value functions)
char temp_buffer[CONFIG_VALUE_MAX_LENGTH];
// NIP-11 relay information (migrated from g_relay_info in main.c)
struct {
char name[RELAY_NAME_MAX_LENGTH];
char description[RELAY_DESCRIPTION_MAX_LENGTH];
char banner[RELAY_URL_MAX_LENGTH];
char icon[RELAY_URL_MAX_LENGTH];
char pubkey[RELAY_PUBKEY_MAX_LENGTH];
char contact[RELAY_CONTACT_MAX_LENGTH];
char software[RELAY_URL_MAX_LENGTH];
char version[64];
char privacy_policy[RELAY_URL_MAX_LENGTH];
char terms_of_service[RELAY_URL_MAX_LENGTH];
// Raw string values for parsing into cJSON arrays
char supported_nips_str[CONFIG_VALUE_MAX_LENGTH];
char language_tags_str[CONFIG_VALUE_MAX_LENGTH];
char relay_countries_str[CONFIG_VALUE_MAX_LENGTH];
// Parsed cJSON arrays
cJSON* supported_nips;
cJSON* limitation;
cJSON* retention;
cJSON* relay_countries;
cJSON* language_tags;
cJSON* tags;
char posting_policy[RELAY_URL_MAX_LENGTH];
cJSON* fees;
char payments_url[RELAY_URL_MAX_LENGTH];
} relay_info;
// NIP-13 PoW configuration (migrated from g_pow_config in main.c)
struct {
int enabled;
int min_pow_difficulty;
int validation_flags;
int require_nonce_tag;
int reject_lower_targets;
int strict_format;
int anti_spam_mode;
} pow_config;
// NIP-40 Expiration configuration (migrated from g_expiration_config in main.c)
struct {
int enabled;
int strict_mode;
int filter_responses;
int delete_expired;
long grace_period;
} expiration_config;
// Cache management
time_t cache_expires;
int cache_valid;
pthread_mutex_t cache_lock;
} unified_config_cache_t;
// Command line options structure for first-time startup
typedef struct {
int port_override; // -1 = not set, >0 = port value
char admin_privkey_override[65]; // Empty string = not set, 64-char hex = override
char admin_pubkey_override[65]; // Empty string = not set, 64-char hex = override
char relay_privkey_override[65]; // Empty string = not set, 64-char hex = override
int strict_port; // 0 = allow port increment, 1 = fail if exact port unavailable
} cli_options_t;
// Global configuration manager
extern config_manager_t g_config_manager;
// Global unified configuration cache
extern unified_config_cache_t g_unified_cache;
// Core configuration functions (temporary compatibility)
int init_configuration_system(const char* config_dir_override, const char* config_file_override);
@@ -100,11 +167,22 @@ int set_config_value_in_table(const char* key, const char* value, const char* da
const char* description, const char* category, int requires_restart);
int update_config_in_table(const char* key, const char* value);
int populate_default_config_values(void);
int add_pubkeys_to_config_table(void);
// Admin event processing functions
int process_admin_event_in_config(cJSON* event, char* error_message, size_t error_size);
// Admin event processing functions (updated with WebSocket support)
int process_admin_event_in_config(cJSON* event, char* error_message, size_t error_size, struct lws* wsi);
int process_admin_config_event(cJSON* event, char* error_message, size_t error_size);
int process_admin_auth_event(cJSON* event, char* error_message, size_t error_size);
int process_admin_auth_event(cJSON* event, char* error_message, size_t error_size, struct lws* wsi);
// Unified Kind 23456 handler functions
int handle_kind_23456_unified(cJSON* event, char* error_message, size_t error_size, struct lws* wsi);
int handle_auth_query_unified(cJSON* event, const char* query_type, char* error_message, size_t error_size, struct lws* wsi);
int handle_system_command_unified(cJSON* event, const char* command, char* error_message, size_t error_size, struct lws* wsi);
int handle_auth_rule_modification_unified(cJSON* event, char* error_message, size_t error_size, struct lws* wsi);
// Admin response functions
int send_admin_response_event(const cJSON* response_data, const char* recipient_pubkey, struct lws* wsi);
cJSON* build_query_response(const char* query_type, cJSON* results_array, int total_count);
// Auth rules management functions
int add_auth_rule_from_config(const char* rule_type, const char* pattern_type,
@@ -112,7 +190,10 @@ int add_auth_rule_from_config(const char* rule_type, const char* pattern_type,
int remove_auth_rule_from_config(const char* rule_type, const char* pattern_type,
const char* pattern_value);
// Configuration cache management
// Unified configuration cache management
void force_config_cache_refresh(void);
const char* get_admin_pubkey_cached(void);
const char* get_relay_pubkey_cached(void);
void invalidate_config_cache(void);
int reload_config_from_table(void);
@@ -129,4 +210,10 @@ int populate_config_table_from_event(const cJSON* event);
int process_startup_config_event(const cJSON* event);
int process_startup_config_event_with_fallback(const cJSON* event);
// Dynamic event generation functions for WebSocket configuration fetching
cJSON* generate_config_event_from_table(void);
int req_filter_requests_config_events(const cJSON* filter);
cJSON* generate_synthetic_config_event_for_subscription(const char* sub_id, const cJSON* filters);
char* generate_config_event_json(void);
#endif /* CONFIG_H */

View File

@@ -3,13 +3,13 @@
#include <cjson/cJSON.h>
#include "config.h" // For cli_options_t definition
#include "main.h" // For relay metadata constants
/*
* Default Configuration Event Template
*
* This header contains the default configuration values for the C Nostr Relay.
* These values are used to create the initial kind 33334 configuration event
* during first-time startup.
* These values are used to populate the config table during first-time startup.
*
* IMPORTANT: These values should never be accessed directly by other parts
* of the program. They are only used during initial configuration event creation.
@@ -34,10 +34,16 @@ static const struct {
{"max_connections", "100"},
// NIP-11 Relay Information (relay keys will be populated at runtime)
{"relay_description", "High-performance C Nostr relay with SQLite storage"},
{"relay_contact", ""},
{"relay_software", "https://git.laantungir.net/laantungir/c-relay.git"},
{"relay_version", "v1.0.0"},
{"relay_name", RELAY_NAME},
{"relay_description", RELAY_DESCRIPTION},
{"relay_contact", RELAY_CONTACT},
{"relay_software", RELAY_SOFTWARE},
{"relay_version", RELAY_VERSION},
{"supported_nips", SUPPORTED_NIPS},
{"language_tags", LANGUAGE_TAGS},
{"relay_countries", RELAY_COUNTRIES},
{"posting_policy", POSTING_POLICY},
{"payments_url", PAYMENTS_URL},
// NIP-13 Proof of Work (pow_min_difficulty = 0 means PoW disabled)
{"pow_min_difficulty", "0"},

2877
src/main.c

File diff suppressed because it is too large

32
src/main.h Normal file
View File

@@ -0,0 +1,32 @@
/*
* C-Relay Main Header - Version and Metadata Information
*
* This header contains version information and relay metadata that is
* automatically updated by the build system (build_and_push.sh).
*
* The build_and_push.sh script updates VERSION and related macros when
* creating new releases.
*/
#ifndef MAIN_H
#define MAIN_H
// Version information (auto-updated by build_and_push.sh)
#define VERSION "v0.4.3"
#define VERSION_MAJOR 0
#define VERSION_MINOR 4
#define VERSION_PATCH 3
// Relay metadata (authoritative source for NIP-11 information)
#define RELAY_NAME "C-Relay"
#define RELAY_DESCRIPTION "High-performance C Nostr relay with SQLite storage"
#define RELAY_CONTACT ""
#define RELAY_SOFTWARE "https://git.laantungir.net/laantungir/c-relay.git"
#define RELAY_VERSION VERSION // Use the same version as the build
#define SUPPORTED_NIPS "1,2,4,9,11,12,13,15,16,20,22,33,40,42"
#define LANGUAGE_TAGS ""
#define RELAY_COUNTRIES ""
#define POSTING_POLICY ""
#define PAYMENTS_URL ""
#endif /* MAIN_H */

313
src/nip009.c Normal file
View File

@@ -0,0 +1,313 @@
#define _GNU_SOURCE
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
// NIP-09 EVENT DELETION REQUEST HANDLING
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
#include <cjson/cJSON.h>
#include <sqlite3.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>
#include <stdio.h>
#include <printf.h>
// Forward declarations for logging functions
void log_warning(const char* message);
void log_info(const char* message);
// Forward declaration for database functions
int store_event(cJSON* event);
// Forward declarations for deletion functions
int delete_events_by_id(const char* requester_pubkey, cJSON* event_ids);
int delete_events_by_address(const char* requester_pubkey, cJSON* addresses, long deletion_timestamp);
// Global database variable
extern sqlite3* g_db;
// Handle NIP-09 deletion request event (kind 5)
int handle_deletion_request(cJSON* event, char* error_message, size_t error_size) {
if (!event) {
snprintf(error_message, error_size, "invalid: null deletion request");
return -1;
}
// Extract event details
cJSON* kind_obj = cJSON_GetObjectItem(event, "kind");
cJSON* pubkey_obj = cJSON_GetObjectItem(event, "pubkey");
cJSON* created_at_obj = cJSON_GetObjectItem(event, "created_at");
cJSON* tags_obj = cJSON_GetObjectItem(event, "tags");
cJSON* content_obj = cJSON_GetObjectItem(event, "content");
cJSON* event_id_obj = cJSON_GetObjectItem(event, "id");
if (!kind_obj || !pubkey_obj || !created_at_obj || !tags_obj || !event_id_obj) {
snprintf(error_message, error_size, "invalid: incomplete deletion request");
return -1;
}
int kind = (int)cJSON_GetNumberValue(kind_obj);
if (kind != 5) {
snprintf(error_message, error_size, "invalid: not a deletion request");
return -1;
}
const char* requester_pubkey = cJSON_GetStringValue(pubkey_obj);
// Extract deletion event ID and reason (for potential logging)
const char* deletion_event_id = cJSON_GetStringValue(event_id_obj);
const char* reason = content_obj ? cJSON_GetStringValue(content_obj) : "";
(void)deletion_event_id; // Mark as intentionally unused for now
(void)reason; // Mark as intentionally unused for now
long deletion_timestamp = (long)cJSON_GetNumberValue(created_at_obj);
if (!cJSON_IsArray(tags_obj)) {
snprintf(error_message, error_size, "invalid: deletion request tags must be an array");
return -1;
}
// Collect event IDs and addresses from tags
cJSON* event_ids = cJSON_CreateArray();
cJSON* addresses = cJSON_CreateArray();
cJSON* kinds_to_delete = cJSON_CreateArray();
int deletion_targets_found = 0;
cJSON* tag = NULL;
cJSON_ArrayForEach(tag, tags_obj) {
if (!cJSON_IsArray(tag) || cJSON_GetArraySize(tag) < 2) {
continue;
}
cJSON* tag_name = cJSON_GetArrayItem(tag, 0);
cJSON* tag_value = cJSON_GetArrayItem(tag, 1);
if (!cJSON_IsString(tag_name) || !cJSON_IsString(tag_value)) {
continue;
}
const char* name = cJSON_GetStringValue(tag_name);
const char* value = cJSON_GetStringValue(tag_value);
if (strcmp(name, "e") == 0) {
// Event ID reference
cJSON_AddItemToArray(event_ids, cJSON_CreateString(value));
deletion_targets_found++;
} else if (strcmp(name, "a") == 0) {
// Addressable event reference (kind:pubkey:d-identifier)
cJSON_AddItemToArray(addresses, cJSON_CreateString(value));
deletion_targets_found++;
} else if (strcmp(name, "k") == 0) {
// Kind hint - store for validation but not required
int kind_hint = atoi(value);
if (kind_hint > 0) {
cJSON_AddItemToArray(kinds_to_delete, cJSON_CreateNumber(kind_hint));
}
}
}
if (deletion_targets_found == 0) {
cJSON_Delete(event_ids);
cJSON_Delete(addresses);
cJSON_Delete(kinds_to_delete);
snprintf(error_message, error_size, "invalid: deletion request must contain 'e' or 'a' tags");
return -1;
}
int deleted_count = 0;
// Process event ID deletions
if (cJSON_GetArraySize(event_ids) > 0) {
int result = delete_events_by_id(requester_pubkey, event_ids);
if (result > 0) {
deleted_count += result;
}
}
// Process addressable event deletions
if (cJSON_GetArraySize(addresses) > 0) {
int result = delete_events_by_address(requester_pubkey, addresses, deletion_timestamp);
if (result > 0) {
deleted_count += result;
}
}
// Clean up
cJSON_Delete(event_ids);
cJSON_Delete(addresses);
cJSON_Delete(kinds_to_delete);
// Store the deletion request itself (it should be kept according to NIP-09)
if (store_event(event) != 0) {
log_warning("Failed to store deletion request event");
}
char debug_msg[256];
snprintf(debug_msg, sizeof(debug_msg), "Deletion request processed: %d events deleted", deleted_count);
log_info(debug_msg);
error_message[0] = '\0'; // Success - empty error message
return 0;
}
// Delete events by ID (with pubkey authorization)
int delete_events_by_id(const char* requester_pubkey, cJSON* event_ids) {
if (!g_db || !requester_pubkey || !event_ids || !cJSON_IsArray(event_ids)) {
return 0;
}
int deleted_count = 0;
cJSON* event_id = NULL;
cJSON_ArrayForEach(event_id, event_ids) {
if (!cJSON_IsString(event_id)) {
continue;
}
const char* id = cJSON_GetStringValue(event_id);
// First check if event exists and if requester is authorized
const char* check_sql = "SELECT pubkey FROM events WHERE id = ?";
sqlite3_stmt* check_stmt;
int rc = sqlite3_prepare_v2(g_db, check_sql, -1, &check_stmt, NULL);
if (rc != SQLITE_OK) {
continue;
}
sqlite3_bind_text(check_stmt, 1, id, -1, SQLITE_STATIC);
if (sqlite3_step(check_stmt) == SQLITE_ROW) {
const char* event_pubkey = (char*)sqlite3_column_text(check_stmt, 0);
// Only delete if the requester is the author
if (event_pubkey && strcmp(event_pubkey, requester_pubkey) == 0) {
sqlite3_finalize(check_stmt);
// Delete the event
const char* delete_sql = "DELETE FROM events WHERE id = ? AND pubkey = ?";
sqlite3_stmt* delete_stmt;
rc = sqlite3_prepare_v2(g_db, delete_sql, -1, &delete_stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(delete_stmt, 1, id, -1, SQLITE_STATIC);
sqlite3_bind_text(delete_stmt, 2, requester_pubkey, -1, SQLITE_STATIC);
if (sqlite3_step(delete_stmt) == SQLITE_DONE && sqlite3_changes(g_db) > 0) {
deleted_count++;
char debug_msg[128];
snprintf(debug_msg, sizeof(debug_msg), "Deleted event by ID: %.16s...", id);
log_info(debug_msg);
}
sqlite3_finalize(delete_stmt);
}
} else {
sqlite3_finalize(check_stmt);
char warning_msg[128];
snprintf(warning_msg, sizeof(warning_msg), "Unauthorized deletion attempt for event: %.16s...", id);
log_warning(warning_msg);
}
} else {
sqlite3_finalize(check_stmt);
char debug_msg[128];
snprintf(debug_msg, sizeof(debug_msg), "Event not found for deletion: %.16s...", id);
log_info(debug_msg);
}
}
return deleted_count;
}
// Delete events by addressable reference (kind:pubkey:d-identifier)
int delete_events_by_address(const char* requester_pubkey, cJSON* addresses, long deletion_timestamp) {
if (!g_db || !requester_pubkey || !addresses || !cJSON_IsArray(addresses)) {
return 0;
}
int deleted_count = 0;
cJSON* address = NULL;
cJSON_ArrayForEach(address, addresses) {
if (!cJSON_IsString(address)) {
continue;
}
const char* addr = cJSON_GetStringValue(address);
// Parse address format: kind:pubkey:d-identifier
char* addr_copy = strdup(addr);
if (!addr_copy) continue;
char* kind_str = strtok(addr_copy, ":");
char* pubkey_str = strtok(NULL, ":");
char* d_identifier = strtok(NULL, ":");
if (!kind_str || !pubkey_str) {
free(addr_copy);
continue;
}
int kind = atoi(kind_str);
// Only delete if the requester is the author
if (strcmp(pubkey_str, requester_pubkey) != 0) {
free(addr_copy);
char warning_msg[128];
snprintf(warning_msg, sizeof(warning_msg), "Unauthorized deletion attempt for address: %.32s...", addr);
log_warning(warning_msg);
continue;
}
// Build deletion query based on whether we have d-identifier
const char* delete_sql;
sqlite3_stmt* delete_stmt;
if (d_identifier && strlen(d_identifier) > 0) {
// Delete specific addressable event with d-tag
delete_sql = "DELETE FROM events WHERE kind = ? AND pubkey = ? AND created_at <= ? "
"AND json_extract(tags, '$[*]') LIKE '%[\"d\",\"' || ? || '\"]%'";
} else {
// Delete all events of this kind by this author up to deletion timestamp
delete_sql = "DELETE FROM events WHERE kind = ? AND pubkey = ? AND created_at <= ?";
}
int rc = sqlite3_prepare_v2(g_db, delete_sql, -1, &delete_stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_int(delete_stmt, 1, kind);
sqlite3_bind_text(delete_stmt, 2, requester_pubkey, -1, SQLITE_STATIC);
sqlite3_bind_int64(delete_stmt, 3, deletion_timestamp);
if (d_identifier && strlen(d_identifier) > 0) {
sqlite3_bind_text(delete_stmt, 4, d_identifier, -1, SQLITE_STATIC);
}
if (sqlite3_step(delete_stmt) == SQLITE_DONE) {
int changes = sqlite3_changes(g_db);
if (changes > 0) {
deleted_count += changes;
char debug_msg[128];
snprintf(debug_msg, sizeof(debug_msg), "Deleted %d events by address: %.32s...", changes, addr);
log_info(debug_msg);
}
}
sqlite3_finalize(delete_stmt);
}
free(addr_copy);
}
return deleted_count;
}
// Mark event as deleted (alternative to hard deletion - not used in current implementation)
int mark_event_as_deleted(const char* event_id, const char* deletion_event_id, const char* reason) {
(void)event_id; (void)deletion_event_id; (void)reason; // Suppress unused warnings
// This function could be used if we wanted to implement soft deletion
// For now, NIP-09 implementation uses hard deletion as specified
return 0;
}

620
src/nip011.c Normal file
View File

@@ -0,0 +1,620 @@
// NIP-11 Relay Information Document module
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>
#include <libwebsockets.h>
#include "../nostr_core_lib/cjson/cJSON.h"
#include "config.h"
// Forward declarations for logging functions
void log_info(const char* message);
void log_success(const char* message);
void log_error(const char* message);
void log_warning(const char* message);
// Forward declarations for configuration functions
const char* get_config_value(const char* key);
int get_config_int(const char* key, int default_value);
int get_config_bool(const char* key, int default_value);
// Forward declarations for global cache access
extern unified_config_cache_t g_unified_cache;
// Forward declarations for constants (defined in config.h and other headers)
#define HTTP_STATUS_OK 200
#define HTTP_STATUS_NOT_ACCEPTABLE 406
#define HTTP_STATUS_INTERNAL_SERVER_ERROR 500
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
// NIP-11 RELAY INFORMATION DOCUMENT
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
// Helper function to parse comma-separated string into cJSON array
cJSON* parse_comma_separated_array(const char* csv_string) {
log_info("parse_comma_separated_array called");
if (!csv_string || strlen(csv_string) == 0) {
log_info("Empty or null csv_string, returning empty array");
return cJSON_CreateArray();
}
log_info("Creating cJSON array");
cJSON* array = cJSON_CreateArray();
if (!array) {
log_info("Failed to create cJSON array");
return NULL;
}
log_info("Duplicating csv_string");
char* csv_copy = strdup(csv_string);
if (!csv_copy) {
log_info("Failed to duplicate csv_string");
cJSON_Delete(array);
return NULL;
}
log_info("Starting token parsing");
char* token = strtok(csv_copy, ",");
while (token) {
log_info("Processing token");
// Trim whitespace
while (*token == ' ') token++;
char* end = token + strlen(token) - 1;
while (end > token && *end == ' ') *end-- = '\0';
if (strlen(token) > 0) {
log_info("Token has content, parsing");
// Try to parse as number first (for supported_nips)
char* endptr;
long num = strtol(token, &endptr, 10);
if (*endptr == '\0') {
log_info("Token is number, adding to array");
// It's a number
cJSON_AddItemToArray(array, cJSON_CreateNumber(num));
} else {
log_info("Token is string, adding to array");
// It's a string
cJSON_AddItemToArray(array, cJSON_CreateString(token));
}
} else {
log_info("Token is empty, skipping");
}
token = strtok(NULL, ",");
}
log_info("Freeing csv_copy");
free(csv_copy);
log_info("Returning parsed array");
return array;
}
// Initialize relay information using configuration system
void init_relay_info() {
log_info("Initializing relay information from configuration...");
// Get all config values first (without holding mutex to avoid deadlock)
// Note: These may be dynamically allocated strings that need to be freed
log_info("Fetching relay configuration values...");
const char* relay_name = get_config_value("relay_name");
log_info("relay_name fetched");
const char* relay_description = get_config_value("relay_description");
log_info("relay_description fetched");
const char* relay_software = get_config_value("relay_software");
log_info("relay_software fetched");
const char* relay_version = get_config_value("relay_version");
log_info("relay_version fetched");
const char* relay_contact = get_config_value("relay_contact");
log_info("relay_contact fetched");
const char* relay_pubkey = get_config_value("relay_pubkey");
log_info("relay_pubkey fetched");
const char* supported_nips_csv = get_config_value("supported_nips");
log_info("supported_nips fetched");
const char* language_tags_csv = get_config_value("language_tags");
log_info("language_tags fetched");
const char* relay_countries_csv = get_config_value("relay_countries");
log_info("relay_countries fetched");
const char* posting_policy = get_config_value("posting_policy");
log_info("posting_policy fetched");
const char* payments_url = get_config_value("payments_url");
log_info("payments_url fetched");
// Get config values for limitations
log_info("Fetching limitation configuration values...");
int max_message_length = get_config_int("max_message_length", 16384);
log_info("max_message_length fetched");
int max_subscriptions_per_client = get_config_int("max_subscriptions_per_client", 20);
log_info("max_subscriptions_per_client fetched");
int max_limit = get_config_int("max_limit", 5000);
log_info("max_limit fetched");
int max_event_tags = get_config_int("max_event_tags", 100);
log_info("max_event_tags fetched");
int max_content_length = get_config_int("max_content_length", 8196);
log_info("max_content_length fetched");
int default_limit = get_config_int("default_limit", 500);
log_info("default_limit fetched");
int admin_enabled = get_config_bool("admin_enabled", 0);
log_info("admin_enabled fetched");
pthread_mutex_lock(&g_unified_cache.cache_lock);
// Update relay information fields
log_info("Storing string values in cache...");
if (relay_name) {
log_info("Storing relay_name");
strncpy(g_unified_cache.relay_info.name, relay_name, sizeof(g_unified_cache.relay_info.name) - 1);
free((char*)relay_name); // Free dynamically allocated string
log_info("relay_name stored and freed");
} else {
log_info("Using default relay_name");
strncpy(g_unified_cache.relay_info.name, "C Nostr Relay", sizeof(g_unified_cache.relay_info.name) - 1);
}
if (relay_description) {
log_info("Storing relay_description");
strncpy(g_unified_cache.relay_info.description, relay_description, sizeof(g_unified_cache.relay_info.description) - 1);
free((char*)relay_description); // Free dynamically allocated string
log_info("relay_description stored and freed");
} else {
log_info("Using default relay_description");
strncpy(g_unified_cache.relay_info.description, "A high-performance Nostr relay implemented in C with SQLite storage", sizeof(g_unified_cache.relay_info.description) - 1);
}
if (relay_software) {
log_info("Storing relay_software");
strncpy(g_unified_cache.relay_info.software, relay_software, sizeof(g_unified_cache.relay_info.software) - 1);
free((char*)relay_software); // Free dynamically allocated string
log_info("relay_software stored and freed");
} else {
log_info("Using default relay_software");
strncpy(g_unified_cache.relay_info.software, "https://git.laantungir.net/laantungir/c-relay.git", sizeof(g_unified_cache.relay_info.software) - 1);
}
if (relay_version) {
log_info("Storing relay_version");
strncpy(g_unified_cache.relay_info.version, relay_version, sizeof(g_unified_cache.relay_info.version) - 1);
free((char*)relay_version); // Free dynamically allocated string
log_info("relay_version stored and freed");
} else {
log_info("Using default relay_version");
strncpy(g_unified_cache.relay_info.version, "0.2.0", sizeof(g_unified_cache.relay_info.version) - 1);
}
if (relay_contact) {
log_info("Storing relay_contact");
strncpy(g_unified_cache.relay_info.contact, relay_contact, sizeof(g_unified_cache.relay_info.contact) - 1);
free((char*)relay_contact); // Free dynamically allocated string
log_info("relay_contact stored and freed");
}
if (relay_pubkey) {
log_info("Storing relay_pubkey");
strncpy(g_unified_cache.relay_info.pubkey, relay_pubkey, sizeof(g_unified_cache.relay_info.pubkey) - 1);
free((char*)relay_pubkey); // Free dynamically allocated string
log_info("relay_pubkey stored and freed");
}
if (posting_policy) {
log_info("Storing posting_policy");
strncpy(g_unified_cache.relay_info.posting_policy, posting_policy, sizeof(g_unified_cache.relay_info.posting_policy) - 1);
free((char*)posting_policy); // Free dynamically allocated string
log_info("posting_policy stored and freed");
}
if (payments_url) {
log_info("Storing payments_url");
strncpy(g_unified_cache.relay_info.payments_url, payments_url, sizeof(g_unified_cache.relay_info.payments_url) - 1);
free((char*)payments_url); // Free dynamically allocated string
log_info("payments_url stored and freed");
}
// Initialize supported NIPs array from config
log_info("Initializing supported_nips array");
if (supported_nips_csv) {
log_info("Parsing supported_nips from config");
g_unified_cache.relay_info.supported_nips = parse_comma_separated_array(supported_nips_csv);
log_info("supported_nips parsed successfully");
free((char*)supported_nips_csv); // Free dynamically allocated string
log_info("supported_nips_csv freed");
} else {
log_info("Using default supported_nips");
// Fallback to default supported NIPs
g_unified_cache.relay_info.supported_nips = cJSON_CreateArray();
if (g_unified_cache.relay_info.supported_nips) {
cJSON_AddItemToArray(g_unified_cache.relay_info.supported_nips, cJSON_CreateNumber(1)); // NIP-01: Basic protocol
cJSON_AddItemToArray(g_unified_cache.relay_info.supported_nips, cJSON_CreateNumber(9)); // NIP-09: Event deletion
cJSON_AddItemToArray(g_unified_cache.relay_info.supported_nips, cJSON_CreateNumber(11)); // NIP-11: Relay information
cJSON_AddItemToArray(g_unified_cache.relay_info.supported_nips, cJSON_CreateNumber(13)); // NIP-13: Proof of Work
cJSON_AddItemToArray(g_unified_cache.relay_info.supported_nips, cJSON_CreateNumber(15)); // NIP-15: EOSE
cJSON_AddItemToArray(g_unified_cache.relay_info.supported_nips, cJSON_CreateNumber(20)); // NIP-20: Command results
cJSON_AddItemToArray(g_unified_cache.relay_info.supported_nips, cJSON_CreateNumber(40)); // NIP-40: Expiration Timestamp
cJSON_AddItemToArray(g_unified_cache.relay_info.supported_nips, cJSON_CreateNumber(42)); // NIP-42: Authentication
}
log_info("Default supported_nips created");
}
// Initialize server limitations using configuration
log_info("Initializing server limitations");
g_unified_cache.relay_info.limitation = cJSON_CreateObject();
if (g_unified_cache.relay_info.limitation) {
log_info("Adding limitation fields");
cJSON_AddNumberToObject(g_unified_cache.relay_info.limitation, "max_message_length", max_message_length);
cJSON_AddNumberToObject(g_unified_cache.relay_info.limitation, "max_subscriptions", max_subscriptions_per_client);
cJSON_AddNumberToObject(g_unified_cache.relay_info.limitation, "max_limit", max_limit);
cJSON_AddNumberToObject(g_unified_cache.relay_info.limitation, "max_subid_length", SUBSCRIPTION_ID_MAX_LENGTH);
cJSON_AddNumberToObject(g_unified_cache.relay_info.limitation, "max_event_tags", max_event_tags);
cJSON_AddNumberToObject(g_unified_cache.relay_info.limitation, "max_content_length", max_content_length);
cJSON_AddNumberToObject(g_unified_cache.relay_info.limitation, "min_pow_difficulty", g_unified_cache.pow_config.min_pow_difficulty);
cJSON_AddBoolToObject(g_unified_cache.relay_info.limitation, "auth_required", admin_enabled ? 1 : 0);
cJSON_AddBoolToObject(g_unified_cache.relay_info.limitation, "payment_required", 0);
cJSON_AddBoolToObject(g_unified_cache.relay_info.limitation, "restricted_writes", 0);
cJSON_AddNumberToObject(g_unified_cache.relay_info.limitation, "created_at_lower_limit", 0);
cJSON_AddNumberToObject(g_unified_cache.relay_info.limitation, "created_at_upper_limit", 2147483647);
cJSON_AddNumberToObject(g_unified_cache.relay_info.limitation, "default_limit", default_limit);
log_info("Limitation fields added");
} else {
log_info("Failed to create limitation object");
}
// Initialize empty retention policies (can be configured later)
log_info("Initializing retention policies");
g_unified_cache.relay_info.retention = cJSON_CreateArray();
// Initialize language tags from config
log_info("Initializing language_tags");
if (language_tags_csv) {
log_info("Parsing language_tags from config");
g_unified_cache.relay_info.language_tags = parse_comma_separated_array(language_tags_csv);
log_info("language_tags parsed successfully");
free((char*)language_tags_csv); // Free dynamically allocated string
log_info("language_tags_csv freed");
} else {
log_info("Using default language_tags");
// Fallback to global
g_unified_cache.relay_info.language_tags = cJSON_CreateArray();
if (g_unified_cache.relay_info.language_tags) {
cJSON_AddItemToArray(g_unified_cache.relay_info.language_tags, cJSON_CreateString("*"));
}
}
// Initialize relay countries from config
log_info("Initializing relay_countries");
if (relay_countries_csv) {
log_info("Parsing relay_countries from config");
g_unified_cache.relay_info.relay_countries = parse_comma_separated_array(relay_countries_csv);
log_info("relay_countries parsed successfully");
free((char*)relay_countries_csv); // Free dynamically allocated string
log_info("relay_countries_csv freed");
} else {
log_info("Using default relay_countries");
// Fallback to global
g_unified_cache.relay_info.relay_countries = cJSON_CreateArray();
if (g_unified_cache.relay_info.relay_countries) {
cJSON_AddItemToArray(g_unified_cache.relay_info.relay_countries, cJSON_CreateString("*"));
}
}
// Initialize content tags as empty array
log_info("Initializing tags");
g_unified_cache.relay_info.tags = cJSON_CreateArray();
// Initialize fees as empty object (no payment required by default)
log_info("Initializing fees");
g_unified_cache.relay_info.fees = cJSON_CreateObject();
log_info("Unlocking cache mutex");
pthread_mutex_unlock(&g_unified_cache.cache_lock);
log_success("Relay information initialized with default values");
}
// Clean up relay information JSON objects
void cleanup_relay_info() {
pthread_mutex_lock(&g_unified_cache.cache_lock);
if (g_unified_cache.relay_info.supported_nips) {
cJSON_Delete(g_unified_cache.relay_info.supported_nips);
g_unified_cache.relay_info.supported_nips = NULL;
}
if (g_unified_cache.relay_info.limitation) {
cJSON_Delete(g_unified_cache.relay_info.limitation);
g_unified_cache.relay_info.limitation = NULL;
}
if (g_unified_cache.relay_info.retention) {
cJSON_Delete(g_unified_cache.relay_info.retention);
g_unified_cache.relay_info.retention = NULL;
}
if (g_unified_cache.relay_info.language_tags) {
cJSON_Delete(g_unified_cache.relay_info.language_tags);
g_unified_cache.relay_info.language_tags = NULL;
}
if (g_unified_cache.relay_info.relay_countries) {
cJSON_Delete(g_unified_cache.relay_info.relay_countries);
g_unified_cache.relay_info.relay_countries = NULL;
}
if (g_unified_cache.relay_info.tags) {
cJSON_Delete(g_unified_cache.relay_info.tags);
g_unified_cache.relay_info.tags = NULL;
}
if (g_unified_cache.relay_info.fees) {
cJSON_Delete(g_unified_cache.relay_info.fees);
g_unified_cache.relay_info.fees = NULL;
}
pthread_mutex_unlock(&g_unified_cache.cache_lock);
}
// Generate NIP-11 compliant JSON document
cJSON* generate_relay_info_json() {
cJSON* info = cJSON_CreateObject();
if (!info) {
log_error("Failed to create relay info JSON object");
return NULL;
}
pthread_mutex_lock(&g_unified_cache.cache_lock);
// Add basic relay information
if (strlen(g_unified_cache.relay_info.name) > 0) {
cJSON_AddStringToObject(info, "name", g_unified_cache.relay_info.name);
}
if (strlen(g_unified_cache.relay_info.description) > 0) {
cJSON_AddStringToObject(info, "description", g_unified_cache.relay_info.description);
}
if (strlen(g_unified_cache.relay_info.banner) > 0) {
cJSON_AddStringToObject(info, "banner", g_unified_cache.relay_info.banner);
}
if (strlen(g_unified_cache.relay_info.icon) > 0) {
cJSON_AddStringToObject(info, "icon", g_unified_cache.relay_info.icon);
}
if (strlen(g_unified_cache.relay_info.pubkey) > 0) {
cJSON_AddStringToObject(info, "pubkey", g_unified_cache.relay_info.pubkey);
}
if (strlen(g_unified_cache.relay_info.contact) > 0) {
cJSON_AddStringToObject(info, "contact", g_unified_cache.relay_info.contact);
}
// Add supported NIPs
if (g_unified_cache.relay_info.supported_nips) {
cJSON_AddItemToObject(info, "supported_nips", cJSON_Duplicate(g_unified_cache.relay_info.supported_nips, 1));
}
// Add software information
if (strlen(g_unified_cache.relay_info.software) > 0) {
cJSON_AddStringToObject(info, "software", g_unified_cache.relay_info.software);
}
if (strlen(g_unified_cache.relay_info.version) > 0) {
cJSON_AddStringToObject(info, "version", g_unified_cache.relay_info.version);
}
// Add policies
if (strlen(g_unified_cache.relay_info.privacy_policy) > 0) {
cJSON_AddStringToObject(info, "privacy_policy", g_unified_cache.relay_info.privacy_policy);
}
if (strlen(g_unified_cache.relay_info.terms_of_service) > 0) {
cJSON_AddStringToObject(info, "terms_of_service", g_unified_cache.relay_info.terms_of_service);
}
if (strlen(g_unified_cache.relay_info.posting_policy) > 0) {
cJSON_AddStringToObject(info, "posting_policy", g_unified_cache.relay_info.posting_policy);
}
// Add server limitations
if (g_unified_cache.relay_info.limitation) {
cJSON_AddItemToObject(info, "limitation", cJSON_Duplicate(g_unified_cache.relay_info.limitation, 1));
}
// Add retention policies if configured
if (g_unified_cache.relay_info.retention && cJSON_GetArraySize(g_unified_cache.relay_info.retention) > 0) {
cJSON_AddItemToObject(info, "retention", cJSON_Duplicate(g_unified_cache.relay_info.retention, 1));
}
// Add geographical and language information
if (g_unified_cache.relay_info.relay_countries) {
cJSON_AddItemToObject(info, "relay_countries", cJSON_Duplicate(g_unified_cache.relay_info.relay_countries, 1));
}
if (g_unified_cache.relay_info.language_tags) {
cJSON_AddItemToObject(info, "language_tags", cJSON_Duplicate(g_unified_cache.relay_info.language_tags, 1));
}
if (g_unified_cache.relay_info.tags && cJSON_GetArraySize(g_unified_cache.relay_info.tags) > 0) {
cJSON_AddItemToObject(info, "tags", cJSON_Duplicate(g_unified_cache.relay_info.tags, 1));
}
// Add payment information if configured
if (strlen(g_unified_cache.relay_info.payments_url) > 0) {
cJSON_AddStringToObject(info, "payments_url", g_unified_cache.relay_info.payments_url);
}
if (g_unified_cache.relay_info.fees && cJSON_GetObjectItem(g_unified_cache.relay_info.fees, "admission")) {
cJSON_AddItemToObject(info, "fees", cJSON_Duplicate(g_unified_cache.relay_info.fees, 1));
}
pthread_mutex_unlock(&g_unified_cache.cache_lock);
return info;
}
// NIP-11 HTTP session data structure for managing buffer lifetime
struct nip11_session_data {
char* json_buffer;
size_t json_length;
int headers_sent;
int body_sent;
};
// Handle NIP-11 HTTP request with proper asynchronous buffer management
int handle_nip11_http_request(struct lws* wsi, const char* accept_header) {
log_info("Handling NIP-11 relay information request");
// Check if client accepts application/nostr+json
int accepts_nostr_json = 0;
if (accept_header) {
if (strstr(accept_header, "application/nostr+json") != NULL) {
accepts_nostr_json = 1;
}
}
if (!accepts_nostr_json) {
log_warning("HTTP request without proper Accept header for NIP-11");
// Return 406 Not Acceptable
unsigned char buf[LWS_PRE + 256];
unsigned char *p = &buf[LWS_PRE];
unsigned char *start = p;
unsigned char *end = &buf[sizeof(buf) - 1];
if (lws_add_http_header_status(wsi, HTTP_STATUS_NOT_ACCEPTABLE, &p, end)) {
return -1;
}
if (lws_add_http_header_by_token(wsi, WSI_TOKEN_HTTP_CONTENT_TYPE, (unsigned char*)"text/plain", 10, &p, end)) {
return -1;
}
if (lws_add_http_header_content_length(wsi, 0, &p, end)) {
return -1;
}
if (lws_finalize_http_header(wsi, &p, end)) {
return -1;
}
lws_write(wsi, start, p - start, LWS_WRITE_HTTP_HEADERS);
return -1; // Close connection
}
// Generate relay information JSON
cJSON* info_json = generate_relay_info_json();
if (!info_json) {
log_error("Failed to generate relay info JSON");
unsigned char buf[LWS_PRE + 256];
unsigned char *p = &buf[LWS_PRE];
unsigned char *start = p;
unsigned char *end = &buf[sizeof(buf) - 1];
if (lws_add_http_header_status(wsi, HTTP_STATUS_INTERNAL_SERVER_ERROR, &p, end)) {
return -1;
}
if (lws_add_http_header_by_token(wsi, WSI_TOKEN_HTTP_CONTENT_TYPE, (unsigned char*)"text/plain", 10, &p, end)) {
return -1;
}
if (lws_add_http_header_content_length(wsi, 0, &p, end)) {
return -1;
}
if (lws_finalize_http_header(wsi, &p, end)) {
return -1;
}
lws_write(wsi, start, p - start, LWS_WRITE_HTTP_HEADERS);
return -1;
}
char* json_string = cJSON_Print(info_json);
cJSON_Delete(info_json);
if (!json_string) {
log_error("Failed to serialize relay info JSON");
unsigned char buf[LWS_PRE + 256];
unsigned char *p = &buf[LWS_PRE];
unsigned char *start = p;
unsigned char *end = &buf[sizeof(buf) - 1];
if (lws_add_http_header_status(wsi, HTTP_STATUS_INTERNAL_SERVER_ERROR, &p, end)) {
return -1;
}
if (lws_add_http_header_by_token(wsi, WSI_TOKEN_HTTP_CONTENT_TYPE, (unsigned char*)"text/plain", 10, &p, end)) {
return -1;
}
if (lws_add_http_header_content_length(wsi, 0, &p, end)) {
return -1;
}
if (lws_finalize_http_header(wsi, &p, end)) {
return -1;
}
lws_write(wsi, start, p - start, LWS_WRITE_HTTP_HEADERS);
return -1;
}
size_t json_len = strlen(json_string);
// Allocate session data to manage buffer lifetime across callbacks
struct nip11_session_data* session_data = malloc(sizeof(struct nip11_session_data));
if (!session_data) {
log_error("Failed to allocate NIP-11 session data");
free(json_string);
return -1;
}
// Store JSON buffer in session data for asynchronous handling
session_data->json_buffer = json_string;
session_data->json_length = json_len;
session_data->headers_sent = 0;
session_data->body_sent = 0;
// Store session data in WSI user data for callback access
lws_set_wsi_user(wsi, session_data);
// Prepare HTTP response with CORS headers
unsigned char buf[LWS_PRE + 1024];
unsigned char *p = &buf[LWS_PRE];
unsigned char *start = p;
unsigned char *end = &buf[sizeof(buf) - 1];
// Add status
if (lws_add_http_header_status(wsi, HTTP_STATUS_OK, &p, end)) {
free(session_data->json_buffer);
free(session_data);
return -1;
}
// Add content type
if (lws_add_http_header_by_token(wsi, WSI_TOKEN_HTTP_CONTENT_TYPE,
(unsigned char*)"application/nostr+json", 22, &p, end)) {
free(session_data->json_buffer);
free(session_data);
return -1;
}
// Add content length
if (lws_add_http_header_content_length(wsi, json_len, &p, end)) {
free(session_data->json_buffer);
free(session_data);
return -1;
}
// Add CORS headers as required by NIP-11
if (lws_add_http_header_by_name(wsi, (unsigned char*)"access-control-allow-origin:",
(unsigned char*)"*", 1, &p, end)) {
free(session_data->json_buffer);
free(session_data);
return -1;
}
if (lws_add_http_header_by_name(wsi, (unsigned char*)"access-control-allow-headers:",
(unsigned char*)"content-type, accept", 20, &p, end)) {
free(session_data->json_buffer);
free(session_data);
return -1;
}
if (lws_add_http_header_by_name(wsi, (unsigned char*)"access-control-allow-methods:",
(unsigned char*)"GET, OPTIONS", 12, &p, end)) {
free(session_data->json_buffer);
free(session_data);
return -1;
}
// Finalize headers
if (lws_finalize_http_header(wsi, &p, end)) {
free(session_data->json_buffer);
free(session_data);
return -1;
}
// Write headers
if (lws_write(wsi, start, p - start, LWS_WRITE_HTTP_HEADERS) < 0) {
free(session_data->json_buffer);
free(session_data);
return -1;
}
session_data->headers_sent = 1;
// Request callback for body transmission
lws_callback_on_writable(wsi);
log_success("NIP-11 headers sent, body transmission scheduled");
return 0;
}

191
src/nip013.c Normal file
View File

@@ -0,0 +1,191 @@
// NIP-13 Proof of Work validation module
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>
#include "../nostr_core_lib/cjson/cJSON.h"
#include "../nostr_core_lib/nostr_core/nostr_core.h"
#include "../nostr_core_lib/nostr_core/nip013.h"
#include "config.h"
// Forward declarations for logging functions
void log_info(const char* message);
void log_success(const char* message);
void log_error(const char* message);
void log_warning(const char* message);
// NIP-13 PoW configuration structure
struct pow_config {
int enabled; // 0 = disabled, 1 = enabled
int min_pow_difficulty; // Minimum required difficulty (0 = no requirement)
int validation_flags; // Bitflags for validation options
int require_nonce_tag; // 1 = require nonce tag presence
int reject_lower_targets; // 1 = reject if committed < actual difficulty
int strict_format; // 1 = enforce strict nonce tag format
int anti_spam_mode; // 1 = full anti-spam validation
};
// Initialize PoW configuration using configuration system
void init_pow_config() {
log_info("Initializing NIP-13 Proof of Work configuration");
// Get all config values first (without holding mutex to avoid deadlock)
int pow_enabled = get_config_bool("pow_enabled", 1);
int pow_min_difficulty = get_config_int("pow_min_difficulty", 0);
const char* pow_mode = get_config_value("pow_mode");
pthread_mutex_lock(&g_unified_cache.cache_lock);
// Load PoW settings from configuration system
g_unified_cache.pow_config.enabled = pow_enabled;
g_unified_cache.pow_config.min_pow_difficulty = pow_min_difficulty;
// Configure PoW mode
if (pow_mode) {
if (strcmp(pow_mode, "strict") == 0) {
g_unified_cache.pow_config.validation_flags = NOSTR_POW_VALIDATE_ANTI_SPAM | NOSTR_POW_STRICT_FORMAT;
g_unified_cache.pow_config.require_nonce_tag = 1;
g_unified_cache.pow_config.reject_lower_targets = 1;
g_unified_cache.pow_config.strict_format = 1;
g_unified_cache.pow_config.anti_spam_mode = 1;
log_info("PoW configured in strict anti-spam mode");
} else if (strcmp(pow_mode, "full") == 0) {
g_unified_cache.pow_config.validation_flags = NOSTR_POW_VALIDATE_FULL;
g_unified_cache.pow_config.require_nonce_tag = 1;
log_info("PoW configured in full validation mode");
} else if (strcmp(pow_mode, "basic") == 0) {
g_unified_cache.pow_config.validation_flags = NOSTR_POW_VALIDATE_BASIC;
log_info("PoW configured in basic validation mode");
} else if (strcmp(pow_mode, "disabled") == 0) {
g_unified_cache.pow_config.enabled = 0;
log_info("PoW validation disabled via configuration");
}
free((char*)pow_mode); // Free dynamically allocated string
} else {
// Default to basic mode
g_unified_cache.pow_config.validation_flags = NOSTR_POW_VALIDATE_BASIC;
log_info("PoW configured in basic validation mode (default)");
}
// Log final configuration
char config_msg[512];
snprintf(config_msg, sizeof(config_msg),
"PoW Configuration: enabled=%s, min_difficulty=%d, validation_flags=0x%x, mode=%s",
g_unified_cache.pow_config.enabled ? "true" : "false",
g_unified_cache.pow_config.min_pow_difficulty,
g_unified_cache.pow_config.validation_flags,
g_unified_cache.pow_config.anti_spam_mode ? "anti-spam" :
(g_unified_cache.pow_config.validation_flags & NOSTR_POW_VALIDATE_FULL) ? "full" : "basic");
log_info(config_msg);
pthread_mutex_unlock(&g_unified_cache.cache_lock);
}
// Validate event Proof of Work according to NIP-13
int validate_event_pow(cJSON* event, char* error_message, size_t error_size) {
pthread_mutex_lock(&g_unified_cache.cache_lock);
int enabled = g_unified_cache.pow_config.enabled;
int min_pow_difficulty = g_unified_cache.pow_config.min_pow_difficulty;
int validation_flags = g_unified_cache.pow_config.validation_flags;
pthread_mutex_unlock(&g_unified_cache.cache_lock);
if (!enabled) {
return 0; // PoW validation disabled
}
if (!event) {
snprintf(error_message, error_size, "pow: null event");
return NOSTR_ERROR_INVALID_INPUT;
}
// If min_pow_difficulty is 0, only validate events that have nonce tags
// This allows events without PoW when difficulty requirement is 0
if (min_pow_difficulty == 0) {
cJSON* tags = cJSON_GetObjectItem(event, "tags");
int has_nonce_tag = 0;
if (tags && cJSON_IsArray(tags)) {
cJSON* tag = NULL;
cJSON_ArrayForEach(tag, tags) {
if (cJSON_IsArray(tag) && cJSON_GetArraySize(tag) >= 2) {
cJSON* tag_name = cJSON_GetArrayItem(tag, 0);
if (cJSON_IsString(tag_name)) {
const char* name = cJSON_GetStringValue(tag_name);
if (name && strcmp(name, "nonce") == 0) {
has_nonce_tag = 1;
break;
}
}
}
}
}
// If no minimum difficulty required and no nonce tag, skip PoW validation
if (!has_nonce_tag) {
return 0; // Accept event without PoW when min_difficulty=0
}
}
// Perform PoW validation using nostr_core_lib
nostr_pow_result_t pow_result;
int validation_result = nostr_validate_pow(event, min_pow_difficulty,
validation_flags, &pow_result);
if (validation_result != NOSTR_SUCCESS) {
// Handle specific error cases with appropriate messages
switch (validation_result) {
case NOSTR_ERROR_NIP13_INSUFFICIENT:
snprintf(error_message, error_size,
"pow: insufficient difficulty: %d < %d",
pow_result.actual_difficulty, min_pow_difficulty);
log_warning("Event rejected: insufficient PoW difficulty");
break;
case NOSTR_ERROR_NIP13_NO_NONCE_TAG:
// This should not happen with min_difficulty=0 after our check above
if (min_pow_difficulty > 0) {
snprintf(error_message, error_size, "pow: missing required nonce tag");
log_warning("Event rejected: missing nonce tag");
} else {
return 0; // Allow when min_difficulty=0
}
break;
case NOSTR_ERROR_NIP13_INVALID_NONCE_TAG:
snprintf(error_message, error_size, "pow: invalid nonce tag format");
log_warning("Event rejected: invalid nonce tag format");
break;
case NOSTR_ERROR_NIP13_TARGET_MISMATCH:
snprintf(error_message, error_size,
"pow: committed target (%d) lower than minimum (%d)",
pow_result.committed_target, min_pow_difficulty);
log_warning("Event rejected: committed target too low (anti-spam protection)");
break;
case NOSTR_ERROR_NIP13_CALCULATION:
snprintf(error_message, error_size, "pow: difficulty calculation failed");
log_error("PoW difficulty calculation error");
break;
case NOSTR_ERROR_EVENT_INVALID_ID:
snprintf(error_message, error_size, "pow: invalid event ID format");
log_warning("Event rejected: invalid event ID for PoW calculation");
break;
default:
snprintf(error_message, error_size, "pow: validation failed - %s",
strlen(pow_result.error_detail) > 0 ? pow_result.error_detail : "unknown error");
log_warning("Event rejected: PoW validation failed");
}
return validation_result;
}
// Log successful PoW validation (only if minimum difficulty is required)
if (min_pow_difficulty > 0 || pow_result.has_nonce_tag) {
char debug_msg[256];
snprintf(debug_msg, sizeof(debug_msg),
"PoW validated: difficulty=%d, target=%d, nonce=%llu%s",
pow_result.actual_difficulty,
pow_result.committed_target,
(unsigned long long)pow_result.nonce_value,
pow_result.has_nonce_tag ? "" : " (no nonce tag)");
log_info(debug_msg);
}
return 0; // Success
}
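As a usage illustration only (not code from this commit): a minimal sketch of how an event-handling path might call the validator above. Only validate_event_pow comes from this module; the wrapper name and buffer size are assumptions.

// Hypothetical caller sketch for validate_event_pow (illustrative only)
#include <stdio.h>
#include "../nostr_core_lib/cjson/cJSON.h"

int validate_event_pow(cJSON* event, char* error_message, size_t error_size);

static int check_pow_before_accept(const char* event_json) {
    cJSON* event = cJSON_Parse(event_json);        // full Nostr event object as received
    if (!event) return -1;
    char err[256] = {0};
    int rc = validate_event_pow(event, err, sizeof(err));
    if (rc != 0) {
        fprintf(stderr, "rejecting event: %s\n", err);  // err carries the "pow: ..." reason
    }
    cJSON_Delete(event);
    return rc;
}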

src/nip040.c (new file, 173 lines)

@@ -0,0 +1,173 @@
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
// Include nostr_core_lib for cJSON
#include "../nostr_core_lib/cjson/cJSON.h"
// Configuration management system
#include "config.h"
// NIP-40 Expiration configuration structure
struct expiration_config {
int enabled; // 0 = disabled, 1 = enabled
int strict_mode; // 1 = reject expired events on submission
int filter_responses; // 1 = filter expired events from responses
int delete_expired; // 1 = delete expired events from DB (future feature)
long grace_period; // Grace period in seconds for clock skew
};
// Global expiration configuration instance
struct expiration_config g_expiration_config = {
.enabled = 1, // Enable expiration handling by default
.strict_mode = 1, // Reject expired events on submission by default
.filter_responses = 1, // Filter expired events from responses by default
.delete_expired = 0, // Don't delete by default (keep for audit)
.grace_period = 1 // 1 second grace period for testing (was 300)
};
// Forward declarations for logging functions
void log_info(const char* message);
void log_warning(const char* message);
// Initialize expiration configuration using configuration system
void init_expiration_config() {
log_info("Initializing NIP-40 Expiration Timestamp configuration");
// Get all config values first (without holding mutex to avoid deadlock)
int expiration_enabled = get_config_bool("expiration_enabled", 1);
int expiration_strict = get_config_bool("expiration_strict", 1);
int expiration_filter = get_config_bool("expiration_filter", 1);
int expiration_delete = get_config_bool("expiration_delete", 0);
long expiration_grace_period = get_config_int("expiration_grace_period", 1);
// Load expiration settings from configuration system
g_expiration_config.enabled = expiration_enabled;
g_expiration_config.strict_mode = expiration_strict;
g_expiration_config.filter_responses = expiration_filter;
g_expiration_config.delete_expired = expiration_delete;
g_expiration_config.grace_period = expiration_grace_period;
// Validate grace period bounds
if (g_expiration_config.grace_period < 0 || g_expiration_config.grace_period > 86400) {
log_warning("Invalid grace period, using default of 300 seconds");
g_expiration_config.grace_period = 300;
}
// Log final configuration
char config_msg[512];
snprintf(config_msg, sizeof(config_msg),
"Expiration Configuration: enabled=%s, strict_mode=%s, filter_responses=%s, grace_period=%ld seconds",
g_expiration_config.enabled ? "true" : "false",
g_expiration_config.strict_mode ? "true" : "false",
g_expiration_config.filter_responses ? "true" : "false",
g_expiration_config.grace_period);
log_info(config_msg);
}
// Extract expiration timestamp from event tags
long extract_expiration_timestamp(cJSON* tags) {
if (!tags || !cJSON_IsArray(tags)) {
return 0; // No expiration
}
cJSON* tag = NULL;
cJSON_ArrayForEach(tag, tags) {
if (cJSON_IsArray(tag) && cJSON_GetArraySize(tag) >= 2) {
cJSON* tag_name = cJSON_GetArrayItem(tag, 0);
cJSON* tag_value = cJSON_GetArrayItem(tag, 1);
if (cJSON_IsString(tag_name) && cJSON_IsString(tag_value)) {
const char* name = cJSON_GetStringValue(tag_name);
const char* value = cJSON_GetStringValue(tag_value);
if (name && value && strcmp(name, "expiration") == 0) {
// Validate that the string contains only digits (and optional leading whitespace)
const char* p = value;
// Skip leading whitespace
while (*p == ' ' || *p == '\t') p++;
// Check if we have at least one digit
if (*p == '\0') {
continue; // Empty or whitespace-only string, ignore this tag
}
// Validate that all remaining characters are digits
const char* digit_start = p;
while (*p >= '0' && *p <= '9') p++;
// If we didn't consume the entire string or found no digits, it's malformed
if (*p != '\0' || p == digit_start) {
char debug_msg[256];
snprintf(debug_msg, sizeof(debug_msg),
"Ignoring malformed expiration tag value: '%.32s'", value);
log_warning(debug_msg);
continue; // Ignore malformed expiration tag
}
long expiration_ts = atol(value);
if (expiration_ts > 0) {
return expiration_ts;
}
}
}
}
}
return 0; // No valid expiration tag found
}
// Check if event is currently expired
int is_event_expired(cJSON* event, time_t current_time) {
if (!event) {
return 0; // Invalid event, not expired
}
cJSON* tags = cJSON_GetObjectItem(event, "tags");
long expiration_ts = extract_expiration_timestamp(tags);
if (expiration_ts == 0) {
return 0; // No expiration timestamp, not expired
}
// Check if current time exceeds expiration + grace period
return (current_time > (expiration_ts + g_expiration_config.grace_period));
}
// Validate event expiration according to NIP-40
int validate_event_expiration(cJSON* event, char* error_message, size_t error_size) {
if (!g_expiration_config.enabled) {
return 0; // Expiration validation disabled
}
if (!event) {
snprintf(error_message, error_size, "expiration: null event");
return -1;
}
// Check if event is expired
time_t current_time = time(NULL);
if (is_event_expired(event, current_time)) {
if (g_expiration_config.strict_mode) {
cJSON* tags = cJSON_GetObjectItem(event, "tags");
long expiration_ts = extract_expiration_timestamp(tags);
snprintf(error_message, error_size,
"invalid: event expired (expiration=%ld, current=%ld, grace=%ld)",
expiration_ts, (long)current_time, g_expiration_config.grace_period);
log_warning("Event rejected: expired timestamp");
return -1;
} else {
// In non-strict mode, log but allow expired events
char debug_msg[256];
snprintf(debug_msg, sizeof(debug_msg),
"Accepting expired event (strict_mode disabled)");
log_info(debug_msg);
}
}
return 0; // Success
}
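For illustration, a hedged sketch of feeding an already-expired event through the check above. The tag layout follows NIP-40 (an "expiration" tag holding a Unix timestamp); the demo function name is an assumption.

// Sketch: an event that expired an hour ago should be rejected in strict mode
#include <stdio.h>
#include <time.h>
#include "../nostr_core_lib/cjson/cJSON.h"

int validate_event_expiration(cJSON* event, char* error_message, size_t error_size);

static void expiration_demo(void) {
    cJSON* event = cJSON_CreateObject();
    cJSON* tags = cJSON_CreateArray();
    cJSON* exp_tag = cJSON_CreateArray();
    char ts[32];
    snprintf(ts, sizeof(ts), "%ld", (long)time(NULL) - 3600);        // one hour in the past
    cJSON_AddItemToArray(exp_tag, cJSON_CreateString("expiration"));
    cJSON_AddItemToArray(exp_tag, cJSON_CreateString(ts));
    cJSON_AddItemToArray(tags, exp_tag);
    cJSON_AddItemToObject(event, "tags", tags);
    char err[256] = {0};
    if (validate_event_expiration(event, err, sizeof(err)) != 0) {
        fprintf(stderr, "rejected: %s\n", err);                      // expected with strict_mode=1
    }
    cJSON_Delete(event);
}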

src/nip042.c (new file, 180 lines)

@@ -0,0 +1,180 @@
#define _GNU_SOURCE
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
// NIP-42 AUTHENTICATION FUNCTIONS
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
#include <pthread.h>
#include <cjson/cJSON.h>
#include <libwebsockets.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>
// Forward declarations for logging functions
void log_error(const char* message);
void log_info(const char* message);
void log_warning(const char* message);
void log_success(const char* message);
// Forward declaration for notice message function
void send_notice_message(struct lws* wsi, const char* message);
// Forward declarations for NIP-42 functions from request_validator.c
int nostr_nip42_generate_challenge(char *challenge_buffer, size_t buffer_size);
int nostr_nip42_verify_auth_event(cJSON *event, const char *challenge_id,
const char *relay_url, int time_tolerance_seconds);
// Local copy of the per_session_data layout (must stay in sync with the definition in main.c)
struct per_session_data {
int authenticated;
void* subscriptions; // Head of this session's subscription list
pthread_mutex_t session_lock; // Per-session thread safety
char client_ip[41]; // Client IP for logging
int subscription_count; // Number of subscriptions for this session
// NIP-42 Authentication State
char authenticated_pubkey[65]; // Authenticated public key (64 hex + null)
char active_challenge[65]; // Current challenge for this session (64 hex + null)
time_t challenge_created; // When challenge was created
time_t challenge_expires; // Challenge expiration time
int nip42_auth_required_events; // Whether NIP-42 auth is required for EVENT submission
int nip42_auth_required_subscriptions; // Whether NIP-42 auth is required for REQ operations
int auth_challenge_sent; // Whether challenge has been sent (0/1)
};
// Send NIP-42 authentication challenge to client
void send_nip42_auth_challenge(struct lws* wsi, struct per_session_data* pss) {
if (!wsi || !pss) return;
// Generate challenge using existing request_validator function
char challenge[65];
if (nostr_nip42_generate_challenge(challenge, sizeof(challenge)) != 0) {
log_error("Failed to generate NIP-42 challenge");
send_notice_message(wsi, "Authentication temporarily unavailable");
return;
}
// Store challenge in session
pthread_mutex_lock(&pss->session_lock);
strncpy(pss->active_challenge, challenge, sizeof(pss->active_challenge) - 1);
pss->active_challenge[sizeof(pss->active_challenge) - 1] = '\0';
pss->challenge_created = time(NULL);
pss->challenge_expires = pss->challenge_created + 600; // 10 minutes
pss->auth_challenge_sent = 1;
pthread_mutex_unlock(&pss->session_lock);
// Send AUTH challenge message: ["AUTH", <challenge>]
cJSON* auth_msg = cJSON_CreateArray();
cJSON_AddItemToArray(auth_msg, cJSON_CreateString("AUTH"));
cJSON_AddItemToArray(auth_msg, cJSON_CreateString(challenge));
char* msg_str = cJSON_Print(auth_msg);
if (msg_str) {
size_t msg_len = strlen(msg_str);
unsigned char* buf = malloc(LWS_PRE + msg_len);
if (buf) {
memcpy(buf + LWS_PRE, msg_str, msg_len);
lws_write(wsi, buf + LWS_PRE, msg_len, LWS_WRITE_TEXT);
free(buf);
}
free(msg_str);
}
cJSON_Delete(auth_msg);
char debug_msg[128];
snprintf(debug_msg, sizeof(debug_msg), "NIP-42 auth challenge sent: %.16s...", challenge);
log_info(debug_msg);
}
// Handle NIP-42 signed authentication event from client
void handle_nip42_auth_signed_event(struct lws* wsi, struct per_session_data* pss, cJSON* auth_event) {
if (!wsi || !pss || !auth_event) return;
// Serialize event for validation
char* event_json = cJSON_Print(auth_event);
if (!event_json) {
send_notice_message(wsi, "Invalid authentication event format");
return;
}
pthread_mutex_lock(&pss->session_lock);
char challenge_copy[65];
strncpy(challenge_copy, pss->active_challenge, sizeof(challenge_copy) - 1);
challenge_copy[sizeof(challenge_copy) - 1] = '\0';
time_t challenge_expires = pss->challenge_expires;
pthread_mutex_unlock(&pss->session_lock);
// Check if challenge has expired
time_t current_time = time(NULL);
if (current_time > challenge_expires) {
free(event_json);
send_notice_message(wsi, "Authentication challenge expired, please retry");
log_warning("NIP-42 authentication failed: challenge expired");
return;
}
// Verify authentication using existing request_validator function
// Note: nostr_nip42_verify_auth_event doesn't extract pubkey, we need to do that separately
int result = nostr_nip42_verify_auth_event(auth_event, challenge_copy,
"ws://localhost:8888", 600); // 10 minutes tolerance
char authenticated_pubkey[65] = {0};
if (result == 0) {
// Extract pubkey from the auth event
cJSON* pubkey_json = cJSON_GetObjectItem(auth_event, "pubkey");
if (pubkey_json && cJSON_IsString(pubkey_json)) {
const char* pubkey_str = cJSON_GetStringValue(pubkey_json);
if (pubkey_str && strlen(pubkey_str) == 64) {
strncpy(authenticated_pubkey, pubkey_str, sizeof(authenticated_pubkey) - 1);
authenticated_pubkey[sizeof(authenticated_pubkey) - 1] = '\0';
} else {
result = -1; // Invalid pubkey format
}
} else {
result = -1; // Missing pubkey
}
}
free(event_json);
if (result == 0) {
// Authentication successful
pthread_mutex_lock(&pss->session_lock);
pss->authenticated = 1;
strncpy(pss->authenticated_pubkey, authenticated_pubkey, sizeof(pss->authenticated_pubkey) - 1);
pss->authenticated_pubkey[sizeof(pss->authenticated_pubkey) - 1] = '\0';
// Clear challenge
memset(pss->active_challenge, 0, sizeof(pss->active_challenge));
pss->challenge_expires = 0;
pss->auth_challenge_sent = 0;
pthread_mutex_unlock(&pss->session_lock);
char success_msg[256];
snprintf(success_msg, sizeof(success_msg),
"NIP-42 authentication successful for pubkey: %.16s...", authenticated_pubkey);
log_success(success_msg);
send_notice_message(wsi, "NIP-42 authentication successful");
} else {
// Authentication failed
char error_msg[256];
snprintf(error_msg, sizeof(error_msg),
"NIP-42 authentication failed (error code: %d)", result);
log_warning(error_msg);
send_notice_message(wsi, "NIP-42 authentication failed - invalid signature or challenge");
}
}
// Handle challenge response (not typically used in NIP-42, but included for completeness)
void handle_nip42_auth_challenge_response(struct lws* wsi, struct per_session_data* pss, const char* challenge) {
(void)wsi; (void)pss; (void)challenge; // Mark as intentionally unused
// NIP-42 doesn't typically use challenge responses from client to server
// This is reserved for potential future use or protocol extensions
log_warning("Received unexpected challenge response from client (not part of standard NIP-42 flow)");
send_notice_message(wsi, "Challenge responses are not supported - please send signed authentication event");
}
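For context only, a hedged sketch of the kind 22242 event a client would send back after receiving the ["AUTH", <challenge>] message above. Per NIP-42 it carries "relay" and "challenge" tags; the id, pubkey and sig fields are added by the client's signer and are omitted here, and the relay URL is just an example.

// Sketch: unsigned skeleton of a client's NIP-42 authentication event
#include "../nostr_core_lib/cjson/cJSON.h"

static cJSON* build_auth_event_template(const char* relay_url, const char* challenge) {
    cJSON* event = cJSON_CreateObject();
    cJSON_AddNumberToObject(event, "kind", 22242);                        // NIP-42 auth kind
    cJSON_AddStringToObject(event, "content", "");
    cJSON* tags = cJSON_CreateArray();
    cJSON* relay_tag = cJSON_CreateArray();
    cJSON_AddItemToArray(relay_tag, cJSON_CreateString("relay"));
    cJSON_AddItemToArray(relay_tag, cJSON_CreateString(relay_url));       // e.g. "ws://localhost:8888"
    cJSON* challenge_tag = cJSON_CreateArray();
    cJSON_AddItemToArray(challenge_tag, cJSON_CreateString("challenge"));
    cJSON_AddItemToArray(challenge_tag, cJSON_CreateString(challenge));   // value from the AUTH message
    cJSON_AddItemToArray(tags, relay_tag);
    cJSON_AddItemToArray(tags, challenge_tag);
    cJSON_AddItemToObject(event, "tags", tags);
    return event;
}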


@@ -132,24 +132,11 @@ typedef struct {
int time_tolerance_seconds;
} nip42_challenge_manager_t;
// Cached configuration structure
typedef struct {
int auth_required; // Whether authentication is required
long max_file_size; // Maximum file size in bytes
int admin_enabled; // Whether admin interface is enabled
char admin_pubkey[65]; // Admin public key
int nip42_mode; // NIP-42 authentication mode
int nip42_challenge_timeout; // NIP-42 challenge timeout in seconds
int nip42_time_tolerance; // NIP-42 time tolerance in seconds
time_t cache_expires; // When cache expires
int cache_valid; // Whether cache is valid
} auth_config_cache_t;
//=============================================================================
// GLOBAL STATE
//=============================================================================
static auth_config_cache_t g_auth_cache = {0};
// No longer using local auth cache - using unified cache from config.c
static nip42_challenge_manager_t g_challenge_manager = {0};
static int g_validator_initialized = 0;
@@ -182,8 +169,8 @@ static void validator_debug_log(const char *message) {
static int reload_auth_config(void);
// Removed unused forward declarations for functions that are no longer called
static int check_database_auth_rules(const char *pubkey, const char *operation,
const char *resource_hash);
int check_database_auth_rules(const char *pubkey, const char *operation,
const char *resource_hash);
void nostr_request_validator_clear_violation(void);
// NIP-42 challenge management functions
@@ -222,15 +209,17 @@ int ginxsom_request_validator_init(const char *db_path, const char *app_name) {
return result;
}
// Initialize NIP-42 challenge manager
// Initialize NIP-42 challenge manager using unified config
memset(&g_challenge_manager, 0, sizeof(g_challenge_manager));
g_challenge_manager.timeout_seconds =
g_auth_cache.nip42_challenge_timeout > 0
? g_auth_cache.nip42_challenge_timeout
: 600;
g_challenge_manager.time_tolerance_seconds =
g_auth_cache.nip42_time_tolerance > 0 ? g_auth_cache.nip42_time_tolerance
: 300;
const char* nip42_timeout = get_config_value("nip42_challenge_timeout");
g_challenge_manager.timeout_seconds = nip42_timeout ? atoi(nip42_timeout) : 600;
if (nip42_timeout) free((char*)nip42_timeout);
const char* nip42_tolerance = get_config_value("nip42_time_tolerance");
g_challenge_manager.time_tolerance_seconds = nip42_tolerance ? atoi(nip42_tolerance) : 300;
if (nip42_tolerance) free((char*)nip42_tolerance);
g_challenge_manager.last_cleanup = time(NULL);
g_validator_initialized = 1;
@@ -243,12 +232,22 @@ int ginxsom_request_validator_init(const char *db_path, const char *app_name) {
* Check if authentication rules are enabled
*/
int nostr_auth_rules_enabled(void) {
// Reload config if cache expired
if (!g_auth_cache.cache_valid || time(NULL) > g_auth_cache.cache_expires) {
reload_auth_config();
// Use unified cache from config.c
const char* auth_enabled = get_config_value("auth_enabled");
int result = 0;
if (auth_enabled && strcmp(auth_enabled, "true") == 0) {
result = 1;
}
if (auth_enabled) free((char*)auth_enabled);
return g_auth_cache.auth_required;
// Also check legacy key
const char* auth_rules_enabled = get_config_value("auth_rules_enabled");
if (auth_rules_enabled && strcmp(auth_rules_enabled, "true") == 0) {
result = 1;
}
if (auth_rules_enabled) free((char*)auth_rules_enabled);
return result;
}
///////////////////////////////////////////////////////////////////////////////////////
@@ -306,14 +305,12 @@ int nostr_validate_unified_request(const char* json_string, size_t json_length)
int event_kind = (int)cJSON_GetNumberValue(kind);
// 5. Reload config if needed
if (!g_auth_cache.cache_valid || time(NULL) > g_auth_cache.cache_expires) {
reload_auth_config();
}
// 5. Check configuration using unified cache
int auth_required = nostr_auth_rules_enabled();
char config_msg[256];
sprintf(config_msg, "VALIDATOR_DEBUG: STEP 5 PASSED - Event kind: %d, auth_required: %d\n",
event_kind, g_auth_cache.auth_required);
event_kind, auth_required);
validator_debug_log(config_msg);
/////////////////////////////////////////////////////////////////////
@@ -352,11 +349,15 @@ int nostr_validate_unified_request(const char* json_string, size_t json_length)
if (event_kind == 22242) {
validator_debug_log("VALIDATOR_DEBUG: STEP 8 - Processing NIP-42 challenge response\n");
if (g_auth_cache.nip42_mode == 0) {
// Check NIP-42 mode using unified cache
const char* nip42_enabled = get_config_value("nip42_auth_enabled");
if (nip42_enabled && strcmp(nip42_enabled, "false") == 0) {
validator_debug_log("VALIDATOR_DEBUG: STEP 8 FAILED - NIP-42 is disabled\n");
free((char*)nip42_enabled);
cJSON_Delete(event);
return NOSTR_ERROR_NIP42_DISABLED;
}
if (nip42_enabled) free((char*)nip42_enabled);
// TODO: Implement full NIP-42 challenge validation
// For now, accept all valid NIP-42 events
@@ -370,7 +371,7 @@ int nostr_validate_unified_request(const char* json_string, size_t json_length)
/////////////////////////////////////////////////////////////////////
// 9. Check if authentication rules are enabled
if (!g_auth_cache.auth_required) {
if (!auth_required) {
validator_debug_log("VALIDATOR_DEBUG: STEP 9 - Authentication disabled, skipping database auth rules\n");
} else {
// 10. Check database authentication rules (only if auth enabled)
@@ -404,17 +405,23 @@ int nostr_validate_unified_request(const char* json_string, size_t json_length)
/////////////////////////////////////////////////////////////////////
// 11. NIP-13 Proof of Work validation
if (g_pow_config.enabled && g_pow_config.min_pow_difficulty > 0) {
pthread_mutex_lock(&g_unified_cache.cache_lock);
int pow_enabled = g_unified_cache.pow_config.enabled;
int pow_min_difficulty = g_unified_cache.pow_config.min_pow_difficulty;
int pow_validation_flags = g_unified_cache.pow_config.validation_flags;
pthread_mutex_unlock(&g_unified_cache.cache_lock);
if (pow_enabled && pow_min_difficulty > 0) {
validator_debug_log("VALIDATOR_DEBUG: STEP 11 - Validating NIP-13 Proof of Work\n");
nostr_pow_result_t pow_result;
int pow_validation_result = nostr_validate_pow(event, g_pow_config.min_pow_difficulty,
g_pow_config.validation_flags, &pow_result);
int pow_validation_result = nostr_validate_pow(event, pow_min_difficulty,
pow_validation_flags, &pow_result);
if (pow_validation_result != NOSTR_SUCCESS) {
char pow_msg[256];
sprintf(pow_msg, "VALIDATOR_DEBUG: STEP 11 FAILED - PoW validation failed (error=%d, difficulty=%d/%d)\n",
pow_validation_result, pow_result.actual_difficulty, g_pow_config.min_pow_difficulty);
pow_validation_result, pow_result.actual_difficulty, pow_min_difficulty);
validator_debug_log(pow_msg);
cJSON_Delete(event);
return pow_validation_result;
@@ -553,7 +560,6 @@ void nostr_request_validator_clear_violation(void) {
*/
void ginxsom_request_validator_cleanup(void) {
g_validator_initialized = 0;
memset(&g_auth_cache, 0, sizeof(g_auth_cache));
nostr_request_validator_clear_violation();
}
@@ -573,145 +579,22 @@ void nostr_request_result_free_file_data(nostr_request_result_t *result) {
// HELPER FUNCTIONS
//=============================================================================
/**
* Get cache timeout from environment variable or default
*/
static int get_cache_timeout(void) {
char *no_cache = getenv("GINX_NO_CACHE");
char *cache_timeout = getenv("GINX_CACHE_TIMEOUT");
if (no_cache && strcmp(no_cache, "1") == 0) {
return 0; // No caching
}
if (cache_timeout) {
int timeout = atoi(cache_timeout);
return (timeout >= 0) ? timeout : 300; // Use provided value or default
}
return 300; // Default 5 minutes
}
/**
* Force cache refresh - invalidates current cache
* Force cache refresh - use unified cache system
*/
void nostr_request_validator_force_cache_refresh(void) {
g_auth_cache.cache_valid = 0;
g_auth_cache.cache_expires = 0;
validator_debug_log("VALIDATOR: Cache forcibly invalidated\n");
// Use unified cache refresh from config.c
force_config_cache_refresh();
validator_debug_log("VALIDATOR: Cache forcibly invalidated via unified cache\n");
}
/**
* Reload authentication configuration from unified config table
* This function is no longer needed - configuration is handled by unified cache
*/
static int reload_auth_config(void) {
sqlite3 *db = NULL;
sqlite3_stmt *stmt = NULL;
int rc;
// Clear cache
memset(&g_auth_cache, 0, sizeof(g_auth_cache));
// Open database using global database path
if (strlen(g_database_path) == 0) {
validator_debug_log("VALIDATOR: No database path available\n");
// Use defaults
g_auth_cache.auth_required = 0;
g_auth_cache.max_file_size = 104857600; // 100MB
g_auth_cache.admin_enabled = 0;
g_auth_cache.nip42_mode = 1; // Optional
int cache_timeout = get_cache_timeout();
g_auth_cache.cache_expires = time(NULL) + cache_timeout;
g_auth_cache.cache_valid = 1;
return NOSTR_SUCCESS;
}
rc = sqlite3_open_v2(g_database_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc != SQLITE_OK) {
validator_debug_log("VALIDATOR: Could not open database\n");
// Use defaults
g_auth_cache.auth_required = 0;
g_auth_cache.max_file_size = 104857600; // 100MB
g_auth_cache.admin_enabled = 0;
g_auth_cache.nip42_mode = 1; // Optional
int cache_timeout = get_cache_timeout();
g_auth_cache.cache_expires = time(NULL) + cache_timeout;
g_auth_cache.cache_valid = 1;
return NOSTR_SUCCESS;
}
// Load configuration values from unified config table
const char *config_sql =
"SELECT key, value FROM config WHERE key IN ('require_auth', "
"'auth_rules_enabled', 'max_file_size', 'admin_enabled', 'admin_pubkey', "
"'nip42_require_auth', 'nip42_challenge_timeout', "
"'nip42_time_tolerance')";
rc = sqlite3_prepare_v2(db, config_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
while (sqlite3_step(stmt) == SQLITE_ROW) {
const char *key = (const char *)sqlite3_column_text(stmt, 0);
const char *value = (const char *)sqlite3_column_text(stmt, 1);
if (!key || !value)
continue;
if (strcmp(key, "require_auth") == 0) {
g_auth_cache.auth_required = (strcmp(value, "true") == 0) ? 1 : 0;
} else if (strcmp(key, "auth_rules_enabled") == 0) {
// Override auth_required with auth_rules_enabled if present (higher
// priority)
g_auth_cache.auth_required = (strcmp(value, "true") == 0) ? 1 : 0;
} else if (strcmp(key, "max_file_size") == 0) {
g_auth_cache.max_file_size = atol(value);
} else if (strcmp(key, "admin_enabled") == 0) {
g_auth_cache.admin_enabled = (strcmp(value, "true") == 0) ? 1 : 0;
} else if (strcmp(key, "admin_pubkey") == 0) {
strncpy(g_auth_cache.admin_pubkey, value,
sizeof(g_auth_cache.admin_pubkey) - 1);
} else if (strcmp(key, "nip42_require_auth") == 0) {
if (strcmp(value, "false") == 0) {
g_auth_cache.nip42_mode = 0; // Disabled
} else if (strcmp(value, "required") == 0) {
g_auth_cache.nip42_mode = 2; // Required
} else if (strcmp(value, "true") == 0) {
g_auth_cache.nip42_mode = 1; // Optional/Enabled
} else {
g_auth_cache.nip42_mode = 1; // Default to Optional/Enabled
}
} else if (strcmp(key, "nip42_challenge_timeout") == 0) {
g_auth_cache.nip42_challenge_timeout = atoi(value);
} else if (strcmp(key, "nip42_time_tolerance") == 0) {
g_auth_cache.nip42_time_tolerance = atoi(value);
}
}
sqlite3_finalize(stmt);
}
sqlite3_close(db);
// Set cache expiration with environment variable support
int cache_timeout = get_cache_timeout();
g_auth_cache.cache_expires = time(NULL) + cache_timeout;
g_auth_cache.cache_valid = 1;
// Set defaults for missing values
if (g_auth_cache.max_file_size == 0) {
g_auth_cache.max_file_size = 104857600; // 100MB
}
// Debug logging
fprintf(stderr,
"VALIDATOR: Configuration loaded from unified config table - "
"auth_required: %d, max_file_size: %ld, nip42_mode: %d, "
"cache_timeout: %d\n",
g_auth_cache.auth_required, g_auth_cache.max_file_size,
g_auth_cache.nip42_mode, cache_timeout);
fprintf(stderr,
"VALIDATOR: NIP-42 mode details - nip42_mode=%d (0=disabled, "
"1=optional/enabled, 2=required)\n",
g_auth_cache.nip42_mode);
// Configuration is now handled by the unified cache in config.c
validator_debug_log("VALIDATOR: Using unified cache system for configuration\n");
return NOSTR_SUCCESS;
}
@@ -723,8 +606,8 @@ static int reload_auth_config(void) {
* Check database authentication rules for the request
* Implements the 6-step rule evaluation engine from AUTH_API.md
*/
static int check_database_auth_rules(const char *pubkey, const char *operation,
const char *resource_hash) {
int check_database_auth_rules(const char *pubkey, const char *operation,
const char *resource_hash) {
sqlite3 *db = NULL;
sqlite3_stmt *stmt = NULL;
int rc;
@@ -757,28 +640,26 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
// Step 1: Check pubkey blacklist (highest priority)
const char *blacklist_sql =
"SELECT rule_type, description FROM auth_rules WHERE rule_type = "
"'pubkey_blacklist' AND rule_target = ? AND operation = ? AND enabled = "
"1 ORDER BY priority LIMIT 1";
"SELECT rule_type, action FROM auth_rules WHERE rule_type = "
"'blacklist' AND pattern_type = 'pubkey' AND pattern_value = ? LIMIT 1";
rc = sqlite3_prepare_v2(db, blacklist_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, pubkey, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, operation ? operation : "", -1, SQLITE_STATIC);
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char *description = (const char *)sqlite3_column_text(stmt, 1);
const char *action = (const char *)sqlite3_column_text(stmt, 1);
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 1 FAILED - "
"Pubkey blacklisted\n");
char blacklist_msg[256];
sprintf(blacklist_msg,
"VALIDATOR_DEBUG: RULES ENGINE - Blacklist rule matched: %s\n",
description ? description : "Unknown");
"VALIDATOR_DEBUG: RULES ENGINE - Blacklist rule matched: action=%s\n",
action ? action : "deny");
validator_debug_log(blacklist_msg);
// Set specific violation details for status code mapping
strcpy(g_last_rule_violation.violation_type, "pubkey_blacklist");
sprintf(g_last_rule_violation.reason, "%s: Public key blacklisted",
description ? description : "TEST_PUBKEY_BLACKLIST");
sprintf(g_last_rule_violation.reason, "Public key blacklisted: %s",
action ? action : "PUBKEY_BLACKLIST");
sqlite3_finalize(stmt);
sqlite3_close(db);
@@ -792,29 +673,27 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
// Step 2: Check hash blacklist
if (resource_hash) {
const char *hash_blacklist_sql =
"SELECT rule_type, description FROM auth_rules WHERE rule_type = "
"'hash_blacklist' AND rule_target = ? AND operation = ? AND enabled = "
"1 ORDER BY priority LIMIT 1";
"SELECT rule_type, action FROM auth_rules WHERE rule_type = "
"'blacklist' AND pattern_type = 'hash' AND pattern_value = ? LIMIT 1";
rc = sqlite3_prepare_v2(db, hash_blacklist_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, resource_hash, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, operation ? operation : "", -1, SQLITE_STATIC);
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char *description = (const char *)sqlite3_column_text(stmt, 1);
const char *action = (const char *)sqlite3_column_text(stmt, 1);
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 2 FAILED - "
"Hash blacklisted\n");
char hash_blacklist_msg[256];
sprintf(
hash_blacklist_msg,
"VALIDATOR_DEBUG: RULES ENGINE - Hash blacklist rule matched: %s\n",
description ? description : "Unknown");
"VALIDATOR_DEBUG: RULES ENGINE - Hash blacklist rule matched: action=%s\n",
action ? action : "deny");
validator_debug_log(hash_blacklist_msg);
// Set specific violation details for status code mapping
strcpy(g_last_rule_violation.violation_type, "hash_blacklist");
sprintf(g_last_rule_violation.reason, "%s: File hash blacklisted",
description ? description : "TEST_HASH_BLACKLIST");
sprintf(g_last_rule_violation.reason, "File hash blacklisted: %s",
action ? action : "HASH_BLACKLIST");
sqlite3_finalize(stmt);
sqlite3_close(db);
@@ -831,22 +710,20 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
// Step 3: Check pubkey whitelist
const char *whitelist_sql =
"SELECT rule_type, description FROM auth_rules WHERE rule_type = "
"'pubkey_whitelist' AND rule_target = ? AND operation = ? AND enabled = "
"1 ORDER BY priority LIMIT 1";
"SELECT rule_type, action FROM auth_rules WHERE rule_type = "
"'whitelist' AND pattern_type = 'pubkey' AND pattern_value = ? LIMIT 1";
rc = sqlite3_prepare_v2(db, whitelist_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, pubkey, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, operation ? operation : "", -1, SQLITE_STATIC);
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char *description = (const char *)sqlite3_column_text(stmt, 1);
const char *action = (const char *)sqlite3_column_text(stmt, 1);
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 3 PASSED - "
"Pubkey whitelisted\n");
char whitelist_msg[256];
sprintf(whitelist_msg,
"VALIDATOR_DEBUG: RULES ENGINE - Whitelist rule matched: %s\n",
description ? description : "Unknown");
"VALIDATOR_DEBUG: RULES ENGINE - Whitelist rule matched: action=%s\n",
action ? action : "allow");
validator_debug_log(whitelist_msg);
sqlite3_finalize(stmt);
sqlite3_close(db);
@@ -859,12 +736,10 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
// Step 4: Check if any whitelist rules exist - if yes, deny by default
const char *whitelist_exists_sql =
"SELECT COUNT(*) FROM auth_rules WHERE rule_type = 'pubkey_whitelist' "
"AND operation = ? AND enabled = 1 LIMIT 1";
"SELECT COUNT(*) FROM auth_rules WHERE rule_type = 'whitelist' "
"AND pattern_type = 'pubkey' LIMIT 1";
rc = sqlite3_prepare_v2(db, whitelist_exists_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, operation ? operation : "", -1, SQLITE_STATIC);
if (sqlite3_step(stmt) == SQLITE_ROW) {
int whitelist_count = sqlite3_column_int(stmt, 0);
if (whitelist_count > 0) {

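The rewritten queries above key auth_rules on pattern_type/pattern_value instead of rule_target/operation. A minimal schema sketch consistent with those SELECTs follows; the column types, and anything not referenced by the queries, are assumptions rather than the relay's actual schema.

// Hypothetical sketch only - columns inferred from the SELECT statements above
static const char* const AUTH_RULES_SCHEMA_SKETCH =
    "CREATE TABLE IF NOT EXISTS auth_rules (\n"
    "  rule_type     TEXT NOT NULL,   -- 'blacklist' or 'whitelist'\n"
    "  pattern_type  TEXT NOT NULL,   -- 'pubkey' or 'hash'\n"
    "  pattern_value TEXT NOT NULL,   -- hex pubkey or file hash\n"
    "  action        TEXT             -- e.g. 'deny' or 'allow'\n"
    ");";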

@@ -12,7 +12,7 @@
static const char* const EMBEDDED_SCHEMA_SQL =
"-- C Nostr Relay Database Schema\n\
-- SQLite schema for storing Nostr events with JSON tags support\n\
-- Event-based configuration system using kind 33334 Nostr events\n\
-- Configuration system using config table\n\
\n\
-- Schema version tracking\n\
PRAGMA user_version = 7;\n\

src/subscriptions.c (new file, 723 lines)

@@ -0,0 +1,723 @@
#define _GNU_SOURCE
#include <cjson/cJSON.h>
#include <sqlite3.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>
#include <stdio.h>
#include <printf.h>
#include <pthread.h>
#include <libwebsockets.h>
#include "subscriptions.h"
// Forward declarations for logging functions
void log_info(const char* message);
void log_error(const char* message);
void log_warning(const char* message);
// Forward declarations for configuration functions
const char* get_config_value(const char* key);
// Forward declarations for NIP-40 expiration functions
int is_event_expired(cJSON* event, time_t current_time);
// Global database variable
extern sqlite3* g_db;
// Global unified cache
extern unified_config_cache_t g_unified_cache;
// Global subscription manager
extern subscription_manager_t g_subscription_manager;
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
// PERSISTENT SUBSCRIPTIONS SYSTEM
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
// Create a subscription filter from cJSON filter object
subscription_filter_t* create_subscription_filter(cJSON* filter_json) {
if (!filter_json || !cJSON_IsObject(filter_json)) {
return NULL;
}
subscription_filter_t* filter = calloc(1, sizeof(subscription_filter_t));
if (!filter) {
return NULL;
}
// Copy filter criteria
cJSON* kinds = cJSON_GetObjectItem(filter_json, "kinds");
if (kinds && cJSON_IsArray(kinds)) {
filter->kinds = cJSON_Duplicate(kinds, 1);
}
cJSON* authors = cJSON_GetObjectItem(filter_json, "authors");
if (authors && cJSON_IsArray(authors)) {
filter->authors = cJSON_Duplicate(authors, 1);
}
cJSON* ids = cJSON_GetObjectItem(filter_json, "ids");
if (ids && cJSON_IsArray(ids)) {
filter->ids = cJSON_Duplicate(ids, 1);
}
cJSON* since = cJSON_GetObjectItem(filter_json, "since");
if (since && cJSON_IsNumber(since)) {
filter->since = (long)cJSON_GetNumberValue(since);
}
cJSON* until = cJSON_GetObjectItem(filter_json, "until");
if (until && cJSON_IsNumber(until)) {
filter->until = (long)cJSON_GetNumberValue(until);
}
cJSON* limit = cJSON_GetObjectItem(filter_json, "limit");
if (limit && cJSON_IsNumber(limit)) {
filter->limit = (int)cJSON_GetNumberValue(limit);
}
// Handle tag filters (e.g., {"#e": ["id1"], "#p": ["pubkey1"]})
cJSON* item = NULL;
cJSON_ArrayForEach(item, filter_json) {
if (item->string && strlen(item->string) >= 2 && item->string[0] == '#') {
if (!filter->tag_filters) {
filter->tag_filters = cJSON_CreateObject();
}
if (filter->tag_filters) {
cJSON_AddItemToObject(filter->tag_filters, item->string, cJSON_Duplicate(item, 1));
}
}
}
return filter;
}
// Free a subscription filter
void free_subscription_filter(subscription_filter_t* filter) {
if (!filter) return;
if (filter->kinds) cJSON_Delete(filter->kinds);
if (filter->authors) cJSON_Delete(filter->authors);
if (filter->ids) cJSON_Delete(filter->ids);
if (filter->tag_filters) cJSON_Delete(filter->tag_filters);
if (filter->next) {
free_subscription_filter(filter->next);
}
free(filter);
}
// Create a new subscription
subscription_t* create_subscription(const char* sub_id, struct lws* wsi, cJSON* filters_array, const char* client_ip) {
if (!sub_id || !wsi || !filters_array) {
return NULL;
}
subscription_t* sub = calloc(1, sizeof(subscription_t));
if (!sub) {
return NULL;
}
// Copy subscription ID (truncate if too long)
strncpy(sub->id, sub_id, SUBSCRIPTION_ID_MAX_LENGTH - 1);
sub->id[SUBSCRIPTION_ID_MAX_LENGTH - 1] = '\0';
// Set WebSocket connection
sub->wsi = wsi;
// Set client IP
if (client_ip) {
strncpy(sub->client_ip, client_ip, CLIENT_IP_MAX_LENGTH - 1);
sub->client_ip[CLIENT_IP_MAX_LENGTH - 1] = '\0';
}
// Set timestamps and state
sub->created_at = time(NULL);
sub->events_sent = 0;
sub->active = 1;
// Convert filters array to linked list
subscription_filter_t* filter_tail = NULL;
int filter_count = 0;
if (cJSON_IsArray(filters_array)) {
cJSON* filter_json = NULL;
cJSON_ArrayForEach(filter_json, filters_array) {
if (filter_count >= MAX_FILTERS_PER_SUBSCRIPTION) {
log_warning("Maximum filters per subscription exceeded, ignoring excess filters");
break;
}
subscription_filter_t* filter = create_subscription_filter(filter_json);
if (filter) {
if (!sub->filters) {
sub->filters = filter;
filter_tail = filter;
} else {
filter_tail->next = filter;
filter_tail = filter;
}
filter_count++;
}
}
}
if (filter_count == 0) {
log_error("No valid filters found for subscription");
free(sub);
return NULL;
}
return sub;
}
// Free a subscription
void free_subscription(subscription_t* sub) {
if (!sub) return;
if (sub->filters) {
free_subscription_filter(sub->filters);
}
free(sub);
}
// Add subscription to global manager (thread-safe)
int add_subscription_to_manager(subscription_t* sub) {
if (!sub) return -1;
pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);
// Check global limits
if (g_subscription_manager.total_subscriptions >= g_subscription_manager.max_total_subscriptions) {
pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);
log_error("Maximum total subscriptions reached");
return -1;
}
// Add to global list
sub->next = g_subscription_manager.active_subscriptions;
g_subscription_manager.active_subscriptions = sub;
g_subscription_manager.total_subscriptions++;
g_subscription_manager.total_created++;
pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);
// Log subscription creation to database
log_subscription_created(sub);
char debug_msg[256];
snprintf(debug_msg, sizeof(debug_msg), "Added subscription '%s' (total: %d)",
sub->id, g_subscription_manager.total_subscriptions);
log_info(debug_msg);
return 0;
}
// Remove subscription from global manager (thread-safe)
int remove_subscription_from_manager(const char* sub_id, struct lws* wsi) {
if (!sub_id) return -1;
pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);
subscription_t** current = &g_subscription_manager.active_subscriptions;
while (*current) {
subscription_t* sub = *current;
// Match by ID and WebSocket connection
if (strcmp(sub->id, sub_id) == 0 && (!wsi || sub->wsi == wsi)) {
// Remove from list
*current = sub->next;
g_subscription_manager.total_subscriptions--;
pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);
// Log subscription closure to database
log_subscription_closed(sub_id, sub->client_ip, "closed");
// Update events sent counter before freeing
update_subscription_events_sent(sub_id, sub->events_sent);
char debug_msg[256];
snprintf(debug_msg, sizeof(debug_msg), "Removed subscription '%s' (total: %d)",
sub_id, g_subscription_manager.total_subscriptions);
log_info(debug_msg);
free_subscription(sub);
return 0;
}
current = &(sub->next);
}
pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);
char debug_msg[256];
snprintf(debug_msg, sizeof(debug_msg), "Subscription '%s' not found for removal", sub_id);
log_warning(debug_msg);
return -1;
}
// Check if an event matches a subscription filter
int event_matches_filter(cJSON* event, subscription_filter_t* filter) {
if (!event || !filter) {
return 0;
}
// Check kinds filter
if (filter->kinds && cJSON_IsArray(filter->kinds)) {
cJSON* event_kind = cJSON_GetObjectItem(event, "kind");
if (!event_kind || !cJSON_IsNumber(event_kind)) {
return 0;
}
int event_kind_val = (int)cJSON_GetNumberValue(event_kind);
int kind_match = 0;
cJSON* kind_item = NULL;
cJSON_ArrayForEach(kind_item, filter->kinds) {
if (cJSON_IsNumber(kind_item) && (int)cJSON_GetNumberValue(kind_item) == event_kind_val) {
kind_match = 1;
break;
}
}
if (!kind_match) {
return 0;
}
}
// Check authors filter
if (filter->authors && cJSON_IsArray(filter->authors)) {
cJSON* event_pubkey = cJSON_GetObjectItem(event, "pubkey");
if (!event_pubkey || !cJSON_IsString(event_pubkey)) {
return 0;
}
const char* event_pubkey_str = cJSON_GetStringValue(event_pubkey);
int author_match = 0;
cJSON* author_item = NULL;
cJSON_ArrayForEach(author_item, filter->authors) {
if (cJSON_IsString(author_item)) {
const char* author_str = cJSON_GetStringValue(author_item);
// Support prefix matching (partial pubkeys)
if (strncmp(event_pubkey_str, author_str, strlen(author_str)) == 0) {
author_match = 1;
break;
}
}
}
if (!author_match) {
return 0;
}
}
// Check IDs filter
if (filter->ids && cJSON_IsArray(filter->ids)) {
cJSON* event_id = cJSON_GetObjectItem(event, "id");
if (!event_id || !cJSON_IsString(event_id)) {
return 0;
}
const char* event_id_str = cJSON_GetStringValue(event_id);
int id_match = 0;
cJSON* id_item = NULL;
cJSON_ArrayForEach(id_item, filter->ids) {
if (cJSON_IsString(id_item)) {
const char* id_str = cJSON_GetStringValue(id_item);
// Support prefix matching (partial IDs)
if (strncmp(event_id_str, id_str, strlen(id_str)) == 0) {
id_match = 1;
break;
}
}
}
if (!id_match) {
return 0;
}
}
// Check since filter
if (filter->since > 0) {
cJSON* event_created_at = cJSON_GetObjectItem(event, "created_at");
if (!event_created_at || !cJSON_IsNumber(event_created_at)) {
return 0;
}
long event_timestamp = (long)cJSON_GetNumberValue(event_created_at);
if (event_timestamp < filter->since) {
return 0;
}
}
// Check until filter
if (filter->until > 0) {
cJSON* event_created_at = cJSON_GetObjectItem(event, "created_at");
if (!event_created_at || !cJSON_IsNumber(event_created_at)) {
return 0;
}
long event_timestamp = (long)cJSON_GetNumberValue(event_created_at);
if (event_timestamp > filter->until) {
return 0;
}
}
// Check tag filters (e.g., #e, #p tags)
if (filter->tag_filters && cJSON_IsObject(filter->tag_filters)) {
cJSON* event_tags = cJSON_GetObjectItem(event, "tags");
if (!event_tags || !cJSON_IsArray(event_tags)) {
return 0; // Event has no tags but filter requires tags
}
// Check each tag filter
cJSON* tag_filter = NULL;
cJSON_ArrayForEach(tag_filter, filter->tag_filters) {
if (!tag_filter->string || strlen(tag_filter->string) < 2 || tag_filter->string[0] != '#') {
continue; // Invalid tag filter
}
const char* tag_name = tag_filter->string + 1; // Skip the '#'
if (!cJSON_IsArray(tag_filter)) {
continue; // Tag filter must be an array
}
int tag_match = 0;
// Search through event tags for matching tag name and value
cJSON* event_tag = NULL;
cJSON_ArrayForEach(event_tag, event_tags) {
if (!cJSON_IsArray(event_tag) || cJSON_GetArraySize(event_tag) < 2) {
continue; // Invalid tag format
}
cJSON* event_tag_name = cJSON_GetArrayItem(event_tag, 0);
cJSON* event_tag_value = cJSON_GetArrayItem(event_tag, 1);
if (!cJSON_IsString(event_tag_name) || !cJSON_IsString(event_tag_value)) {
continue;
}
// Check if tag name matches
if (strcmp(cJSON_GetStringValue(event_tag_name), tag_name) == 0) {
const char* event_tag_value_str = cJSON_GetStringValue(event_tag_value);
// Check if any of the filter values match this tag value
cJSON* filter_value = NULL;
cJSON_ArrayForEach(filter_value, tag_filter) {
if (cJSON_IsString(filter_value)) {
const char* filter_value_str = cJSON_GetStringValue(filter_value);
// Support prefix matching for tag values
if (strncmp(event_tag_value_str, filter_value_str, strlen(filter_value_str)) == 0) {
tag_match = 1;
break;
}
}
}
if (tag_match) {
break;
}
}
}
if (!tag_match) {
return 0; // This tag filter didn't match, so the event doesn't match
}
}
}
return 1; // All filters passed
}
// Check if an event matches any filter in a subscription (filters are OR'd together)
int event_matches_subscription(cJSON* event, subscription_t* subscription) {
if (!event || !subscription || !subscription->filters) {
return 0;
}
subscription_filter_t* filter = subscription->filters;
while (filter) {
if (event_matches_filter(event, filter)) {
return 1; // Match found (OR logic)
}
filter = filter->next;
}
return 0; // No filters matched
}
// Broadcast event to all matching subscriptions (thread-safe)
int broadcast_event_to_subscriptions(cJSON* event) {
if (!event) {
return 0;
}
// Check if event is expired and should not be broadcast (NIP-40)
pthread_mutex_lock(&g_unified_cache.cache_lock);
int expiration_enabled = g_unified_cache.expiration_config.enabled;
int filter_responses = g_unified_cache.expiration_config.filter_responses;
pthread_mutex_unlock(&g_unified_cache.cache_lock);
if (expiration_enabled && filter_responses) {
time_t current_time = time(NULL);
if (is_event_expired(event, current_time)) {
char debug_msg[256];
cJSON* event_id_obj = cJSON_GetObjectItem(event, "id");
const char* event_id = event_id_obj ? cJSON_GetStringValue(event_id_obj) : "unknown";
snprintf(debug_msg, sizeof(debug_msg), "Skipping broadcast of expired event: %.16s", event_id);
log_info(debug_msg);
return 0; // Don't broadcast expired events
}
}
int broadcasts = 0;
pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);
subscription_t* sub = g_subscription_manager.active_subscriptions;
while (sub) {
if (sub->active && event_matches_subscription(event, sub)) {
// Create EVENT message for this subscription
cJSON* event_msg = cJSON_CreateArray();
cJSON_AddItemToArray(event_msg, cJSON_CreateString("EVENT"));
cJSON_AddItemToArray(event_msg, cJSON_CreateString(sub->id));
cJSON_AddItemToArray(event_msg, cJSON_Duplicate(event, 1));
char* msg_str = cJSON_Print(event_msg);
if (msg_str) {
size_t msg_len = strlen(msg_str);
unsigned char* buf = malloc(LWS_PRE + msg_len);
if (buf) {
memcpy(buf + LWS_PRE, msg_str, msg_len);
// Send to WebSocket connection
int write_result = lws_write(sub->wsi, buf + LWS_PRE, msg_len, LWS_WRITE_TEXT);
if (write_result >= 0) {
sub->events_sent++;
broadcasts++;
// Log event broadcast to database (optional - can be disabled for performance)
cJSON* event_id_obj = cJSON_GetObjectItem(event, "id");
if (event_id_obj && cJSON_IsString(event_id_obj)) {
log_event_broadcast(cJSON_GetStringValue(event_id_obj), sub->id, sub->client_ip);
}
}
free(buf);
}
free(msg_str);
}
cJSON_Delete(event_msg);
}
sub = sub->next;
}
// Update global statistics
g_subscription_manager.total_events_broadcast += broadcasts;
pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);
if (broadcasts > 0) {
char debug_msg[256];
snprintf(debug_msg, sizeof(debug_msg), "Broadcast event to %d subscriptions", broadcasts);
log_info(debug_msg);
}
return broadcasts;
}
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
// SUBSCRIPTION DATABASE LOGGING
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
// Log subscription creation to database
void log_subscription_created(const subscription_t* sub) {
if (!g_db || !sub) return;
// Create filter JSON for logging
char* filter_json = NULL;
if (sub->filters) {
cJSON* filters_array = cJSON_CreateArray();
subscription_filter_t* filter = sub->filters;
while (filter) {
cJSON* filter_obj = cJSON_CreateObject();
if (filter->kinds) {
cJSON_AddItemToObject(filter_obj, "kinds", cJSON_Duplicate(filter->kinds, 1));
}
if (filter->authors) {
cJSON_AddItemToObject(filter_obj, "authors", cJSON_Duplicate(filter->authors, 1));
}
if (filter->ids) {
cJSON_AddItemToObject(filter_obj, "ids", cJSON_Duplicate(filter->ids, 1));
}
if (filter->since > 0) {
cJSON_AddNumberToObject(filter_obj, "since", filter->since);
}
if (filter->until > 0) {
cJSON_AddNumberToObject(filter_obj, "until", filter->until);
}
if (filter->limit > 0) {
cJSON_AddNumberToObject(filter_obj, "limit", filter->limit);
}
if (filter->tag_filters) {
cJSON* tags_obj = cJSON_Duplicate(filter->tag_filters, 1);
cJSON* item = NULL;
cJSON_ArrayForEach(item, tags_obj) {
if (item->string) {
cJSON_AddItemToObject(filter_obj, item->string, cJSON_Duplicate(item, 1));
}
}
cJSON_Delete(tags_obj);
}
cJSON_AddItemToArray(filters_array, filter_obj);
filter = filter->next;
}
filter_json = cJSON_Print(filters_array);
cJSON_Delete(filters_array);
}
const char* sql =
"INSERT INTO subscription_events (subscription_id, client_ip, event_type, filter_json) "
"VALUES (?, ?, 'created', ?)";
sqlite3_stmt* stmt;
int rc = sqlite3_prepare_v2(g_db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, sub->id, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, sub->client_ip, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 3, filter_json ? filter_json : "[]", -1, SQLITE_TRANSIENT);
sqlite3_step(stmt);
sqlite3_finalize(stmt);
}
if (filter_json) free(filter_json);
}
// Log subscription closure to database
void log_subscription_closed(const char* sub_id, const char* client_ip, const char* reason) {
(void)reason; // Mark as intentionally unused
if (!g_db || !sub_id) return;
const char* sql =
"INSERT INTO subscription_events (subscription_id, client_ip, event_type) "
"VALUES (?, ?, 'closed')";
sqlite3_stmt* stmt;
int rc = sqlite3_prepare_v2(g_db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, sub_id, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, client_ip ? client_ip : "unknown", -1, SQLITE_STATIC);
sqlite3_step(stmt);
sqlite3_finalize(stmt);
}
// Update the corresponding 'created' entry with end time and events sent
const char* update_sql =
"UPDATE subscription_events "
"SET ended_at = strftime('%s', 'now') "
"WHERE subscription_id = ? AND event_type = 'created' AND ended_at IS NULL";
rc = sqlite3_prepare_v2(g_db, update_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, sub_id, -1, SQLITE_STATIC);
sqlite3_step(stmt);
sqlite3_finalize(stmt);
}
}
// Log subscription disconnection to database
void log_subscription_disconnected(const char* client_ip) {
if (!g_db || !client_ip) return;
// Mark all active subscriptions for this client as disconnected
const char* sql =
"UPDATE subscription_events "
"SET ended_at = strftime('%s', 'now') "
"WHERE client_ip = ? AND event_type = 'created' AND ended_at IS NULL";
sqlite3_stmt* stmt;
int rc = sqlite3_prepare_v2(g_db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, client_ip, -1, SQLITE_STATIC);
sqlite3_step(stmt);
int changes = sqlite3_changes(g_db); // Read the change count after stepping so it reflects this UPDATE
sqlite3_finalize(stmt);
if (changes > 0) {
// Log a disconnection event
const char* insert_sql =
"INSERT INTO subscription_events (subscription_id, client_ip, event_type) "
"VALUES ('disconnect', ?, 'disconnected')";
rc = sqlite3_prepare_v2(g_db, insert_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, client_ip, -1, SQLITE_STATIC);
sqlite3_step(stmt);
sqlite3_finalize(stmt);
}
}
}
}
// Log event broadcast to database (optional, can be resource intensive)
void log_event_broadcast(const char* event_id, const char* sub_id, const char* client_ip) {
if (!g_db || !event_id || !sub_id || !client_ip) return;
const char* sql =
"INSERT INTO event_broadcasts (event_id, subscription_id, client_ip) "
"VALUES (?, ?, ?)";
sqlite3_stmt* stmt;
int rc = sqlite3_prepare_v2(g_db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, event_id, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, sub_id, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 3, client_ip, -1, SQLITE_STATIC);
sqlite3_step(stmt);
sqlite3_finalize(stmt);
}
}
// Update events sent counter for a subscription
void update_subscription_events_sent(const char* sub_id, int events_sent) {
if (!g_db || !sub_id) return;
const char* sql =
"UPDATE subscription_events "
"SET events_sent = ? "
"WHERE subscription_id = ? AND event_type = 'created'";
sqlite3_stmt* stmt;
int rc = sqlite3_prepare_v2(g_db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_int(stmt, 1, events_sent);
sqlite3_bind_text(stmt, 2, sub_id, -1, SQLITE_STATIC);
sqlite3_step(stmt);
sqlite3_finalize(stmt);
}
}
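To make the matching rules concrete, a hedged usage sketch follows. The filter and event JSON are made-up examples; create_subscription_filter, event_matches_filter and free_subscription_filter are the functions defined above.

// Sketch: check whether a single event satisfies a REQ-style filter
#include <stdio.h>
#include "../nostr_core_lib/cjson/cJSON.h"
#include "subscriptions.h"

static void filter_match_demo(void) {
    cJSON* filter_json = cJSON_Parse("{\"kinds\":[1],\"#t\":[\"nostr\"]}");
    cJSON* event = cJSON_Parse(
        "{\"id\":\"abc\",\"pubkey\":\"def\",\"kind\":1,\"created_at\":1700000000,"
        "\"tags\":[[\"t\",\"nostr\"]],\"content\":\"hello\"}");
    subscription_filter_t* filter = create_subscription_filter(filter_json);
    if (filter && event) {
        printf("match: %d\n", event_matches_filter(event, filter));   // expected: 1
    }
    free_subscription_filter(filter);
    cJSON_Delete(filter_json);
    cJSON_Delete(event);
}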

src/subscriptions.h (new file, 91 lines)

@@ -0,0 +1,91 @@
// Subscription system structures and functions for C-Relay
// This header defines subscription management functionality
#ifndef SUBSCRIPTIONS_H
#define SUBSCRIPTIONS_H
#include <pthread.h>
#include <time.h>
#include <stdint.h>
#include "../nostr_core_lib/cjson/cJSON.h"
#include "config.h" // For CLIENT_IP_MAX_LENGTH
// Forward declaration for libwebsockets struct
struct lws;
// Constants
#define SUBSCRIPTION_ID_MAX_LENGTH 64
#define MAX_FILTERS_PER_SUBSCRIPTION 10
#define MAX_TOTAL_SUBSCRIPTIONS 5000
// Forward declarations for typedefs
typedef struct subscription_filter subscription_filter_t;
typedef struct subscription subscription_t;
typedef struct subscription_manager subscription_manager_t;
// Subscription filter structure
struct subscription_filter {
// Filter criteria (all optional)
cJSON* kinds; // Array of event kinds [1,2,3]
cJSON* authors; // Array of author pubkeys
cJSON* ids; // Array of event IDs
long since; // Unix timestamp (0 = not set)
long until; // Unix timestamp (0 = not set)
int limit; // Result limit (0 = no limit)
cJSON* tag_filters; // Object with tag filters: {"#e": ["id1"], "#p": ["pubkey1"]}
// Linked list for multiple filters per subscription
struct subscription_filter* next;
};
// Active subscription structure
struct subscription {
char id[SUBSCRIPTION_ID_MAX_LENGTH]; // Subscription ID
struct lws* wsi; // WebSocket connection handle
subscription_filter_t* filters; // Linked list of filters (OR'd together)
time_t created_at; // When subscription was created
int events_sent; // Counter for sent events
int active; // 1 = active, 0 = closed
// Client info for logging
char client_ip[CLIENT_IP_MAX_LENGTH]; // Client IP address
// Linked list pointers
struct subscription* next; // Next subscription globally
struct subscription* session_next; // Next subscription for this session
};
// Global subscription manager
struct subscription_manager {
subscription_t* active_subscriptions; // Head of global subscription list
pthread_mutex_t subscriptions_lock; // Global thread safety
int total_subscriptions; // Current count
// Configuration
int max_subscriptions_per_client; // Default: 20
int max_total_subscriptions; // Default: 5000
// Statistics
uint64_t total_created; // Lifetime subscription count
uint64_t total_events_broadcast; // Lifetime event broadcast count
};
// Function declarations
subscription_filter_t* create_subscription_filter(cJSON* filter_json);
void free_subscription_filter(subscription_filter_t* filter);
subscription_t* create_subscription(const char* sub_id, struct lws* wsi, cJSON* filters_array, const char* client_ip);
void free_subscription(subscription_t* sub);
int add_subscription_to_manager(subscription_t* sub);
int remove_subscription_from_manager(const char* sub_id, struct lws* wsi);
int event_matches_filter(cJSON* event, subscription_filter_t* filter);
int event_matches_subscription(cJSON* event, subscription_t* subscription);
int broadcast_event_to_subscriptions(cJSON* event);
// Database logging functions
void log_subscription_created(const subscription_t* sub);
void log_subscription_closed(const char* sub_id, const char* client_ip, const char* reason);
void log_subscription_disconnected(const char* client_ip);
void log_event_broadcast(const char* event_id, const char* sub_id, const char* client_ip);
void update_subscription_events_sent(const char* sub_id, int events_sent);
#endif // SUBSCRIPTIONS_H
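The filter API above is what the REQ handling in websockets.c consumes. Below is a self-contained sketch of evaluating a single filter against a single event; only the signatures come from this header, while the ownership and return-value conventions (that create_subscription_filter() copies what it needs from the cJSON input, and that event_matches_filter() returns non-zero on a match) are assumptions:

#include "../nostr_core_lib/cjson/cJSON.h"
#include "subscriptions.h"

/* Illustrative only: parse a NIP-01 filter and an event, test the match, then free both. */
static int filter_match_demo(void) {
    cJSON* filter_json = cJSON_Parse("{\"kinds\":[1],\"limit\":10}");
    cJSON* event_json  = cJSON_Parse("{\"kind\":1,\"content\":\"hello\"}");
    if (!filter_json || !event_json) {
        cJSON_Delete(filter_json);
        cJSON_Delete(event_json);
        return -1;
    }
    subscription_filter_t* filter = create_subscription_filter(filter_json);
    int matched = filter ? event_matches_filter(event_json, filter) : 0;  /* assumed: non-zero means match */
    free_subscription_filter(filter);
    cJSON_Delete(filter_json);  /* assumed: create_subscription_filter() does not take ownership */
    cJSON_Delete(event_json);
    return matched;
}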

src/websockets.c (new file, 901 lines)

@@ -0,0 +1,901 @@
// Define _GNU_SOURCE to ensure all POSIX features are available
#define _GNU_SOURCE
// Includes
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <time.h>
#include <pthread.h>
#include <sqlite3.h>
// Include libwebsockets after pthread.h to ensure pthread_rwlock_t is defined
#include <libwebsockets.h>
#include <errno.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
// Include nostr_core_lib for Nostr functionality
#include "../nostr_core_lib/cjson/cJSON.h"
#include "../nostr_core_lib/nostr_core/nostr_core.h"
#include "../nostr_core_lib/nostr_core/nip013.h" // NIP-13: Proof of Work
#include "config.h" // Configuration management system
#include "sql_schema.h" // Embedded database schema
#include "websockets.h" // WebSocket structures and constants
#include "subscriptions.h" // Subscription structures and functions
// Forward declarations for logging functions
void log_info(const char* message);
void log_success(const char* message);
void log_error(const char* message);
void log_warning(const char* message);
// Forward declarations for configuration functions
const char* get_config_value(const char* key);
int get_config_int(const char* key, int default_value);
int get_config_bool(const char* key, int default_value);
// Forward declarations for NIP-42 authentication functions
int is_nip42_auth_globally_required(void);
int is_nip42_auth_required_for_kind(int kind);
void send_nip42_auth_challenge(struct lws* wsi, struct per_session_data* pss);
void handle_nip42_auth_signed_event(struct lws* wsi, struct per_session_data* pss, cJSON* auth_event);
void handle_nip42_auth_challenge_response(struct lws* wsi, struct per_session_data* pss, const char* challenge);
// Forward declarations for NIP-11 relay information handling
int handle_nip11_http_request(struct lws* wsi, const char* accept_header);
// Forward declarations for database functions
int store_event(cJSON* event);
// Forward declarations for subscription management
int broadcast_event_to_subscriptions(cJSON* event);
int add_subscription_to_manager(struct subscription* sub);
int remove_subscription_from_manager(const char* sub_id, struct lws* wsi);
// Forward declarations for event handling
int handle_event_message(cJSON* event, char* error_message, size_t error_size);
int nostr_validate_unified_request(const char* json_string, size_t json_length);
// Forward declarations for admin event processing
int process_admin_event_in_config(cJSON* event, char* error_message, size_t error_size, struct lws* wsi);
int is_authorized_admin_event(cJSON* event, char* error_message, size_t error_size);
// Forward declarations for NIP-09 deletion request handling
int handle_deletion_request(cJSON* event, char* error_message, size_t error_size);
// Forward declarations for NIP-13 PoW handling
int validate_event_pow(cJSON* event, char* error_message, size_t error_size);
// Forward declarations for NIP-40 expiration handling
int is_event_expired(cJSON* event, time_t current_time);
// Forward declarations for subscription handling
int handle_req_message(const char* sub_id, cJSON* filters, struct lws *wsi, struct per_session_data *pss);
// Forward declarations for NOTICE message support
void send_notice_message(struct lws* wsi, const char* message);
// Forward declarations for unified cache access
extern unified_config_cache_t g_unified_cache;
// Forward declarations for global state
extern sqlite3* g_db;
extern int g_server_running;
extern struct lws_context *ws_context;
// Global subscription manager
struct subscription_manager g_subscription_manager;
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
// WEBSOCKET PROTOCOL
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
// WebSocket callback function for Nostr relay protocol
static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reason,
void *user, void *in, size_t len) {
struct per_session_data *pss = (struct per_session_data *)user;
switch (reason) {
case LWS_CALLBACK_HTTP:
// Handle NIP-11 relay information requests (HTTP GET to root path)
{
char *requested_uri = (char *)in;
log_info("HTTP request received");
// Check if this is a GET request to the root path
if (strcmp(requested_uri, "/") == 0) {
// Get Accept header
char accept_header[256] = {0};
int header_len = lws_hdr_copy(wsi, accept_header, sizeof(accept_header) - 1, WSI_TOKEN_HTTP_ACCEPT);
if (header_len > 0) {
accept_header[header_len] = '\0';
// Handle NIP-11 request
if (handle_nip11_http_request(wsi, accept_header) == 0) {
return 0; // Successfully handled
}
} else {
log_warning("HTTP request without Accept header");
}
// Root-path request we could not serve as NIP-11 - return 404
lws_return_http_status(wsi, HTTP_STATUS_NOT_FOUND, NULL);
return -1;
}
// Return 404 for non-root paths
lws_return_http_status(wsi, HTTP_STATUS_NOT_FOUND, NULL);
return -1;
}
case LWS_CALLBACK_HTTP_WRITEABLE:
// Handle NIP-11 HTTP body transmission with proper buffer management
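// (The HTTP status line and headers for this NIP-11 response are sent earlier,
//  presumably by handle_nip11_http_request(), which stashes the JSON body in
//  nip11_session_data and sets headers_sent; this writeable callback streams the
//  body once and then frees the session data.)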
{
struct nip11_session_data* session_data = (struct nip11_session_data*)lws_wsi_user(wsi);
if (session_data && session_data->headers_sent && !session_data->body_sent) {
// Allocate buffer for JSON body transmission
unsigned char *json_buf = malloc(LWS_PRE + session_data->json_length);
if (!json_buf) {
log_error("Failed to allocate buffer for NIP-11 body transmission");
// Clean up session data
free(session_data->json_buffer);
free(session_data);
lws_set_wsi_user(wsi, NULL);
return -1;
}
// Copy JSON data to buffer
memcpy(json_buf + LWS_PRE, session_data->json_buffer, session_data->json_length);
// Write JSON body
int write_result = lws_write(wsi, json_buf + LWS_PRE, session_data->json_length, LWS_WRITE_HTTP);
// Free the transmission buffer immediately (it's been copied by libwebsockets)
free(json_buf);
if (write_result < 0) {
log_error("Failed to write NIP-11 JSON body");
// Clean up session data
free(session_data->json_buffer);
free(session_data);
lws_set_wsi_user(wsi, NULL);
return -1;
}
// Mark body as sent and clean up session data
session_data->body_sent = 1;
free(session_data->json_buffer);
free(session_data);
lws_set_wsi_user(wsi, NULL);
log_success("NIP-11 relay information served successfully");
return 0; // Close connection after successful transmission
}
}
break;
case LWS_CALLBACK_ESTABLISHED:
log_info("WebSocket connection established");
memset(pss, 0, sizeof(*pss));
pthread_mutex_init(&pss->session_lock, NULL);
// Get real client IP address
char client_ip[CLIENT_IP_MAX_LENGTH];
lws_get_peer_simple(wsi, client_ip, sizeof(client_ip));
// Ensure client_ip is null-terminated and copy safely
client_ip[CLIENT_IP_MAX_LENGTH - 1] = '\0';
size_t ip_len = strlen(client_ip);
size_t copy_len = (ip_len < CLIENT_IP_MAX_LENGTH - 1) ? ip_len : CLIENT_IP_MAX_LENGTH - 1;
memcpy(pss->client_ip, client_ip, copy_len);
pss->client_ip[copy_len] = '\0';
// Initialize NIP-42 authentication state
pss->authenticated = 0;
pss->nip42_auth_required_events = get_config_bool("nip42_auth_required_events", 0);
pss->nip42_auth_required_subscriptions = get_config_bool("nip42_auth_required_subscriptions", 0);
pss->auth_challenge_sent = 0;
memset(pss->authenticated_pubkey, 0, sizeof(pss->authenticated_pubkey));
memset(pss->active_challenge, 0, sizeof(pss->active_challenge));
pss->challenge_created = 0;
pss->challenge_expires = 0;
break;
case LWS_CALLBACK_RECEIVE:
if (len > 0) {
char *message = malloc(len + 1);
if (message) {
memcpy(message, in, len);
message[len] = '\0';
// Parse JSON message (this is the normal program flow)
cJSON* json = cJSON_Parse(message);
if (json && cJSON_IsArray(json)) {
// Log the complete parsed JSON message once
char* complete_message = cJSON_Print(json);
if (complete_message) {
char debug_msg[2048];
snprintf(debug_msg, sizeof(debug_msg),
"Received complete WebSocket message: %s", complete_message);
log_info(debug_msg);
free(complete_message);
}
// Get message type
cJSON* type = cJSON_GetArrayItem(json, 0);
if (type && cJSON_IsString(type)) {
const char* msg_type = cJSON_GetStringValue(type);
if (strcmp(msg_type, "EVENT") == 0) {
// Extract event for kind-specific NIP-42 authentication check
cJSON* event_obj = cJSON_GetArrayItem(json, 1);
if (event_obj && cJSON_IsObject(event_obj)) {
// Extract event kind for kind-specific NIP-42 authentication check
cJSON* kind_obj = cJSON_GetObjectItem(event_obj, "kind");
int event_kind = kind_obj && cJSON_IsNumber(kind_obj) ? (int)cJSON_GetNumberValue(kind_obj) : -1;
// Extract pubkey and event ID for debugging
cJSON* pubkey_obj = cJSON_GetObjectItem(event_obj, "pubkey");
cJSON* id_obj = cJSON_GetObjectItem(event_obj, "id");
const char* event_pubkey = pubkey_obj ? cJSON_GetStringValue(pubkey_obj) : "unknown";
const char* event_id = id_obj ? cJSON_GetStringValue(id_obj) : "unknown";
char debug_event_msg[512];
snprintf(debug_event_msg, sizeof(debug_event_msg),
"DEBUG EVENT: Processing kind %d event from pubkey %.16s... ID %.16s...",
event_kind, event_pubkey, event_id);
log_info(debug_event_msg);
// Check if NIP-42 authentication is required for this event kind or globally
int auth_required = is_nip42_auth_globally_required() || is_nip42_auth_required_for_kind(event_kind);
char debug_auth_msg[256];
snprintf(debug_auth_msg, sizeof(debug_auth_msg),
"DEBUG AUTH: auth_required=%d, pss->authenticated=%d, event_kind=%d",
auth_required, pss ? pss->authenticated : -1, event_kind);
log_info(debug_auth_msg);
if (pss && auth_required && !pss->authenticated) {
if (!pss->auth_challenge_sent) {
log_info("DEBUG AUTH: Sending NIP-42 authentication challenge");
send_nip42_auth_challenge(wsi, pss);
} else {
char auth_msg[256];
if (event_kind == 4 || event_kind == 14) {
snprintf(auth_msg, sizeof(auth_msg),
"NIP-42 authentication required for direct message events (kind %d)", event_kind);
} else {
snprintf(auth_msg, sizeof(auth_msg),
"NIP-42 authentication required for event kind %d", event_kind);
}
send_notice_message(wsi, auth_msg);
log_warning("Event rejected: NIP-42 authentication required for kind");
char debug_msg[128];
snprintf(debug_msg, sizeof(debug_msg), "Auth required for kind %d", event_kind);
log_info(debug_msg);
}
cJSON_Delete(json);
free(message);
return 0;
}
// Check blacklist/whitelist rules regardless of NIP-42 auth settings
// Blacklist should always be enforced
if (event_pubkey) {
// Forward declaration for auth rules checking function
extern int check_database_auth_rules(const char *pubkey, const char *operation, const char *resource_hash);
int auth_rules_result = check_database_auth_rules(event_pubkey, "event", NULL);
if (auth_rules_result != 0) { // 0 = NOSTR_SUCCESS, non-zero = blocked
char auth_rules_msg[256];
if (auth_rules_result == -101) { // NOSTR_ERROR_AUTH_REQUIRED
snprintf(auth_rules_msg, sizeof(auth_rules_msg),
"blocked: pubkey not authorized (blacklist/whitelist violation)");
} else {
snprintf(auth_rules_msg, sizeof(auth_rules_msg),
"blocked: authorization check failed (error %d)", auth_rules_result);
}
send_notice_message(wsi, auth_rules_msg);
log_warning("Event rejected: blacklist/whitelist violation");
// Send OK response with false status
cJSON* response = cJSON_CreateArray();
cJSON_AddItemToArray(response, cJSON_CreateString("OK"));
cJSON_AddItemToArray(response, cJSON_CreateString(event_id));
cJSON_AddItemToArray(response, cJSON_CreateBool(0)); // false = rejected
cJSON_AddItemToArray(response, cJSON_CreateString(auth_rules_msg));
char *response_str = cJSON_Print(response);
if (response_str) {
size_t response_len = strlen(response_str);
unsigned char *buf = malloc(LWS_PRE + response_len);
if (buf) {
memcpy(buf + LWS_PRE, response_str, response_len);
lws_write(wsi, buf + LWS_PRE, response_len, LWS_WRITE_TEXT);
free(buf);
}
free(response_str);
}
cJSON_Delete(response);
cJSON_Delete(json);
free(message);
return 0;
}
}
}
// Handle EVENT message
cJSON* event = cJSON_GetArrayItem(json, 1);
if (event && cJSON_IsObject(event)) {
// Extract event JSON string for unified validator
char *event_json_str = cJSON_Print(event);
if (!event_json_str) {
log_error("Failed to serialize event JSON for validation");
cJSON* error_response = cJSON_CreateArray();
cJSON_AddItemToArray(error_response, cJSON_CreateString("OK"));
cJSON_AddItemToArray(error_response, cJSON_CreateString("unknown"));
cJSON_AddItemToArray(error_response, cJSON_CreateBool(0));
cJSON_AddItemToArray(error_response, cJSON_CreateString("error: failed to process event"));
char *error_str = cJSON_Print(error_response);
if (error_str) {
size_t error_len = strlen(error_str);
unsigned char *buf = malloc(LWS_PRE + error_len);
if (buf) {
memcpy(buf + LWS_PRE, error_str, error_len);
lws_write(wsi, buf + LWS_PRE, error_len, LWS_WRITE_TEXT);
free(buf);
}
free(error_str);
}
cJSON_Delete(error_response);
cJSON_Delete(json);
free(message);
return 0;
}
log_info("DEBUG VALIDATION: Starting unified validator");
// Call unified validator with JSON string
size_t event_json_len = strlen(event_json_str);
int validation_result = nostr_validate_unified_request(event_json_str, event_json_len);
// Map validation result to old result format (0 = success, -1 = failure)
int result = (validation_result == NOSTR_SUCCESS) ? 0 : -1;
char debug_validation_msg[256];
snprintf(debug_validation_msg, sizeof(debug_validation_msg),
"DEBUG VALIDATION: validation_result=%d, result=%d", validation_result, result);
log_info(debug_validation_msg);
// Generate error message based on validation result
char error_message[512] = {0};
if (result != 0) {
switch (validation_result) {
case NOSTR_ERROR_INVALID_INPUT:
strncpy(error_message, "invalid: malformed event structure", sizeof(error_message) - 1);
break;
case NOSTR_ERROR_EVENT_INVALID_SIGNATURE:
strncpy(error_message, "invalid: signature verification failed", sizeof(error_message) - 1);
break;
case NOSTR_ERROR_EVENT_INVALID_ID:
strncpy(error_message, "invalid: event id verification failed", sizeof(error_message) - 1);
break;
case NOSTR_ERROR_EVENT_INVALID_PUBKEY:
strncpy(error_message, "invalid: invalid pubkey format", sizeof(error_message) - 1);
break;
case -103: // NOSTR_ERROR_EVENT_EXPIRED
strncpy(error_message, "rejected: event expired", sizeof(error_message) - 1);
break;
case -102: // NOSTR_ERROR_NIP42_DISABLED
strncpy(error_message, "auth-required: NIP-42 authentication required", sizeof(error_message) - 1);
break;
case -101: // NOSTR_ERROR_AUTH_REQUIRED
strncpy(error_message, "blocked: pubkey not authorized", sizeof(error_message) - 1);
break;
default:
strncpy(error_message, "error: validation failed", sizeof(error_message) - 1);
break;
}
char debug_error_msg[256];
snprintf(debug_error_msg, sizeof(debug_error_msg),
"DEBUG VALIDATION ERROR: %s", error_message);
log_warning(debug_error_msg);
} else {
log_info("DEBUG VALIDATION: Event validated successfully using unified validator");
}
// Cleanup event JSON string
free(event_json_str);
// Check for admin events (kind 23456) and intercept them
if (result == 0) {
cJSON* kind_obj = cJSON_GetObjectItem(event, "kind");
if (kind_obj && cJSON_IsNumber(kind_obj)) {
int event_kind = (int)cJSON_GetNumberValue(kind_obj);
log_info("DEBUG ADMIN: Checking if admin event processing is needed");
// Log reception of Kind 23456 events
if (event_kind == 23456) {
char* event_json_debug = cJSON_Print(event);
char debug_received_msg[1024];
snprintf(debug_received_msg, sizeof(debug_received_msg),
"RECEIVED Kind %d event: %s", event_kind,
event_json_debug ? event_json_debug : "Failed to serialize");
log_info(debug_received_msg);
if (event_json_debug) {
free(event_json_debug);
}
}
if (event_kind == 23456) {
// Enhanced admin event security - check authorization first
log_info("DEBUG ADMIN: Admin event detected, checking authorization");
char auth_error[512] = {0};
int auth_result = is_authorized_admin_event(event, auth_error, sizeof(auth_error));
if (auth_result != 0) {
// Authorization failed - log and reject
log_warning("DEBUG ADMIN: Admin event authorization failed");
result = -1;
size_t error_len = strlen(auth_error);
size_t copy_len = (error_len < sizeof(error_message) - 1) ? error_len : sizeof(error_message) - 1;
memcpy(error_message, auth_error, copy_len);
error_message[copy_len] = '\0';
char debug_auth_error_msg[600];
snprintf(debug_auth_error_msg, sizeof(debug_auth_error_msg),
"DEBUG ADMIN AUTH ERROR: %.400s", auth_error);
log_warning(debug_auth_error_msg);
} else {
// Authorization successful - process through admin API
log_info("DEBUG ADMIN: Admin event authorized, processing through admin API");
char admin_error[512] = {0};
int admin_result = process_admin_event_in_config(event, admin_error, sizeof(admin_error), wsi);
char debug_admin_msg[256];
snprintf(debug_admin_msg, sizeof(debug_admin_msg),
"DEBUG ADMIN: process_admin_event_in_config returned %d", admin_result);
log_info(debug_admin_msg);
// Log results for Kind 23456 events
if (event_kind == 23456) {
if (admin_result == 0) {
char success_result_msg[256];
snprintf(success_result_msg, sizeof(success_result_msg),
"SUCCESS: Kind %d event processed successfully", event_kind);
log_success(success_result_msg);
} else {
char error_result_msg[512];
snprintf(error_result_msg, sizeof(error_result_msg),
"ERROR: Kind %d event processing failed: %s", event_kind, admin_error);
log_error(error_result_msg);
}
}
if (admin_result != 0) {
log_error("DEBUG ADMIN: Failed to process admin event through admin API");
result = -1;
size_t error_len = strlen(admin_error);
size_t copy_len = (error_len < sizeof(error_message) - 1) ? error_len : sizeof(error_message) - 1;
memcpy(error_message, admin_error, copy_len);
error_message[copy_len] = '\0';
char debug_admin_error_msg[600];
snprintf(debug_admin_error_msg, sizeof(debug_admin_error_msg),
"DEBUG ADMIN ERROR: %.400s", admin_error);
log_error(debug_admin_error_msg);
} else {
log_success("DEBUG ADMIN: Admin event processed successfully through admin API");
// Admin events are processed by the admin API, not broadcast to subscriptions
}
}
} else {
// Regular event - store in database and broadcast
log_info("DEBUG STORAGE: Regular event - storing in database");
if (store_event(event) != 0) {
log_error("DEBUG STORAGE: Failed to store event in database");
result = -1;
strncpy(error_message, "error: failed to store event", sizeof(error_message) - 1);
} else {
log_info("DEBUG STORAGE: Event stored successfully in database");
// Broadcast event to matching persistent subscriptions
int broadcast_count = broadcast_event_to_subscriptions(event);
char debug_broadcast_msg[128];
snprintf(debug_broadcast_msg, sizeof(debug_broadcast_msg),
"DEBUG BROADCAST: Event broadcast to %d subscriptions", broadcast_count);
log_info(debug_broadcast_msg);
}
}
} else {
// Event without valid kind - try normal storage
log_warning("DEBUG STORAGE: Event without valid kind - trying normal storage");
if (store_event(event) != 0) {
log_error("DEBUG STORAGE: Failed to store event without kind in database");
result = -1;
strncpy(error_message, "error: failed to store event", sizeof(error_message) - 1);
} else {
log_info("DEBUG STORAGE: Event without kind stored successfully in database");
broadcast_event_to_subscriptions(event);
}
}
}
// Send OK response
cJSON* event_id = cJSON_GetObjectItem(event, "id");
if (event_id && cJSON_IsString(event_id)) {
cJSON* response = cJSON_CreateArray();
cJSON_AddItemToArray(response, cJSON_CreateString("OK"));
cJSON_AddItemToArray(response, cJSON_CreateString(cJSON_GetStringValue(event_id)));
cJSON_AddItemToArray(response, cJSON_CreateBool(result == 0));
cJSON_AddItemToArray(response, cJSON_CreateString(strlen(error_message) > 0 ? error_message : ""));
// TODO: REPLACE - Remove wasteful cJSON_Print conversion
char *response_str = cJSON_Print(response);
if (response_str) {
char debug_response_msg[512];
snprintf(debug_response_msg, sizeof(debug_response_msg),
"DEBUG RESPONSE: Sending OK response: %s", response_str);
log_info(debug_response_msg);
size_t response_len = strlen(response_str);
unsigned char *buf = malloc(LWS_PRE + response_len);
if (buf) {
memcpy(buf + LWS_PRE, response_str, response_len);
int write_result = lws_write(wsi, buf + LWS_PRE, response_len, LWS_WRITE_TEXT);
char debug_write_msg[128];
snprintf(debug_write_msg, sizeof(debug_write_msg),
"DEBUG RESPONSE: lws_write returned %d", write_result);
log_info(debug_write_msg);
free(buf);
}
free(response_str);
}
cJSON_Delete(response);
}
}
} else if (strcmp(msg_type, "REQ") == 0) {
// Check NIP-42 authentication for REQ subscriptions if required
if (pss && pss->nip42_auth_required_subscriptions && !pss->authenticated) {
if (!pss->auth_challenge_sent) {
send_nip42_auth_challenge(wsi, pss);
} else {
send_notice_message(wsi, "NIP-42 authentication required for subscriptions");
log_warning("REQ rejected: NIP-42 authentication required");
}
cJSON_Delete(json);
free(message);
return 0;
}
// Handle REQ message
cJSON* sub_id = cJSON_GetArrayItem(json, 1);
if (sub_id && cJSON_IsString(sub_id)) {
const char* subscription_id = cJSON_GetStringValue(sub_id);
// Create array of filter objects from position 2 onwards
cJSON* filters = cJSON_CreateArray();
int json_size = cJSON_GetArraySize(json);
for (int i = 2; i < json_size; i++) {
cJSON* filter = cJSON_GetArrayItem(json, i);
if (filter) {
cJSON_AddItemToArray(filters, cJSON_Duplicate(filter, 1));
}
}
handle_req_message(subscription_id, filters, wsi, pss);
// Clean up the filters array we created
cJSON_Delete(filters);
// Send EOSE (End of Stored Events)
cJSON* eose_response = cJSON_CreateArray();
cJSON_AddItemToArray(eose_response, cJSON_CreateString("EOSE"));
cJSON_AddItemToArray(eose_response, cJSON_CreateString(subscription_id));
char *eose_str = cJSON_Print(eose_response);
if (eose_str) {
size_t eose_len = strlen(eose_str);
unsigned char *buf = malloc(LWS_PRE + eose_len);
if (buf) {
memcpy(buf + LWS_PRE, eose_str, eose_len);
lws_write(wsi, buf + LWS_PRE, eose_len, LWS_WRITE_TEXT);
free(buf);
}
free(eose_str);
}
cJSON_Delete(eose_response);
}
} else if (strcmp(msg_type, "CLOSE") == 0) {
// Handle CLOSE message
cJSON* sub_id = cJSON_GetArrayItem(json, 1);
if (sub_id && cJSON_IsString(sub_id)) {
const char* subscription_id = cJSON_GetStringValue(sub_id);
// Remove from global manager
remove_subscription_from_manager(subscription_id, wsi);
// Remove from session list if present
if (pss) {
pthread_mutex_lock(&pss->session_lock);
struct subscription** current = &pss->subscriptions;
while (*current) {
if (strcmp((*current)->id, subscription_id) == 0) {
struct subscription* to_remove = *current;
*current = to_remove->session_next;
pss->subscription_count--;
break;
}
current = &((*current)->session_next);
}
pthread_mutex_unlock(&pss->session_lock);
}
char debug_msg[256];
snprintf(debug_msg, sizeof(debug_msg), "Closed subscription: %s", subscription_id);
log_info(debug_msg);
}
} else if (strcmp(msg_type, "AUTH") == 0) {
// Handle NIP-42 AUTH message
if (cJSON_GetArraySize(json) >= 2) {
cJSON* auth_payload = cJSON_GetArrayItem(json, 1);
if (cJSON_IsString(auth_payload)) {
// AUTH challenge response: ["AUTH", <challenge>] (unusual)
handle_nip42_auth_challenge_response(wsi, pss, cJSON_GetStringValue(auth_payload));
} else if (cJSON_IsObject(auth_payload)) {
// AUTH signed event: ["AUTH", <event>] (standard NIP-42)
handle_nip42_auth_signed_event(wsi, pss, auth_payload);
} else {
send_notice_message(wsi, "Invalid AUTH message format");
log_warning("Received AUTH message with invalid payload type");
}
} else {
send_notice_message(wsi, "AUTH message requires payload");
log_warning("Received AUTH message without payload");
}
} else {
// Unknown message type
char unknown_msg[128];
snprintf(unknown_msg, sizeof(unknown_msg), "Unknown message type: %.32s", msg_type);
log_warning(unknown_msg);
send_notice_message(wsi, "Unknown message type");
}
}
}
if (json) cJSON_Delete(json);
free(message);
}
}
break;
case LWS_CALLBACK_CLOSED:
log_info("WebSocket connection closed");
// Clean up session subscriptions
if (pss) {
pthread_mutex_lock(&pss->session_lock);
struct subscription* sub = pss->subscriptions;
while (sub) {
struct subscription* next = sub->session_next;
remove_subscription_from_manager(sub->id, wsi);
sub = next;
}
pss->subscriptions = NULL;
pss->subscription_count = 0;
pthread_mutex_unlock(&pss->session_lock);
pthread_mutex_destroy(&pss->session_lock);
}
break;
default:
break;
}
return 0;
}
// WebSocket protocol definition
static struct lws_protocols protocols[] = {
{
"nostr-relay-protocol",
nostr_relay_callback,
sizeof(struct per_session_data),
4096, // rx buffer size
0, NULL, 0
},
{ NULL, NULL, 0, 0, 0, NULL, 0 } // terminator
};
// Check if a port is available for binding
int check_port_available(int port) {
int sockfd;
struct sockaddr_in addr;
int result;
int reuse = 1;
// Create a socket
sockfd = socket(AF_INET, SOCK_STREAM, 0);
if (sockfd < 0) {
return 0; // Cannot create socket, assume port unavailable
}
// Set SO_REUSEADDR to allow binding to ports in TIME_WAIT state
// This matches libwebsockets' own socket options and avoids falsely reporting such ports as unavailable
if (setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &reuse, sizeof(reuse)) < 0) {
close(sockfd);
return 0; // Failed to set socket option
}
// Set up the address structure
memset(&addr, 0, sizeof(addr));
addr.sin_family = AF_INET;
addr.sin_addr.s_addr = INADDR_ANY;
addr.sin_port = htons(port);
// Try to bind to the port
result = bind(sockfd, (struct sockaddr*)&addr, sizeof(addr));
// Close the socket
close(sockfd);
// Return 1 if bind succeeded (port available), 0 if failed (port in use)
return (result == 0) ? 1 : 0;
}
// Start libwebsockets-based WebSocket Nostr relay server
int start_websocket_relay(int port_override, int strict_port) {
struct lws_context_creation_info info;
log_info("Starting libwebsockets-based Nostr relay server...");
memset(&info, 0, sizeof(info));
// Use port override if provided, otherwise use configuration
int configured_port = (port_override > 0) ? port_override : get_config_int("relay_port", DEFAULT_PORT);
int actual_port = configured_port;
int port_attempts = 0;
const int max_port_attempts = 10; // Increased from 5 to 10
// Minimal libwebsockets configuration
info.protocols = protocols;
info.gid = -1;
info.uid = -1;
info.options = LWS_SERVER_OPTION_VALIDATE_UTF8;
// Remove interface restrictions - let system choose
// info.vhost_name = NULL;
// info.iface = NULL;
// Increase max connections for relay usage
info.max_http_header_pool = 16;
info.timeout_secs = 10;
// Max payload size for Nostr events
info.max_http_header_data = 4096;
// Find an available port with pre-checking (or fail immediately in strict mode)
while (port_attempts < (strict_port ? 1 : max_port_attempts)) {
char attempt_msg[256];
snprintf(attempt_msg, sizeof(attempt_msg), "Checking port availability: %d", actual_port);
log_info(attempt_msg);
// Pre-check if port is available
if (!check_port_available(actual_port)) {
port_attempts++;
if (strict_port) {
char error_msg[256];
snprintf(error_msg, sizeof(error_msg),
"Strict port mode: port %d is not available", actual_port);
log_error(error_msg);
return -1;
} else if (port_attempts < max_port_attempts) {
char retry_msg[256];
snprintf(retry_msg, sizeof(retry_msg), "Port %d is in use, trying port %d (attempt %d/%d)",
actual_port, actual_port + 1, port_attempts + 1, max_port_attempts);
log_warning(retry_msg);
actual_port++;
continue;
} else {
char error_msg[512];
snprintf(error_msg, sizeof(error_msg),
"Failed to find available port after %d attempts (tried ports %d-%d)",
max_port_attempts, configured_port, actual_port);
log_error(error_msg);
return -1;
}
}
// Port appears available, try creating libwebsockets context
info.port = actual_port;
char binding_msg[256];
snprintf(binding_msg, sizeof(binding_msg), "Attempting to bind libwebsockets to port %d", actual_port);
log_info(binding_msg);
ws_context = lws_create_context(&info);
if (ws_context) {
// Success! Port binding worked
break;
}
// libwebsockets failed even though port check passed
// This could be due to timing or different socket options
int errno_saved = errno;
char lws_error_msg[256];
snprintf(lws_error_msg, sizeof(lws_error_msg),
"libwebsockets failed to bind to port %d (errno: %d)", actual_port, errno_saved);
log_warning(lws_error_msg);
port_attempts++;
if (strict_port) {
char error_msg[256];
snprintf(error_msg, sizeof(error_msg),
"Strict port mode: failed to bind to port %d", actual_port);
log_error(error_msg);
break;
} else if (port_attempts < max_port_attempts) {
actual_port++;
continue;
}
// If we get here, we've exhausted attempts
break;
}
if (!ws_context) {
char error_msg[512];
snprintf(error_msg, sizeof(error_msg),
"Failed to create libwebsockets context after %d attempts. Last attempted port: %d",
port_attempts, actual_port);
log_error(error_msg);
perror("libwebsockets creation error");
return -1;
}
char startup_msg[256];
if (actual_port != configured_port) {
snprintf(startup_msg, sizeof(startup_msg),
"WebSocket relay started on ws://127.0.0.1:%d (configured port %d was unavailable)",
actual_port, configured_port);
log_warning(startup_msg);
} else {
snprintf(startup_msg, sizeof(startup_msg), "WebSocket relay started on ws://127.0.0.1:%d", actual_port);
}
log_success(startup_msg);
// Main event loop with proper signal handling
while (g_server_running) {
int result = lws_service(ws_context, 1000);
if (result < 0) {
log_error("libwebsockets service error");
break;
}
}
log_info("Shutting down WebSocket server...");
lws_context_destroy(ws_context);
ws_context = NULL;
log_success("WebSocket relay shut down cleanly");
return 0;
}
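One pattern recurs throughout the callback above: serialize a cJSON frame, allocate LWS_PRE bytes of headroom, copy the payload behind it, lws_write(), free. As a hedged sketch, this is roughly the shape the forward-declared send_notice_message() helper would take if built from that same pattern; the real implementation lives in another translation unit and may differ:

#include <stdlib.h>
#include <string.h>
#include <libwebsockets.h>
#include "../nostr_core_lib/cjson/cJSON.h"

/* Sketch only: wrap a message in a NIP-01 ["NOTICE", <text>] frame and send it
 * with the LWS_PRE headroom that libwebsockets requires in front of the payload. */
static void send_notice_sketch(struct lws* wsi, const char* message) {
    cJSON* notice = cJSON_CreateArray();
    cJSON_AddItemToArray(notice, cJSON_CreateString("NOTICE"));
    cJSON_AddItemToArray(notice, cJSON_CreateString(message));
    char* text = cJSON_PrintUnformatted(notice);
    if (text) {
        size_t len = strlen(text);
        unsigned char* buf = malloc(LWS_PRE + len);
        if (buf) {
            memcpy(buf + LWS_PRE, text, len);
            lws_write(wsi, buf + LWS_PRE, len, LWS_WRITE_TEXT);
            free(buf);
        }
        free(text);
    }
    cJSON_Delete(notice);
}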

src/websockets.h (new file, 49 lines)

@@ -0,0 +1,49 @@
// WebSocket protocol structures and constants for C-Relay
// This header defines structures shared between main.c and websockets.c
#ifndef WEBSOCKETS_H
#define WEBSOCKETS_H
#include <pthread.h>
#include <libwebsockets.h>
#include <time.h>
#include "../nostr_core_lib/cjson/cJSON.h"
#include "config.h" // For CLIENT_IP_MAX_LENGTH and MAX_SUBSCRIPTIONS_PER_CLIENT
// Constants
#define CHALLENGE_MAX_LENGTH 128
#define AUTHENTICATED_PUBKEY_MAX_LENGTH 65 // 64 hex + null
// Enhanced per-session data with subscription management and NIP-42 authentication
struct per_session_data {
int authenticated;
struct subscription* subscriptions; // Head of this session's subscription list
pthread_mutex_t session_lock; // Per-session thread safety
char client_ip[CLIENT_IP_MAX_LENGTH]; // Client IP for logging
int subscription_count; // Number of subscriptions for this session
// NIP-42 Authentication State
char authenticated_pubkey[65]; // Authenticated public key (64 hex + null)
char active_challenge[65]; // Current challenge for this session (64 hex + null)
time_t challenge_created; // When challenge was created
time_t challenge_expires; // Challenge expiration time
int nip42_auth_required_events; // Whether NIP-42 auth is required for EVENT submission
int nip42_auth_required_subscriptions; // Whether NIP-42 auth is required for REQ operations
int auth_challenge_sent; // Whether challenge has been sent (0/1)
};
// NIP-11 HTTP session data structure for managing buffer lifetime
struct nip11_session_data {
char* json_buffer;
size_t json_length;
int headers_sent;
int body_sent;
};
// Function declarations
int start_websocket_relay(int port_override, int strict_port);
// Auth rules checking function from request_validator.c
int check_database_auth_rules(const char *pubkey, const char *operation, const char *resource_hash);
#endif // WEBSOCKETS_H
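start_websocket_relay() is the only function this header exports to main.c. A minimal caller sketch; the signal wiring and the assumption that g_server_running is the flag polled by the service loop are illustrative, not the project's actual main():

#include <signal.h>
#include "websockets.h"

extern int g_server_running;   /* assumed: defined in main.c and polled by the lws_service() loop */

static void handle_sigint(int sig) {
    (void)sig;
    g_server_running = 0;      /* lets start_websocket_relay() fall out of its service loop */
}

/* Pass 0 to use the configured port; strict_port = 0 allows the fallback port scan. */
static int run_relay_sketch(int port_override) {
    signal(SIGINT, handle_sigint);
    g_server_running = 1;
    return start_websocket_relay(port_override, 0);
}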

test_dynamic_config.sh (new executable file, 133 lines)

@@ -0,0 +1,133 @@
#!/bin/bash
# Test dynamic config updates without restart
set -e
# Configuration from relay startup
ADMIN_PRIVKEY="ddea442930976541e199a05248eb6cd92f2a65ba366a883a8f6880add9bdc9c9"
RELAY_PUBKEY="1bd4a5e2e32401737f8c16cc0dfa89b93f25f395770a2896fe78c9fb61582dfc"
RELAY_URL="ws://localhost:8888"
# Colors
GREEN='\033[0;32m'
RED='\033[0;31m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Check if nak is available
if ! command -v nak &> /dev/null; then
log_error "nak command not found. Please install nak first."
exit 1
fi
log_info "Testing dynamic config updates without restart..."
# Test 1: Check current NIP-11 info
log_info "Checking current NIP-11 relay info..."
CURRENT_DESC=$(curl -s -H "Accept: application/nostr+json" http://localhost:8888 | jq -r '.description')
log_info "Current description: $CURRENT_DESC"
# Test 2: Update relay description dynamically
NEW_DESC="Dynamic Config Test - Updated at $(date)"
log_info "Updating relay description to: $NEW_DESC"
COMMAND="[\"config_update\", [{\"key\": \"relay_description\", \"value\": \"$NEW_DESC\", \"data_type\": \"string\", \"category\": \"relay\"}]]"
# Encrypt the command
ENCRYPTED_COMMAND=$(nak encrypt "$COMMAND" --sec "$ADMIN_PRIVKEY" --recipient-pubkey "$RELAY_PUBKEY")
if [ -z "$ENCRYPTED_COMMAND" ]; then
log_error "Failed to encrypt config update command"
exit 1
fi
# Create admin event
ADMIN_EVENT=$(nak event \
--kind 23456 \
--content "$ENCRYPTED_COMMAND" \
--sec "$ADMIN_PRIVKEY" \
--tag "p=$RELAY_PUBKEY")
# Send the admin command
log_info "Sending config update command..."
ADMIN_RESULT=$(echo "$ADMIN_EVENT" | nak event "$RELAY_URL")
if echo "$ADMIN_RESULT" | grep -q "error\|failed\|denied"; then
log_error "Failed to send config update: $ADMIN_RESULT"
exit 1
fi
log_success "Config update command sent successfully"
# Wait for processing
sleep 3
# Test 3: Check if NIP-11 info updated without restart
log_info "Checking if NIP-11 info was updated without restart..."
UPDATED_DESC=$(curl -s -H "Accept: application/nostr+json" http://localhost:8888 | jq -r '.description')
if [ "$UPDATED_DESC" = "$NEW_DESC" ]; then
log_success "SUCCESS: Relay description updated dynamically without restart!"
log_success "Old: $CURRENT_DESC"
log_success "New: $UPDATED_DESC"
else
log_error "FAILED: Relay description was not updated"
log_error "Expected: $NEW_DESC"
log_error "Got: $UPDATED_DESC"
exit 1
fi
# Test 4: Test another dynamic config - max_subscriptions_per_client
log_info "Testing another dynamic config: max_subscriptions_per_client"
# Get current value from database
OLD_LIMIT=$(sqlite3 build/*.db "SELECT value FROM config WHERE key = 'max_subscriptions_per_client';" 2>/dev/null || echo "25")
log_info "Current max_subscriptions_per_client: $OLD_LIMIT"
NEW_LIMIT=50
COMMAND2="[\"config_update\", [{\"key\": \"max_subscriptions_per_client\", \"value\": \"$NEW_LIMIT\", \"data_type\": \"integer\", \"category\": \"limits\"}]]"
ENCRYPTED_COMMAND2=$(nak encrypt "$COMMAND2" --sec "$ADMIN_PRIVKEY" --recipient-pubkey "$RELAY_PUBKEY")
ADMIN_EVENT2=$(nak event \
--kind 23456 \
--content "$ENCRYPTED_COMMAND2" \
--sec "$ADMIN_PRIVKEY" \
--tag "p=$RELAY_PUBKEY")
log_info "Updating max_subscriptions_per_client to $NEW_LIMIT..."
ADMIN_RESULT2=$(echo "$ADMIN_EVENT2" | nak event "$RELAY_URL")
if echo "$ADMIN_RESULT2" | grep -q "error\|failed\|denied"; then
log_error "Failed to send second config update: $ADMIN_RESULT2"
exit 1
fi
sleep 3
# Check updated value from database
UPDATED_LIMIT=$(sqlite3 build/*.db "SELECT value FROM config WHERE key = 'max_subscriptions_per_client';" 2>/dev/null || echo "25")
if [ "$UPDATED_LIMIT" = "$NEW_LIMIT" ]; then
log_success "SUCCESS: max_subscriptions_per_client updated dynamically!"
log_success "Old: $OLD_LIMIT, New: $UPDATED_LIMIT"
else
log_error "FAILED: max_subscriptions_per_client was not updated"
log_error "Expected: $NEW_LIMIT, Got: $UPDATED_LIMIT"
fi
log_success "Dynamic config update testing completed successfully!"


@@ -146,27 +146,115 @@ test_subscription() {
local filter="$2"
local description="$3"
local expected_count="$4"
print_step "Testing subscription: $description"
# Create REQ message
local req_message="[\"REQ\",\"$sub_id\",$filter]"
print_info "Testing filter: $filter"
# Send subscription and collect events
local response=""
if command -v websocat &> /dev/null; then
response=$(echo -e "$req_message\n[\"CLOSE\",\"$sub_id\"]" | timeout 3s websocat "$RELAY_URL" 2>/dev/null || echo "")
fi
# Count EVENT responses (lines containing ["EVENT","sub_id",...])
local event_count=0
local filter_mismatch_count=0
if [[ -n "$response" ]]; then
# grep -c prints a count (possibly 0) even when nothing matches
event_count=$(echo "$response" | grep -c "\"EVENT\"" 2>/dev/null || true)
filter_mismatch_count=$(echo "$response" | grep -c "filter does not match" 2>/dev/null || true)
fi
# Clean up the filter_mismatch_count (remove any extra spaces/newlines)
filter_mismatch_count=$(echo "$filter_mismatch_count" | tr -d '[:space:]' | sed 's/[^0-9]//g')
if [[ -z "$filter_mismatch_count" ]]; then
filter_mismatch_count=0
fi
# Debug: Show what we found
print_info "Found $event_count events, $filter_mismatch_count filter mismatches"
# Check for filter mismatches (protocol violation)
if [[ "$filter_mismatch_count" -gt 0 ]]; then
print_error "$description - PROTOCOL VIOLATION: Relay sent $filter_mismatch_count events that don't match filter!"
print_error "Filter: $filter"
print_error "This indicates improper server-side filtering - relay should only send matching events"
return 1
fi
# Additional check: Analyze returned events against filter criteria
local filter_violation_count=0
if [[ -n "$response" && "$event_count" -gt 0 ]]; then
# Parse filter to check for violations
if echo "$filter" | grep -q '"kinds":\['; then
# Kind filter - check that all returned events have matching kinds
local allowed_kinds=$(echo "$filter" | sed 's/.*"kinds":\[\([^]]*\)\].*/\1/' | sed 's/[^0-9,]//g')
echo "$response" | grep '"EVENT"' | while IFS= read -r event_line; do
local event_kind=$(echo "$event_line" | jq -r '.[2].kind' 2>/dev/null)
if [[ -n "$event_kind" && "$event_kind" =~ ^[0-9]+$ ]]; then
local kind_matches=0
IFS=',' read -ra KIND_ARRAY <<< "$allowed_kinds"
for kind in "${KIND_ARRAY[@]}"; do
if [[ "$event_kind" == "$kind" ]]; then
kind_matches=1
break
fi
done
if [[ "$kind_matches" == "0" ]]; then
((filter_violation_count++))
fi
fi
done < <(echo "$response" | grep '"EVENT"')  # process substitution keeps the loop in the current shell so the counter persists
elif echo "$filter" | grep -q '"ids":\['; then
# ID filter - check that all returned events have matching IDs
local allowed_ids=$(echo "$filter" | sed 's/.*"ids":\[\([^]]*\)\].*/\1/' | sed 's/"//g' | sed 's/[][]//g')
echo "$response" | grep '"EVENT"' | while IFS= read -r event_line; do
local event_id=$(echo "$event_line" | jq -r '.[2].id' 2>/dev/null)
if [[ -n "$event_id" ]]; then
local id_matches=0
IFS=',' read -ra ID_ARRAY <<< "$allowed_ids"
for id in "${ID_ARRAY[@]}"; do
if [[ "$event_id" == "$id" ]]; then
id_matches=1
break
fi
done
if [[ "$id_matches" == "0" ]]; then
((filter_violation_count++))
fi
fi
done < <(echo "$response" | grep '"EVENT"')  # process substitution keeps the loop in the current shell so the counter persists
fi
fi
# Report filter violations
if [[ "$filter_violation_count" -gt 0 ]]; then
print_error "$description - FILTER VIOLATION: $filter_violation_count events don't match the filter criteria!"
print_error "Filter: $filter"
print_error "Expected only events matching the filter, but received non-matching events"
print_error "This indicates improper server-side filtering"
return 1
fi
# Also fail on count mismatches for strict filters (like specific IDs and kinds with expected counts)
if [[ "$expected_count" != "any" && "$event_count" != "$expected_count" ]]; then
if echo "$filter" | grep -q '"ids":\['; then
print_error "$description - CRITICAL VIOLATION: ID filter should return exactly $expected_count event(s), got $event_count"
print_error "Filter: $filter"
print_error "ID queries must return exactly the requested event or none"
return 1
elif echo "$filter" | grep -q '"kinds":\[' && [[ "$expected_count" =~ ^[0-9]+$ ]]; then
print_error "$description - FILTER VIOLATION: Kind filter expected $expected_count event(s), got $event_count"
print_error "Filter: $filter"
print_error "This suggests improper filtering - events of wrong kinds are being returned"
return 1
fi
fi
if [[ "$expected_count" == "any" ]]; then
if [[ $event_count -gt 0 ]]; then
print_success "$description - Found $event_count events"
@@ -178,7 +266,7 @@ test_subscription() {
else
print_warning "$description - Expected $expected_count events, found $event_count"
fi
# Show a few sample events for verification (first 2)
if [[ $event_count -gt 0 && "$description" == "All events" ]]; then
print_info "Sample events (first 2):"
@@ -189,7 +277,7 @@ test_subscription() {
echo " - ID: ${event_id:0:16}... Kind: $event_kind Content: ${event_content:0:30}..."
done
fi
echo # Add blank line for readability
return 0
}
@@ -290,30 +378,64 @@ run_comprehensive_test() {
# Test subscription filters
print_step "Testing various subscription filters..."
local test_failures=0
# Test 1: Get all events
test_subscription "test_all" '{}' "All events" "any"
if ! test_subscription "test_all" '{}' "All events" "any"; then
((test_failures++))
fi
# Test 2: Get events by kind
test_subscription "test_kind1" '{"kinds":[1]}' "Kind 1 events only" "2"
test_subscription "test_kind0" '{"kinds":[0]}' "Kind 0 events only" "any"
if ! test_subscription "test_kind1" '{"kinds":[1]}' "Kind 1 events only" "any"; then
((test_failures++))
fi
if ! test_subscription "test_kind0" '{"kinds":[0]}' "Kind 0 events only" "any"; then
((test_failures++))
fi
# Test 3: Get events by author (pubkey)
local test_pubkey=$(echo "$regular1" | jq -r '.pubkey' 2>/dev/null)
test_subscription "test_author" "{\"authors\":[\"$test_pubkey\"]}" "Events by specific author" "any"
if ! test_subscription "test_author" "{\"authors\":[\"$test_pubkey\"]}" "Events by specific author" "any"; then
((test_failures++))
fi
# Test 4: Get recent events (time-based)
local recent_timestamp=$(($(date +%s) - 200))
test_subscription "test_recent" "{\"since\":$recent_timestamp}" "Recent events" "any"
if ! test_subscription "test_recent" "{\"since\":$recent_timestamp}" "Recent events" "any"; then
((test_failures++))
fi
# Test 5: Get events with specific tags
test_subscription "test_tag_type" '{"#type":["regular"]}' "Events with type=regular tag" "any"
if ! test_subscription "test_tag_type" '{"#type":["regular"]}' "Events with type=regular tag" "any"; then
((test_failures++))
fi
# Test 6: Multiple kinds
test_subscription "test_multi_kinds" '{"kinds":[0,1]}' "Multiple kinds (0,1)" "any"
if ! test_subscription "test_multi_kinds" '{"kinds":[0,1]}' "Multiple kinds (0,1)" "any"; then
((test_failures++))
fi
# Test 7: Limit results
test_subscription "test_limit" '{"kinds":[1],"limit":1}' "Limited to 1 event" "1"
if ! test_subscription "test_limit" '{"kinds":[1],"limit":1}' "Limited to 1 event" "1"; then
((test_failures++))
fi
# Test 8: Specific event ID query (tests for "filter does not match" bug)
if [[ ${#REGULAR_EVENT_IDS[@]} -gt 0 ]]; then
local test_event_id="${REGULAR_EVENT_IDS[0]}"
if ! test_subscription "test_specific_id" "{\"ids\":[\"$test_event_id\"]}" "Specific event ID query" "1"; then
((test_failures++))
fi
fi
# Report subscription test results
if [[ $test_failures -gt 0 ]]; then
print_error "SUBSCRIPTION TESTS FAILED: $test_failures test(s) detected protocol violations"
return 1
else
print_success "All subscription tests passed"
fi
print_header "PHASE 4: Database Verification"
@@ -321,17 +443,28 @@ run_comprehensive_test() {
print_step "Verifying database contents..."
if command -v sqlite3 &> /dev/null; then
print_info "Events by type in database:"
sqlite3 db/c_nostr_relay.db "SELECT event_type, COUNT(*) as count FROM events GROUP BY event_type;" | while read line; do
echo " $line"
done
print_info "Recent events in database:"
sqlite3 db/c_nostr_relay.db "SELECT substr(id, 1, 16) || '...' as short_id, event_type, kind, substr(content, 1, 30) || '...' as short_content FROM events ORDER BY created_at DESC LIMIT 5;" | while read line; do
echo " $line"
done
print_success "Database verification complete"
# Find the database file (should be in build/ directory with relay pubkey as filename)
local db_file=""
if [[ -d "../build" ]]; then
db_file=$(find ../build -name "*.db" -type f | head -1)
fi
if [[ -n "$db_file" && -f "$db_file" ]]; then
print_info "Events by type in database ($db_file):"
sqlite3 "$db_file" "SELECT event_type, COUNT(*) as count FROM events GROUP BY event_type;" 2>/dev/null | while read line; do
echo " $line"
done
print_info "Recent events in database:"
sqlite3 "$db_file" "SELECT substr(id, 1, 16) || '...' as short_id, event_type, kind, substr(content, 1, 30) || '...' as short_content FROM events ORDER BY created_at DESC LIMIT 5;" 2>/dev/null | while read line; do
echo " $line"
done
print_success "Database verification complete"
else
print_warning "Database file not found in build/ directory"
print_info "Expected database files: build/*.db (named after relay pubkey)"
fi
else
print_warning "sqlite3 not available for database verification"
fi
@@ -352,6 +485,11 @@ if run_comprehensive_test; then
exit 0
else
echo
print_error "Some tests failed"
print_error "❌ TESTS FAILED: Protocol violations detected!"
print_error "The C-Relay has critical issues that need to be fixed:"
print_error " - Server-side filtering is not implemented properly"
print_error " - Events are sent to clients regardless of subscription filters"
print_error " - This violates the Nostr protocol specification"
echo
exit 1
fi


@@ -310,8 +310,51 @@ else
print_failure "Relay failed to start for network test"
fi
# TEST 10: Multiple Startup Attempts (Port Conflict)
print_test_header "Test 10: Port Conflict Handling"
# TEST 10: Port Override with Admin/Relay Key Overrides
print_test_header "Test 10: Port Override with -a/-r Flags"
cleanup_test_files
# Generate test keys (64 hex chars each)
TEST_ADMIN_PUBKEY="1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef"
TEST_RELAY_PRIVKEY="abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234567890"
print_info "Testing port override with -p 9999 -a $TEST_ADMIN_PUBKEY -r $TEST_RELAY_PRIVKEY"
# Start relay with port override and key overrides
timeout 15 $RELAY_BINARY -p 9999 -a $TEST_ADMIN_PUBKEY -r $TEST_RELAY_PRIVKEY > "test_port_override.log" 2>&1 &
relay_pid=$!
sleep 5
if kill -0 $relay_pid 2>/dev/null; then
# Check if relay bound to port 9999 (not default 8888)
if netstat -tln 2>/dev/null | grep -q ":9999"; then
print_success "Relay successfully bound to overridden port 9999"
else
print_failure "Relay not bound to overridden port 9999"
fi
# Check that relay started successfully
if check_relay_startup "test_port_override.log"; then
print_success "Relay startup completed with overrides"
else
print_failure "Relay failed to complete startup with overrides"
fi
# Check that admin keys were NOT generated (since -a was provided)
if ! check_admin_keys "test_port_override.log"; then
print_success "Admin keys not generated (correctly using provided -a key)"
else
print_failure "Admin keys generated despite -a override"
fi
stop_relay_test $relay_pid
else
print_failure "Relay failed to start with port/key overrides"
fi
# TEST 11: Multiple Startup Attempts (Port Conflict)
print_test_header "Test 11: Port Conflict Handling"
relay_pid1=$(start_relay_test "port_conflict_1" 10)
sleep 2
@@ -320,14 +363,14 @@ if kill -0 $relay_pid1 2>/dev/null; then
# Try to start a second relay (should fail due to port conflict)
relay_pid2=$(start_relay_test "port_conflict_2" 5)
sleep 1
if [ "$relay_pid2" = "0" ] || ! kill -0 $relay_pid2 2>/dev/null; then
print_success "Port conflict properly handled (second instance failed to start)"
else
print_failure "Multiple relay instances started (port conflict not handled)"
stop_relay_test $relay_pid2
fi
stop_relay_test $relay_pid1
else
print_failure "First relay instance failed to start"


@@ -1,11 +1,11 @@
=== NIP-42 Authentication Test Started ===
2025-09-13 08:48:02 - Starting NIP-42 authentication tests
2025-09-30 11:15:28 - Starting NIP-42 authentication tests
[INFO] === Starting NIP-42 Authentication Tests ===
[INFO] Checking dependencies...
[SUCCESS] Dependencies check complete
[INFO] Test 1: Checking NIP-42 support in relay info
[SUCCESS] NIP-42 is advertised in supported NIPs
2025-09-13 08:48:02 - Supported NIPs: 1,9,11,13,15,20,40,42
2025-09-30 11:15:28 - Supported NIPs: 1,9,11,13,15,20,40,42
[INFO] Test 2: Testing AUTH challenge generation
[INFO] Found admin private key, configuring NIP-42 authentication...
[WARNING] Failed to create configuration event - proceeding with manual test
@@ -13,69 +13,64 @@
[INFO] Generated test keypair: test_pubkey
[INFO] Attempting to publish event without authentication...
[INFO] Publishing test event to relay...
2025-09-13 08:48:03 - Event publish result: connecting to ws://localhost:8888... ok.
{"kind":1,"id":"c42a8cbdd1cc6ea3e7fd060919c57386aef0c35da272ba2fa34b45f80934cfca","pubkey":"d0111448b3bd0da6aa699b92163f684291bb43bc213aa54a2ee726c2acde76e8","created_at":1757767683,"tags":[],"content":"NIP-42 test event - should require auth","sig":"d2a2c7efc00e06d8d8582fa05b2ec8cb96979525770dff9ef36a91df6d53807c86115581de2d6058d7d64eebe3b7d7404cc03dbb2ad1e91d140283703c2dec53"}
2025-09-30 11:15:30 - Event publish result: connecting to ws://localhost:8888... ok.
{"kind":1,"id":"acfc4da1903ce1c065f2c472348b21837a322c79cb4b248c62de5cff9b5b6607","pubkey":"d3e8d83eabac2a28e21039136a897399f4866893dd43bfbf0bdc8391913a4013","created_at":1759245329,"tags":[],"content":"NIP-42 test event - should require auth","sig":"2051b3da705214d5b5e95fb5b4dd9f1c893666965f7c51ccd2a9ccd495b67dd76ed3ce9768f0f2a16a3f9a602368e8102758ca3cc1408280094abf7e92fcc75e"}
publishing to ws://localhost:8888... success.
[SUCCESS] Relay requested authentication as expected
[INFO] Test 4: Testing WebSocket AUTH message handling
[INFO] Testing WebSocket connection and AUTH message...
[INFO] Sending test message via WebSocket...
2025-09-13 08:48:03 - WebSocket response:
2025-09-30 11:15:30 - WebSocket response:
[INFO] No AUTH challenge in WebSocket response
[INFO] Test 5: Testing NIP-42 configuration options
[INFO] Retrieving current relay configuration...
[SUCCESS] Retrieved configuration events from relay
[SUCCESS] Found NIP-42 configuration:
2025-09-13 08:48:04 - nip42_auth_required_events=false
2025-09-13 08:48:04 - nip42_auth_required_subscriptions=false
2025-09-13 08:48:04 - nip42_auth_required_kinds=4,14
2025-09-13 08:48:04 - nip42_challenge_expiration=600
[WARNING] Could not retrieve configuration events
[INFO] Test 6: Testing NIP-42 performance and stability
[INFO] Testing multiple authentication attempts...
2025-09-13 08:48:05 - Attempt 1: .271641300s - connecting to ws://localhost:8888... ok.
{"kind":1,"id":"916049dbd6835443e8fd553bd12a37ef03060a01fedb099b414ea2cc18b597eb","pubkey":"b383f405d81860ec9b0eebf88612093ab18dc6abd322639b19ac79969599c8c4","created_at":1757767685,"tags":[],"content":"Performance test event 1","sig":"b04e0b38bbb49e0aa3c8a69530071bb08d917c4ba12eae38045a487c43e83f6dc1389ac4640453b0492d9c991df37f71e25ef501fd48c4c11c878e6cb3fa7a84"}
2025-09-30 11:15:31 - Attempt 1: .297874340s - connecting to ws://localhost:8888... ok.
{"kind":1,"id":"0d742f093b7be0ce811068e7a6171573dd225418c9459f5c7e9580f57d88af7b","pubkey":"37d1a52ec83a837eb8c6ae46df5c892f338c65ae0c29eb4873e775082252a18a","created_at":1759245331,"tags":[],"content":"Performance test event 1","sig":"d4aec950c47fbd4c1da637b84fafbde570adf86e08795236fb6a3f7e12d2dbaa16cb38cbb68d3b9755d186b20800bdb84b0a050f8933d06b10991a9542fe9909"}
publishing to ws://localhost:8888... success.
2025-09-13 08:48:05 - Attempt 2: .259343520s - connecting to ws://localhost:8888... ok.
{"kind":1,"id":"e4495a56ec6f1ba2759eabbf0128aec615c53acf3e4720be7726dcd7163da703","pubkey":"b383f405d81860ec9b0eebf88612093ab18dc6abd322639b19ac79969599c8c4","created_at":1757767685,"tags":[],"content":"Performance test event 2","sig":"d1efe3f576eeded4e292ec22f2fea12296fa17ed2f87a8cd2dde0444b594ef55f7d74b680aeca11295a16397df5ccc53a938533947aece27efb965e6c643b62c"}
2025-09-30 11:15:32 - Attempt 2: .270493759s - connecting to ws://localhost:8888... ok.
{"kind":1,"id":"b45ae1b0458e284ed89b6de453bab489d506352680f6d37c8a5f0aed9eebc7a5","pubkey":"37d1a52ec83a837eb8c6ae46df5c892f338c65ae0c29eb4873e775082252a18a","created_at":1759245331,"tags":[],"content":"Performance test event 2","sig":"f9702aa537ec1485d151a0115c38c7f6f1bc05a63929be784e33850b46be6a961996eb922b8b337d607312c8e4583590ee35f38330300e19ab921f94926719c5"}
publishing to ws://localhost:8888... success.
2025-09-13 08:48:06 - Attempt 3: .221167032s - connecting to ws://localhost:8888... ok.
{"kind":1,"id":"55035b4c95a2c93a169236c7f5f5bd627838ec13522c88cf82d8b55516560cd9","pubkey":"b383f405d81860ec9b0eebf88612093ab18dc6abd322639b19ac79969599c8c4","created_at":1757767686,"tags":[],"content":"Performance test event 3","sig":"4bd581580a5a2416e6a9af44c055333635832dbf21793517f16100f1366c73437659545a8a712dcc4623a801b9deccd372b36b658309e7102a4300c3f481facb"}
2025-09-30 11:15:32 - Attempt 3: .239220029s - connecting to ws://localhost:8888... ok.
{"kind":1,"id":"5f70f9cb2a30a12e7d088e62a9295ef2fbea4f40a1d8b07006db03f610c5abce","pubkey":"37d1a52ec83a837eb8c6ae46df5c892f338c65ae0c29eb4873e775082252a18a","created_at":1759245332,"tags":[],"content":"Performance test event 3","sig":"ea2e1611ce3ddea3aa73764f4542bad7d922fc0d2ed40e58dcc2a66cb6e046bfae22d6baef296eb51d965a22b2a07394fc5f8664e3a7777382ae523431c782cd"}
publishing to ws://localhost:8888... success.
2025-09-13 08:48:06 - Attempt 4: .260219496s - connecting to ws://localhost:8888... ok.
{"kind":1,"id":"58dee587a1a0f085ff44441b3074f5ff42715088ee24e694107100df3c63ff2b","pubkey":"b383f405d81860ec9b0eebf88612093ab18dc6abd322639b19ac79969599c8c4","created_at":1757767686,"tags":[],"content":"Performance test event 4","sig":"b6174b0c56138466d3bb228ef2ced1d917f7253b76c624235fa3b661c9fa109c78ae557c4ddaf0e6232aa597608916f0dfba1c192f8b90ffb819c36ac1e4e516"}
2025-09-30 11:15:33 - Attempt 4: .221429674s - connecting to ws://localhost:8888... ok.
{"kind":1,"id":"eafcf5f7e0bd0be35267f13ff93eef339faec6a5af13fe451fee2b7443b9de6e","pubkey":"37d1a52ec83a837eb8c6ae46df5c892f338c65ae0c29eb4873e775082252a18a","created_at":1759245332,"tags":[],"content":"Performance test event 4","sig":"976017abe67582af29d46cd54159ce0465c94caf348be35f26b6522cb48c4c9ce5ba9835e92873cf96a906605a032071360fc85beea815a8e4133a4f45d2bf0a"}
publishing to ws://localhost:8888... success.
2025-09-13 08:48:07 - Attempt 5: .260125188s - connecting to ws://localhost:8888... ok.
{"kind":1,"id":"b8069c80f98fff3780eaeb605baf1a5818c9ab05185c1776a28469d2b0b32c6a","pubkey":"b383f405d81860ec9b0eebf88612093ab18dc6abd322639b19ac79969599c8c4","created_at":1757767687,"tags":[],"content":"Performance test event 5","sig":"5130d3a0c778728747b12aae77f2516db5b055d8ec43f413a4b117fcadb6025a49b6f602307bbe758bd97557e326e8735631fd03dc45c9296509e94aa305adf2"}
2025-09-30 11:15:33 - Attempt 5: .242410067s - connecting to ws://localhost:8888... ok.
{"kind":1,"id":"c7cf6776000a325b1180240c61ef20b849b84dee3f5d2efed4c1a9e9fbdbd7b1","pubkey":"37d1a52ec83a837eb8c6ae46df5c892f338c65ae0c29eb4873e775082252a18a","created_at":1759245333,"tags":[],"content":"Performance test event 5","sig":"18b4575bd644146451dcf86607d75f358828ce2907e8904bd08b903ff5d79ec5a69ff60168735975cc406dcee788fd22fc7bf7c97fb7ac6dff3580eda56cee2e"}
publishing to ws://localhost:8888... success.
[SUCCESS] Performance test completed: 5/5 successful responses
[INFO] Test 7: Testing kind-specific NIP-42 authentication requirements
[INFO] Generated test keypair for kind-specific tests: test_pubkey
[INFO] Testing kind 1 event (regular note) - should work without authentication...
2025-09-13 08:48:08 - Kind 1 event result: connecting to ws://localhost:8888... ok.
{"kind":1,"id":"f2ac02a5290db3797c0b7b38435920d5db593d333e582454d8ed32da4c141b74","pubkey":"da031504ff61656d1829f723c52f526d7591400fb9e2aecb7b4ef5aeeea66fc7","created_at":1757767688,"tags":[],"content":"Regular note - should not require auth","sig":"8e4272d9cb258fc4b140eb8e8c2e802c3e8b62e34c17c9e545d83c68dfb86ffd2cdd4a8153660b663a46906459aa67719257ac263f21d1f8a6185806e055dcfd"}
2025-09-30 11:15:34 - Kind 1 event result: connecting to ws://localhost:8888... ok.
{"kind":1,"id":"012690335e48736fd29769669d2bda15a079183c1d0f27b8400366a54b5b9ddd","pubkey":"ad362b9bbf61b140c5f677a2d091d622fef6fa186c579e6600dd8b24a85a2260","created_at":1759245334,"tags":[],"content":"Regular note - should not require auth","sig":"a3a0ce218666d2a374983a343bc24da5a727ce251c23828171021f15a3ab441a0c86f56200321467914ce4bee9a987f1de301151467ae639d7f941bac7fbe68e"}
publishing to ws://localhost:8888... success.
[SUCCESS] Kind 1 event accepted without authentication (correct behavior)
[INFO] Testing kind 4 event (direct message) - should require authentication...
2025-09-13 08:48:18 - Kind 4 event result: connecting to ws://localhost:8888... ok.
{"kind":4,"id":"935af23e2bf7efd324d86a0c82631e5ebe492edf21920ed0f548faa73a18ac1d","pubkey":"da031504ff61656d1829f723c52f526d7591400fb9e2aecb7b4ef5aeeea66fc7","created_at":1757767688,"tags":[["p,test_pubkey"]],"content":"This is a direct message - should require auth","sig":"b2b86ee394b41505ddbd787c22f4223665770d84a21dd03e74bf4e8fa879ff82dd6b1f7d6921d93f8d89787102c3dc3012e6270d66ca5b5d4b87f1a545481e76"}
2025-09-30 11:15:44 - Kind 4 event result: connecting to ws://localhost:8888... ok.
{"kind":4,"id":"e629dd91320d48c1e3103ec16e40c707c2ee8143012c9ad8bb9d32f98610f447","pubkey":"ad362b9bbf61b140c5f677a2d091d622fef6fa186c579e6600dd8b24a85a2260","created_at":1759245334,"tags":[["p,test_pubkey"]],"content":"This is a direct message - should require auth","sig":"7677b3f2932fb4979bab3da6d241217b7ea2010411fc8bf5a51f6987f38696d5634f91a30b13e0f4861479ceabff995b3bb2eb2fc74af5f3d1175235d5448ce2"}
publishing to ws://localhost:8888...
[SUCCESS] Kind 4 event requested authentication (correct behavior for DMs)
[INFO] Testing kind 14 event (chat message) - should require authentication...
2025-09-13 08:48:28 - Kind 14 event result: connecting to ws://localhost:8888... ok.
{"kind":14,"id":"aeb1ac58dd465c90ce5a70c7b16e3cc32fae86c221bb2e86ca29934333604669","pubkey":"da031504ff61656d1829f723c52f526d7591400fb9e2aecb7b4ef5aeeea66fc7","created_at":1757767698,"tags":[["p,test_pubkey"]],"content":"Chat message - should require auth","sig":"24e23737e6684e4ef01c08d72304e6f235ce75875b94b37460065f9ead986438435585818ba104e7f78f14345406b5d03605c925042e9c06fed8c99369cd8694"}
2025-09-30 11:15:55 - Kind 14 event result: connecting to ws://localhost:8888... ok.
{"kind":14,"id":"a5398c5851dd72a8980723c91d35345bd0088b800102180dd41af7056f1cad50","pubkey":"ad362b9bbf61b140c5f677a2d091d622fef6fa186c579e6600dd8b24a85a2260","created_at":1759245344,"tags":[["p,test_pubkey"]],"content":"Chat message - should require auth","sig":"62d43f3f81755d4ef81cbfc8aca9abc11f28b0c45640f19d3dd41a09bae746fe7a4e9d8e458c416dcd2cab02deb090ce1e29e8426d9be5445d130eaa00d339f2"}
publishing to ws://localhost:8888...
[SUCCESS] Kind 14 event requested authentication (correct behavior for DMs)
[INFO] Testing other event kinds - should work without authentication...
2025-09-13 08:48:29 - Kind 0 event result: connecting to ws://localhost:8888... ok.
{"kind":0,"id":"3b2cc834dd874ebbe07c2da9e41c07b3f0c61a57b4d6b7299c2243dbad29f2ca","pubkey":"da031504ff61656d1829f723c52f526d7591400fb9e2aecb7b4ef5aeeea66fc7","created_at":1757767709,"tags":[],"content":"Test event kind 0 - should not require auth","sig":"4f2016fde84d72cf5a5aa4c0ec5de677ef06c7971ca2dd756b02a94c47604fae1c67254703a2df3d17b13fee2d9c45661b76086f29ac93820a4c062fc52dea74"}
2025-09-30 11:15:55 - Kind 0 event result: connecting to ws://localhost:8888... ok.
{"kind":0,"id":"069ac4db07da3230681aa37ab9e6a2aa48e2c199245259681e45ffb2f1b21846","pubkey":"ad362b9bbf61b140c5f677a2d091d622fef6fa186c579e6600dd8b24a85a2260","created_at":1759245355,"tags":[],"content":"Test event kind 0 - should not require auth","sig":"3c99b97c0ea2d18bc88fc07b2e95e213b6a6af804512d62158f8fd63cc24a3937533b830f59d38ccacccf98ba2fb0ed7467b16271154d4dd37fbc075eba32e49"}
publishing to ws://localhost:8888... success.
[SUCCESS] Kind 0 event accepted without authentication (correct)
2025-09-13 08:48:29 - Kind 3 event result: connecting to ws://localhost:8888... ok.
{"kind":3,"id":"6e1ea0b1cbf342feea030fa39226c316e730c5d333fa8333495748afd386ec80","pubkey":"da031504ff61656d1829f723c52f526d7591400fb9e2aecb7b4ef5aeeea66fc7","created_at":1757767709,"tags":[],"content":"Test event kind 3 - should not require auth","sig":"e5f66c5f022497f8888f003a8bfbb5e807a2520d314c80889548efa267f9d6de28d5ee7b0588cc8660f2963ab44e530c8a74d71a227148e5a6843fcef4de2197"}
2025-09-30 11:15:56 - Kind 3 event result: connecting to ws://localhost:8888... ok.
{"kind":3,"id":"1dd1ccb13ebd0d50b2aa79dbb938b408a24f0a4dd9f872b717ed91ae6729051c","pubkey":"ad362b9bbf61b140c5f677a2d091d622fef6fa186c579e6600dd8b24a85a2260","created_at":1759245355,"tags":[],"content":"Test event kind 3 - should not require auth","sig":"c205cc76f687c3957cf8b35cd8346fd8c2e44d9ef82324b95a7eef7f57429fb6f2ab1d0263dd5d00204dd90e626d5918a8710341b0d68a5095b41455f49cf0dd"}
publishing to ws://localhost:8888... success.
[SUCCESS] Kind 3 event accepted without authentication (correct)
2025-09-13 08:48:30 - Kind 7 event result: connecting to ws://localhost:8888... ok.
{"kind":7,"id":"a64466b9899cad257313e2dced357fd3f87f40bd7e13e29372689aae7c718919","pubkey":"da031504ff61656d1829f723c52f526d7591400fb9e2aecb7b4ef5aeeea66fc7","created_at":1757767710,"tags":[],"content":"Test event kind 7 - should not require auth","sig":"78d18bcb0c2b11b4e2b74bcdfb140564b4563945e983014a279977356e50b57f3c5a262fa55de26dbd4c8d8b9f5beafbe21af869be64079f54a712284f03d9ac"}
2025-09-30 11:15:56 - Kind 7 event result: connecting to ws://localhost:8888... ok.
{"kind":7,"id":"b6161b1da8a4d362e3c230df99c4f87b6311ef6e9f67e03a2476f8a6366352c1","pubkey":"ad362b9bbf61b140c5f677a2d091d622fef6fa186c579e6600dd8b24a85a2260","created_at":1759245356,"tags":[],"content":"Test event kind 7 - should not require auth","sig":"ab06c4b00a04d726109acd02d663e30188ff9ee854cf877e854fda90dd776a649ef3fab8ae5b530b4e6b5530490dd536a281a721e471bd3748a0dacc4eac9622"}
publishing to ws://localhost:8888... success.
[SUCCESS] Kind 7 event accepted without authentication (correct)
[INFO] Kind-specific authentication test completed
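For a quick manual spot check of the behaviour exercised above, assuming nak is installed and the relay is listening on ws://localhost:8888, a kind 1 note should be accepted outright while a kind 4 DM should prompt the relay to request NIP-42 authentication before accepting it (this is only a sketch; the p tag below simply points back at the sender's own key):

    SK=$(nak key generate)
    PK=$(nak key public "$SK")
    # kind 1: expected to be accepted without authentication
    nak event --kind 1 --content "no auth expected" --sec "$SK" ws://localhost:8888
    # kind 4: expected to trigger an AUTH request from the relay
    nak event --kind 4 --content "auth expected" --sec "$SK" --tag "p=$PK" ws://localhost:8888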

tests/white_black_test.sh Executable file

@@ -0,0 +1,413 @@
#!/bin/bash
# C-Relay Whitelist/Blacklist Test Script
# Tests the relay's authentication functionality using nak
set -e # Exit on any error
# Configuration
RELAY_URL="ws://localhost:8888"
ADMIN_PRIVKEY="aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
ADMIN_PUBKEY="6a04ab98d9e4774ad806e302dddeb63bea16b5cb5f223ee77478e861bb583eb3"
RELAY_PUBKEY="4f355bdcb7cc0af728ef3cceb9615d90684bb5b2ca5f859ab0f0b704075871aa"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Logging functions
log_info() {
    echo -e "${BLUE}[INFO]${NC} $1"
}

log_success() {
    echo -e "${GREEN}[SUCCESS]${NC} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

log_warning() {
    echo -e "${YELLOW}[WARNING]${NC} $1"
}

# Check if nak is installed
check_nak() {
    if ! command -v nak &> /dev/null; then
        log_error "nak command not found. Please install nak first."
        log_error "Visit: https://github.com/fiatjaf/nak"
        exit 1
    fi
    log_success "nak is available"
}

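# Hypothetical addition (not part of the original script): the script also relies on
# jq to extract event IDs below, so a matching guard could sit alongside check_nak.
check_jq() {
    if ! command -v jq &> /dev/null; then
        log_error "jq command not found. Please install jq first."
        exit 1
    fi
    log_success "jq is available"
}
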
# Generate test keypair
generate_test_keypair() {
    log_info "Generating test keypair..."
    # Generate private key
    TEST_PRIVKEY=$(nak key generate 2>/dev/null)
    if [ -z "$TEST_PRIVKEY" ]; then
        log_error "Failed to generate private key"
        exit 1
    fi
    # Derive public key from private key
    TEST_PUBKEY=$(nak key public "$TEST_PRIVKEY" 2>/dev/null)
    if [ -z "$TEST_PUBKEY" ]; then
        log_error "Failed to derive public key from private key"
        exit 1
    fi
    log_success "Generated test keypair:"
    log_info " Private key: $TEST_PRIVKEY"
    log_info " Public key: $TEST_PUBKEY"
}

# Create test event
create_test_event() {
    local timestamp=$(date +%s)
    local content="Test event at timestamp $timestamp"
    log_info "Creating test event (kind 1) with content: '$content'"
    # Create event using nak
    EVENT_JSON=$(nak event \
        --kind 1 \
        --content "$content" \
        --sec "$TEST_PRIVKEY" \
        --tag 't=test')
    # Extract event ID
    EVENT_ID=$(echo "$EVENT_JSON" | jq -r '.id')
    if [ -z "$EVENT_ID" ] || [ "$EVENT_ID" = "null" ]; then
        log_error "Failed to create test event"
        exit 1
    fi
    log_success "Created test event with ID: $EVENT_ID"
}

# Test 1: Post event and verify retrieval
test_post_and_retrieve() {
    log_info "=== TEST 1: Post event and verify retrieval ==="
    # Post the event
    log_info "Posting test event to relay..."
    POST_RESULT=$(echo "$EVENT_JSON" | nak event "$RELAY_URL")
    if echo "$POST_RESULT" | grep -q "error\|failed\|denied"; then
        log_error "Failed to post event: $POST_RESULT"
        return 1
    fi
    log_success "Event posted successfully"
    # Wait a moment for processing
    sleep 2
    # Try to retrieve the event
    log_info "Retrieving event from relay..."
    RETRIEVE_RESULT=$(nak req \
        --id "$EVENT_ID" \
        "$RELAY_URL")
    if echo "$RETRIEVE_RESULT" | grep -q "$EVENT_ID"; then
        log_success "Event successfully retrieved from relay"
        return 0
    else
        log_error "Failed to retrieve event from relay"
        log_error "Query result: $RETRIEVE_RESULT"
        return 1
    fi
}

# Send admin command to add user to blacklist
add_to_blacklist() {
    log_info "Adding test user to blacklist..."
    # Create the admin command
    COMMAND="[\"blacklist\", \"pubkey\", \"$TEST_PUBKEY\"]"
    # Encrypt the command using NIP-44
    ENCRYPTED_COMMAND=$(nak encrypt "$COMMAND" \
        --sec "$ADMIN_PRIVKEY" \
        --recipient-pubkey "$RELAY_PUBKEY")
    if [ -z "$ENCRYPTED_COMMAND" ]; then
        log_error "Failed to encrypt admin command"
        return 1
    fi
    # Create admin event
    ADMIN_EVENT=$(nak event \
        --kind 23456 \
        --content "$ENCRYPTED_COMMAND" \
        --sec "$ADMIN_PRIVKEY" \
        --tag "p=$RELAY_PUBKEY")
    # Post admin event
    ADMIN_RESULT=$(echo "$ADMIN_EVENT" | nak event "$RELAY_URL")
    if echo "$ADMIN_RESULT" | grep -q "error\|failed\|denied"; then
        log_error "Failed to send admin command: $ADMIN_RESULT"
        return 1
    fi
    log_success "Admin command sent successfully - user added to blacklist"
    # Wait for the relay to process the admin command
    sleep 3
}

# Send admin command to add user to whitelist
add_to_whitelist() {
    local pubkey="$1"
    log_info "Adding pubkey to whitelist: ${pubkey:0:16}..."
    # Create the admin command
    COMMAND="[\"whitelist\", \"pubkey\", \"$pubkey\"]"
    # Encrypt the command using NIP-44
    ENCRYPTED_COMMAND=$(nak encrypt "$COMMAND" \
        --sec "$ADMIN_PRIVKEY" \
        --recipient-pubkey "$RELAY_PUBKEY")
    if [ -z "$ENCRYPTED_COMMAND" ]; then
        log_error "Failed to encrypt admin command"
        return 1
    fi
    # Create admin event
    ADMIN_EVENT=$(nak event \
        --kind 23456 \
        --content "$ENCRYPTED_COMMAND" \
        --sec "$ADMIN_PRIVKEY" \
        --tag "p=$RELAY_PUBKEY")
    # Post admin event
    ADMIN_RESULT=$(echo "$ADMIN_EVENT" | nak event "$RELAY_URL")
    if echo "$ADMIN_RESULT" | grep -q "error\|failed\|denied"; then
        log_error "Failed to send admin command: $ADMIN_RESULT"
        return 1
    fi
    log_success "Admin command sent successfully - user added to whitelist"
    # Wait for the relay to process the admin command
    sleep 3
}

# Clear all auth rules
clear_auth_rules() {
    log_info "Clearing all auth rules..."
    # Create the admin command
    COMMAND="[\"system_command\", \"clear_all_auth_rules\"]"
    # Encrypt the command using NIP-44
    ENCRYPTED_COMMAND=$(nak encrypt "$COMMAND" \
        --sec "$ADMIN_PRIVKEY" \
        --recipient-pubkey "$RELAY_PUBKEY")
    if [ -z "$ENCRYPTED_COMMAND" ]; then
        log_error "Failed to encrypt admin command"
        return 1
    fi
    # Create admin event
    ADMIN_EVENT=$(nak event \
        --kind 23456 \
        --content "$ENCRYPTED_COMMAND" \
        --sec "$ADMIN_PRIVKEY" \
        --tag "p=$RELAY_PUBKEY")
    # Post admin event
    ADMIN_RESULT=$(echo "$ADMIN_EVENT" | nak event "$RELAY_URL")
    if echo "$ADMIN_RESULT" | grep -q "error\|failed\|denied"; then
        log_error "Failed to send admin command: $ADMIN_RESULT"
        return 1
    fi
    log_success "Admin command sent successfully - all auth rules cleared"
    # Wait for the relay to process the admin command
    sleep 3
}

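# Hypothetical consolidation (not part of the original script): add_to_blacklist,
# add_to_whitelist and clear_auth_rules repeat the same encrypt/sign/publish steps,
# so a shared helper along these lines could remove the duplication; callers would
# still grep its output for "error|failed|denied" as above.
send_admin_command() {
    local command_json="$1"
    # Encrypt the command for the relay using NIP-44
    local encrypted
    encrypted=$(nak encrypt "$command_json" \
        --sec "$ADMIN_PRIVKEY" \
        --recipient-pubkey "$RELAY_PUBKEY")
    if [ -z "$encrypted" ]; then
        log_error "Failed to encrypt admin command"
        return 1
    fi
    # Wrap it in a kind 23456 admin event addressed to the relay, then publish it
    local admin_event
    admin_event=$(nak event \
        --kind 23456 \
        --content "$encrypted" \
        --sec "$ADMIN_PRIVKEY" \
        --tag "p=$RELAY_PUBKEY")
    echo "$admin_event" | nak event "$RELAY_URL"
    # Give the relay time to process the admin command
    sleep 3
}
# Example: send_admin_command "[\"blacklist\", \"pubkey\", \"$TEST_PUBKEY\"]"
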
# Test 2: Try to post after blacklisting
test_blacklist_post() {
    log_info "=== TEST 2: Attempt to post event after blacklisting ==="
    # Create a new test event
    local timestamp=$(date +%s)
    local content="Blacklisted test event at timestamp $timestamp"
    log_info "Creating new test event for blacklisted user..."
    NEW_EVENT_JSON=$(nak event \
        --kind 1 \
        --content "$content" \
        --sec "$TEST_PRIVKEY" \
        --tag 't=blacklist-test')
    NEW_EVENT_ID=$(echo "$NEW_EVENT_JSON" | jq -r '.id')
    # Try to post the event
    log_info "Attempting to post event with blacklisted user..."
    POST_RESULT=$(echo "$NEW_EVENT_JSON" | nak event "$RELAY_URL" 2>&1)
    # Check if posting failed (should fail for blacklisted user)
    if echo "$POST_RESULT" | grep -q "error\|failed\|denied\|blocked"; then
        log_success "Event posting correctly blocked for blacklisted user"
        return 0
    else
        log_error "Event posting was not blocked - blacklist may not be working"
        log_error "Post result: $POST_RESULT"
        return 1
    fi
}

# Test 3: Test whitelist functionality
test_whitelist_functionality() {
    log_info "=== TEST 3: Test whitelist functionality ==="
    # Generate a second test keypair for whitelist testing
    log_info "Generating second test keypair for whitelist testing..."
    WHITELIST_PRIVKEY=$(nak key generate 2>/dev/null)
    WHITELIST_PUBKEY=$(nak key public "$WHITELIST_PRIVKEY" 2>/dev/null)
    if [ -z "$WHITELIST_PUBKEY" ]; then
        log_error "Failed to generate whitelist test keypair"
        return 1
    fi
    log_success "Generated whitelist test keypair: ${WHITELIST_PUBKEY:0:16}..."
    # Clear all auth rules first
    if ! clear_auth_rules; then
        log_error "Failed to clear auth rules for whitelist test"
        return 1
    fi
    # Add the whitelist user to whitelist
    if ! add_to_whitelist "$WHITELIST_PUBKEY"; then
        log_error "Failed to add whitelist user"
        return 1
    fi
    # Test 3a: Original test user should be blocked (not whitelisted)
    log_info "Testing that non-whitelisted user is blocked..."
    local timestamp=$(date +%s)
    local content="Non-whitelisted test event at timestamp $timestamp"
    NON_WHITELIST_EVENT=$(nak event \
        --kind 1 \
        --content "$content" \
        --sec "$TEST_PRIVKEY" \
        --tag 't=whitelist-test')
    POST_RESULT=$(echo "$NON_WHITELIST_EVENT" | nak event "$RELAY_URL" 2>&1)
    if echo "$POST_RESULT" | grep -q "error\|failed\|denied\|blocked"; then
        log_success "Non-whitelisted user correctly blocked"
    else
        log_error "Non-whitelisted user was not blocked - whitelist may not be working"
        log_error "Post result: $POST_RESULT"
        return 1
    fi
    # Test 3b: Whitelisted user should be allowed
    log_info "Testing that whitelisted user can post..."
    content="Whitelisted test event at timestamp $timestamp"
    WHITELIST_EVENT=$(nak event \
        --kind 1 \
        --content "$content" \
        --sec "$WHITELIST_PRIVKEY" \
        --tag 't=whitelist-test')
    POST_RESULT=$(echo "$WHITELIST_EVENT" | nak event "$RELAY_URL" 2>&1)
    if echo "$POST_RESULT" | grep -q "error\|failed\|denied\|blocked"; then
        log_error "Whitelisted user was blocked - whitelist not working correctly"
        log_error "Post result: $POST_RESULT"
        return 1
    else
        log_success "Whitelisted user can post successfully"
    fi
    # Verify the whitelisted event can be retrieved
    WHITELIST_EVENT_ID=$(echo "$WHITELIST_EVENT" | jq -r '.id')
    sleep 2
    RETRIEVE_RESULT=$(nak req \
        --id "$WHITELIST_EVENT_ID" \
        "$RELAY_URL")
    if echo "$RETRIEVE_RESULT" | grep -q "$WHITELIST_EVENT_ID"; then
        log_success "Whitelisted event successfully retrieved"
        return 0
    else
        log_error "Failed to retrieve whitelisted event"
        return 1
    fi
}

# Main test function
main() {
    log_info "Starting C-Relay Whitelist/Blacklist Test"
    log_info "=========================================="
    # Check prerequisites
    check_nak
    # Generate test keypair
    generate_test_keypair
    # Create test event
    create_test_event
    # Test 1: Post and retrieve
    if test_post_and_retrieve; then
        log_success "TEST 1 PASSED: Event posting and retrieval works"
    else
        log_error "TEST 1 FAILED: Event posting/retrieval failed"
        exit 1
    fi
    # Add user to blacklist
    if add_to_blacklist; then
        log_success "Blacklist command sent successfully"
    else
        log_error "Failed to send blacklist command"
        exit 1
    fi
    # Test 2: Try posting after blacklist
    if test_blacklist_post; then
        log_success "TEST 2 PASSED: Blacklist functionality works correctly"
    else
        log_error "TEST 2 FAILED: Blacklist functionality not working"
        exit 1
    fi
    # Test 3: Test whitelist functionality
    if test_whitelist_functionality; then
        log_success "TEST 3 PASSED: Whitelist functionality works correctly"
    else
        log_error "TEST 3 FAILED: Whitelist functionality not working"
        exit 1
    fi
    log_success "All tests passed! Whitelist/blacklist functionality is working correctly."
}

# Run main function
main "$@"