Compare commits

18 Commits

- 133bb2d002
- edbc4f1359
- 5242f066e7
- af186800fa
- 2bff4a5f44
- edb73d50cf
- 3dc09d55fd
- 079fb1b0f5
- 17b2aa8111
- 78d484cfe0
- 182e12817d
- 9179d57cc9
- 9cb9b746d8
- 57a0089664
- 53f7608872
- 838ce5b45a
- e878b9557e
- 6638d37d6f

src/embedded_web_content.c
@@ -1 +1 @@

@@ -121,8 +121,8 @@ fuser -k 8888/tcp
- Event filtering done at C level, not SQL level for NIP-40 expiration

### Configuration Override Behavior
- CLI port override only affects first-time startup
- After database creation, all config comes from events
- CLI port override applies during first-time startup and existing relay restarts
- After database creation, all config comes from events (but CLI overrides can still be applied)
- Database path cannot be changed after initialization

## Non-Obvious Pitfalls

@@ -5,6 +5,9 @@ ARG DEBUG_BUILD=false

FROM alpine:3.19 AS builder

# Re-declare build argument in this stage
ARG DEBUG_BUILD=false

# Install build dependencies
RUN apk add --no-cache \
    build-base \

NOSTR_RELEASE.md (new file, 511 lines)
@@ -0,0 +1,511 @@

# Relay

I am releasing the code for the nostr relay that I wrote for my own use. The code is free for anyone to use in any way that they wish.

Some of the features of this relay are conventional, and some are unconventional.

## The conventional

This relay is written in C99 with a SQLite database.

It implements the following NIPs.

- [X] NIP-01: Basic protocol flow implementation
- [X] NIP-09: Event deletion
- [X] NIP-11: Relay information document
- [X] NIP-13: Proof of Work
- [X] NIP-15: End of Stored Events Notice
- [X] NIP-20: Command Results
- [X] NIP-33: Parameterized Replaceable Events
- [X] NIP-40: Expiration Timestamp
- [X] NIP-42: Authentication of clients to relays
- [X] NIP-45: Counting results
- [X] NIP-50: Keywords filter
- [X] NIP-70: Protected Events

## The unconventional

### The binaries are fully self-contained.

It should just run on Linux without having to worry about what you have on your system. I want to be able to download and run. No Docker. No dependency hell.

### The relay is a full nostr citizen with its own public and private keys.

For example, you can see my implementation running here:

[https://primal.net/p/nprofile1qqswn2jsmm8lq8evas0v9vhqkdpn9nuujt90mtz60nqgsxndy66es4qjjnhr7](https://primal.net/p/nprofile1qqswn2jsmm8lq8evas0v9vhqkdpn9nuujt90mtz60nqgsxndy66es4qjjnhr7)

What this means in practice is that when you start the program, it generates keys for itself and for its administrator (you can specify these if you wish).

Now the program and the administrator can have verified communication with each other. For example, the administrator can send DMs to the relay, asking for its status and changing its configuration through any client that can handle NIP-17 DMs. The relay can also send notifications to the administrator about its current status, or it can publish its status directly to Nostr as kind-1 notes.

## Quick Start

Get your C-Relay up and running in minutes with a static binary (no dependencies required):

### 1. Download Static Binary

Download the latest static release from the [releases page](https://git.laantungir.net/laantungir/c-relay/releases):

```bash
# Static binary - works on all Linux distributions (no dependencies)
wget https://git.laantungir.net/laantungir/c-relay/releases/download/v0.6.0/c-relay-v0.6.0-linux-x86_64-static
chmod +x c-relay-v0.6.0-linux-x86_64-static
mv c-relay-v0.6.0-linux-x86_64-static c-relay
```

### 2. Start the Relay

Simply run the binary - no configuration files needed:

```bash
./c-relay
```

On first startup, you'll see:

- **Admin Private Key**: Save this securely! You'll need it for administration
- **Relay Public Key**: Your relay's identity on the Nostr network
- **Port Information**: Default is 8888, or the next available port

### 3. Access the Web Interface

Open your browser and navigate to:

```
http://localhost:8888/api/
```

The web interface provides:

- Real-time configuration management
- Database statistics dashboard
- Auth rules management
- Secure admin authentication with your Nostr identity

### 4. Test Your Relay

Test basic connectivity:

```bash
# Fetch the NIP-11 relay information document
curl -H "Accept: application/nostr+json" http://localhost:8888

# Test with a Nostr client
# Add ws://localhost:8888 to your client's relay list
```
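
If you prefer to script that check, here is a minimal sketch that opens a WebSocket connection, issues a simple REQ, and waits for the EOSE notice (NIP-15). It assumes a runtime with a global `WebSocket` (a modern browser, Deno, or Node 22+); the subscription id and filter are arbitrary.

```typescript
// Minimal connectivity check against a local c-relay instance.
// Assumes a global WebSocket (browser, Deno, or Node 22+).
const RELAY_URL = "ws://localhost:8888";
const ws = new WebSocket(RELAY_URL);

ws.onopen = () => {
  // Ask for up to 5 recent kind-1 notes on an arbitrary subscription id.
  ws.send(JSON.stringify(["REQ", "connectivity-check", { kinds: [1], limit: 5 }]));
};

ws.onmessage = (msg) => {
  const [type, ...rest] = JSON.parse(msg.data.toString());
  if (type === "EVENT") {
    console.log("stored event:", rest[1]?.id);
  } else if (type === "EOSE") {
    // End of Stored Events (NIP-15): the relay answered our REQ.
    console.log("relay is up and answering REQs");
    ws.send(JSON.stringify(["CLOSE", "connectivity-check"]));
    ws.close();
  }
};

ws.onerror = (err) => console.error("connection failed:", err);
```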

### 5. Configure Your Relay (Optional)

Use the web interface or send admin commands to customize:

- Relay name and description
- Authentication rules (whitelist/blacklist)
- Connection limits
- Proof-of-work requirements

**That's it!** Your relay is now running with zero configuration required. The event-based configuration system means you can adjust all settings through the web interface or admin API without editing config files.

## Web Admin Interface

C-Relay includes a **built-in web-based administration interface** accessible at `http://localhost:8888/api/`. The interface provides:

- **Real-time Configuration Management**: View and edit all relay settings through a web UI
- **Database Statistics Dashboard**: Monitor event counts, storage usage, and performance metrics
- **Auth Rules Management**: Configure whitelist/blacklist rules for pubkeys
- **NIP-42 Authentication**: Secure access using your Nostr identity
- **Event-Based Updates**: All changes are applied as cryptographically signed Nostr events

The web interface serves embedded static files with no external dependencies and includes proper CORS headers for browser compatibility.

## Administrator API

C-Relay uses an innovative **event-based administration system** where all configuration and management commands are sent as signed Nostr events using the admin private key generated during first startup. All admin commands use **NIP-44 encrypted command arrays** for security and compatibility.

### Authentication

All admin commands require signing with the admin private key displayed during first-time startup. **Save this key securely** - it cannot be recovered and is needed for all administrative operations.

### Event Structure

All admin commands use the same unified event structure with NIP-44 encrypted content:

**Admin Command Event:**

```json
{
  "id": "event_id",
  "pubkey": "admin_public_key",
  "created_at": 1234567890,
  "kind": 23456,
  "content": "AqHBUgcM7dXFYLQuDVzGwMST1G8jtWYyVvYxXhVGEu4nAb4LVw...",
  "tags": [
    ["p", "relay_public_key"]
  ],
  "sig": "event_signature"
}
```

The `content` field contains a NIP-44 encrypted JSON array representing the command.
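
As an illustration, the sketch below shows how an admin client might construct such an event. It assumes the `nostr-tools` library (v2) for key handling, NIP-44 encryption, and signing; the exact `nip44` helper names can differ between library versions, and the keys are placeholders, so treat this as a sketch rather than a reference implementation.

```typescript
// Sketch: build a kind-23456 admin command event with an encrypted command array.
// Assumes nostr-tools v2; nip44 helper names may vary by version.
import { finalizeEvent, getPublicKey } from "nostr-tools/pure";
import { nip44 } from "nostr-tools";
import { hexToBytes } from "@noble/hashes/utils";

const ADMIN_SECRET_HEX = "<admin private key from first startup>"; // placeholder
const RELAY_PUBKEY_HEX = "<relay public key>";                      // placeholder
const adminSecret = hexToBytes(ADMIN_SECRET_HEX);

// Plaintext command array, e.g. a configuration update (see the Admin Commands table below).
const command = [
  "config_update",
  [{ key: "relay_description", value: "My Relay", data_type: "string", category: "relay" }],
];

// NIP-44: derive the shared conversation key, then encrypt the serialized array.
const conversationKey = nip44.v2.utils.getConversationKey(adminSecret, RELAY_PUBKEY_HEX);
const content = nip44.v2.encrypt(JSON.stringify(command), conversationKey);

// Wrap it in the kind-23456 structure shown above and sign it with the admin key.
const adminCommandEvent = finalizeEvent(
  {
    kind: 23456,
    created_at: Math.floor(Date.now() / 1000),
    tags: [["p", RELAY_PUBKEY_HEX]],
    content,
  },
  adminSecret,
);

console.log(adminCommandEvent.pubkey === getPublicKey(adminSecret)); // true
```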

**Admin Response Event:**

```json
["EVENT", "temp_sub_id", {
  "id": "response_event_id",
  "pubkey": "relay_public_key",
  "created_at": 1234567890,
  "kind": 23457,
  "content": "BpKCVhfN8eYtRmPqSvWxZnMkL2gHjUiOp3rTyEwQaS5dFg...",
  "tags": [
    ["p", "admin_public_key"]
  ],
  "sig": "response_event_signature"
}]
```

The `content` field contains a NIP-44 encrypted JSON response object.
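
Continuing the previous sketch (same assumptions: nostr-tools v2 and a global `WebSocket`), publishing the command and decrypting the relay's kind 23457 reply might look like this:

```typescript
// Sketch: publish the admin command and decrypt the kind-23457 response.
// Reuses adminCommandEvent and conversationKey from the previous example.
const ws = new WebSocket("ws://localhost:8888");

ws.onopen = () => {
  // Subscribe to responses addressed to the admin pubkey, then publish the command.
  ws.send(JSON.stringify(["REQ", "admin-responses", {
    kinds: [23457],
    "#p": [adminCommandEvent.pubkey],
  }]));
  ws.send(JSON.stringify(["EVENT", adminCommandEvent]));
};

ws.onmessage = (msg) => {
  const [type, ...rest] = JSON.parse(msg.data.toString());
  if (type === "OK") {
    console.log("command result (NIP-20):", rest);
  } else if (type === "EVENT" && rest[1]?.kind === 23457) {
    // Decrypt the response object with the same NIP-44 conversation key.
    const response = JSON.parse(nip44.v2.decrypt(rest[1].content, conversationKey));
    console.log("relay response:", response.status, response);
    ws.close();
  }
};
```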

### Admin Commands

All commands are sent as NIP-44 encrypted JSON arrays in the event content. The following table lists all available commands:

| Command Type | Command Format | Description |
| --- | --- | --- |
| **Configuration Management** | | |
| `config_update` | `["config_update", [{"key": "auth_enabled", "value": "true", "data_type": "boolean", "category": "auth"}, {"key": "relay_description", "value": "My Relay", "data_type": "string", "category": "relay"}, ...]]` | Update relay configuration parameters (supports multiple updates) |
| `config_query` | `["config_query", "all"]` | Query all configuration parameters |
| **Auth Rules Management** | | |
| `auth_add_blacklist` | `["blacklist", "pubkey", "abc123..."]` | Add pubkey to blacklist |
| `auth_add_whitelist` | `["whitelist", "pubkey", "def456..."]` | Add pubkey to whitelist |
| `auth_delete_rule` | `["delete_auth_rule", "blacklist", "pubkey", "abc123..."]` | Delete specific auth rule |
| `auth_query_all` | `["auth_query", "all"]` | Query all auth rules |
| `auth_query_type` | `["auth_query", "whitelist"]` | Query specific rule type |
| `auth_query_pattern` | `["auth_query", "pattern", "abc123..."]` | Query specific pattern |
| **System Commands** | | |
| `system_clear_auth` | `["system_command", "clear_all_auth_rules"]` | Clear all auth rules |
| `system_status` | `["system_command", "system_status"]` | Get system status |
| `stats_query` | `["stats_query"]` | Get comprehensive database statistics |
| **Database Queries** | | |
| `sql_query` | `["sql_query", "SELECT * FROM events LIMIT 10"]` | Execute read-only SQL query against relay database |

### Available Configuration Keys

**Basic Relay Settings:**

- `relay_name`: Relay name (displayed in NIP-11)
- `relay_description`: Relay description text
- `relay_contact`: Contact information
- `relay_software`: Software URL
- `relay_version`: Software version
- `supported_nips`: Comma-separated list of supported NIP numbers (e.g., "1,2,4,9,11,12,13,15,16,20,22,33,40,42")
- `language_tags`: Comma-separated list of supported language tags (e.g., "en,es,fr" or "*" for all)
- `relay_countries`: Comma-separated list of supported country codes (e.g., "US,CA,MX" or "*" for all)
- `posting_policy`: Posting policy URL or text
- `payments_url`: Payment URL for premium features
- `max_connections`: Maximum concurrent connections
- `max_subscriptions_per_client`: Max subscriptions per client
- `max_event_tags`: Maximum tags per event
- `max_content_length`: Maximum event content length

**Authentication & Access Control:**

- `auth_enabled`: Enable whitelist/blacklist auth rules (`true`/`false`)
- `nip42_auth_required`: Enable NIP-42 cryptographic authentication (`true`/`false`)
- `nip42_auth_required_kinds`: Event kinds requiring NIP-42 auth (comma-separated)
- `nip42_challenge_timeout`: NIP-42 challenge expiration seconds

**Proof of Work & Validation:**

- `pow_min_difficulty`: Minimum proof-of-work difficulty
- `nip40_expiration_enabled`: Enable event expiration (`true`/`false`)

**Monitoring Settings:**

- `kind_24567_reporting_throttle_sec`: Minimum seconds between monitoring events (default: 5)

### Dynamic Configuration Updates

C-Relay supports **dynamic configuration updates** without requiring a restart for most settings. Configuration parameters are categorized as either **dynamic** (can be updated immediately) or **restart-required** (require relay restart to take effect).

**Dynamic Configuration Parameters (No Restart Required):**

- All relay information (NIP-11) settings: `relay_name`, `relay_description`, `relay_contact`, `relay_software`, `relay_version`, `supported_nips`, `language_tags`, `relay_countries`, `posting_policy`, `payments_url`
- Authentication settings: `auth_enabled`, `nip42_auth_required`, `nip42_auth_required_kinds`, `nip42_challenge_timeout`
- Subscription limits: `max_subscriptions_per_client`, `max_total_subscriptions`
- Event validation limits: `max_event_tags`, `max_content_length`, `max_message_length`
- Proof of Work settings: `pow_min_difficulty`, `pow_mode`
- Event expiration settings: `nip40_expiration_enabled`, `nip40_expiration_strict`, `nip40_expiration_filter`, `nip40_expiration_grace_period`

**Restart-Required Configuration Parameters:**

- Connection settings: `max_connections`, `relay_port`
- Database and core system settings

When updating configuration, the admin API response will indicate whether a restart is required for each parameter. Dynamic updates take effect immediately and are reflected in NIP-11 relay information documents without restart.

### Response Format

All admin commands return **signed EVENT responses** via WebSocket, following the standard Nostr protocol. Responses use JSON content with structured data.

#### Response Examples

**Success Response:**

```json
["EVENT", "temp_sub_id", {
  "id": "response_event_id",
  "pubkey": "relay_public_key",
  "created_at": 1234567890,
  "kind": 23457,
  "content": "nip44 encrypted:{\"query_type\": \"config_update\", \"status\": \"success\", \"message\": \"Operation completed successfully\", \"timestamp\": 1234567890}",
  "tags": [
    ["p", "admin_public_key"]
  ],
  "sig": "response_event_signature"
}]
```

**Error Response:**

```json
["EVENT", "temp_sub_id", {
  "id": "response_event_id",
  "pubkey": "relay_public_key",
  "created_at": 1234567890,
  "kind": 23457,
  "content": "nip44 encrypted:{\"query_type\": \"config_update\", \"status\": \"error\", \"error\": \"invalid configuration value\", \"timestamp\": 1234567890}",
  "tags": [
    ["p", "admin_public_key"]
  ],
  "sig": "response_event_signature"
}]
```

**Auth Rules Query Response:**

```json
["EVENT", "temp_sub_id", {
  "id": "response_event_id",
  "pubkey": "relay_public_key",
  "created_at": 1234567890,
  "kind": 23457,
  "content": "nip44 encrypted:{\"query_type\": \"auth_rules_all\", \"total_results\": 2, \"timestamp\": 1234567890, \"data\": [{\"rule_type\": \"blacklist\", \"pattern_type\": \"pubkey\", \"pattern_value\": \"abc123...\", \"action\": \"allow\"}]}",
  "tags": [
    ["p", "admin_public_key"]
  ],
  "sig": "response_event_signature"
}]
```

**Configuration Query Response:**

```json
["EVENT", "temp_sub_id", {
  "id": "response_event_id",
  "pubkey": "relay_public_key",
  "created_at": 1234567890,
  "kind": 23457,
  "content": "nip44 encrypted:{\"query_type\": \"config_all\", \"total_results\": 27, \"timestamp\": 1234567890, \"data\": [{\"key\": \"auth_enabled\", \"value\": \"false\", \"data_type\": \"boolean\", \"category\": \"auth\", \"description\": \"Enable NIP-42 authentication\"}, {\"key\": \"relay_description\", \"value\": \"My Relay\", \"data_type\": \"string\", \"category\": \"relay\", \"description\": \"Relay description text\"}]}",
  "tags": [
    ["p", "admin_public_key"]
  ],
  "sig": "response_event_signature"
}]
```

**Configuration Update Success Response:**

```json
["EVENT", "temp_sub_id", {
  "id": "response_event_id",
  "pubkey": "relay_public_key",
  "created_at": 1234567890,
  "kind": 23457,
  "content": "nip44 encrypted:{\"query_type\": \"config_update\", \"total_results\": 2, \"timestamp\": 1234567890, \"status\": \"success\", \"data\": [{\"key\": \"auth_enabled\", \"value\": \"true\", \"status\": \"updated\"}, {\"key\": \"relay_description\", \"value\": \"My Updated Relay\", \"status\": \"updated\"}]}",
  "tags": [
    ["p", "admin_public_key"]
  ],
  "sig": "response_event_signature"
}]
```

**Configuration Update Error Response:**

```json
["EVENT", "temp_sub_id", {
  "id": "response_event_id",
  "pubkey": "relay_public_key",
  "created_at": 1234567890,
  "kind": 23457,
  "content": "nip44 encrypted:{\"query_type\": \"config_update\", \"status\": \"error\", \"error\": \"field validation failed: invalid port number '99999' (must be 1-65535)\", \"timestamp\": 1234567890}",
  "tags": [
    ["p", "admin_public_key"]
  ],
  "sig": "response_event_signature"
}]
```

**Database Statistics Query Response:**

```json
["EVENT", "temp_sub_id", {
  "id": "response_event_id",
  "pubkey": "relay_public_key",
  "created_at": 1234567890,
  "kind": 23457,
  "content": "nip44 encrypted:{\"query_type\": \"stats_query\", \"timestamp\": 1234567890, \"database_size_bytes\": 1048576, \"total_events\": 15432, \"database_created_at\": 1234567800, \"latest_event_at\": 1234567890, \"event_kinds\": [{\"kind\": 1, \"count\": 12000, \"percentage\": 77.8}, {\"kind\": 0, \"count\": 2500, \"percentage\": 16.2}], \"time_stats\": {\"total\": 15432, \"last_24h\": 234, \"last_7d\": 1456, \"last_30d\": 5432}, \"top_pubkeys\": [{\"pubkey\": \"abc123...\", \"event_count\": 1234, \"percentage\": 8.0}, {\"pubkey\": \"def456...\", \"event_count\": 987, \"percentage\": 6.4}]}",
  "tags": [
    ["p", "admin_public_key"]
  ],
  "sig": "response_event_signature"
}]
```

**SQL Query Response:**

```json
["EVENT", "temp_sub_id", {
  "id": "response_event_id",
  "pubkey": "relay_public_key",
  "created_at": 1234567890,
  "kind": 23457,
  "content": "nip44 encrypted:{\"query_type\": \"sql_query\", \"request_id\": \"request_event_id\", \"timestamp\": 1234567890, \"query\": \"SELECT * FROM events LIMIT 10\", \"execution_time_ms\": 45, \"row_count\": 10, \"columns\": [\"id\", \"pubkey\", \"created_at\", \"kind\", \"content\"], \"rows\": [[\"abc123...\", \"def456...\", 1234567890, 1, \"Hello world\"], ...]}",
  "tags": [
    ["p", "admin_public_key"],
    ["e", "request_event_id"]
  ],
  "sig": "response_event_signature"
}]
```

### SQL Query Command

The `sql_query` command allows administrators to execute read-only SQL queries against the relay database. This provides powerful analytics and debugging capabilities through the admin API.

**Request/Response Correlation:**

- Each response includes the request event ID in both the `tags` array (`["e", "request_event_id"]`) and the decrypted content (`"request_id": "request_event_id"`)
- This allows proper correlation when multiple queries are submitted concurrently
- Frontends can track pending queries and match responses to requests (see the sketch below)
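
A minimal sketch of that bookkeeping, reusing the WebSocket message handling from the earlier examples: pending requests are keyed by their event id, and each kind 23457 response is matched back through its `e` tag.

```typescript
// Sketch: correlate admin request events with kind-23457 responses by event id.
type PendingQuery = { resolve: (encryptedContent: string) => void };
const pending = new Map<string, PendingQuery>();

function trackRequest(requestEvent: { id: string }): Promise<string> {
  return new Promise((resolve) => pending.set(requestEvent.id, { resolve }));
}

function handleResponseEvent(event: { kind: number; tags: string[][]; content: string }) {
  if (event.kind !== 23457) return;
  // The relay echoes the request event id in an ["e", ...] tag.
  const requestId = event.tags.find(([name]) => name === "e")?.[1];
  const entry = requestId ? pending.get(requestId) : undefined;
  if (entry && requestId) {
    pending.delete(requestId);
    // NIP-44 decryption is omitted here; see the response-handling sketch above.
    entry.resolve(event.content);
  }
}
```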

**Security Features:**

- Only SELECT statements allowed (INSERT, UPDATE, DELETE, DROP, etc. are blocked)
- Query timeout: 5 seconds (configurable)
- Result row limit: 1000 rows (configurable)
- All queries logged with execution time

**Available Tables and Views:**

- `events` - All Nostr events
- `config` - Configuration parameters
- `auth_rules` - Authentication rules
- `subscription_events` - Subscription lifecycle log
- `event_broadcasts` - Event broadcast log
- `recent_events` - Last 1000 events (view)
- `event_stats` - Event statistics by type (view)
- `subscription_analytics` - Subscription metrics (view)
- `active_subscriptions_log` - Currently active subscriptions (view)
- `event_kinds_view` - Event distribution by kind (view)
- `top_pubkeys_view` - Top 10 pubkeys by event count (view)
- `time_stats_view` - Time-based statistics (view)

**Example Queries:**

```sql
-- Recent events
SELECT id, pubkey, created_at, kind FROM events ORDER BY created_at DESC LIMIT 20

-- Event distribution by kind
SELECT * FROM event_kinds_view ORDER BY count DESC

-- Active subscriptions
SELECT * FROM active_subscriptions_log ORDER BY created_at DESC

-- Database statistics
SELECT
  (SELECT COUNT(*) FROM events) as total_events,
  (SELECT COUNT(*) FROM subscription_events) as total_subscriptions
```
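
Because decrypted `sql_query` responses carry `columns` and `rows` as parallel arrays (see the SQL Query Response example above), a small helper to zip them back into row objects can be handy. The sketch below also shows the plaintext command array that would be NIP-44 encrypted and sent exactly like the earlier admin-command example; the interface lists only a subset of the response fields.

```typescript
// Plaintext command array for a read-only query; encrypt and publish it
// the same way as the config_update example earlier in this document.
const sqlCommand = [
  "sql_query",
  "SELECT id, kind, created_at FROM events ORDER BY created_at DESC LIMIT 20",
];

// Subset of the decrypted sql_query response shape shown above.
interface SqlQueryResponse {
  query_type: "sql_query";
  request_id: string;
  columns: string[];
  rows: unknown[][];
  row_count: number;
  execution_time_ms: number;
}

// Zip the parallel columns/rows arrays back into plain row objects.
function rowsToObjects(response: SqlQueryResponse): Record<string, unknown>[] {
  return response.rows.map((row) =>
    Object.fromEntries(response.columns.map((name, i) => [name, row[i]])),
  );
}
```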

## Real-time Monitoring System

C-Relay includes a subscription-based monitoring system that broadcasts real-time relay statistics using ephemeral events (kind 24567).

### Activation

The monitoring system activates automatically when clients subscribe to kind 24567 events:

```json
["REQ", "monitoring-sub", {"kinds": [24567]}]
```

For specific monitoring types, use d-tag filters:

```json
["REQ", "event-kinds-sub", {"kinds": [24567], "#d": ["event_kinds"]}]
["REQ", "time-stats-sub", {"kinds": [24567], "#d": ["time_stats"]}]
["REQ", "top-pubkeys-sub", {"kinds": [24567], "#d": ["top_pubkeys"]}]
```
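
For instance, a dashboard script can subscribe to one of these feeds and unpack the JSON payload carried in each event's `content` field. This sketch assumes the same global `WebSocket` as the earlier examples and uses the `event_kinds` feed shown above.

```typescript
// Sketch: subscribe to the relay's kind-24567 monitoring feed and log updates.
const ws = new WebSocket("ws://localhost:8888");

ws.onopen = () => {
  // Filter on the "event_kinds" d tag, as in the REQ examples above.
  ws.send(JSON.stringify(["REQ", "event-kinds-sub", { kinds: [24567], "#d": ["event_kinds"] }]));
};

ws.onmessage = (msg) => {
  const [type, , event] = JSON.parse(msg.data.toString());
  if (type !== "EVENT" || event?.kind !== 24567) return;
  // The statistics payload is plain JSON inside the event content.
  const stats = JSON.parse(event.content);
  console.log(`[${stats.data_type}]`, stats);
};
```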

When no subscriptions exist, monitoring is dormant to conserve resources.

### Monitoring Event Types

| Type | d Tag | Description |
| --- | --- | --- |
| Event Distribution | `event_kinds` | Event count by kind with percentages |
| Time Statistics | `time_stats` | Events in last 24h, 7d, 30d |
| Top Publishers | `top_pubkeys` | Top 10 pubkeys by event count |
| Active Subscriptions | `active_subscriptions` | Current subscription details (admin only) |
| Subscription Details | `subscription_details` | Detailed subscription info (admin only) |
| CPU Metrics | `cpu_metrics` | Process CPU and memory usage |

### Event Structure

```json
{
  "kind": 24567,
  "pubkey": "<relay_pubkey>",
  "created_at": <timestamp>,
  "content": "{\"data_type\":\"event_kinds\",\"timestamp\":1234567890,...}",
  "tags": [
    ["d", "event_kinds"]
  ]
}
```

### Configuration

- `kind_24567_reporting_throttle_sec`: Minimum seconds between monitoring events (default: 5)

### Web Dashboard Integration

The built-in web dashboard (`/api/`) automatically subscribes to monitoring events and displays real-time statistics.

### Performance Considerations

- Monitoring events are ephemeral (not stored in database)
- Throttling prevents excessive event generation
- Automatic activation/deactivation based on subscriptions
- Minimal overhead when no clients are monitoring

## Direct Messaging Admin System

In addition to the admin API above, c-relay allows the administrator to direct message the relay to get information or control some settings. As long as the administrator is signed in with any Nostr client that supports sending NIP-17 direct messages (DMs), they can control the relay.

This is possible because the relay is a full Nostr citizen with its own private and public keys, and it knows the administrator's public key.

**Available DM commands**

The intent is not to be strict about the formatting of the DM. For example, if the relay receives any DM from the administrator containing the word "stats" or "statistics", it will reply with a DM containing the current relay statistics.

- `stats`|`statistics`: Relay statistics
- `config`|`configuration`: Relay configuration

README.md
@@ -195,6 +195,9 @@ All commands are sent as NIP-44 encrypted JSON arrays in the event content. The
@@ -391,6 +394,68 @@ SELECT

api/index.css
@@ -285,7 +285,7 @@ h1 {
|
||||
border-bottom: var(--border-width) solid var(--border-color);
|
||||
padding-bottom: 10px;
|
||||
margin-bottom: 30px;
|
||||
font-weight: normal;
|
||||
font-weight: bold;
|
||||
font-size: 24px;
|
||||
font-family: var(--font-family);
|
||||
color: var(--primary-color);
|
||||
@@ -293,32 +293,57 @@ h1 {
|
||||
|
||||
h2 {
|
||||
font-weight: normal;
|
||||
padding-left: 10px;
|
||||
text-align: center;
|
||||
font-size: 16px;
|
||||
font-family: var(--font-family);
|
||||
color: var(--primary-color);
|
||||
}
|
||||
|
||||
h3 {
|
||||
font-weight: normal;
|
||||
font-size: 12px;
|
||||
font-family: var(--font-family);
|
||||
color: var(--primary-color);
|
||||
padding-bottom: 10px;
|
||||
}
|
||||
|
||||
label {
|
||||
display: block;
|
||||
margin-bottom: 5px;
|
||||
font-weight: lighter;
|
||||
font-size: 10px;
|
||||
font-family: var(--font-family);
|
||||
color: var(--primary-color);
|
||||
}
|
||||
|
||||
|
||||
|
||||
|
||||
.section {
|
||||
background: var(--secondary-color);
|
||||
border: var(--border-width) solid var(--border-color);
|
||||
border-radius: var(--border-radius);
|
||||
padding: 20px;
|
||||
margin-bottom: 20px;
|
||||
margin-left: 5px;
|
||||
margin-right:5px;
|
||||
}
|
||||
|
||||
.section-header {
|
||||
display: flex;
|
||||
justify-content: center;
|
||||
align-items: center;
|
||||
padding-bottom: 15px;
|
||||
}
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
.input-group {
|
||||
margin-bottom: 15px;
|
||||
}
|
||||
|
||||
label {
|
||||
display: block;
|
||||
margin-bottom: 5px;
|
||||
font-weight: bold;
|
||||
font-size: 14px;
|
||||
font-family: var(--font-family);
|
||||
color: var(--primary-color);
|
||||
}
|
||||
|
||||
input,
|
||||
textarea,
|
||||
@@ -491,6 +516,24 @@ button:disabled {
|
||||
border-radius: 0;
|
||||
}
|
||||
|
||||
/* Relay Events Styles */
|
||||
.status-message {
|
||||
margin-top: 10px;
|
||||
padding: 8px;
|
||||
border-radius: var(--border-radius);
|
||||
font-size: 14px;
|
||||
font-family: var(--font-family);
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
.relay-entry {
|
||||
border: var(--border-width) solid var(--border-color);
|
||||
border-radius: var(--border-radius);
|
||||
padding: 10px;
|
||||
margin-bottom: 10px;
|
||||
background: var(--secondary-color);
|
||||
}
|
||||
|
||||
.config-value-input:focus {
|
||||
border: 1px solid var(--accent-color);
|
||||
background: var(--secondary-color);
|
||||
@@ -660,14 +703,7 @@ button:disabled {
|
||||
display: none;
|
||||
}
|
||||
|
||||
.section-header {
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
align-items: center;
|
||||
/* margin-bottom: 15px; */
|
||||
/* border-bottom: var(--border-width) solid var(--border-color); */
|
||||
/* padding-bottom: 10px; */
|
||||
}
|
||||
|
||||
|
||||
.countdown-btn {
|
||||
width: auto;
|
||||
@@ -948,10 +984,8 @@ button:disabled {
|
||||
padding: 6px 8px;
|
||||
text-align: left;
|
||||
font-family: var(--font-family);
|
||||
max-width: 200px;
|
||||
overflow: hidden;
|
||||
text-overflow: ellipsis;
|
||||
white-space: nowrap;
|
||||
min-width: 100px;
|
||||
}
|
||||
|
||||
.sql-results-table th {
|
||||
@@ -1107,3 +1141,123 @@ body.dark-mode .sql-results-table tbody tr:nth-child(even) {
|
||||
border-radius: var(--border-radius);
|
||||
box-sizing: border-box;
|
||||
}
|
||||
|
||||
/* ================================
|
||||
SIDE NAVIGATION MENU
|
||||
================================ */
|
||||
|
||||
.side-nav {
|
||||
position: fixed;
|
||||
top: 0;
|
||||
left: -300px;
|
||||
width: 280px;
|
||||
height: 100vh;
|
||||
background: var(--secondary-color);
|
||||
border-right: var(--border-width) solid var(--border-color);
|
||||
z-index: 1000;
|
||||
transition: left 0.3s ease;
|
||||
overflow-y: auto;
|
||||
padding-top: 80px;
|
||||
}
|
||||
|
||||
.side-nav.open {
|
||||
left: 0;
|
||||
}
|
||||
|
||||
.side-nav-overlay {
|
||||
position: fixed;
|
||||
top: 0;
|
||||
left: 0;
|
||||
width: 100%;
|
||||
height: 100%;
|
||||
background: rgba(0, 0, 0, 0.5);
|
||||
z-index: 999;
|
||||
display: none;
|
||||
}
|
||||
|
||||
.side-nav-overlay.show {
|
||||
display: block;
|
||||
}
|
||||
|
||||
.nav-menu {
|
||||
list-style: none;
|
||||
padding: 0;
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
.nav-menu li {
|
||||
border-bottom: var(--border-width) solid var(--muted-color);
|
||||
}
|
||||
|
||||
.nav-menu li:last-child {
|
||||
border-bottom: none;
|
||||
}
|
||||
|
||||
.nav-item {
|
||||
display: block;
|
||||
padding: 15px 20px;
|
||||
color: var(--primary-color);
|
||||
text-decoration: none;
|
||||
font-family: var(--font-family);
|
||||
font-size: 16px;
|
||||
font-weight: bold;
|
||||
transition: all 0.2s ease;
|
||||
cursor: pointer;
|
||||
border: 2px solid var(--secondary-color);
|
||||
background: none;
|
||||
width: 100%;
|
||||
text-align: left;
|
||||
}
|
||||
|
||||
.nav-item:hover {
|
||||
border: 2px solid var(--secondary-color);
|
||||
background:var(--muted-color);
|
||||
color: var(--accent-color);
|
||||
}
|
||||
|
||||
.nav-item.active {
|
||||
text-decoration: underline;
|
||||
padding-left: 16px;
|
||||
}
|
||||
|
||||
.nav-footer {
|
||||
position: absolute;
|
||||
bottom: 20px;
|
||||
left: 0;
|
||||
right: 0;
|
||||
padding: 0 20px;
|
||||
}
|
||||
|
||||
.nav-footer-btn {
|
||||
display: block;
|
||||
width: 100%;
|
||||
padding: 12px 20px;
|
||||
margin-bottom: 8px;
|
||||
color: var(--primary-color);
|
||||
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: 4px;
|
||||
font-family: var(--font-family);
|
||||
font-size: 14px;
|
||||
font-weight: bold;
|
||||
cursor: pointer;
|
||||
transition: all 0.2s ease;
|
||||
}
|
||||
|
||||
.nav-footer-btn:hover {
|
||||
background:var(--muted-color);
|
||||
border-color: var(--accent-color);
|
||||
}
|
||||
|
||||
.nav-footer-btn:last-child {
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
.header-title.clickable {
|
||||
cursor: pointer;
|
||||
transition: all 0.2s ease;
|
||||
}
|
||||
|
||||
.header-title.clickable:hover {
|
||||
opacity: 0.8;
|
||||
}
|
||||
|
||||

api/index.html
@@ -9,37 +9,54 @@
|
||||
</head>
|
||||
|
||||
<body>
|
||||
<!-- Side Navigation Menu -->
|
||||
<nav class="side-nav" id="side-nav">
|
||||
<ul class="nav-menu">
|
||||
<li><button class="nav-item" data-page="statistics">Statistics</button></li>
|
||||
<li><button class="nav-item" data-page="subscriptions">Subscriptions</button></li>
|
||||
<li><button class="nav-item" data-page="configuration">Configuration</button></li>
|
||||
<li><button class="nav-item" data-page="authorization">Authorization</button></li>
|
||||
<li><button class="nav-item" data-page="relay-events">Relay Events</button></li>
|
||||
<li><button class="nav-item" data-page="dm">DM</button></li>
|
||||
<li><button class="nav-item" data-page="database">Database Query</button></li>
|
||||
</ul>
|
||||
<div class="nav-footer">
|
||||
<button class="nav-footer-btn" id="nav-dark-mode-btn">DARK MODE</button>
|
||||
<button class="nav-footer-btn" id="nav-logout-btn">LOGOUT</button>
|
||||
</div>
|
||||
</nav>
|
||||
|
||||
<!-- Side Navigation Overlay -->
|
||||
<div class="side-nav-overlay" id="side-nav-overlay"></div>
|
||||
|
||||
<!-- Header with title and profile display -->
|
||||
<div class="section">
|
||||
|
||||
<div class="header-content">
|
||||
<div class="header-title">
|
||||
<span class="relay-letter" data-letter="R">R</span>
|
||||
<span class="relay-letter" data-letter="E">E</span>
|
||||
<span class="relay-letter" data-letter="L">L</span>
|
||||
<span class="relay-letter" data-letter="A">A</span>
|
||||
<span class="relay-letter" data-letter="Y">Y</span>
|
||||
</div>
|
||||
<div class="relay-info">
|
||||
<div id="relay-name" class="relay-name">C-Relay</div>
|
||||
<div id="relay-description" class="relay-description">Loading...</div>
|
||||
<div id="relay-pubkey-container" class="relay-pubkey-container">
|
||||
<div id="relay-pubkey" class="relay-pubkey">Loading...</div>
|
||||
</div>
|
||||
</div>
|
||||
<div class="profile-area" id="profile-area" style="display: none;">
|
||||
<div class="admin-label">admin</div>
|
||||
<div class="profile-container">
|
||||
<img id="header-user-image" class="header-user-image" alt="Profile" style="display: none;">
|
||||
<span id="header-user-name" class="header-user-name">Loading...</span>
|
||||
</div>
|
||||
<!-- Logout dropdown -->
|
||||
<div class="logout-dropdown" id="logout-dropdown" style="display: none;">
|
||||
<button type="button" id="dark-mode-btn" class="logout-btn">🌙 DARK MODE</button>
|
||||
<button type="button" id="logout-btn" class="logout-btn">LOGOUT</button>
|
||||
</div>
|
||||
<div class="header-content">
|
||||
<div class="header-title clickable" id="header-title">
|
||||
<span class="relay-letter" data-letter="R">R</span>
|
||||
<span class="relay-letter" data-letter="E">E</span>
|
||||
<span class="relay-letter" data-letter="L">L</span>
|
||||
<span class="relay-letter" data-letter="A">A</span>
|
||||
<span class="relay-letter" data-letter="Y">Y</span>
|
||||
</div>
|
||||
<div class="relay-info">
|
||||
<div id="relay-name" class="relay-name">C-Relay</div>
|
||||
<div id="relay-description" class="relay-description">Loading...</div>
|
||||
<div id="relay-pubkey-container" class="relay-pubkey-container">
|
||||
<div id="relay-pubkey" class="relay-pubkey">Loading...</div>
|
||||
</div>
|
||||
</div>
|
||||
<div class="profile-area" id="profile-area" style="display: none;">
|
||||
<div class="admin-label">admin</div>
|
||||
<div class="profile-container">
|
||||
<img id="header-user-image" class="header-user-image" alt="Profile" style="display: none;">
|
||||
<span id="header-user-name" class="header-user-name">Loading...</span>
|
||||
</div>
|
||||
<!-- Logout dropdown -->
|
||||
<!-- Dropdown menu removed - buttons moved to sidebar -->
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</div>
|
||||
|
||||
@@ -51,12 +68,10 @@
|
||||
</div>
|
||||
|
||||
<!-- DATABASE STATISTICS Section -->
|
||||
<!-- Subscribe to kind 24567 events to receive real-time monitoring data -->
|
||||
<div class="section flex-section" id="databaseStatisticsSection" style="display: none;">
|
||||
<div class="section-header">
|
||||
<h2>DATABASE STATISTICS</h2>
|
||||
<!-- Monitoring toggle button will be inserted here by JavaScript -->
|
||||
<!-- Temporarily disable auto-refresh button for real-time monitoring -->
|
||||
<!-- <button type="button" id="refresh-stats-btn" class="countdown-btn"></button> -->
|
||||
DATABASE STATISTICS
|
||||
</div>
|
||||
|
||||
<!-- Event Rate Graph Container -->
|
||||
@@ -81,10 +96,26 @@
|
||||
<td>Total Events</td>
|
||||
<td id="total-events">-</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td>Process ID</td>
|
||||
<td id="process-id">-</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td>Active Subscriptions</td>
|
||||
<td id="active-subscriptions">-</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td>Memory Usage</td>
|
||||
<td id="memory-usage">-</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td>CPU Core</td>
|
||||
<td id="cpu-core">-</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td>CPU Usage</td>
|
||||
<td id="cpu-usage">-</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td>Oldest Event</td>
|
||||
<td id="oldest-event">-</td>
|
||||
@@ -175,7 +206,7 @@
|
||||
<!-- SUBSCRIPTION DETAILS Section (Admin Only) -->
|
||||
<div class="section flex-section" id="subscriptionDetailsSection" style="display: none;">
|
||||
<div class="section-header">
|
||||
<h2>ACTIVE SUBSCRIPTION DETAILS</h2>
|
||||
ACTIVE SUBSCRIPTION DETAILS
|
||||
</div>
|
||||
|
||||
<div class="input-group">
|
||||
@@ -185,15 +216,14 @@
|
||||
<tr>
|
||||
<th>Subscription ID</th>
|
||||
<th>Client IP</th>
|
||||
<th>WSI Pointer</th>
|
||||
<th>Duration</th>
|
||||
<th>Events Sent</th>
|
||||
<th>Status</th>
|
||||
<th>Filters</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody id="subscription-details-table-body">
|
||||
<tr>
|
||||
<td colspan="6" style="text-align: center; font-style: italic;">No subscriptions active</td>
|
||||
<td colspan="5" style="text-align: center; font-style: italic;">No subscriptions active</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
@@ -203,7 +233,9 @@
|
||||
|
||||
<!-- Testing Section -->
|
||||
<div id="div_config" class="section flex-section" style="display: none;">
|
||||
<h2>RELAY CONFIGURATION</h2>
|
||||
<div class="section-header">
|
||||
RELAY CONFIGURATION
|
||||
</div>
|
||||
<div id="config-display" class="hidden">
|
||||
<div class="config-table-container">
|
||||
<table class="config-table" id="config-table">
|
||||
@@ -230,7 +262,7 @@
|
||||
<!-- Auth Rules Management - Moved after configuration -->
|
||||
<div class="section flex-section" id="authRulesSection" style="display: none;">
|
||||
<div class="section-header">
|
||||
<h2>AUTH RULES MANAGEMENT</h2>
|
||||
AUTH RULES MANAGEMENT
|
||||
</div>
|
||||
|
||||
<!-- Auth Rules Table -->
|
||||
@@ -256,23 +288,23 @@
|
||||
<!-- Combined Pubkey Auth Rule Section -->
|
||||
|
||||
|
||||
<div class="input-group">
|
||||
<label for="authRulePubkey">Pubkey (nsec or hex):</label>
|
||||
<input type="text" id="authRulePubkey" placeholder="nsec1... or 64-character hex pubkey">
|
||||
<div class="input-group">
|
||||
<label for="authRulePubkey">Pubkey (nsec or hex):</label>
|
||||
<input type="text" id="authRulePubkey" placeholder="nsec1... or 64-character hex pubkey">
|
||||
|
||||
</div>
|
||||
<div id="whitelistWarning" class="warning-box" style="display: none;">
|
||||
<strong>⚠️ WARNING:</strong> Adding whitelist rules changes relay behavior to whitelist-only
|
||||
mode.
|
||||
Only whitelisted users will be able to interact with the relay.
|
||||
</div>
|
||||
<div class="inline-buttons">
|
||||
<button type="button" id="addWhitelistBtn" onclick="addWhitelistRule()">ADD TO
|
||||
WHITELIST</button>
|
||||
<button type="button" id="addBlacklistBtn" onclick="addBlacklistRule()">ADD TO
|
||||
BLACKLIST</button>
|
||||
<button type="button" id="refreshAuthRulesBtn">REFRESH</button>
|
||||
</div>
|
||||
</div>
|
||||
<div id="whitelistWarning" class="warning-box" style="display: none;">
|
||||
<strong>⚠️ WARNING:</strong> Adding whitelist rules changes relay behavior to whitelist-only
|
||||
mode.
|
||||
Only whitelisted users will be able to interact with the relay.
|
||||
</div>
|
||||
<div class="inline-buttons">
|
||||
<button type="button" id="addWhitelistBtn" onclick="addWhitelistRule()">ADD TO
|
||||
WHITELIST</button>
|
||||
<button type="button" id="addBlacklistBtn" onclick="addBlacklistRule()">ADD TO
|
||||
BLACKLIST</button>
|
||||
<button type="button" id="refreshAuthRulesBtn">REFRESH</button>
|
||||
</div>
|
||||
|
||||
|
||||
</div>
|
||||
@@ -292,7 +324,7 @@
|
||||
</div>
|
||||
|
||||
<!-- Outbox -->
|
||||
<div class="input-group">
|
||||
<div>
|
||||
<label for="dm-outbox">Send Message to Relay:</label>
|
||||
<textarea id="dm-outbox" rows="4" placeholder="Enter your message to send to the relay..."></textarea>
|
||||
</div>
|
||||
@@ -311,6 +343,72 @@
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- RELAY EVENTS Section -->
|
||||
<div class="section" id="relayEventsSection" style="display: none;">
|
||||
<div class="section-header">
|
||||
RELAY EVENTS MANAGEMENT
|
||||
</div>
|
||||
|
||||
<!-- Kind 0: User Metadata -->
|
||||
<div class="input-group">
|
||||
<h3>Kind 0: User Metadata</h3>
|
||||
<div class="form-group">
|
||||
<label for="kind0-name">Name:</label>
|
||||
<input type="text" id="kind0-name" placeholder="Relay Name">
|
||||
</div>
|
||||
<div class="form-group">
|
||||
<label for="kind0-about">About:</label>
|
||||
<textarea id="kind0-about" rows="3" placeholder="Relay Description"></textarea>
|
||||
</div>
|
||||
<div class="form-group">
|
||||
<label for="kind0-picture">Picture URL:</label>
|
||||
<input type="url" id="kind0-picture" placeholder="https://example.com/logo.png">
|
||||
</div>
|
||||
<div class="form-group">
|
||||
<label for="kind0-banner">Banner URL:</label>
|
||||
<input type="url" id="kind0-banner" placeholder="https://example.com/banner.png">
|
||||
</div>
|
||||
<div class="form-group">
|
||||
<label for="kind0-nip05">NIP-05:</label>
|
||||
<input type="text" id="kind0-nip05" placeholder="relay@example.com">
|
||||
</div>
|
||||
<div class="form-group">
|
||||
<label for="kind0-website">Website:</label>
|
||||
<input type="url" id="kind0-website" placeholder="https://example.com">
|
||||
</div>
|
||||
<div class="inline-buttons">
|
||||
<button type="button" id="submit-kind0-btn">UPDATE METADATA</button>
|
||||
</div>
|
||||
<div id="kind0-status" class="status-message"></div>
|
||||
</div>
|
||||
|
||||
<!-- Kind 10050: DM Relay List -->
|
||||
<div class="input-group">
|
||||
<h3>Kind 10050: DM Relay List</h3>
|
||||
<div class="form-group">
|
||||
<label for="kind10050-relays">Relay URLs (one per line):</label>
|
||||
<textarea id="kind10050-relays" rows="4" placeholder="wss://relay1.com wss://relay2.com"></textarea>
|
||||
</div>
|
||||
<div class="inline-buttons">
|
||||
<button type="button" id="submit-kind10050-btn">UPDATE DM RELAYS</button>
|
||||
</div>
|
||||
<div id="kind10050-status" class="status-message"></div>
|
||||
</div>
|
||||
|
||||
<!-- Kind 10002: Relay List -->
|
||||
<div class="input-group">
|
||||
<h3>Kind 10002: Relay List</h3>
|
||||
<div id="kind10002-relay-entries">
|
||||
<!-- Dynamic relay entries will be added here -->
|
||||
</div>
|
||||
<div class="inline-buttons">
|
||||
<button type="button" id="add-relay-entry-btn">ADD RELAY</button>
|
||||
<button type="button" id="submit-kind10002-btn">UPDATE RELAYS</button>
|
||||
</div>
|
||||
<div id="kind10002-status" class="status-message"></div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- SQL QUERY Section -->
|
||||
<div class="section" id="sqlQuerySection" style="display: none;">
|
||||
<div class="section-header">
|
||||
|
||||

api/index.js
File diff suppressed because it is too large
@@ -18,6 +18,7 @@ class ASCIIBarChart {
|
||||
* @param {boolean} [options.useBinMode=false] - Enable time bin mode for data aggregation
|
||||
* @param {number} [options.binDuration=10000] - Duration of each time bin in milliseconds (10 seconds default)
|
||||
* @param {string} [options.xAxisLabelFormat='elapsed'] - X-axis label format: 'elapsed', 'bins', 'timestamps', 'ranges'
|
||||
* @param {boolean} [options.debug=false] - Enable debug logging
|
||||
*/
|
||||
constructor(containerId, options = {}) {
|
||||
this.container = document.getElementById(containerId);
|
||||
@@ -29,6 +30,7 @@ class ASCIIBarChart {
|
||||
this.xAxisLabel = options.xAxisLabel || '';
|
||||
this.yAxisLabel = options.yAxisLabel || '';
|
||||
this.autoFitWidth = options.autoFitWidth !== false; // Default to true
|
||||
this.debug = options.debug || false; // Debug logging option
|
||||
|
||||
// Time bin configuration
|
||||
this.useBinMode = options.useBinMode !== false; // Default to true
|
||||
@@ -55,32 +57,21 @@ class ASCIIBarChart {
|
||||
this.initializeBins();
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Add a new data point to the chart
|
||||
* @param {number} value - The numeric value to add
|
||||
*/
|
||||
addValue(value) {
|
||||
if (this.useBinMode) {
|
||||
// Time bin mode: increment count in current active bin
|
||||
this.checkBinRotation(); // Ensure we have an active bin
|
||||
this.bins[this.currentBinIndex].count++;
|
||||
this.totalDataPoints++;
|
||||
} else {
|
||||
// Legacy mode: add individual values
|
||||
this.data.push(value);
|
||||
this.totalDataPoints++;
|
||||
|
||||
// Keep only the most recent data points
|
||||
if (this.data.length > this.maxDataPoints) {
|
||||
this.data.shift();
|
||||
}
|
||||
}
|
||||
// Time bin mode: add value to current active bin count
|
||||
this.checkBinRotation(); // Ensure we have an active bin
|
||||
this.bins[this.currentBinIndex].count += value; // Changed from ++ to += value
|
||||
this.totalDataPoints++;
|
||||
|
||||
this.render();
|
||||
this.updateInfo();
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Clear all data from the chart
|
||||
*/
|
||||
@@ -98,7 +89,7 @@ class ASCIIBarChart {
|
||||
this.render();
|
||||
this.updateInfo();
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Calculate the width of the chart in characters
|
||||
* @returns {number} The chart width in characters
|
||||
@@ -119,14 +110,14 @@ class ASCIIBarChart {
|
||||
const totalWidth = yAxisPadding + yAxisNumbers + separator + dataWidth + padding;
|
||||
|
||||
// Only log when width changes
|
||||
if (this.lastChartWidth !== totalWidth) {
|
||||
if (this.debug && this.lastChartWidth !== totalWidth) {
|
||||
console.log('getChartWidth changed:', { dataLength, totalWidth, previous: this.lastChartWidth });
|
||||
this.lastChartWidth = totalWidth;
|
||||
}
|
||||
|
||||
return totalWidth;
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Adjust font size to fit container width
|
||||
* @private
|
||||
@@ -142,7 +133,7 @@ class ASCIIBarChart {
|
||||
// Calculate optimal font size
|
||||
// For monospace fonts, character width is approximately 0.6 * font size
|
||||
// Use a slightly smaller ratio to fit more content
|
||||
const charWidthRatio = 0.6;
|
||||
const charWidthRatio = 0.7;
|
||||
const padding = 30; // Reduce padding to fit more content
|
||||
const availableWidth = containerWidth - padding;
|
||||
const optimalFontSize = Math.floor((availableWidth / chartWidth) / charWidthRatio);
|
||||
@@ -151,7 +142,7 @@ class ASCIIBarChart {
|
||||
const fontSize = Math.max(4, Math.min(20, optimalFontSize));
|
||||
|
||||
// Only log when font size changes
|
||||
if (this.lastFontSize !== fontSize) {
|
||||
if (this.debug && this.lastFontSize !== fontSize) {
|
||||
console.log('fontSize changed:', { containerWidth, chartWidth, fontSize, previous: this.lastFontSize });
|
||||
this.lastFontSize = fontSize;
|
||||
}
|
||||
@@ -159,7 +150,7 @@ class ASCIIBarChart {
|
||||
this.container.style.fontSize = fontSize + 'px';
|
||||
this.container.style.lineHeight = '1.0';
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Render the chart to the container
|
||||
* @private
|
||||
@@ -190,7 +181,9 @@ class ASCIIBarChart {
|
||||
}
|
||||
});
|
||||
|
||||
console.log('render() dataToRender:', dataToRender, 'bins length:', this.bins.length);
|
||||
if (this.debug) {
|
||||
console.log('render() dataToRender:', dataToRender, 'bins length:', this.bins.length);
|
||||
}
|
||||
maxValue = Math.max(...dataToRender);
|
||||
minValue = Math.min(...dataToRender);
|
||||
valueRange = maxValue - minValue;
|
||||
@@ -219,12 +212,12 @@ class ASCIIBarChart {
|
||||
const yAxisPadding = this.yAxisLabel ? ' ' : '';
|
||||
|
||||
// Add title if provided (centered)
|
||||
if (this.title) {
|
||||
// const chartWidth = 4 + this.maxDataPoints * 2; // Y-axis numbers + data columns // TEMP: commented for no-space test
|
||||
const chartWidth = 4 + this.maxDataPoints; // Y-axis numbers + data columns // TEMP: adjusted for no-space columns
|
||||
const titlePadding = Math.floor((chartWidth - this.title.length) / 2);
|
||||
output += yAxisPadding + ' '.repeat(Math.max(0, titlePadding)) + this.title + '\n\n';
|
||||
}
|
||||
if (this.title) {
|
||||
// const chartWidth = 4 + this.maxDataPoints * 2; // Y-axis numbers + data columns // TEMP: commented for no-space test
|
||||
const chartWidth = 4 + this.maxDataPoints; // Y-axis numbers + data columns // TEMP: adjusted for no-space columns
|
||||
const titlePadding = Math.floor((chartWidth - this.title.length) / 2);
|
||||
output += yAxisPadding + ' '.repeat(Math.max(0, titlePadding)) + this.title + '\n\n';
|
||||
}
|
||||
|
||||
// Draw from top to bottom
|
||||
for (let row = scale; row > 0; row--) {
|
||||
@@ -243,8 +236,8 @@ class ASCIIBarChart {
|
||||
}
|
||||
}
|
||||
|
||||
// Calculate the actual count value this row represents (0 at bottom, increasing upward)
|
||||
const rowCount = (row - 1) * scaleFactor;
|
||||
// Calculate the actual count value this row represents (1 at bottom, increasing upward)
|
||||
const rowCount = (row - 1) * scaleFactor + 1;
|
||||
|
||||
// Add Y-axis label (show actual count values)
|
||||
line += String(rowCount).padStart(3, ' ') + ' |';
|
||||
@@ -267,75 +260,75 @@ class ASCIIBarChart {
|
||||
}
|
||||
|
||||
// Draw X-axis
|
||||
// output += yAxisPadding + ' +' + '-'.repeat(this.maxDataPoints * 2) + '\n'; // TEMP: commented out for no-space test
|
||||
output += yAxisPadding + ' +' + '-'.repeat(this.maxDataPoints) + '\n'; // TEMP: back to original length
|
||||
// output += yAxisPadding + ' +' + '-'.repeat(this.maxDataPoints * 2) + '\n'; // TEMP: commented out for no-space test
|
||||
output += yAxisPadding + ' +' + '-'.repeat(this.maxDataPoints) + '\n'; // TEMP: back to original length
|
||||
|
||||
// Draw X-axis labels based on mode and format
|
||||
let xAxisLabels = yAxisPadding + ' '; // Initial padding to align with X-axis

// Determine label interval (every 5 columns)
const labelInterval = 5;

// Generate all labels first and store in array
let labels = [];
for (let i = 0; i < this.maxDataPoints; i++) {
    if (i % labelInterval === 0) {
        let label = '';
        if (this.useBinMode) {
            // For bin mode, show labels for all possible positions
            // i=0 is leftmost (most recent), i=maxDataPoints-1 is rightmost (oldest)
            const elapsedSec = (i * this.binDuration) / 1000;
            // Format with appropriate precision for sub-second bins
            if (this.binDuration < 1000) {
                // Show decimal seconds for sub-second bins
                label = elapsedSec.toFixed(1) + 's';
            } else {
                // Show whole seconds for 1+ second bins
                label = String(Math.round(elapsedSec)) + 's';
            }
        } else {
            // For legacy mode, show data point numbers
            const startIndex = Math.max(1, this.totalDataPoints - this.maxDataPoints + 1);
            label = String(startIndex + i);
        }
        labels.push(label);
    }
}

// Build the label string with calculated spacing
for (let i = 0; i < labels.length; i++) {
    const label = labels[i];
    xAxisLabels += label;

    // Add spacing: labelInterval - label.length (except for last label)
    if (i < labels.length - 1) {
        const spacing = labelInterval - label.length;
        xAxisLabels += ' '.repeat(spacing);
    }
}

// Ensure the label line extends to match the X-axis dash line length
// The dash line is this.maxDataPoints characters long, starting after " +"
const dashLineLength = this.maxDataPoints;
const minLabelLineLength = yAxisPadding.length + 4 + dashLineLength; // 4 for " "
if (xAxisLabels.length < minLabelLineLength) {
    xAxisLabels += ' '.repeat(minLabelLineLength - xAxisLabels.length);
}
output += xAxisLabels + '\n';

// Add X-axis label if provided
if (this.xAxisLabel) {
    // const labelPadding = Math.floor((this.maxDataPoints * 2 - this.xAxisLabel.length) / 2); // TEMP: commented for no-space test
    const labelPadding = Math.floor((this.maxDataPoints - this.xAxisLabel.length) / 2); // TEMP: adjusted for no-space columns
    output += '\n' + yAxisPadding + ' ' + ' '.repeat(Math.max(0, labelPadding)) + this.xAxisLabel + '\n';
}

this.container.textContent = output;

// Adjust font size to fit width (only once at initialization)
if (this.autoFitWidth) {
    this.adjustFontSize();
}

// Update the external info display
if (this.useBinMode) {
@@ -350,7 +343,7 @@ class ASCIIBarChart {
        document.getElementById('scale').textContent = `Min: ${minValue}, Max: ${maxValue}, Height: ${scale}`;
    }
}

/**
 * Update the info display
 * @private
72
debug.log
Normal file
@@ -0,0 +1,72 @@
|
||||
|
||||
=== NOSTR WebSocket Debug Log Started ===
|
||||
[14:16:28.243] SEND localhost:8888: ["EVENT", {
|
||||
"pubkey": "193279d1459ba1399aadb954422bf8595aa77367dccf482c682f5f208e435844",
|
||||
"created_at": 1761499411,
|
||||
"kind": 1059,
|
||||
"tags": [["p", "4f355bdcb7cc0af728ef3cceb9615d90684bb5b2ca5f859ab0f0b704075871aa"]],
|
||||
"content": "AmWNi4P5J126kk69XH2o5mYvGj+69+Fjfr/nZx892I2z8edkwtp2IH7XAnPUqdGPu7x1xiZF1sNfr21VKThOhE54K/uQHLFydZN3acgUfX13sCeWhrvnQD0EvjvZC6QzW9DfFayYoYl+rEPYcra1/N68a+N1R7XnNcf1K/ZFh5Grcnln0H5YdXKRBhQI9aai4iFp1VGy2V0IR+6gDJGbJ7TbAbD3wgGWv1i77C03skH3RgzH+f2b7VBtm+vjKX6q7v6v8j3w1lRFE5Qh0Tqgedh3+UsnwqQta7OCzF9OyAVPK7EqKQBss5LzYSRUpcCE1vw5b7I7yeBFwU9WfnGLUW+uZxMJ2C3P4NBBrVO8UFIkBrPL2cqkoD5c8DgMLJjXGmc4EWfB4ZWb3KjbfLbgi6DVQ++cDjBbnCOPhX+/4qOnWq+gI28e/xk3cBvQtgUOkvWX3oGl3/Q33u4UGtxkFEXGfzdKHVDkR86kqf7RMZjIwTjLGpx4uov0cNmzj07hYEdoG/lJ4yA1v/GyF7viJdnnz3tE0hCZaViqSCev0rfUHWRDDMXJzJ9SS+OwpVswSG4NKvYsDhDM89BjhFs08HshTFdIh2AY45jR/16CsZM9JudH5BwqcX23wToYdZ+lrerOA0EkYb0DJUzGVe4lMpdJZoB8qXLHxMAKwKu0UEWEkeBnnZbvTGwCRbfGorxwPrnyqUCy9tzJx0GOLhRIzBmt6lki607VLDYjK97VIz0dff3fyWPAfy/yBlO2nHhVubUgpPaAjcaYNkO/iZwuP8oJkClWWmKwAQoNoxt+Ly2llrkz+Ne8oXMQdJSq416x6MLHo2JbKH8uwjx0yKG0oldLyWaz3A8OHYkJuxOi7HPVTlOOJrsjG4kMn2g97rVUXLs5v9F/StOjzxtiQWmCBtCsvK2LEEK/DzfavcJstEMxQztJjhiYRO3MJanL7lN2zu1ZHO149FJrgqGV6RQ8DDXf55yuabqHilBuUSDKpI0gl0+Efuor1my+L9J7MjJQ83aSwGizX7uXedMsGQRcvU3++Uvbw7sd2l67fb7IoYU04TPGZkIm120qwf7GAUpnDL7Lhulu/9LFMFs3UnGl9cLzY6EAJtDANHjMAoXbGbYclnoSiNW4yr3X9PBHO5o2YhIxfpTyEgLebJLOkzoziuCTpX8/MdhOhFtlIyo5B8Mbt5GDOHh4x1ZMKOl02J00Vvgui0hLw4Vri8Lz/ErPIRSlrEOB+8K5zPzJy/bD8XrOKlOwSbF5j9dsqs+8uCTC/v9YNQ0cC9wP7gVAxErQ3suJVeV7pzY+eGR051AcW7ppTs1gShhxDDaSaKdMlrkBdFDZcCJ+tomSgW56bOi45erpmk8Lcv4RrBzjBtq1hz+XSaTBAtEnGtHNH2uOn7KP/NNaD38dYkpb3N1VR3zuV67RcuPZeB+5WR9jhnLoSMGox2s=",
|
||||
"id": "c6c18d902744fc0aaa4ca9172b3bcd0dde3fd7d943b41b2a39a16927ede67804",
|
||||
"sig": "d67e0e914aa361c528510efd216548b6734a5fa68c46426571fbc87626bf19a9ec46e16883e7fad700f4fee5cfffd9bba03c3c08e57938fbca77a28b30a32bb7"
|
||||
}]
|
||||
[14:16:28.256] RECV localhost:8888: ["OK", "c6c18d902744fc0aaa4ca9172b3bcd0dde3fd7d943b41b2a39a16927ede67804", true, ""]
|
||||
|
||||
=== NOSTR WebSocket Debug Log Started ===
|
||||
[15:01:18.592] SEND localhost:8888: ["EVENT", {
|
||||
"pubkey": "ec9578ade9e74358ed35d8091d41bfa277e86d649614a8865e3725e38ebe5bc9",
|
||||
"created_at": 1761502101,
|
||||
"kind": 1059,
|
||||
"tags": [["p", "4f355bdcb7cc0af728ef3cceb9615d90684bb5b2ca5f859ab0f0b704075871aa"]],
|
||||
"content": "AlgLnVwti8Dk2nu0e4bMrXeZiR/u+RPnA85kpts2svaFGfMByS4iap7xqdiSrXpSQPjQsix6jP9Qiy1a6rrvC6MutqTi3JfsMexLR61/ZKTK41sWTXNDTT3keH543vx3fVQH1mq+LgG4mjNzkPe0RqkYFvC8R0nxyAcCecHDxZUlmXQmAGiB5JB2GvstA4eoZLP1OI3fcLA3qaITLNRJOwRUoTYKqUkENHwz74CW0TnYDrKRZVe9zKNWQBLmtsgVoGd5CXNAVgXwmm2h0eCNIcRGnFqDHzpegpEGO+A7tvB0KJwlj4j/GmRgmnWO4pkrM2fmsTdlb5KNqe7NPuTVgYfdvld70zWpenp7jF/0psaEQEl8R7FbG2rNCv8fXtH+womvJQj4S0eUBxfvsUU1wWYmhusEzvyTfpV/nw0Er+pmAUZ2eGk7LEB2GMsJrkT+G5oohm0n+5c72iWJqW9A1eAzjR6Z21FkH4kAEJOl70fw9Xeig+s9rYk3GcKlMvj42zf7DepMXHPy62TbqUeclcm5W/semyasGP521GBuw152IN+dS67OVVmEvEJ89xhwiTeIty78enR4Gq7d1eNK+rqStdtJ7FN6kD/8gv4sFojUXyi0sIxzaSPrwI3ohOqbpEK1dTs6fmTUiyT/Buq++IhD9UwsZgz/kYpZfm1NVnWx+yTEv4I1H80FDxmMzbYnTHuIdRJFeh/NJRy9h+gXoZlZnteHkbwm1w2AejTkVnGs2Pz7aUZgC+1Za8fhtXq3Z9N3R8f8gtVFnnjRzApg5U89QrXUBS0R6F9dTqINk4qti94JWO4dYcPuudCutME5hfCYBoo+LHuRmdPKry7vSK1WgLYQsHuG+r313Ak8DhZYNbL+0d5UJ9kDFlFKaP3xLahSbEc/7u+AyuN68IyM1NwEehllxqVUsX8dsD4bZ2yW5rVjAQm9tT8Ypm73kJEb+DYVqT0WjFx88ee+HX89a9NgszWf1HE1KNQ9gWjn4eH6xbwrOkS4/v2O2tQoAd00vyPKAWly3Zlrz2cRnrSnxTZ5Lt7HtwAt6Err8MhD/w5rMLXHTBCMrroG1VfMo1OgL1YPafKDZmwVcHWacqtZiB0heRx742WipmTonqMjCOTNufdwxQcRPLLio0mtqiIrzgJqqIQenBXSa1jaG6Lvb5PCUKThbg4sSFfgssoUNKM7ytmBAe+PPmOVe11/gGaFWoQeUordbvmiCtzIPUYiKsuhfeK4I3jKvEofQU37hOam8ZxUczXvX6dgOOto002EWyCVfAzFgyey6wI+FGEbhXqlw7nB+azpqLQMJnHg7pfb1stXk3d8rjgVrRsVRJe/5KrXyZ5cd7ftJuJLxpTYmfFu6CKoUE0L5eRxXiwa16Pi0BehxOLaZteiTzttyfj+ClMKs2J/2/T1BVya1oGUW2Wg6ri/qS8oXv8bqiXBZ1/BwfI=",
|
||||
"id": "ea5bb419a8efea8ee86bb8696406a70a0387a7d0ac6e60760026d1aea28b427f",
|
||||
"sig": "0ffde3fd0d83c80693aa656668f2553807f8d474738ff3d9676090a5b8748a8e8e0c75a1d64963e4604046e18a806c4371a9cf2af2fd72f9db50f15bc78a4e25"
|
||||
}]
|
||||
[15:01:18.604] RECV localhost:8888: ["OK", "ea5bb419a8efea8ee86bb8696406a70a0387a7d0ac6e60760026d1aea28b427f", true, ""]
|
||||
|
||||
=== NOSTR WebSocket Debug Log Started ===
|
||||
[07:46:36.863] SEND localhost:8888: ["EVENT", {
|
||||
"pubkey": "99e37bc774d260b464e936ad8945deec62e8f5f8af53e9db662038a717d39bd5",
|
||||
"created_at": 1761562419,
|
||||
"kind": 1059,
|
||||
"tags": [["p", "4f355bdcb7cc0af728ef3cceb9615d90684bb5b2ca5f859ab0f0b704075871aa"]],
|
||||
"content": "AjYCV8Esqa1L9LQE2G8cDVn+hSXjAJlFVp5nAaC6nuag/1AphKpsAJFGrWZvJ+rte5+4dbmdk+osvlxxfRQHtaZqjaTbVDKJA2b2KTYLgICe7O9rqTqR8oC4sYOVQViEjo30ox3IfDgdR5ONlaprvQ8r71E+oplWOjahUNvf5Yb9OOHbFphOqqWtbYRYqAqvO0bj3rB+tmyUJ8v3mU8NsKJtOTOIvN+jTIwU8cbN6AM6/A87WSi7J7X9/wLpFigBNxrx82MJ025ryApNWyt5PuBia3krPDa51F/A+jFVp1QicVwSt6tP01ktYJn3uyR3qizIiZiXzmxV9+TXopq+mOTlAiwcZBm1ZkS/PgfoUMDUOOVcCAW6ppZlg3oK5jScuDl1d4cZgTmmAPneaHhgB9A/DbWWr03W1vJAXCmDQRUoACfwsvLQf5esXkPcJV+ANgLl8sKd4EPmDAzr946uDcs2BUDftr4++jbdTkg9yIHb4SHnI10osbsqP7BqTrF1TbZHnxev4l5XyaIqhGm6WdQ90uGn1VDSUXXuou3IbqwhZReifYQAL9/5PAeSH6RrrL4neEzBjosSNkcMmtAxqBfd8dOHfqT6r15osKXc1eSmO9qDjZxUHUb5zIJjrkDW0jY8vAfiMqZhKd5Vl7stYf2iJJxdm04r4zpxGBjlYQva2LyrPclBIGxax6sTQTxoRyUBwyis7OnxUC0HO7bqr464RzCX1/OHMOYhFQu3BY8Rvytl9E1hS+rJpWgNkHsbjr5zs1l/B+qwt+zqnYlAtjZZ2T56Pjd+jZcRTt/NKDtyQnLPxreBWgNy8IykNI0q2fgFiWJwh7GnlbYrx5zco04Ory/P+nW2/Xosp0232I48e2KhxtH6L1e6dOWqbZXFQXBqzKNlVTRkPyS9ykSSs7NAVknRz/vF86+jJVJa2z32Y4oQtJna8vK5J5HA3rRSlSEINwmcSiFUmuxeFAcFjYjyVGlBhmH3B/98CtT2+JHgUYpMiG51+HR+OI9qBGgsF5SI9JKai7CFC3OqfaW1rZHN96uta4VVGQ1mJetz/xB3W+QThsZ0IJ6/wBnbUpPBoab4rfnYeeVwOMxiK5B2UIZ1+ihRrSMsjMC8DAEbUAn9XNABJHhDo0KJcYqtpHBIkQgbqfuSKTLmc4mZNJCp8wmry9Tc9ZQo2jT0dZa/NZO+qtqWXWqZWbMngXFer4AtR+Vethhg6BdhYOYI/j8gOW1m8qodBlj9BHiKEU3Ig9z6WawsOD95VosxhqrQDuyO07igXNWMNK5exRfvp2QiHgILuC9diZZGBXPRLIDlKERTotPc5IdutkTG6qVh6+r6wbwtVhiWJVmfVy/D0hvDvlaqzVk3FRVuRuMZI+LmF3OdNGIf0+lfMUeMAABhDNTWyyS8gG21JJZQOBxGc12x49xWvMLbXaPCKBKrqw4FLF4PTCc=",
|
||||
"id": "9899324517c0e1796ea513cfc9fa0a2592cf5532774abc7e2a1bac7bb16c4fbb",
|
||||
"sig": "0d73ac599d0d6d99dd9afa0c92d741e459bc53102557acba5d868089776bb36a521ae800303ce5ceceabc8d643116a74560744243b3a1c7749d6a52117343637"
|
||||
}]
|
||||
[07:46:36.876] RECV localhost:8888: ["OK", "9899324517c0e1796ea513cfc9fa0a2592cf5532774abc7e2a1bac7bb16c4fbb", true, ""]
|
||||
|
||||
=== NOSTR WebSocket Debug Log Started ===
|
||||
[07:46:57.426] SEND localhost:8888: ["EVENT", {
|
||||
"pubkey": "a1efe929139f3f195159389a6eb7199c127c88e32a0264cd826e95806a7c7db3",
|
||||
"created_at": 1761562440,
|
||||
"kind": 1059,
|
||||
"tags": [["p", "4f355bdcb7cc0af728ef3cceb9615d90684bb5b2ca5f859ab0f0b704075871aa"]],
|
||||
"content": "ArP2HEobkU/QYXy2R94zSkKM1OfQT5SabPeebj9dQVGbUKaKDwuN2RUYTRJ5rD8euyiXat8YzYO7PJ0CHzxclXxO8AWpdN4P76srm3zJ5z6kpQpcCFgInV7k4v6LZmmCrrtdTWqLjuLTPJJd6W9J7HqbTA3Dt4200BSpA4Por1TQAncplwH7O4vBfbPYtdv9w1RL1uSInMWcGwxttTXlyTtAJ0G0hQNofowFCMWuQCKjV5LxPfoXdOCrsp/We6x+hYKDxphsDjQ1tbdtYFmj/YRy6MJm1id4mr+i8fEnyimshE/fAhavOXUg6239MmYj8nR2RT9LMuhVckX+V5MZnVyC3mZlfzkPJiTHxiDkEREjljNOX9I+9yChg6MvyU41s7GjBlPyWiyeXedPcU2Q8ypGsFLhBl5i+IGSn6wCcBH8+h3euG8jtBgKxIP1qBYsXPTlYpSXQcisIZlW2Rubcawf/RF7HYbIRu884mpdVnURcHqfN9yquoVyfvIgQR5Fs6IbKatJ64LfMLkLNs4UIlumtQRdW3NFjglgb/rF9btKYVFHRG9dWwOpZBd0zvXtWKbts2AKQFU30/WegaPh2LT5rN9HfMsA1tI9YwZm//T2NiLaPwJCuOFWBOUiB7jIObQKtOHrICI/jXIGOAfgox5+fcAE6CaysHHzluVcwiw7GioShidaIDsZ10rWJOv1HeRpuiAJJTWk2FOBJzpOxli6s6jGj2S481Xa99I13TihgL0wAPhjsnQhz0kh40g89mipzVO5hbki101zIJCEBrDeT4Ptabc9GminXedq9k1G98usM0JSHsgtdZdztme/UyvYyAKMdez1yNgOp7YgOU15Rpz/KGL6W5Wk3MbUwpuVRzUWEMoBcyMzssn5Sa3mkh1RQqpTcoQaktTNwkhR1R5bgedka61JmcK4Uq3Hi/HfKYHbeUeta6Olu+U19PEwZia1iq+y0ZQm5gMwCK8BsoV44OLsjeDKlyRGCtIjkTc/L2LyuAZFhw560vKflkigQVcajaQVtEDgaT5odgFwvYEMOjbBDloDs589hAn8ZLyRJo3tIXNwqhctKTSqbit5qs85pOHkXSC3gsRQvDfq4qVh8iWXFotmOHlBEh4OZk89xwAnP0wiv5kd8N2c2CTB84SB224GinMhs0gkaCIXPPYv8IfVcow9+3sjnNov4dRRIB80fRXP9X3IyR7tXYCuq1uQO2iWiWKhNaqJRoTM1BUhLv0ebKYjfPevSVHUuV51CcsoFakNT8S0UnW7QHfmsESvCJLLT8ttrJqpRX2tf6SpzofHmzQHVrHFn8C7WKMVelndptmaOt/9Lek2UrZiKmzRP0CtBL+HoPRmZHF9t7y0qEhoApkrB9FPukH/IGV6jx891rH4nC1fLKc6zgkdjnYB7HDB+lWp2JKpV8Z3CbZXtR28kwIvZZIABZ23/U5cFds=",
|
||||
"id": "c8cdf8992fbc17a0ccb74f6dcb7b851f3fdd53317f5a5ea4e202a91b22e15ac6",
|
||||
"sig": "b9efba3448d67de8855838044427396af1958269642a975129fe877e48e5c0e0818d638264f8aa80404886559a7d29464339f63704044dbf11ff09eb0bdeda2b"
|
||||
}]
|
||||
[07:46:57.439] RECV localhost:8888: ["OK", "c8cdf8992fbc17a0ccb74f6dcb7b851f3fdd53317f5a5ea4e202a91b22e15ac6", true, ""]
|
||||
|
||||
=== NOSTR WebSocket Debug Log Started ===
|
||||
[07:48:51.631] SEND localhost:8888: ["EVENT", {
|
||||
"pubkey": "52feea8d0da247ed1537c88e12b2f6bc88697b69abe33bf4f059f9f10c0f2b43",
|
||||
"created_at": 1761562554,
|
||||
"kind": 1059,
|
||||
"tags": [["p", "4f355bdcb7cc0af728ef3cceb9615d90684bb5b2ca5f859ab0f0b704075871aa"]],
|
||||
"content": "AoU024VIO6IgceC43yYPvKxOb5PuuZQRAUQLC6Crdn5dtIVuHE8M/UUmmNXmWq3jB6kFbFWxNgFWuxCEG9sQHDEngk+tDOGlt+r0vx3jUZG09lzNzcghl/4l/Do48rcy0cfSm+mSHrJsy7N+MSAXQ1heKahKF2fSyfYFM+6EOSEl0sJSq09iWGFft0lWfeZ3AFpji0gp0Z5QY1hQPH3Te/TRuDCoR8GXG3NguLD22Ed8byOQzf/b7nWr70z2Sqg15zhMwyqkl//dOp9iIXE69FONqDfvFF0xttQ1L9PbzQizMt65CqbMuxGMiMA1zZsgQ8iN+xbIN4xv4DrzCtBZnYt3aJSt7cv+8Co5OmGNXu1RNUvxpZZTO8Dq/m08Y5JDYiRCvdh8kTASMVt0MfGsvWgmcHiiCINQUe7n5ynieayFpbl9j1Vtml0lmrfIOYnDQYmuqDDyG7PxhRt3G/SpiabWBwsqmTqCvrclXfTm4t5YYSr/5lKbwHzPk4qtdEs+LqH2/zd4egZnT4Xt/vIP/c55NrEWPmr50G37DpsSVbDxMQXs4dpldDntjEDFuL8VTkAzqibmiZSQnb6l+DpKYNdQCyg1S1ttnzYp2cRTAPzAVbRMqk1R6jagWnZyOs/JIK4on51JHczaUTCMypDLJFoOaFPTAedHfR/Hn2Nm05W8oQ/m/RGmxLgok6WgH3KJ8wvN+8X+XYpTgyxej/hYPqJnq3uaNqMbcD5katWRBmZtyZa3Cn2nZDqmFFJSABWacXNCyHL41Z+MhyYalYzvUev1ozUgx2NEWwxnSTvMkpOfvSDrs5Ncosle0itL9j0QVBrKFjHgq2BJ4FApv/Iq0af+8JEqhVEsMNpGwRJst1kn7kO+Q7O68PQF6PPlNqh7DNea0Bz1sN1QDt2yZTApi2b3IJTsbekye/WsOB0J+bLvxNQ/UoULcyq3SLRSqQEQMLBPz7JrijMzBdPglWpeZ58UDrbmd1KhnHzx7o1NvHyjPRuKj4M094+2/mTEFGOOIF+Ogjqj+wCSDnT5C0d/l2llQkIXCKcLONWT4bKkmTOjvNs6lX+VpBynaegGjzGOvJw1beRZIkRegTpV4pnMZH9833s175rcMcDjPnfT9FD+pDv1DkmXfww1k6MgfHTbwjSgj0K9862xFNwL3mC2g8XFNlcflC0Rd8PzXRg7TBn7855r+urqujCUqZrzdUMHvp08rEEzSJKljk4XN4DqZeWn2evv7UCbYjq46sVf2lEHCvHdqKoPf5ENU72y5Fb9tQJAyUoBTdPdb2SZ2Y2jSF+6+H2wVXrOlm8EwBquaREl25fs7Yqwjru7qz1rO9EA1jlNybFvALHEFQzHEpi8JeNi5T/mI+VleoUDrk/og2mucQVFqAQzRjASsaeDq9fZNqhv3Q3DBIpftmI4g6ZXqJhPRK8wF7Ym3mmC7eFHwalUprA=",
|
||||
"id": "9bc4b5ad293085272bf52ff17abb585f7e63bc155a5a39cfe1a5c046f141e571",
|
||||
"sig": "ee6b917761031a06bc50da0173aef881a61213473d4f533a8a4a96247edcdbd17dbf87919c4d92f8ea8719d5311d51a8028fbf62e3f40f9b8004ccbe9f3adabd"
|
||||
}]
|
||||
[07:49:01.659] RECV localhost:8888: ["OK", "9bc4b5ad293085272bf52ff17abb585f7e63bc155a5a39cfe1a5c046f141e571", false, "error: failed to store gift wrap event"]
|
||||
|
||||
=== NOSTR WebSocket Debug Log Started ===
|
||||
[07:50:47.319] SEND localhost:8888: ["EVENT", {
|
||||
"pubkey": "f206ef335cc3b360cf739680cd4540b852fb9d75aac552b58014a41cfc4c6c65",
|
||||
"created_at": 1761562670,
|
||||
"kind": 1059,
|
||||
"tags": [["p", "4f355bdcb7cc0af728ef3cceb9615d90684bb5b2ca5f859ab0f0b704075871aa"]],
|
||||
"content": "AuzXkvYpH0IX/T0BKtOAjO5QtglT3nXGXF20awgDX6T1qoWV8qykYY01qPlGLSDkOuOvhG5NZuFPs/hPMnctmnskvHTHqKJdUeT10Qe0JmZiP5y6fZSlrtLMKfyoLpNYFDXOfwooSD2Q0UN8ePfTkkB61ri7avsq8w1WjoVTUSG5kouJfQAgvh75uXNkcvTNWSX5gbCXxoL1D7twPSUXvuZBJTN0iVdh3jg8X1uZWhvpZJIIXceZIUdxaIp2EmrYVW0VEZZSbAFGAldtKasHrP3cJKwTk1IenMFXaPkJnsbvyzUZWKBwTCeBLBhMNzbBWOp5A7SgFW1vf9gm00MSQE/JjwOzDZIaIQ3vRCMbO29XLlcOCevs8FwusZ86LuXZ8EQac9vnc/7gEul/SkOQaSq7v4oGzwfY4iQ8c8VdX/+syE+ClUEjiKdN3/lcRGWhdMGYgK/uajLd9jpfY8CBzP0BZM0Oq6ZgSJt4ydntPfiUq4PtDww+56+bUUQb5V3eZ9SUnQgebO1doRgtvb6LiLGN3D9XolEiHBE3KDt/InfWPUeuf1HEf5IbDc2w7zhgzWbvXK9G4NnsAmOHQNIGbPHXLWEOOhRtcEnrHILKYgs4wfuvSnfSUzfWxHVlhXkuXj4pq/EmJKmQg3zB1C2QzKMx7O/oHplnQFGUfAMDQY9GpBWLCOhQH3ZiHWQ02AjXSze7PGB9ac7KoPmyKafUAgXbASp/G9t7n3pxarzwBNm4zQ6wPRpR5OF+mFYQ9ClJ+3MlUDNq1T8fGuJVIduxFgyWMAgKoJBQe+xP5qHEuCxhG1B2KtoKHbzAtXpV+sduXQOlm0Jq7req59VgtgrIVVoLNyn7ulUFmJvWbaPWyuMrC1z+MdjNw/oJ1mE4+zYSB4Yho7DCwVdIxWrkFSx1yH9s1WPPyERCy4/UQfHpPmAD/JyAnnkKsc7v3MpAWAYCsiil1/PgFFPRRrO7jS+Ez+veQl5tx376ac9MwaN+ZbFADqYaf8CCcWXhMlAYl/zcMWLXKqL/wKb6orpTTHiWU/iJIvbuT0MIN68LIX+G/S5QCIcAQez+G35n5pDUkKikVQguKcJG51iDZqRAc+fnjSa7ifu8HBJ8HIKZjHEEyp6oGU0LCEWH60iIa7toKAwRx8rLPP2tWo+5u41nUrhpUXhUquQu8Dr+LrNdB30qYlH123R0NBBtXG7ngW8WDv2GQcul33ftiI/14QofOthA8SiExW5B7OsWJQON8sS1ZTc5l/M6f5B17CwqmAGd4NdKPQy1SZWGD61jkefwzKW/w4fZFXfploGwuYvFI/G8/YnaJ60p/k+2Aftcst9ikAHZF4xBtuJr4IrT6/f+snv12G4EdowmaSMjXRZv30d4yKwFmwiuoDHWLyYVwBkO+UO3r0WEe1DId0Z1FZnXfgdnM+zAZwITtCVQjZMcsOSNskKd1eE=",
|
||||
"id": "ce28dc9c653a4f5451266bc215942be9a54e4777a27862fddce351a59cc2dbf3",
|
||||
"sig": "539f314c0f0fd685647da358c4153272baf671f1a1bc42b8ff61231c4b5f1f03cb8d15a36fb78437dbf094c546e9ffe8e03de7ddb3b62a981c135a714ec57f93"
|
||||
}]
|
||||
[07:50:47.325] RECV localhost:8888: ["OK", "ce28dc9c653a4f5451266bc215942be9a54e4777a27862fddce351a59cc2dbf3", true, ""]
|
||||
@@ -1,3 +1,19 @@
#!/bin/bash

# Copy the binary to the deployment location
cp build/c_relay_x86 ~/Storage/c_relay/crelay

# Copy the local service file to systemd
sudo cp systemd/c-relay-local.service /etc/systemd/system/

# Reload systemd daemon to pick up the new service
sudo systemctl daemon-reload

# Enable the service (if not already enabled)
sudo systemctl enable c-relay-local.service

# Restart the service
sudo systemctl restart c-relay-local.service

# Show service status
sudo systemctl status c-relay-local.service --no-pager -l
@@ -175,6 +175,18 @@ Configuration events follow the standard Nostr event format with kind 33334:
- **Impact**: Allows some flexibility in expiration timing
- **Example**: `"600"` (10 minute grace period)

### NIP-59 Gift Wrap Timestamp Configuration

#### `nip59_timestamp_max_delay_sec`
- **Description**: Controls timestamp randomization for NIP-59 gift wraps
- **Default**: `"0"` (no randomization)
- **Range**: `0` to `604800` (7 days)
- **Impact**: Affects compatibility with other Nostr clients for direct messaging
- **Values**:
  - `"0"`: No randomization (maximum compatibility)
  - `"1-604800"`: Random timestamp between now and N seconds ago
- **Example**: `"172800"` (2 days randomization for privacy)

## Configuration Examples

### Basic Relay Setup
298
docs/libwebsockets_proper_pattern.md
Normal file
@@ -0,0 +1,298 @@
|
||||
# Libwebsockets Proper Pattern - Message Queue Design
|
||||
|
||||
## Problem Analysis
|
||||
|
||||
### Current Violation
|
||||
We're calling `lws_write()` directly from multiple code paths:
|
||||
1. **Event broadcast** (subscriptions.c:667) - when events arrive
|
||||
2. **OK responses** (websockets.c:855) - when processing EVENT messages
|
||||
3. **EOSE responses** (websockets.c:976) - when processing REQ messages
|
||||
4. **COUNT responses** (websockets.c:1922) - when processing COUNT messages
|
||||
|
||||
This violates libwebsockets' design pattern which requires:
|
||||
- **`lws_write()` ONLY called from `LWS_CALLBACK_SERVER_WRITEABLE`**
|
||||
- Application queues messages and requests writeable callback
|
||||
- Libwebsockets handles write timing and socket buffer management
|
||||
|
||||
### Consequences of Violation
|
||||
1. Partial writes when socket buffer is full
|
||||
2. Multiple concurrent write attempts before callback fires
|
||||
3. "write already pending" errors with single buffer
|
||||
4. Frame corruption from interleaved partial writes
|
||||
5. "Invalid frame header" errors on client side
|
||||
|
||||
## Correct Architecture
|
||||
|
||||
### Message Queue Pattern
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────┐
|
||||
│ Application Layer │
|
||||
├─────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ Event Arrives → Queue Message → Request Writeable Callback │
|
||||
│ REQ Received → Queue EOSE → Request Writeable Callback │
|
||||
│ EVENT Received→ Queue OK → Request Writeable Callback │
|
||||
│ COUNT Received→ Queue COUNT → Request Writeable Callback │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────┘
|
||||
↓
|
||||
lws_callback_on_writable(wsi)
|
||||
↓
|
||||
┌─────────────────────────────────────────────────────────────┐
|
||||
│ LWS_CALLBACK_SERVER_WRITEABLE │
|
||||
├─────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ 1. Dequeue next message from queue │
|
||||
│ 2. Call lws_write() with message data │
|
||||
│ 3. If queue not empty, request another callback │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────┘
|
||||
↓
|
||||
libwebsockets handles:
|
||||
- Socket buffer management
|
||||
- Partial write handling
|
||||
- Frame atomicity
|
||||
```
|
||||
|
||||
## Data Structures
|
||||
|
||||
### Message Queue Node
|
||||
```c
|
||||
typedef struct message_queue_node {
|
||||
unsigned char* data; // Message data (with LWS_PRE space)
|
||||
size_t length; // Message length (without LWS_PRE)
|
||||
enum lws_write_protocol type; // LWS_WRITE_TEXT, etc.
|
||||
struct message_queue_node* next;
|
||||
} message_queue_node_t;
|
||||
```
|
||||
|
||||
### Per-Session Data Updates
|
||||
```c
|
||||
struct per_session_data {
|
||||
// ... existing fields ...
|
||||
|
||||
// Message queue (replaces single buffer)
|
||||
message_queue_node_t* message_queue_head;
|
||||
message_queue_node_t* message_queue_tail;
|
||||
int message_queue_count;
|
||||
int writeable_requested; // Flag to prevent duplicate requests
|
||||
};
|
||||
```
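
For completeness: libwebsockets allocates and zero-fills this per-session block itself when the protocol declares its size, so the queue pointers and `writeable_requested` flag start out as NULL/0 without explicit initialization. A minimal registration sketch (the protocol name and callback symbol are assumptions, not the current code):

```c
static const struct lws_protocols protocols[] = {
    {
        .name = "nostr",                                          /* assumed protocol name */
        .callback = websocket_callback,                           /* the relay's existing lws callback (name assumed) */
        .per_session_data_size = sizeof(struct per_session_data), /* lws allocates and zeroes this per connection */
    },
    { NULL, NULL, 0, 0 }  /* terminator */
};
```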
|
||||
|
||||
## Implementation Functions
|
||||
|
||||
### 1. Queue Message (Application Layer)
|
||||
```c
|
||||
int queue_message(struct lws* wsi, struct per_session_data* pss,
|
||||
const char* message, size_t length,
|
||||
enum lws_write_protocol type)
|
||||
{
|
||||
// Allocate node
|
||||
message_queue_node_t* node = malloc(sizeof(message_queue_node_t));
|
||||
|
||||
// Allocate buffer with LWS_PRE space
|
||||
node->data = malloc(LWS_PRE + length);
|
||||
memcpy(node->data + LWS_PRE, message, length);
|
||||
node->length = length;
|
||||
node->type = type;
|
||||
node->next = NULL;
|
||||
|
||||
// Add to queue (FIFO)
|
||||
pthread_mutex_lock(&pss->session_lock);
|
||||
if (!pss->message_queue_head) {
|
||||
pss->message_queue_head = node;
|
||||
pss->message_queue_tail = node;
|
||||
} else {
|
||||
pss->message_queue_tail->next = node;
|
||||
pss->message_queue_tail = node;
|
||||
}
|
||||
pss->message_queue_count++;
|
||||
pthread_mutex_unlock(&pss->session_lock);
|
||||
|
||||
// Request writeable callback (only if not already requested)
|
||||
if (!pss->writeable_requested) {
|
||||
pss->writeable_requested = 1;
|
||||
lws_callback_on_writable(wsi);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
### 2. Process Queue (Writeable Callback)
|
||||
```c
|
||||
int process_message_queue(struct lws* wsi, struct per_session_data* pss)
|
||||
{
|
||||
pthread_mutex_lock(&pss->session_lock);
|
||||
|
||||
// Get next message from queue
|
||||
message_queue_node_t* node = pss->message_queue_head;
|
||||
if (!node) {
|
||||
pss->writeable_requested = 0;
|
||||
pthread_mutex_unlock(&pss->session_lock);
|
||||
return 0; // Queue empty
|
||||
}
|
||||
|
||||
// Remove from queue
|
||||
pss->message_queue_head = node->next;
|
||||
if (!pss->message_queue_head) {
|
||||
pss->message_queue_tail = NULL;
|
||||
}
|
||||
pss->message_queue_count--;
|
||||
|
||||
pthread_mutex_unlock(&pss->session_lock);
|
||||
|
||||
// Write message (libwebsockets handles partial writes)
|
||||
int result = lws_write(wsi, node->data + LWS_PRE, node->length, node->type);
|
||||
|
||||
// Free node
|
||||
free(node->data);
|
||||
free(node);
|
||||
|
||||
// If queue not empty, request another callback
|
||||
pthread_mutex_lock(&pss->session_lock);
|
||||
if (pss->message_queue_head) {
|
||||
lws_callback_on_writable(wsi);
|
||||
} else {
|
||||
pss->writeable_requested = 0;
|
||||
}
|
||||
pthread_mutex_unlock(&pss->session_lock);
|
||||
|
||||
return (result < 0) ? -1 : 0;
|
||||
}
|
||||
```
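
Wiring these two functions into the protocol callback (migration steps 11 and 12 below) is then a small change. A minimal sketch, assuming the relay's existing callback is named `websocket_callback` and adding a hypothetical `free_message_queue()` helper for connection close:

```c
/* Sketch only: names are illustrative, not the current code. */
static void free_message_queue(struct per_session_data* pss) {
    message_queue_node_t* node = pss->message_queue_head;
    while (node) {
        message_queue_node_t* next = node->next;
        free(node->data);
        free(node);
        node = next;
    }
    pss->message_queue_head = NULL;
    pss->message_queue_tail = NULL;
    pss->message_queue_count = 0;
    pss->writeable_requested = 0;
}

static int websocket_callback(struct lws* wsi, enum lws_callback_reasons reason,
                              void* user, void* in, size_t len) {
    struct per_session_data* pss = (struct per_session_data*)user;
    (void)in; (void)len;

    switch (reason) {
    case LWS_CALLBACK_SERVER_WRITEABLE:
        /* The only place a frame is actually written */
        return process_message_queue(wsi, pss);

    case LWS_CALLBACK_CLOSED:
        /* Nothing further can be written; drop whatever is still queued */
        if (pss) free_message_queue(pss);
        break;

    default:
        /* RECEIVE, ESTABLISHED, etc. handled as before */
        break;
    }
    return 0;
}
```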
|
||||
|
||||
## Refactoring Changes
|
||||
|
||||
### Before (WRONG - Direct Write)
|
||||
```c
|
||||
// websockets.c:855 - OK response
|
||||
int write_result = lws_write(wsi, buf + LWS_PRE, response_len, LWS_WRITE_TEXT);
|
||||
if (write_result < 0) {
|
||||
DEBUG_ERROR("Write failed");
|
||||
} else if ((size_t)write_result != response_len) {
|
||||
// Partial write - queue remaining data
|
||||
queue_websocket_write(wsi, pss, ...);
|
||||
}
|
||||
```
|
||||
|
||||
### After (CORRECT - Queue Message)
|
||||
```c
|
||||
// websockets.c:855 - OK response
|
||||
queue_message(wsi, pss, response_str, response_len, LWS_WRITE_TEXT);
|
||||
// That's it! Writeable callback will handle the actual write
|
||||
```
|
||||
|
||||
### Before (WRONG - Direct Write in Broadcast)
|
||||
```c
|
||||
// subscriptions.c:667 - EVENT broadcast
|
||||
int write_result = lws_write(current_temp->wsi, buf + LWS_PRE, msg_len, LWS_WRITE_TEXT);
|
||||
if (write_result < 0) {
|
||||
DEBUG_ERROR("Write failed");
|
||||
} else if ((size_t)write_result != msg_len) {
|
||||
queue_websocket_write(...);
|
||||
}
|
||||
```
|
||||
|
||||
### After (CORRECT - Queue Message)
|
||||
```c
|
||||
// subscriptions.c:667 - EVENT broadcast
|
||||
struct per_session_data* pss = lws_wsi_user(current_temp->wsi);
|
||||
queue_message(current_temp->wsi, pss, msg_str, msg_len, LWS_WRITE_TEXT);
|
||||
// Writeable callback will handle the actual write
|
||||
```
|
||||
|
||||
## Benefits of Correct Pattern
|
||||
|
||||
1. **No Partial Write Handling Needed**
|
||||
- Libwebsockets handles partial writes internally
|
||||
- We just queue complete messages
|
||||
|
||||
2. **No "Write Already Pending" Errors**
|
||||
- Queue can hold unlimited messages
|
||||
- Each processed sequentially from callback
|
||||
|
||||
3. **Thread Safety**
|
||||
- Queue operations protected by session lock
|
||||
- Write only from single callback thread
|
||||
|
||||
4. **Frame Atomicity**
|
||||
- Libwebsockets ensures complete frame transmission
|
||||
- No interleaved partial writes
|
||||
|
||||
5. **Simpler Code**
|
||||
- No complex partial write state machine
|
||||
- Just queue and forget
|
||||
|
||||
6. **Better Performance**
|
||||
- Libwebsockets optimizes write timing
|
||||
- Batches writes when socket ready
|
||||
|
||||
## Migration Steps
|
||||
|
||||
1. ✅ Identify all `lws_write()` call sites
|
||||
2. ✅ Confirm violation of libwebsockets pattern
|
||||
3. ⏳ Design message queue structure
|
||||
4. ⏳ Implement `queue_message()` function
|
||||
5. ⏳ Implement `process_message_queue()` function
|
||||
6. ⏳ Update `per_session_data` structure
|
||||
7. ⏳ Refactor OK response to use queue
|
||||
8. ⏳ Refactor EOSE response to use queue
|
||||
9. ⏳ Refactor COUNT response to use queue
|
||||
10. ⏳ Refactor EVENT broadcast to use queue
|
||||
11. ⏳ Update `LWS_CALLBACK_SERVER_WRITEABLE` handler
|
||||
12. ⏳ Add queue cleanup in `LWS_CALLBACK_CLOSED`
|
||||
13. ⏳ Remove old partial write code
|
||||
14. ⏳ Test with rapid multiple events
|
||||
15. ⏳ Test with large events (>4KB)
|
||||
16. ⏳ Test under load
|
||||
17. ⏳ Verify no frame errors
|
||||
|
||||
## Testing Strategy
|
||||
|
||||
### Test 1: Multiple Rapid Events
|
||||
```bash
|
||||
# Send 10 events rapidly to same client
|
||||
for i in {1..10}; do
|
||||
echo '["EVENT",{"kind":1,"content":"test'$i'","created_at":'$(date +%s)',...}]' | \
|
||||
websocat ws://localhost:8888 &
|
||||
done
|
||||
```
|
||||
|
||||
**Expected**: All events queued and sent sequentially, no errors
|
||||
|
||||
### Test 2: Large Events
|
||||
```bash
|
||||
# Send event >4KB (forces multiple socket writes)
|
||||
nak event --content "$(head -c 5000 /dev/urandom | base64)" | \
|
||||
websocat ws://localhost:8888
|
||||
```
|
||||
|
||||
**Expected**: Event queued, libwebsockets handles partial writes internally
|
||||
|
||||
### Test 3: Concurrent Connections
|
||||
```bash
|
||||
# 100 concurrent connections, each sending events
|
||||
for i in {1..100}; do
|
||||
(echo '["REQ","sub'$i'",{}]'; sleep 1) | websocat ws://localhost:8888 &
|
||||
done
|
||||
```
|
||||
|
||||
**Expected**: All subscriptions work, events broadcast correctly
|
||||
|
||||
## Success Criteria
|
||||
|
||||
- ✅ No `lws_write()` calls outside `LWS_CALLBACK_SERVER_WRITEABLE`
|
||||
- ✅ No "write already pending" errors in logs
|
||||
- ✅ No "Invalid frame header" errors on client side
|
||||
- ✅ All messages delivered in correct order
|
||||
- ✅ Large events (>4KB) handled correctly
|
||||
- ✅ Multiple rapid events to same client work
|
||||
- ✅ Concurrent connections stable under load
|
||||
|
||||
## References
|
||||
|
||||
- [libwebsockets documentation](https://libwebsockets.org/lws-api-doc-main/html/index.html)
|
||||
- [LWS_CALLBACK_SERVER_WRITEABLE](https://libwebsockets.org/lws-api-doc-main/html/group__callback-when-writeable.html)
|
||||
- [lws_callback_on_writable()](https://libwebsockets.org/lws-api-doc-main/html/group__callback-when-writeable.html#ga96f3ad8e1e2c3e0c8e0b0e5e5e5e5e5e)
|
||||
517
docs/nip59_timestamp_configuration_plan.md
Normal file
@@ -0,0 +1,517 @@
|
||||
# NIP-59 Timestamp Configuration Implementation Plan
|
||||
|
||||
## Overview
|
||||
Add configurable timestamp randomization for NIP-59 gift wraps to improve compatibility with Nostr apps that don't implement timestamp randomization.
|
||||
|
||||
## Problem Statement
|
||||
The NIP-59 specification calls for randomizing the timestamps on gift wraps to frustrate time-analysis attacks. However, some Nostr platforms don't implement this, which causes compatibility issues with direct messaging (NIP-17).
|
||||
|
||||
## Solution
|
||||
Add a configuration parameter `nip59_timestamp_max_delay_sec` that controls the maximum random delay applied to timestamps:
|
||||
- **Value = 0**: Use current timestamp (no randomization) for maximum compatibility
|
||||
- **Value > 0**: Use random timestamp between now and N seconds ago
|
||||
- **Default = 0**: Maximum compatibility mode (no randomization)
|
||||
|
||||
## Implementation Approach: Option B (Direct Parameter Addition)
|
||||
We chose Option B because:
|
||||
1. Explicit and stateless - value flows through call chain
|
||||
2. Thread-safe by design
|
||||
3. No global state needed in nostr_core_lib
|
||||
4. DMs are sent rarely, so a database query per call is acceptable
|
||||
|
||||
---
|
||||
|
||||
## Detailed Implementation Steps
|
||||
|
||||
### Phase 1: Configuration Setup in c-relay
|
||||
|
||||
#### 1.1 Add Configuration Parameter
|
||||
**File:** `src/default_config_event.h`
|
||||
**Location:** Line 82 (after `trust_proxy_headers`)
|
||||
|
||||
```c
|
||||
// NIP-59 Gift Wrap Timestamp Configuration
|
||||
{"nip59_timestamp_max_delay_sec", "0"} // Default: 0 (no randomization for compatibility)
|
||||
```
|
||||
|
||||
**Rationale:**
|
||||
- Default of 0 seconds (no randomization) for maximum compatibility
|
||||
- Placed after proxy settings, before closing brace
|
||||
- Follows existing naming convention
|
||||
|
||||
#### 1.2 Add Configuration Validation
|
||||
**File:** `src/config.c`
|
||||
**Function:** `validate_config_field()` (around line 923)
|
||||
|
||||
Add validation case:
|
||||
```c
|
||||
else if (strcmp(key, "nip59_timestamp_max_delay_sec") == 0) {
|
||||
long value = strtol(value_str, NULL, 10);
|
||||
if (value < 0 || value > 604800) { // Max 7 days
|
||||
snprintf(error_msg, error_size,
|
||||
"nip59_timestamp_max_delay_sec must be between 0 and 604800 (7 days)");
|
||||
return -1;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Rationale:**
|
||||
- 0 = no randomization (compatibility mode)
|
||||
- 604800 = 7 days maximum (reasonable upper bound)
|
||||
- Prevents negative values or excessive delays
|
||||
|
||||
---
|
||||
|
||||
### Phase 2: Modify nostr_core_lib Functions
|
||||
|
||||
#### 2.1 Update random_past_timestamp() Function
|
||||
**File:** `nostr_core_lib/nostr_core/nip059.c`
|
||||
**Current Location:** Lines 31-36
|
||||
|
||||
**Current Code:**
|
||||
```c
|
||||
static time_t random_past_timestamp(void) {
|
||||
time_t now = time(NULL);
|
||||
// Random time up to 2 days (172800 seconds) in the past
|
||||
long random_offset = (long)(rand() % 172800);
|
||||
return now - random_offset;
|
||||
}
|
||||
```
|
||||
|
||||
**New Code:**
|
||||
```c
|
||||
static time_t random_past_timestamp(long max_delay_sec) {
|
||||
time_t now = time(NULL);
|
||||
|
||||
// If max_delay_sec is 0, return current timestamp (no randomization)
|
||||
if (max_delay_sec == 0) {
|
||||
return now;
|
||||
}
|
||||
|
||||
// Random time up to max_delay_sec in the past
|
||||
long random_offset = (long)(rand() % max_delay_sec);
|
||||
return now - random_offset;
|
||||
}
|
||||
```
|
||||
|
||||
**Changes:**
|
||||
- Add `long max_delay_sec` parameter
|
||||
- Handle special case: `max_delay_sec == 0` returns current time
|
||||
- Use `max_delay_sec` instead of hardcoded 172800
|
||||
|
||||
#### 2.2 Update nostr_nip59_create_seal() Function
|
||||
**File:** `nostr_core_lib/nostr_core/nip059.c`
|
||||
**Current Location:** Lines 144-215
|
||||
|
||||
**Function Signature Change:**
|
||||
```c
|
||||
// OLD:
|
||||
cJSON* nostr_nip59_create_seal(cJSON* rumor,
|
||||
const unsigned char* sender_private_key,
|
||||
const unsigned char* recipient_public_key);
|
||||
|
||||
// NEW:
|
||||
cJSON* nostr_nip59_create_seal(cJSON* rumor,
|
||||
const unsigned char* sender_private_key,
|
||||
const unsigned char* recipient_public_key,
|
||||
long max_delay_sec);
|
||||
```
|
||||
|
||||
**Code Change at Line 181:**
|
||||
```c
|
||||
// OLD:
|
||||
time_t seal_time = random_past_timestamp();
|
||||
|
||||
// NEW:
|
||||
time_t seal_time = random_past_timestamp(max_delay_sec);
|
||||
```
|
||||
|
||||
#### 2.3 Update nostr_nip59_create_gift_wrap() Function
|
||||
**File:** `nostr_core_lib/nostr_core/nip059.c`
|
||||
**Current Location:** Lines 220-323
|
||||
|
||||
**Function Signature Change:**
|
||||
```c
|
||||
// OLD:
|
||||
cJSON* nostr_nip59_create_gift_wrap(cJSON* seal,
|
||||
const char* recipient_public_key_hex);
|
||||
|
||||
// NEW:
|
||||
cJSON* nostr_nip59_create_gift_wrap(cJSON* seal,
|
||||
const char* recipient_public_key_hex,
|
||||
long max_delay_sec);
|
||||
```
|
||||
|
||||
**Code Change at Line 275:**
|
||||
```c
|
||||
// OLD:
|
||||
time_t wrap_time = random_past_timestamp();
|
||||
|
||||
// NEW:
|
||||
time_t wrap_time = random_past_timestamp(max_delay_sec);
|
||||
```
|
||||
|
||||
#### 2.4 Update nip059.h Header
|
||||
**File:** `nostr_core_lib/nostr_core/nip059.h`
|
||||
**Locations:** Lines 38-39 and 48
|
||||
|
||||
**Update Function Declarations:**
|
||||
```c
|
||||
// Line 38-39: Update nostr_nip59_create_seal
|
||||
cJSON* nostr_nip59_create_seal(cJSON* rumor,
|
||||
const unsigned char* sender_private_key,
|
||||
const unsigned char* recipient_public_key,
|
||||
long max_delay_sec);
|
||||
|
||||
// Line 48: Update nostr_nip59_create_gift_wrap
|
||||
cJSON* nostr_nip59_create_gift_wrap(cJSON* seal,
|
||||
const char* recipient_public_key_hex,
|
||||
long max_delay_sec);
|
||||
```
|
||||
|
||||
**Update Documentation Comments:**
|
||||
```c
|
||||
/**
|
||||
* NIP-59: Create a seal (kind 13) wrapping a rumor
|
||||
*
|
||||
* @param rumor The rumor event to seal (cJSON object)
|
||||
* @param sender_private_key 32-byte sender private key
|
||||
* @param recipient_public_key 32-byte recipient public key (x-only)
|
||||
* @param max_delay_sec Maximum random delay in seconds (0 = no randomization)
|
||||
* @return cJSON object representing the seal event, or NULL on error
|
||||
*/
|
||||
|
||||
/**
|
||||
* NIP-59: Create a gift wrap (kind 1059) wrapping a seal
|
||||
*
|
||||
* @param seal The seal event to wrap (cJSON object)
|
||||
* @param recipient_public_key_hex Recipient's public key in hex format
|
||||
* @param max_delay_sec Maximum random delay in seconds (0 = no randomization)
|
||||
* @return cJSON object representing the gift wrap event, or NULL on error
|
||||
*/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Phase 3: Update NIP-17 Integration
|
||||
|
||||
#### 3.1 Update nostr_nip17_send_dm() Function
|
||||
**File:** `nostr_core_lib/nostr_core/nip017.c`
|
||||
**Current Location:** Lines 260-320
|
||||
|
||||
**Function Signature Change:**
|
||||
```c
|
||||
// OLD:
|
||||
int nostr_nip17_send_dm(cJSON* dm_event,
|
||||
const char** recipient_pubkeys,
|
||||
int num_recipients,
|
||||
const unsigned char* sender_private_key,
|
||||
cJSON** gift_wraps_out,
|
||||
int max_gift_wraps);
|
||||
|
||||
// NEW:
|
||||
int nostr_nip17_send_dm(cJSON* dm_event,
|
||||
const char** recipient_pubkeys,
|
||||
int num_recipients,
|
||||
const unsigned char* sender_private_key,
|
||||
cJSON** gift_wraps_out,
|
||||
int max_gift_wraps,
|
||||
long max_delay_sec);
|
||||
```
|
||||
|
||||
**Code Changes:**
|
||||
|
||||
At line 281 (seal creation):
|
||||
```c
|
||||
// OLD:
|
||||
cJSON* seal = nostr_nip59_create_seal(dm_event, sender_private_key, recipient_public_key);
|
||||
|
||||
// NEW:
|
||||
cJSON* seal = nostr_nip59_create_seal(dm_event, sender_private_key, recipient_public_key, max_delay_sec);
|
||||
```
|
||||
|
||||
At line 287 (gift wrap creation):
|
||||
```c
|
||||
// OLD:
|
||||
cJSON* gift_wrap = nostr_nip59_create_gift_wrap(seal, recipient_pubkeys[i]);
|
||||
|
||||
// NEW:
|
||||
cJSON* gift_wrap = nostr_nip59_create_gift_wrap(seal, recipient_pubkeys[i], max_delay_sec);
|
||||
```
|
||||
|
||||
At line 306 (sender seal creation):
|
||||
```c
|
||||
// OLD:
|
||||
cJSON* sender_seal = nostr_nip59_create_seal(dm_event, sender_private_key, sender_public_key);
|
||||
|
||||
// NEW:
|
||||
cJSON* sender_seal = nostr_nip59_create_seal(dm_event, sender_private_key, sender_public_key, max_delay_sec);
|
||||
```
|
||||
|
||||
At line 309 (sender gift wrap creation):
|
||||
```c
|
||||
// OLD:
|
||||
cJSON* sender_gift_wrap = nostr_nip59_create_gift_wrap(sender_seal, sender_pubkey_hex);
|
||||
|
||||
// NEW:
|
||||
cJSON* sender_gift_wrap = nostr_nip59_create_gift_wrap(sender_seal, sender_pubkey_hex, max_delay_sec);
|
||||
```
|
||||
|
||||
#### 3.2 Update nip017.h Header
|
||||
**File:** `nostr_core_lib/nostr_core/nip017.h`
|
||||
**Location:** Lines 102-107
|
||||
|
||||
**Update Function Declaration:**
|
||||
```c
|
||||
int nostr_nip17_send_dm(cJSON* dm_event,
|
||||
const char** recipient_pubkeys,
|
||||
int num_recipients,
|
||||
const unsigned char* sender_private_key,
|
||||
cJSON** gift_wraps_out,
|
||||
int max_gift_wraps,
|
||||
long max_delay_sec);
|
||||
```
|
||||
|
||||
**Update Documentation Comment (lines 88-100):**
|
||||
```c
|
||||
/**
|
||||
* NIP-17: Send a direct message to recipients
|
||||
*
|
||||
* This function creates the appropriate rumor, seals it, gift wraps it,
|
||||
* and returns the final gift wrap events ready for publishing.
|
||||
*
|
||||
* @param dm_event The unsigned DM event (kind 14 or 15)
|
||||
* @param recipient_pubkeys Array of recipient public keys (hex strings)
|
||||
* @param num_recipients Number of recipients
|
||||
* @param sender_private_key 32-byte sender private key
|
||||
* @param gift_wraps_out Array to store resulting gift wrap events (caller must free)
|
||||
* @param max_gift_wraps Maximum number of gift wraps to create
|
||||
* @param max_delay_sec Maximum random timestamp delay in seconds (0 = no randomization)
|
||||
* @return Number of gift wrap events created, or -1 on error
|
||||
*/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Phase 4: Update c-relay Call Sites
|
||||
|
||||
#### 4.1 Update src/api.c
|
||||
**Location:** Line 1319
|
||||
|
||||
**Current Code:**
|
||||
```c
|
||||
int send_result = nostr_nip17_send_dm(
|
||||
dm_response, // dm_event
|
||||
recipient_pubkeys, // recipient_pubkeys
|
||||
1, // num_recipients
|
||||
relay_privkey, // sender_private_key
|
||||
gift_wraps, // gift_wraps_out
|
||||
1 // max_gift_wraps
|
||||
);
|
||||
```
|
||||
|
||||
**New Code:**
|
||||
```c
|
||||
// Get timestamp delay configuration
|
||||
long max_delay_sec = get_config_int("nip59_timestamp_max_delay_sec", 0);
|
||||
|
||||
int send_result = nostr_nip17_send_dm(
|
||||
dm_response, // dm_event
|
||||
recipient_pubkeys, // recipient_pubkeys
|
||||
1, // num_recipients
|
||||
relay_privkey, // sender_private_key
|
||||
gift_wraps, // gift_wraps_out
|
||||
1, // max_gift_wraps
|
||||
max_delay_sec // max_delay_sec
|
||||
);
|
||||
```
|
||||
|
||||
#### 4.2 Update src/dm_admin.c
|
||||
**Location:** Line 371
|
||||
|
||||
**Current Code:**
|
||||
```c
|
||||
int send_result = nostr_nip17_send_dm(
|
||||
success_dm, // dm_event
|
||||
sender_pubkey_array, // recipient_pubkeys
|
||||
1, // num_recipients
|
||||
relay_privkey, // sender_private_key
|
||||
success_gift_wraps, // gift_wraps_out
|
||||
1 // max_gift_wraps
|
||||
);
|
||||
```
|
||||
|
||||
**New Code:**
|
||||
```c
|
||||
// Get timestamp delay configuration
|
||||
long max_delay_sec = get_config_int("nip59_timestamp_max_delay_sec", 0);
|
||||
|
||||
int send_result = nostr_nip17_send_dm(
|
||||
success_dm, // dm_event
|
||||
sender_pubkey_array, // recipient_pubkeys
|
||||
1, // num_recipients
|
||||
relay_privkey, // sender_private_key
|
||||
success_gift_wraps, // gift_wraps_out
|
||||
1, // max_gift_wraps
|
||||
max_delay_sec // max_delay_sec
|
||||
);
|
||||
```
|
||||
|
||||
**Note:** Both files already include `config.h`, so `get_config_int()` is available.
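
Because the value is re-read from the database on every send, a small defensive clamp at the call sites keeps an out-of-range stored value from ever reaching the library. A sketch, using the same `get_config_int()` call shown above:

```c
// Defensive clamp (sketch): mirror the validated range at the point of use,
// in case the stored configuration was edited out-of-band.
long max_delay_sec = get_config_int("nip59_timestamp_max_delay_sec", 0);
if (max_delay_sec < 0) max_delay_sec = 0;
if (max_delay_sec > 604800) max_delay_sec = 604800;
```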
|
||||
|
||||
---
|
||||
|
||||
## Testing Plan
|
||||
|
||||
### Test Case 1: No Randomization (Compatibility Mode)
|
||||
**Configuration:** `nip59_timestamp_max_delay_sec = 0`
|
||||
|
||||
**Expected Behavior:**
|
||||
- Gift wrap timestamps should equal current time
|
||||
- Seal timestamps should equal current time
|
||||
- No random delay applied
|
||||
|
||||
**Test Command:**
|
||||
```bash
|
||||
# Set config via admin API
|
||||
# Send test DM
|
||||
# Verify timestamps are current (within 1 second of send time)
|
||||
```
|
||||
|
||||
### Test Case 2: Custom Delay
|
||||
**Configuration:** `nip59_timestamp_max_delay_sec = 1000`
|
||||
|
||||
**Expected Behavior:**
|
||||
- Gift wrap timestamps should be between now and 1000 seconds ago
|
||||
- Seal timestamps should be between now and 1000 seconds ago
|
||||
- Random delay applied within specified range
|
||||
|
||||
**Test Command:**
|
||||
```bash
|
||||
# Set config via admin API
|
||||
# Send test DM
|
||||
# Verify timestamps are in past but within 1000 seconds
|
||||
```
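
The timestamp checks in Test Cases 1 and 2 can also be scripted. A rough helper sketch (not part of the plan) that validates a generated gift wrap against the configured delay, using cJSON as elsewhere in the codebase; the include path is an assumption:

```c
#include <time.h>
#include <cjson/cJSON.h>  /* include path assumed */

/* Returns 1 if the gift wrap's created_at lies within [now - max_delay_sec, now]. */
static int gift_wrap_timestamp_ok(const cJSON* gift_wrap, long max_delay_sec) {
    const cJSON* created_at = cJSON_GetObjectItem(gift_wrap, "created_at");
    if (!cJSON_IsNumber(created_at)) {
        return 0;
    }
    long ts = (long)cJSON_GetNumberValue(created_at);
    long now = (long)time(NULL);

    /* Allow one second of slack for time elapsed between creation and the check. */
    return ts <= now + 1 && ts >= now - max_delay_sec - 1;
}
```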
|
||||
|
||||
### Test Case 3: Default Behavior
|
||||
**Configuration:** `nip59_timestamp_max_delay_sec = 0` (default)
|
||||
|
||||
**Expected Behavior:**
|
||||
- Gift wrap timestamps should equal current time
|
||||
- Seal timestamps should equal current time
|
||||
- No randomization (maximum compatibility)
|
||||
|
||||
**Test Command:**
|
||||
```bash
|
||||
# Use default config
|
||||
# Send test DM
|
||||
# Verify timestamps are current (within 1 second of send time)
|
||||
```
|
||||
|
||||
### Test Case 4: Configuration Validation
|
||||
**Test Invalid Values:**
|
||||
- Negative value: Should be rejected
|
||||
- Value > 604800: Should be rejected
|
||||
- Valid boundary values (0, 604800): Should be accepted
|
||||
|
||||
### Test Case 5: Interoperability
|
||||
**Test with Other Nostr Clients:**
|
||||
- Send DM with `max_delay_sec = 0` to clients that don't randomize
|
||||
- Send DM with `max_delay_sec = 172800` to clients that do randomize
|
||||
- Verify both scenarios work correctly
|
||||
|
||||
---
|
||||
|
||||
## Documentation Updates
|
||||
|
||||
### Update docs/configuration_guide.md
|
||||
|
||||
Add new section:
|
||||
|
||||
```markdown
|
||||
### NIP-59 Gift Wrap Timestamp Configuration
|
||||
|
||||
#### nip59_timestamp_max_delay_sec
|
||||
- **Type:** Integer
|
||||
- **Default:** 0 (no randomization)
|
||||
- **Range:** 0 to 604800 (7 days)
|
||||
- **Description:** Controls timestamp randomization for NIP-59 gift wraps
|
||||
|
||||
The NIP-59 protocol recommends randomizing timestamps on gift wraps to prevent
|
||||
time-analysis attacks. However, some Nostr platforms don't implement this,
|
||||
causing compatibility issues.
|
||||
|
||||
**Values:**
|
||||
- `0` (default): No randomization - uses current timestamp (maximum compatibility)
|
||||
- `1-604800`: Random timestamp between now and N seconds ago
|
||||
|
||||
**Use Cases:**
|
||||
- Keep default `0` for maximum compatibility with clients that don't randomize
|
||||
- Set to `172800` for privacy per NIP-59 specification (2 days randomization)
|
||||
- Set to custom value (e.g., `3600`) for 1-hour randomization window
|
||||
|
||||
**Example:**
|
||||
```json
|
||||
["nip59_timestamp_max_delay_sec", "0"] // Default: compatibility mode
|
||||
["nip59_timestamp_max_delay_sec", "3600"] // 1 hour randomization
|
||||
["nip59_timestamp_max_delay_sec", "172800"] // 2 days randomization
|
||||
```
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Implementation Checklist
|
||||
|
||||
### nostr_core_lib Changes
|
||||
- [ ] Modify `random_past_timestamp()` to accept `max_delay_sec` parameter
|
||||
- [ ] Update `nostr_nip59_create_seal()` signature and implementation
|
||||
- [ ] Update `nostr_nip59_create_gift_wrap()` signature and implementation
|
||||
- [ ] Update `nip059.h` function declarations and documentation
|
||||
- [ ] Update `nostr_nip17_send_dm()` signature and implementation
|
||||
- [ ] Update `nip017.h` function declaration and documentation
|
||||
|
||||
### c-relay Changes
|
||||
- [ ] Add `nip59_timestamp_max_delay_sec` to `default_config_event.h`
|
||||
- [ ] Add validation in `config.c` for new parameter
|
||||
- [ ] Update `src/api.c` call site to pass `max_delay_sec`
|
||||
- [ ] Update `src/dm_admin.c` call site to pass `max_delay_sec`
|
||||
|
||||
### Testing
|
||||
- [ ] Test with `max_delay_sec = 0` (no randomization)
|
||||
- [ ] Test with `max_delay_sec = 1000` (custom delay)
|
||||
- [ ] Test with `max_delay_sec = 172800` (NIP-59 recommended 2-day randomization)
|
||||
- [ ] Test configuration validation (invalid values)
|
||||
- [ ] Test interoperability with other Nostr clients
|
||||
|
||||
### Documentation
|
||||
- [ ] Update `docs/configuration_guide.md`
|
||||
- [ ] Add this implementation plan to docs
|
||||
- [ ] Update README if needed
|
||||
|
||||
---
|
||||
|
||||
## Rollback Plan
|
||||
|
||||
If issues arise:
|
||||
1. Revert nostr_core_lib changes (git revert in submodule)
|
||||
2. Revert c-relay changes
|
||||
3. Configuration parameter will be ignored if not used
|
||||
4. Default behavior (0) provides maximum compatibility
|
||||
|
||||
---
|
||||
|
||||
## Notes
|
||||
|
||||
- The configuration is read on each DM send, allowing runtime changes
|
||||
- No restart required when changing `nip59_timestamp_max_delay_sec`
|
||||
- Thread-safe by design (no global state)
|
||||
- Default value of 0 provides maximum compatibility with other Nostr clients
|
||||
- Can be changed to 172800 or other values for NIP-59 privacy features
|
||||
|
||||
---
|
||||
|
||||
## References
|
||||
|
||||
- [NIP-59: Gift Wrap](https://github.com/nostr-protocol/nips/blob/master/59.md)
|
||||
- [NIP-17: Private Direct Messages](https://github.com/nostr-protocol/nips/blob/master/17.md)
|
||||
- [NIP-44: Versioned Encryption](https://github.com/nostr-protocol/nips/blob/master/44.md)
|
||||
209
docs/subscription_matching_debug_plan.md
Normal file
@@ -0,0 +1,209 @@
|
||||
# Subscription Matching Debug Plan
|
||||
|
||||
## Problem
|
||||
The relay is not matching kind 1059 gift wrap events (NIP-59, as used for NIP-17 DMs) to subscriptions, even though a subscription exists with a `kinds:[1059]` filter. The log shows:
|
||||
```
|
||||
Event broadcast complete: 0 subscriptions matched
|
||||
```
|
||||
|
||||
But we have this subscription:
|
||||
```
|
||||
sub:3 146.70.187.119 0x78edc9b43210 8m 27s kinds:[1059], since:10/23/2025, 4:27:59 PM, limit:50
|
||||
```
|
||||
|
||||
## Investigation Strategy
|
||||
|
||||
### 1. Add Debug Output to `event_matches_filter()` (lines 386-564)
|
||||
Add debug logging at each filter check to trace the matching logic:
|
||||
|
||||
- **Entry point**: Log the event kind and filter being tested
|
||||
- **Kinds filter check** (lines 392-415): Log whether kinds filter exists, the event kind value, and each filter kind being compared
|
||||
- **Authors filter check** (lines 417-442): Log if authors filter exists and matching results
|
||||
- **IDs filter check** (lines 444-469): Log if IDs filter exists and matching results
|
||||
- **Since filter check** (lines 471-482): Log the event timestamp vs filter since value
|
||||
- **Until filter check** (lines 484-495): Log the event timestamp vs filter until value
|
||||
- **Tag filters check** (lines 497-561): Log tag filter matching details
|
||||
- **Exit point**: Log whether the overall filter matched
|
||||
|
||||
### 2. Add Debug Output to `event_matches_subscription()` (lines 567-581)
|
||||
Add logging to show:
|
||||
- How many filters are in the subscription
|
||||
- Which filter (if any) matched
|
||||
- Overall subscription match result
|
||||
|
||||
### 3. Add Debug Output to `broadcast_event_to_subscriptions()` (lines 584-726)
|
||||
Add logging to show:
|
||||
- The event being broadcast (kind, id, created_at)
|
||||
- Total number of active subscriptions being checked
|
||||
- How many subscriptions matched after the first pass
|
||||
|
||||
### 4. Key Areas to Focus On
|
||||
|
||||
Based on the code analysis, the most likely issues are:
|
||||
|
||||
1. **Kind matching logic** (lines 392-415): The event kind might not be extracted correctly, or the comparison might be failing
|
||||
2. **Since timestamp** (lines 471-482): The subscription has a `since` filter - if the event timestamp is before this, it won't match
|
||||
3. **Event structure**: The event JSON might not have the expected structure
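
If the third possibility (event structure) is suspected, the quickest check is to dump the raw event next to the parsed filter so the two can be compared by eye. A throwaway sketch, assuming the `subscription_filter_t` fields used elsewhere in this plan:

```c
/* Throwaway inspection aid (not part of the permanent debug output). */
static void dump_filter_vs_event(subscription_filter_t* filter, cJSON* event) {
    char* event_str = cJSON_PrintUnformatted(event);
    char* kinds_str = filter->kinds ? cJSON_PrintUnformatted(filter->kinds) : NULL;

    DEBUG_TRACE("FILTER_DUMP: event=%s", event_str ? event_str : "null");
    DEBUG_TRACE("FILTER_DUMP: filter kinds=%s since=%ld",
                kinds_str ? kinds_str : "null",
                (long)filter->since);

    if (event_str) cJSON_free(event_str);
    if (kinds_str) cJSON_free(kinds_str);
}
```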
|
||||
|
||||
### 5. Specific Debug Additions
|
||||
|
||||
#### In `event_matches_filter()` at line 386:
|
||||
```c
|
||||
// Add at start of function
|
||||
cJSON* event_kind_obj = cJSON_GetObjectItem(event, "kind");
|
||||
cJSON* event_id_obj = cJSON_GetObjectItem(event, "id");
|
||||
cJSON* event_created_at_obj = cJSON_GetObjectItem(event, "created_at");
|
||||
|
||||
DEBUG_TRACE("FILTER_MATCH: Testing event kind=%d id=%.8s created_at=%ld",
|
||||
event_kind_obj ? (int)cJSON_GetNumberValue(event_kind_obj) : -1,
|
||||
event_id_obj && cJSON_IsString(event_id_obj) ? cJSON_GetStringValue(event_id_obj) : "null",
|
||||
event_created_at_obj ? (long)cJSON_GetNumberValue(event_created_at_obj) : 0);
|
||||
```
|
||||
|
||||
#### In kinds filter check (after line 392):
|
||||
```c
|
||||
if (filter->kinds && cJSON_IsArray(filter->kinds)) {
|
||||
DEBUG_TRACE("FILTER_MATCH: Checking kinds filter with %d kinds", cJSON_GetArraySize(filter->kinds));
|
||||
|
||||
cJSON* event_kind = cJSON_GetObjectItem(event, "kind");
|
||||
if (!event_kind || !cJSON_IsNumber(event_kind)) {
|
||||
DEBUG_WARN("FILTER_MATCH: Event has no valid kind field");
|
||||
return 0;
|
||||
}
|
||||
|
||||
int event_kind_val = (int)cJSON_GetNumberValue(event_kind);
|
||||
DEBUG_TRACE("FILTER_MATCH: Event kind=%d", event_kind_val);
|
||||
|
||||
int kind_match = 0;
|
||||
cJSON* kind_item = NULL;
|
||||
cJSON_ArrayForEach(kind_item, filter->kinds) {
|
||||
if (cJSON_IsNumber(kind_item)) {
|
||||
int filter_kind = (int)cJSON_GetNumberValue(kind_item);
|
||||
DEBUG_TRACE("FILTER_MATCH: Comparing event kind %d with filter kind %d", event_kind_val, filter_kind);
|
||||
if (filter_kind == event_kind_val) {
|
||||
kind_match = 1;
|
||||
DEBUG_TRACE("FILTER_MATCH: Kind matched!");
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (!kind_match) {
|
||||
DEBUG_TRACE("FILTER_MATCH: No kind match, filter rejected");
|
||||
return 0;
|
||||
}
|
||||
DEBUG_TRACE("FILTER_MATCH: Kinds filter passed");
|
||||
}
|
||||
```
|
||||
|
||||
#### In since filter check (after line 472):
|
||||
```c
|
||||
if (filter->since > 0) {
|
||||
cJSON* event_created_at = cJSON_GetObjectItem(event, "created_at");
|
||||
if (!event_created_at || !cJSON_IsNumber(event_created_at)) {
|
||||
DEBUG_WARN("FILTER_MATCH: Event has no valid created_at field");
|
||||
return 0;
|
||||
}
|
||||
|
||||
long event_timestamp = (long)cJSON_GetNumberValue(event_created_at);
|
||||
DEBUG_TRACE("FILTER_MATCH: Checking since filter: event_ts=%ld filter_since=%ld",
|
||||
event_timestamp, filter->since);
|
||||
|
||||
if (event_timestamp < filter->since) {
|
||||
DEBUG_TRACE("FILTER_MATCH: Event too old (before since), filter rejected");
|
||||
return 0;
|
||||
}
|
||||
DEBUG_TRACE("FILTER_MATCH: Since filter passed");
|
||||
}
|
||||
```
|
||||
|
||||
#### At end of `event_matches_filter()` (before line 563):
|
||||
```c
|
||||
DEBUG_TRACE("FILTER_MATCH: All filters passed, event matches!");
|
||||
return 1; // All filters passed
|
||||
```
|
||||
|
||||
#### In `event_matches_subscription()` at line 567:
|
||||
```c
|
||||
int event_matches_subscription(cJSON* event, subscription_t* subscription) {
|
||||
if (!event || !subscription || !subscription->filters) {
|
||||
return 0;
|
||||
}
|
||||
|
||||
DEBUG_TRACE("SUB_MATCH: Testing subscription '%s'", subscription->id);
|
||||
|
||||
int filter_num = 0;
|
||||
subscription_filter_t* filter = subscription->filters;
|
||||
while (filter) {
|
||||
filter_num++;
|
||||
DEBUG_TRACE("SUB_MATCH: Testing filter #%d", filter_num);
|
||||
|
||||
if (event_matches_filter(event, filter)) {
|
||||
DEBUG_TRACE("SUB_MATCH: Filter #%d matched! Subscription '%s' matches",
|
||||
filter_num, subscription->id);
|
||||
return 1; // Match found (OR logic)
|
||||
}
|
||||
filter = filter->next;
|
||||
}
|
||||
|
||||
DEBUG_TRACE("SUB_MATCH: No filters matched for subscription '%s'", subscription->id);
|
||||
return 0; // No filters matched
|
||||
}
|
||||
```
|
||||
|
||||
#### In `broadcast_event_to_subscriptions()` at line 584:
|
||||
```c
|
||||
int broadcast_event_to_subscriptions(cJSON* event) {
|
||||
if (!event) {
|
||||
return 0;
|
||||
}
|
||||
|
||||
// Log event details
|
||||
cJSON* event_kind = cJSON_GetObjectItem(event, "kind");
|
||||
cJSON* event_id = cJSON_GetObjectItem(event, "id");
|
||||
cJSON* event_created_at = cJSON_GetObjectItem(event, "created_at");
|
||||
|
||||
DEBUG_TRACE("BROADCAST: Event kind=%d id=%.8s created_at=%ld",
|
||||
event_kind ? (int)cJSON_GetNumberValue(event_kind) : -1,
|
||||
event_id && cJSON_IsString(event_id) ? cJSON_GetStringValue(event_id) : "null",
|
||||
event_created_at ? (long)cJSON_GetNumberValue(event_created_at) : 0);
|
||||
|
||||
// ... existing expiration check code ...
|
||||
|
||||
// After line 611 (before pthread_mutex_lock):
|
||||
pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);
|
||||
|
||||
int total_subs = 0;
|
||||
subscription_t* count_sub = g_subscription_manager.active_subscriptions;
|
||||
while (count_sub) {
|
||||
total_subs++;
|
||||
count_sub = count_sub->next;
|
||||
}
|
||||
DEBUG_TRACE("BROADCAST: Checking %d active subscriptions", total_subs);
|
||||
|
||||
subscription_t* sub = g_subscription_manager.active_subscriptions;
|
||||
// ... rest of matching logic ...
|
||||
```
|
||||
|
||||
## Expected Outcome
|
||||
|
||||
With these debug additions, we should see output like:
|
||||
```
|
||||
BROADCAST: Event kind=1059 id=abc12345 created_at=1729712279
|
||||
BROADCAST: Checking 1 active subscriptions
|
||||
SUB_MATCH: Testing subscription 'sub:3'
|
||||
SUB_MATCH: Testing filter #1
|
||||
FILTER_MATCH: Testing event kind=1059 id=abc12345 created_at=1729712279
|
||||
FILTER_MATCH: Checking kinds filter with 1 kinds
|
||||
FILTER_MATCH: Event kind=1059
|
||||
FILTER_MATCH: Comparing event kind 1059 with filter kind 1059
|
||||
FILTER_MATCH: Kind matched!
|
||||
FILTER_MATCH: Kinds filter passed
|
||||
FILTER_MATCH: Checking since filter: event_ts=1729712279 filter_since=1729708079
|
||||
FILTER_MATCH: Since filter passed
|
||||
FILTER_MATCH: All filters passed, event matches!
|
||||
SUB_MATCH: Filter #1 matched! Subscription 'sub:3' matches
|
||||
Event broadcast complete: 1 subscriptions matched
|
||||
```
|
||||
|
||||
This will help us identify exactly where the matching is failing.
|
||||
200
docs/websocket_write_queue_design.md
Normal file
@@ -0,0 +1,200 @@
|
||||
# WebSocket Write Queue Design
|
||||
|
||||
## Problem Statement
|
||||
|
||||
The current partial write handling implementation uses a single buffer per session, which fails when multiple events need to be sent to the same client in rapid succession. This causes:
|
||||
|
||||
1. First event gets partial write → queued successfully
|
||||
2. Second event tries to write → **FAILS** with "write already pending"
|
||||
3. Subsequent events fail similarly, causing data loss
|
||||
|
||||
### Server Log Evidence
|
||||
```
|
||||
[WARN] WS_FRAME_PARTIAL: EVENT partial write, sub=1 sent=3210 expected=5333
|
||||
[TRACE] Queued partial write: len=2123
|
||||
[WARN] WS_FRAME_PARTIAL: EVENT partial write, sub=1 sent=3210 expected=5333
|
||||
[WARN] queue_websocket_write: write already pending, cannot queue new write
|
||||
[ERROR] Failed to queue partial EVENT write for sub=1
|
||||
```
|
||||
|
||||
## Root Cause
|
||||
|
||||
WebSocket frames must be sent **atomically** - you cannot interleave multiple frames. The current single-buffer approach correctly enforces this, but it rejects new writes instead of queuing them.
|
||||
|
||||
## Solution: Write Queue Architecture
|
||||
|
||||
### Design Principles
|
||||
|
||||
1. **Frame Atomicity**: Complete one WebSocket frame before starting the next
|
||||
2. **Sequential Processing**: Process queued writes in FIFO order
|
||||
3. **Memory Safety**: Proper cleanup on connection close or errors
|
||||
4. **Thread Safety**: Protect queue operations with existing session lock
|
||||
|
||||
### Data Structures
|
||||
|
||||
#### Write Queue Node
|
||||
```c
|
||||
struct write_queue_node {
|
||||
unsigned char* buffer; // Buffer with LWS_PRE space
|
||||
size_t total_len; // Total length of data to write
|
||||
size_t offset; // How much has been written so far
|
||||
int write_type; // LWS_WRITE_TEXT, etc.
|
||||
struct write_queue_node* next; // Next node in queue
|
||||
};
|
||||
```
|
||||
|
||||
#### Per-Session Write Queue
|
||||
```c
|
||||
struct per_session_data {
|
||||
// ... existing fields ...
|
||||
|
||||
// Write queue for handling multiple pending writes
|
||||
struct write_queue_node* write_queue_head; // First item to write
|
||||
struct write_queue_node* write_queue_tail; // Last item in queue
|
||||
int write_queue_length; // Number of items in queue
|
||||
int write_in_progress; // Flag: 1 if currently writing
|
||||
};
|
||||
```
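
When a connection is established, the queue fields should start out empty. A minimal sketch, assuming the usual libwebsockets `LWS_CALLBACK_ESTABLISHED` case and that `per_session_data` is the `user` pointer passed to the protocol callback:

```c
/* Sketch: zero the write queue state when a session is created.
 * Assumes `user` is this connection's per_session_data instance. */
case LWS_CALLBACK_ESTABLISHED: {
    struct per_session_data* pss = (struct per_session_data*)user;
    pss->write_queue_head = NULL;
    pss->write_queue_tail = NULL;
    pss->write_queue_length = 0;
    pss->write_in_progress = 0;
    break;
}
```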
|
||||
|
||||
### Algorithm Flow
|
||||
|
||||
#### 1. Enqueue Write (`queue_websocket_write`)
|
||||
|
||||
```
|
||||
IF write_queue is empty AND no write in progress:
|
||||
- Attempt immediate write with lws_write()
|
||||
- IF complete:
|
||||
- Return success
|
||||
- ELSE (partial write):
|
||||
- Create queue node with remaining data
|
||||
- Add to queue
|
||||
- Set write_in_progress flag
|
||||
- Request LWS_CALLBACK_SERVER_WRITEABLE
|
||||
ELSE:
|
||||
- Create queue node with full data
|
||||
- Append to queue tail
|
||||
- IF no write in progress:
|
||||
- Request LWS_CALLBACK_SERVER_WRITEABLE
|
||||
```
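
A simplified C sketch of the enqueue path, using the structures above. For brevity it always defers the actual `lws_write()` to the writeable callback rather than attempting the immediate-write fast path; `lws_write()`, `lws_callback_on_writable()`, and `LWS_PRE` are standard libwebsockets APIs, and `pss->session_lock` is assumed to be the existing per-session `pthread_mutex_t`:

```c
#include <libwebsockets.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

/* Copy the payload into a node with LWS_PRE headroom (required by lws_write),
 * append it to the FIFO, and request a writeable callback. */
static int queue_websocket_write(struct lws* wsi, struct per_session_data* pss,
                                 const unsigned char* data, size_t len,
                                 enum lws_write_protocol write_type)
{
    struct write_queue_node* node = calloc(1, sizeof(*node));
    if (!node) return -1;

    node->buffer = malloc(LWS_PRE + len);
    if (!node->buffer) { free(node); return -1; }
    memcpy(node->buffer + LWS_PRE, data, len);
    node->total_len = len;
    node->offset = 0;
    node->write_type = (int)write_type;

    pthread_mutex_lock(&pss->session_lock);

    /* Append to the tail to preserve FIFO order. */
    if (pss->write_queue_tail) pss->write_queue_tail->next = node;
    else pss->write_queue_head = node;
    pss->write_queue_tail = node;
    pss->write_queue_length++;

    /* The actual lws_write() happens in LWS_CALLBACK_SERVER_WRITEABLE. */
    if (!pss->write_in_progress)
        lws_callback_on_writable(wsi);

    pthread_mutex_unlock(&pss->session_lock);
    return 0;
}
```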
|
||||
|
||||
#### 2. Process Queue (`process_pending_write`)
|
||||
|
||||
```
|
||||
WHILE write_queue is not empty:
|
||||
- Get head node
|
||||
- Calculate remaining data (total_len - offset)
|
||||
- Attempt write with lws_write()
|
||||
|
||||
IF write fails (< 0):
|
||||
- Log error
|
||||
- Remove and free head node
|
||||
- Continue to next node
|
||||
|
||||
ELSE IF partial write (< remaining):
|
||||
- Update offset
|
||||
- Request LWS_CALLBACK_SERVER_WRITEABLE
|
||||
- Break (wait for next callback)
|
||||
|
||||
ELSE (complete write):
|
||||
- Remove and free head node
|
||||
- Continue to next node
|
||||
|
||||
IF queue is empty:
|
||||
- Clear write_in_progress flag
|
||||
```
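
A corresponding sketch of the drain loop, run from `LWS_CALLBACK_SERVER_WRITEABLE` (same assumptions and headers as the enqueue sketch above):

```c
static void process_pending_write(struct lws* wsi, struct per_session_data* pss)
{
    pthread_mutex_lock(&pss->session_lock);
    pss->write_in_progress = 1;

    while (pss->write_queue_head) {
        struct write_queue_node* node = pss->write_queue_head;
        size_t remaining = node->total_len - node->offset;

        int n = lws_write(wsi, node->buffer + LWS_PRE + node->offset,
                          remaining, (enum lws_write_protocol)node->write_type);

        if (n >= 0 && (size_t)n < remaining) {
            /* Partial write: remember progress and wait for the next callback. */
            node->offset += (size_t)n;
            lws_callback_on_writable(wsi);
            pthread_mutex_unlock(&pss->session_lock);
            return;
        }

        /* Complete write, or error (n < 0): pop and free the head node. */
        pss->write_queue_head = node->next;
        if (!pss->write_queue_head) pss->write_queue_tail = NULL;
        pss->write_queue_length--;
        free(node->buffer);
        free(node);
    }

    pss->write_in_progress = 0;
    pthread_mutex_unlock(&pss->session_lock);
}
```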
|
||||
|
||||
#### 3. Cleanup (`LWS_CALLBACK_CLOSED`)
|
||||
|
||||
```
|
||||
WHILE write_queue is not empty:
|
||||
- Get head node
|
||||
- Free buffer
|
||||
- Free node
|
||||
- Move to next
|
||||
Clear queue pointers
|
||||
```
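
A sketch of that teardown inside the protocol callback's close case:

```c
case LWS_CALLBACK_CLOSED: {
    struct per_session_data* pss = (struct per_session_data*)user;
    pthread_mutex_lock(&pss->session_lock);

    /* Free every queued frame; none of it can be delivered now. */
    struct write_queue_node* node = pss->write_queue_head;
    while (node) {
        struct write_queue_node* next = node->next;
        free(node->buffer);
        free(node);
        node = next;
    }

    pss->write_queue_head = NULL;
    pss->write_queue_tail = NULL;
    pss->write_queue_length = 0;
    pss->write_in_progress = 0;

    pthread_mutex_unlock(&pss->session_lock);
    break;
}
```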
|
||||
|
||||
### Memory Management
|
||||
|
||||
1. **Allocation**: Each queue node allocates a buffer of `LWS_PRE + data_len` bytes
|
||||
2. **Ownership**: Queue owns all buffers until write completes or connection closes
|
||||
3. **Deallocation**: Free buffer and node when:
|
||||
- Write completes successfully
|
||||
- Write fails with error
|
||||
- Connection closes
|
||||
|
||||
### Thread Safety
|
||||
|
||||
- Use existing `pss->session_lock` to protect queue operations
|
||||
- Lock during:
|
||||
- Enqueue operations
|
||||
- Dequeue operations
|
||||
- Queue traversal for cleanup
|
||||
|
||||
### Performance Considerations
|
||||
|
||||
1. **Queue Length Limit**: Implement a maximum queue length (e.g., 100 items) to prevent memory exhaustion (a sketch of this check follows this list)
|
||||
2. **Memory Pressure**: Monitor total queued bytes per session
|
||||
3. **Backpressure**: If queue exceeds limit, close connection with NOTICE
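
One possible place to enforce the limit is in the enqueue path, after taking the session lock and before appending the new node. The constant and the NOTICE helper below are illustrative names, not existing code:

```c
/* Illustrative limit; not an existing define in the code base. */
#define MAX_WRITE_QUEUE_LENGTH 100

if (pss->write_queue_length >= MAX_WRITE_QUEUE_LENGTH) {
    pthread_mutex_unlock(&pss->session_lock);
    free(node->buffer);
    free(node);
    /* Backpressure: tell the client why and drop the connection.
     * send_client_notice() is a hypothetical stand-in for however
     * NOTICE frames are sent in the relay. */
    send_client_notice(wsi, "write queue overflow, closing connection");
    return -1;
}
```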
|
||||
|
||||
### Error Handling
|
||||
|
||||
1. **Allocation Failure**: Return error, log, send NOTICE to client
|
||||
2. **Write Failure**: Remove failed frame, continue with next
|
||||
3. **Queue Overflow**: Close connection with appropriate NOTICE
|
||||
|
||||
## Implementation Plan
|
||||
|
||||
### Phase 1: Data Structure Changes
|
||||
1. Add `write_queue_node` structure to `websockets.h`
|
||||
2. Update `per_session_data` with queue fields
|
||||
3. Remove old single-buffer fields
|
||||
|
||||
### Phase 2: Queue Operations
|
||||
1. Implement `enqueue_write()` helper
|
||||
2. Implement `dequeue_write()` helper
|
||||
3. Update `queue_websocket_write()` to use queue
|
||||
4. Update `process_pending_write()` to process queue
|
||||
|
||||
### Phase 3: Integration
|
||||
1. Update all `lws_write()` call sites
|
||||
2. Update `LWS_CALLBACK_CLOSED` cleanup
|
||||
3. Add queue length monitoring
|
||||
|
||||
### Phase 4: Testing
|
||||
1. Test with rapid multiple events to same client
|
||||
2. Test with large events (>4KB)
|
||||
3. Test under load with concurrent connections
|
||||
4. Verify no "Invalid frame header" errors
|
||||
|
||||
## Expected Outcomes
|
||||
|
||||
1. **No More Rejections**: All writes queued successfully
|
||||
2. **Frame Integrity**: Complete frames sent atomically
|
||||
3. **Memory Safety**: Proper cleanup on all paths
|
||||
4. **Performance**: Minimal overhead for queue management
|
||||
|
||||
## Metrics to Monitor
|
||||
|
||||
1. Average queue length per session
|
||||
2. Maximum queue length observed
|
||||
3. Queue overflow events (if limit implemented)
|
||||
4. Write completion rate
|
||||
5. Partial write frequency
|
||||
|
||||
## Alternative Approaches Considered
|
||||
|
||||
### 1. Larger Single Buffer
|
||||
**Rejected**: Doesn't solve the fundamental problem of multiple concurrent writes
|
||||
|
||||
### 2. Immediate Write Retry
|
||||
**Rejected**: Could cause busy-waiting and CPU waste
|
||||
|
||||
### 3. Drop Frames on Conflict
|
||||
**Rejected**: Violates reliability requirements
|
||||
|
||||
## References
|
||||
|
||||
- libwebsockets documentation on `lws_write()` and `LWS_CALLBACK_SERVER_WRITEABLE`
|
||||
- WebSocket RFC 6455 on frame structure
|
||||
- Nostr NIP-01 on relay-to-client communication
|
||||
@@ -122,7 +122,7 @@ increment_version() {
|
||||
print_status "New version: $NEW_VERSION"
|
||||
|
||||
# Update version in src/main.h
|
||||
update_version_in_header "$NEW_VERSION" "$MAJOR" "$NEW_MINOR" "$NEW_PATCH"
|
||||
update_version_in_header "$NEW_VERSION" "$MAJOR" "${NEW_MINOR:-$MINOR}" "${NEW_PATCH:-$PATCH}"
|
||||
|
||||
# Export for use in other functions
|
||||
export NEW_VERSION
|
||||
@@ -150,7 +150,7 @@ update_version_in_header() {
|
||||
sed -i "s/#define VERSION_MAJOR [0-9]\+/#define VERSION_MAJOR $major/" src/main.h
|
||||
|
||||
# Update VERSION_MINOR macro
|
||||
sed -i "s/#define VERSION_MINOR [0-9]\+/#define VERSION_MINOR $minor/" src/main.h
|
||||
sed -i "s/#define VERSION_MINOR .*/#define VERSION_MINOR $minor/" src/main.h
|
||||
|
||||
# Update VERSION_PATCH macro
|
||||
sed -i "s/#define VERSION_PATCH [0-9]\+/#define VERSION_PATCH $patch/" src/main.h
|
||||
|
||||
@@ -133,6 +133,11 @@ if [ -n "$PORT_OVERRIDE" ]; then
|
||||
fi
|
||||
fi
|
||||
|
||||
# Warn when test keys are used without a custom port (test keys imply strict port binding)
|
||||
if [ "$USE_TEST_KEYS" = true ] && [ -z "$PORT_OVERRIDE" ]; then
|
||||
echo "WARNING: --strict-port is always used with test keys. Consider specifying a custom port with -p."
|
||||
fi
|
||||
|
||||
# Validate debug level if provided
|
||||
if [ -n "$DEBUG_LEVEL" ]; then
|
||||
if ! [[ "$DEBUG_LEVEL" =~ ^[0-5]$ ]]; then
|
||||
@@ -163,6 +168,8 @@ if [ "$HELP" = true ]; then
|
||||
echo " $0 # Fresh start with random keys"
|
||||
echo " $0 -a <admin-hex> -r <relay-hex> # Use custom keys"
|
||||
echo " $0 -a <admin-hex> -p 9000 # Custom admin key on port 9000"
|
||||
echo " $0 -p 7777 --strict-port # Fail if port 7777 unavailable (no fallback)"
|
||||
echo " $0 -p 8080 --strict-port -d=3 # Custom port with strict binding and debug"
|
||||
echo " $0 --debug-level=3 # Start with debug level 3 (info)"
|
||||
echo " $0 -d=5 # Start with debug level 5 (trace)"
|
||||
echo " $0 --preserve-database # Preserve existing database and keys"
|
||||
|
||||
Submodule nostr_core_lib updated: 5066ba8dd0...a8dc2ed046
15
notes.txt
@@ -39,6 +39,11 @@ Even simpler: Use this one-liner
|
||||
cd /usr/local/bin/c_relay
|
||||
sudo -u c-relay ./c_relay --debug-level=5 & sleep 2 && sudo gdb -p $(pgrep c_relay)
|
||||
|
||||
Inside gdb, after attaching:
|
||||
|
||||
(gdb) continue
|
||||
Or shorter:
|
||||
(gdb) c
|
||||
|
||||
|
||||
How to View the Logs
|
||||
@@ -75,4 +80,12 @@ sudo systemctl status rsyslog
|
||||
|
||||
sudo -u c-relay ./c_relay --debug-level=5 -r 85d0b37e2ae822966dcadd06b2dc9368cde73865f90ea4d44f8b57d47ef0820a -a 1ec454734dcbf6fe54901ce25c0c7c6bca5edd89443416761fadc321d38df139
|
||||
|
||||
./c_relay_static_x86_64 -p 7889 --debug-level=5 -r 85d0b37e2ae822966dcadd06b2dc9368cde73865f90ea4d44f8b57d47ef0820a -a 1ec454734dcbf6fe54901ce25c0c7c6bca5edd89443416761fadc321d38df139
|
||||
./c_relay_static_x86_64 -p 7889 --debug-level=5 -r 85d0b37e2ae822966dcadd06b2dc9368cde73865f90ea4d44f8b57d47ef0820a -a 1ec454734dcbf6fe54901ce25c0c7c6bca5edd89443416761fadc321d38df139
|
||||
|
||||
|
||||
sudo ufw allow 8888/tcp
|
||||
sudo ufw delete allow 8888/tcp
|
||||
|
||||
lsof -i :7777
|
||||
kill $(lsof -t -i :7777)
|
||||
kill -9 $(lsof -t -i :7777)
|
||||
736
src/api.c
@@ -13,6 +13,7 @@
|
||||
#include <sys/stat.h>
|
||||
#include <unistd.h>
|
||||
#include <strings.h>
|
||||
#include <stdbool.h>
|
||||
#include "api.h"
|
||||
#include "embedded_web_content.h"
|
||||
#include "config.h"
|
||||
@@ -40,28 +41,17 @@ const char* get_config_value(const char* key);
|
||||
int get_config_bool(const char* key, int default_value);
|
||||
int update_config_in_table(const char* key, const char* value);
|
||||
|
||||
// Monitoring system state
|
||||
static time_t last_report_time = 0;
|
||||
// Monitoring system state (throttling now handled per-function)
|
||||
|
||||
// Forward declaration for monitoring helper function
|
||||
int generate_monitoring_event_for_type(const char* d_tag_value, cJSON* (*query_func)(void));
|
||||
|
||||
// Forward declaration for CPU metrics query function
|
||||
cJSON* query_cpu_metrics(void);
|
||||
|
||||
// Monitoring system helper functions
|
||||
int is_monitoring_enabled(void) {
|
||||
return get_config_bool("kind_34567_reporting_enabled", 0);
|
||||
}
|
||||
|
||||
int get_monitoring_throttle_seconds(void) {
|
||||
return get_config_int("kind_34567_reporting_throttling_sec", 5);
|
||||
}
|
||||
|
||||
int set_monitoring_enabled(int enabled) {
|
||||
const char* value = enabled ? "1" : "0";
|
||||
if (update_config_in_table("kind_34567_reporting_enabled", value) == 0) {
|
||||
DEBUG_INFO("Monitoring enabled state changed");
|
||||
return 0;
|
||||
}
|
||||
return -1;
|
||||
return get_config_int("kind_24567_reporting_throttle_sec", 5);
|
||||
}
|
||||
|
||||
// Query event kind distribution from database
|
||||
@@ -233,78 +223,32 @@ cJSON* query_top_pubkeys(void) {
|
||||
return top_pubkeys;
|
||||
}
|
||||
|
||||
// Query active subscriptions from in-memory manager (NO DATABASE QUERY)
|
||||
cJSON* query_active_subscriptions(void) {
|
||||
// Access the global subscription manager
|
||||
pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);
|
||||
|
||||
int total_subs = g_subscription_manager.total_subscriptions;
|
||||
int max_subs = g_subscription_manager.max_total_subscriptions;
|
||||
int max_per_client = g_subscription_manager.max_subscriptions_per_client;
|
||||
|
||||
// Calculate per-client statistics by iterating through active subscriptions
|
||||
int client_count = 0;
|
||||
int most_subs_per_client = 0;
|
||||
|
||||
// Count subscriptions per WebSocket connection
|
||||
subscription_t* current = g_subscription_manager.active_subscriptions;
|
||||
struct lws* last_wsi = NULL;
|
||||
int current_client_subs = 0;
|
||||
|
||||
while (current) {
|
||||
if (current->wsi != last_wsi) {
|
||||
// New client
|
||||
if (last_wsi != NULL) {
|
||||
client_count++;
|
||||
if (current_client_subs > most_subs_per_client) {
|
||||
most_subs_per_client = current_client_subs;
|
||||
}
|
||||
}
|
||||
last_wsi = current->wsi;
|
||||
current_client_subs = 1;
|
||||
} else {
|
||||
current_client_subs++;
|
||||
}
|
||||
current = current->next;
|
||||
}
|
||||
|
||||
// Handle last client
|
||||
if (last_wsi != NULL) {
|
||||
client_count++;
|
||||
if (current_client_subs > most_subs_per_client) {
|
||||
most_subs_per_client = current_client_subs;
|
||||
}
|
||||
}
|
||||
|
||||
pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);
|
||||
|
||||
// Calculate statistics
|
||||
double utilization_percentage = max_subs > 0 ? (total_subs * 100.0 / max_subs) : 0.0;
|
||||
double avg_subs_per_client = client_count > 0 ? (total_subs * 1.0 / client_count) : 0.0;
|
||||
|
||||
// Build JSON response matching the design spec
|
||||
cJSON* subscriptions = cJSON_CreateObject();
|
||||
cJSON_AddStringToObject(subscriptions, "data_type", "active_subscriptions");
|
||||
cJSON_AddNumberToObject(subscriptions, "timestamp", (double)time(NULL));
|
||||
|
||||
cJSON* data = cJSON_CreateObject();
|
||||
cJSON_AddNumberToObject(data, "total_subscriptions", total_subs);
|
||||
cJSON_AddNumberToObject(data, "max_subscriptions", max_subs);
|
||||
cJSON_AddNumberToObject(data, "utilization_percentage", utilization_percentage);
|
||||
cJSON_AddNumberToObject(data, "subscriptions_per_client_avg", avg_subs_per_client);
|
||||
cJSON_AddNumberToObject(data, "most_subscriptions_per_client", most_subs_per_client);
|
||||
cJSON_AddNumberToObject(data, "max_subscriptions_per_client", max_per_client);
|
||||
cJSON_AddNumberToObject(data, "active_clients", client_count);
|
||||
|
||||
cJSON_AddItemToObject(subscriptions, "data", data);
|
||||
|
||||
return subscriptions;
|
||||
}
|
||||
|
||||
// Query detailed subscription information from in-memory manager (ADMIN ONLY)
|
||||
// Query detailed subscription information from database log (ADMIN ONLY)
|
||||
// Uses subscriptions table instead of in-memory iteration to avoid mutex contention
|
||||
cJSON* query_subscription_details(void) {
|
||||
// Access the global subscription manager
|
||||
pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);
|
||||
extern sqlite3* g_db;
|
||||
if (!g_db) {
|
||||
DEBUG_ERROR("Database not available for subscription details query");
|
||||
return NULL;
|
||||
}
|
||||
|
||||
// Query active subscriptions from the active_subscriptions_log view
|
||||
// This view properly handles deduplication of closed/expired subscriptions
|
||||
sqlite3_stmt* stmt;
|
||||
const char* sql =
|
||||
"SELECT * "
|
||||
"FROM active_subscriptions_log "
|
||||
"ORDER BY created_at DESC LIMIT 100";
|
||||
|
||||
// DEBUG: Log the query results for debugging subscription_details
|
||||
DEBUG_LOG("=== SUBSCRIPTION_DETAILS QUERY DEBUG ===");
|
||||
DEBUG_LOG("Query: %s", sql);
|
||||
|
||||
if (sqlite3_prepare_v2(g_db, sql, -1, &stmt, NULL) != SQLITE_OK) {
|
||||
DEBUG_ERROR("Failed to prepare subscription details query");
|
||||
return NULL;
|
||||
}
|
||||
|
||||
time_t current_time = time(NULL);
|
||||
cJSON* subscriptions_data = cJSON_CreateObject();
|
||||
@@ -314,70 +258,50 @@ cJSON* query_subscription_details(void) {
|
||||
cJSON* data = cJSON_CreateObject();
|
||||
cJSON* subscriptions_array = cJSON_CreateArray();
|
||||
|
||||
// Iterate through all active subscriptions
|
||||
subscription_t* current = g_subscription_manager.active_subscriptions;
|
||||
while (current) {
|
||||
// Iterate through query results
|
||||
int row_count = 0;
|
||||
while (sqlite3_step(stmt) == SQLITE_ROW) {
|
||||
row_count++;
|
||||
cJSON* sub_obj = cJSON_CreateObject();
|
||||
|
||||
// Basic subscription info
|
||||
cJSON_AddStringToObject(sub_obj, "id", current->id);
|
||||
cJSON_AddStringToObject(sub_obj, "client_ip", current->client_ip);
|
||||
cJSON_AddNumberToObject(sub_obj, "created_at", (double)current->created_at);
|
||||
cJSON_AddNumberToObject(sub_obj, "duration_seconds", (double)(current_time - current->created_at));
|
||||
cJSON_AddNumberToObject(sub_obj, "events_sent", current->events_sent);
|
||||
cJSON_AddBoolToObject(sub_obj, "active", current->active);
|
||||
// Extract subscription data from database
|
||||
const char* sub_id = (const char*)sqlite3_column_text(stmt, 0);
|
||||
const char* client_ip = (const char*)sqlite3_column_text(stmt, 1);
|
||||
const char* filter_json = (const char*)sqlite3_column_text(stmt, 2);
|
||||
long long events_sent = sqlite3_column_int64(stmt, 3);
|
||||
long long created_at = sqlite3_column_int64(stmt, 4);
|
||||
long long duration_seconds = sqlite3_column_int64(stmt, 5);
|
||||
|
||||
// Extract filter details
|
||||
cJSON* filters_array = cJSON_CreateArray();
|
||||
subscription_filter_t* filter = current->filters;
|
||||
// DEBUG: Log each subscription found
|
||||
DEBUG_LOG("Row %d: sub_id=%s, client_ip=%s, events_sent=%lld, created_at=%lld",
|
||||
row_count, sub_id ? sub_id : "NULL", client_ip ? client_ip : "NULL",
|
||||
events_sent, created_at);
|
||||
|
||||
while (filter) {
|
||||
cJSON* filter_obj = cJSON_CreateObject();
|
||||
// Add basic subscription info
|
||||
cJSON_AddStringToObject(sub_obj, "id", sub_id ? sub_id : "");
|
||||
cJSON_AddStringToObject(sub_obj, "client_ip", client_ip ? client_ip : "");
|
||||
cJSON_AddNumberToObject(sub_obj, "created_at", (double)created_at);
|
||||
cJSON_AddNumberToObject(sub_obj, "duration_seconds", (double)duration_seconds);
|
||||
cJSON_AddNumberToObject(sub_obj, "events_sent", events_sent);
|
||||
cJSON_AddBoolToObject(sub_obj, "active", 1); // All from this view are active
|
||||
|
||||
// Add kinds array if present
|
||||
if (filter->kinds) {
|
||||
cJSON_AddItemToObject(filter_obj, "kinds", cJSON_Duplicate(filter->kinds, 1));
|
||||
// Parse and add filter JSON if available
|
||||
if (filter_json) {
|
||||
cJSON* filters = cJSON_Parse(filter_json);
|
||||
if (filters) {
|
||||
cJSON_AddItemToObject(sub_obj, "filters", filters);
|
||||
} else {
|
||||
// If parsing fails, add empty array
|
||||
cJSON_AddItemToObject(sub_obj, "filters", cJSON_CreateArray());
|
||||
}
|
||||
|
||||
// Add authors array if present
|
||||
if (filter->authors) {
|
||||
cJSON_AddItemToObject(filter_obj, "authors", cJSON_Duplicate(filter->authors, 1));
|
||||
}
|
||||
|
||||
// Add ids array if present
|
||||
if (filter->ids) {
|
||||
cJSON_AddItemToObject(filter_obj, "ids", cJSON_Duplicate(filter->ids, 1));
|
||||
}
|
||||
|
||||
// Add since/until timestamps if set
|
||||
if (filter->since > 0) {
|
||||
cJSON_AddNumberToObject(filter_obj, "since", (double)filter->since);
|
||||
}
|
||||
if (filter->until > 0) {
|
||||
cJSON_AddNumberToObject(filter_obj, "until", (double)filter->until);
|
||||
}
|
||||
|
||||
// Add limit if set
|
||||
if (filter->limit > 0) {
|
||||
cJSON_AddNumberToObject(filter_obj, "limit", filter->limit);
|
||||
}
|
||||
|
||||
// Add tag filters if present
|
||||
if (filter->tag_filters) {
|
||||
cJSON_AddItemToObject(filter_obj, "tag_filters", cJSON_Duplicate(filter->tag_filters, 1));
|
||||
}
|
||||
|
||||
cJSON_AddItemToArray(filters_array, filter_obj);
|
||||
filter = filter->next;
|
||||
} else {
|
||||
cJSON_AddItemToObject(sub_obj, "filters", cJSON_CreateArray());
|
||||
}
|
||||
|
||||
cJSON_AddItemToObject(sub_obj, "filters", filters_array);
|
||||
cJSON_AddItemToArray(subscriptions_array, sub_obj);
|
||||
|
||||
current = current->next;
|
||||
}
|
||||
|
||||
pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);
|
||||
sqlite3_finalize(stmt);
|
||||
|
||||
// Add subscriptions array and count to data
|
||||
cJSON_AddItemToObject(data, "subscriptions", subscriptions_array);
|
||||
@@ -385,11 +309,15 @@ cJSON* query_subscription_details(void) {
|
||||
|
||||
cJSON_AddItemToObject(subscriptions_data, "data", data);
|
||||
|
||||
// DEBUG: Log final summary
|
||||
DEBUG_LOG("Total subscriptions found: %d", cJSON_GetArraySize(subscriptions_array));
|
||||
DEBUG_LOG("=== END SUBSCRIPTION_DETAILS QUERY DEBUG ===");
|
||||
|
||||
return subscriptions_data;
|
||||
}
|
||||
|
||||
// Generate and broadcast monitoring event
|
||||
int generate_monitoring_event(void) {
|
||||
// Generate event-driven monitoring events (triggered by event storage)
|
||||
int generate_event_driven_monitoring(void) {
|
||||
// Generate event_kinds monitoring event
|
||||
if (generate_monitoring_event_for_type("event_kinds", query_event_kind_distribution) != 0) {
|
||||
DEBUG_ERROR("Failed to generate event_kinds monitoring event");
|
||||
@@ -408,22 +336,39 @@ int generate_monitoring_event(void) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
// Generate active_subscriptions monitoring event
|
||||
if (generate_monitoring_event_for_type("active_subscriptions", query_active_subscriptions) != 0) {
|
||||
DEBUG_ERROR("Failed to generate active_subscriptions monitoring event");
|
||||
|
||||
// Generate CPU metrics monitoring event (also triggered by event storage)
|
||||
if (generate_monitoring_event_for_type("cpu_metrics", query_cpu_metrics) != 0) {
|
||||
DEBUG_ERROR("Failed to generate cpu_metrics monitoring event");
|
||||
return -1;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
// Generate subscription-driven monitoring events (triggered by subscription changes)
|
||||
int generate_subscription_driven_monitoring(void) {
|
||||
|
||||
// Generate subscription_details monitoring event (admin-only)
|
||||
if (generate_monitoring_event_for_type("subscription_details", query_subscription_details) != 0) {
|
||||
DEBUG_ERROR("Failed to generate subscription_details monitoring event");
|
||||
return -1;
|
||||
}
|
||||
|
||||
DEBUG_INFO("Generated and broadcast all monitoring events");
|
||||
// Generate CPU metrics monitoring event (also triggered by subscription changes)
|
||||
if (generate_monitoring_event_for_type("cpu_metrics", query_cpu_metrics) != 0) {
|
||||
DEBUG_ERROR("Failed to generate cpu_metrics monitoring event");
|
||||
return -1;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
// Generate and broadcast monitoring event (legacy function - now calls event-driven version)
|
||||
int generate_monitoring_event(void) {
|
||||
return generate_event_driven_monitoring();
|
||||
}
|
||||
|
||||
// Helper function to generate monitoring event for a specific type
|
||||
int generate_monitoring_event_for_type(const char* d_tag_value, cJSON* (*query_func)(void)) {
|
||||
// Query the monitoring data
|
||||
@@ -461,12 +406,12 @@ int generate_monitoring_event_for_type(const char* d_tag_value, cJSON* (*query_f
|
||||
}
|
||||
free(relay_privkey_hex);
|
||||
|
||||
// Create monitoring event (kind 34567)
|
||||
// Create monitoring event (kind 24567 - ephemeral)
|
||||
cJSON* monitoring_event = cJSON_CreateObject();
|
||||
cJSON_AddStringToObject(monitoring_event, "id", ""); // Will be set by signing
|
||||
cJSON_AddStringToObject(monitoring_event, "pubkey", relay_pubkey);
|
||||
cJSON_AddNumberToObject(monitoring_event, "created_at", (double)time(NULL));
|
||||
cJSON_AddNumberToObject(monitoring_event, "kind", 34567);
|
||||
cJSON_AddNumberToObject(monitoring_event, "kind", 24567);
|
||||
cJSON_AddStringToObject(monitoring_event, "content", content_json);
|
||||
|
||||
// Create tags array with d tag for identification
|
||||
@@ -482,7 +427,7 @@ int generate_monitoring_event_for_type(const char* d_tag_value, cJSON* (*query_f
|
||||
|
||||
// Use the library function to create and sign the event
|
||||
cJSON* signed_event = nostr_create_and_sign_event(
|
||||
34567, // kind
|
||||
24567, // kind (ephemeral)
|
||||
cJSON_GetStringValue(cJSON_GetObjectItem(monitoring_event, "content")), // content
|
||||
tags, // tags
|
||||
relay_privkey, // private key
|
||||
@@ -500,55 +445,58 @@ int generate_monitoring_event_for_type(const char* d_tag_value, cJSON* (*query_f
|
||||
cJSON_Delete(monitoring_event);
|
||||
monitoring_event = signed_event;
|
||||
|
||||
// Broadcast the event to active subscriptions
|
||||
// Broadcast the ephemeral event to active subscriptions (no database storage)
|
||||
broadcast_event_to_subscriptions(monitoring_event);
|
||||
|
||||
// Store in database
|
||||
int store_result = store_event(monitoring_event);
|
||||
|
||||
cJSON_Delete(monitoring_event);
|
||||
free(content_json);
|
||||
|
||||
if (store_result != 0) {
|
||||
DEBUG_ERROR("Failed to store monitoring event (%s)", d_tag_value);
|
||||
return -1;
|
||||
}
|
||||
|
||||
DEBUG_LOG("Monitoring event broadcast (ephemeral kind 24567, type: %s)", d_tag_value);
|
||||
return 0;
|
||||
}
|
||||
|
||||
// Monitoring hook called when an event is stored
|
||||
void monitoring_on_event_stored(void) {
|
||||
// Check if monitoring is enabled
|
||||
if (!is_monitoring_enabled()) {
|
||||
// Check throttling first (cheapest check)
|
||||
static time_t last_monitoring_time = 0;
|
||||
time_t current_time = time(NULL);
|
||||
int throttle_seconds = get_monitoring_throttle_seconds();
|
||||
|
||||
if (current_time - last_monitoring_time < throttle_seconds) {
|
||||
return;
|
||||
}
|
||||
|
||||
// Check throttling
|
||||
time_t now = time(NULL);
|
||||
// Check if anyone is subscribed to monitoring events (kind 24567)
|
||||
// This is the ONLY activation check needed - if someone subscribes, they want monitoring
|
||||
if (!has_subscriptions_for_kind(24567)) {
|
||||
return; // No subscribers = no expensive operations
|
||||
}
|
||||
|
||||
// Generate event-driven monitoring events only when someone is listening
|
||||
last_monitoring_time = current_time;
|
||||
generate_event_driven_monitoring();
|
||||
}
|
||||
|
||||
// Monitoring hook called when subscriptions change (create/close)
|
||||
void monitoring_on_subscription_change(void) {
|
||||
// Check throttling first (cheapest check)
|
||||
static time_t last_monitoring_time = 0;
|
||||
time_t current_time = time(NULL);
|
||||
int throttle_seconds = get_monitoring_throttle_seconds();
|
||||
|
||||
if (now - last_report_time < throttle_seconds) {
|
||||
return; // Too soon, skip this report
|
||||
if (current_time - last_monitoring_time < throttle_seconds) {
|
||||
return;
|
||||
}
|
||||
|
||||
// Generate and broadcast monitoring event
|
||||
if (generate_monitoring_event() == 0) {
|
||||
last_report_time = now;
|
||||
// Check if anyone is subscribed to monitoring events (kind 24567)
|
||||
// This is the ONLY activation check needed - if someone subscribes, they want monitoring
|
||||
if (!has_subscriptions_for_kind(24567)) {
|
||||
return; // No subscribers = no expensive operations
|
||||
}
|
||||
}
|
||||
|
||||
// Initialize monitoring system
|
||||
int init_monitoring_system(void) {
|
||||
last_report_time = 0;
|
||||
DEBUG_INFO("Monitoring system initialized");
|
||||
return 0;
|
||||
}
|
||||
|
||||
// Cleanup monitoring system
|
||||
void cleanup_monitoring_system(void) {
|
||||
// No cleanup needed for monitoring system
|
||||
DEBUG_INFO("Monitoring system cleaned up");
|
||||
// Generate subscription-driven monitoring events only when someone is listening
|
||||
last_monitoring_time = current_time;
|
||||
generate_subscription_driven_monitoring();
|
||||
}
|
||||
|
||||
// Forward declaration for known_configs (defined in config.c)
|
||||
@@ -778,7 +726,7 @@ int send_admin_response(const char* sender_pubkey, const char* response_content,
|
||||
}
|
||||
|
||||
// Encrypt response content using NIP-44
|
||||
char encrypted_content[16384]; // Buffer for encrypted content (increased size)
|
||||
char encrypted_content[131072]; // Buffer for encrypted content (128KB to handle large SQL responses)
|
||||
int encrypt_result = nostr_nip44_encrypt(
|
||||
relay_privkey, // sender private key (bytes)
|
||||
sender_pubkey_bytes, // recipient public key (bytes)
|
||||
@@ -1140,6 +1088,68 @@ int handle_embedded_file_writeable(struct lws* wsi) {
|
||||
|
||||
return 0;
|
||||
}
|
||||
// Query CPU usage metrics
|
||||
cJSON* query_cpu_metrics(void) {
|
||||
cJSON* cpu_stats = cJSON_CreateObject();
|
||||
cJSON_AddStringToObject(cpu_stats, "data_type", "cpu_metrics");
|
||||
cJSON_AddNumberToObject(cpu_stats, "timestamp", (double)time(NULL));
|
||||
|
||||
// Read process CPU times from /proc/self/stat
|
||||
FILE* proc_stat = fopen("/proc/self/stat", "r");
|
||||
if (proc_stat) {
|
||||
unsigned long utime, stime; // user and system CPU time in clock ticks
|
||||
if (fscanf(proc_stat, "%*d %*s %*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %lu %lu", &utime, &stime) == 2) {
|
||||
unsigned long total_proc_time = utime + stime;
|
||||
|
||||
// Get system CPU times from /proc/stat
|
||||
FILE* sys_stat = fopen("/proc/stat", "r");
|
||||
if (sys_stat) {
|
||||
unsigned long user, nice, system, idle, iowait, irq, softirq;
|
||||
if (fscanf(sys_stat, "cpu %lu %lu %lu %lu %lu %lu %lu", &user, &nice, &system, &idle, &iowait, &irq, &softirq) == 7) {
|
||||
unsigned long total_sys_time = user + nice + system + idle + iowait + irq + softirq;
|
||||
|
||||
// Calculate CPU percentages (simplified - would need deltas for accuracy)
|
||||
// For now, just store the raw values - frontend can calculate deltas
|
||||
cJSON_AddNumberToObject(cpu_stats, "process_cpu_time", (double)total_proc_time);
|
||||
cJSON_AddNumberToObject(cpu_stats, "system_cpu_time", (double)total_sys_time);
|
||||
cJSON_AddNumberToObject(cpu_stats, "system_idle_time", (double)idle);
|
||||
}
|
||||
fclose(sys_stat);
|
||||
}
|
||||
|
||||
// Get current CPU core the process is running on
|
||||
int current_core = sched_getcpu();
|
||||
if (current_core >= 0) {
|
||||
cJSON_AddNumberToObject(cpu_stats, "current_cpu_core", current_core);
|
||||
}
|
||||
}
|
||||
fclose(proc_stat);
|
||||
}
|
||||
|
||||
// Get process ID
|
||||
pid_t pid = getpid();
|
||||
cJSON_AddNumberToObject(cpu_stats, "process_id", (double)pid);
|
||||
|
||||
// Get memory usage from /proc/self/status
|
||||
FILE* mem_stat = fopen("/proc/self/status", "r");
|
||||
if (mem_stat) {
|
||||
char line[256];
|
||||
while (fgets(line, sizeof(line), mem_stat)) {
|
||||
if (strncmp(line, "VmRSS:", 6) == 0) {
|
||||
unsigned long rss_kb;
|
||||
if (sscanf(line, "VmRSS: %lu kB", &rss_kb) == 1) {
|
||||
double rss_mb = rss_kb / 1024.0;
|
||||
cJSON_AddNumberToObject(cpu_stats, "memory_usage_mb", rss_mb);
|
||||
}
|
||||
break;
|
||||
}
|
||||
}
|
||||
fclose(mem_stat);
|
||||
}
|
||||
|
||||
return cpu_stats;
|
||||
}
|
||||
|
||||
// Generate stats JSON from database queries
|
||||
char* generate_stats_json(void) {
|
||||
extern sqlite3* g_db;
|
||||
@@ -1304,6 +1314,9 @@ int send_nip17_response(const char* sender_pubkey, const char* response_content,
|
||||
return -1;
|
||||
}
|
||||
|
||||
// Get timestamp delay configuration
|
||||
long max_delay_sec = get_config_int("nip59_timestamp_max_delay_sec", 0);
|
||||
|
||||
// Create and sign gift wrap using library function
|
||||
cJSON* gift_wraps[1];
|
||||
int send_result = nostr_nip17_send_dm(
|
||||
@@ -1312,7 +1325,8 @@ int send_nip17_response(const char* sender_pubkey, const char* response_content,
|
||||
1, // num_recipients
|
||||
relay_privkey, // sender_private_key
|
||||
gift_wraps, // gift_wraps_out
|
||||
1 // max_gift_wraps
|
||||
1, // max_gift_wraps
|
||||
max_delay_sec // max_delay_sec
|
||||
);
|
||||
|
||||
cJSON_Delete(dm_response);
|
||||
@@ -2221,6 +2235,306 @@ int process_config_change_request(const char* admin_pubkey, const char* message)
|
||||
return 1; // Confirmation sent
|
||||
}
|
||||
|
||||
// Forward declarations for relay event creation functions
|
||||
cJSON* create_relay_metadata_event(cJSON* metadata);
|
||||
cJSON* create_relay_dm_list_event(cJSON* dm_relays);
|
||||
cJSON* create_relay_list_event(cJSON* relays);
|
||||
|
||||
// Handle create_relay_event admin commands
|
||||
int handle_create_relay_event_command(cJSON* event, int kind, cJSON* event_data, char* error_message, size_t error_size, struct lws* wsi) {
|
||||
if (!event || !event_data || !error_message) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
// Get request event ID for response correlation
|
||||
cJSON* request_id_obj = cJSON_GetObjectItem(event, "id");
|
||||
if (!request_id_obj || !cJSON_IsString(request_id_obj)) {
|
||||
snprintf(error_message, error_size, "Missing request event ID");
|
||||
return -1;
|
||||
}
|
||||
const char* request_id = cJSON_GetStringValue(request_id_obj);
|
||||
|
||||
// Get sender pubkey for response
|
||||
cJSON* sender_pubkey_obj = cJSON_GetObjectItem(event, "pubkey");
|
||||
if (!sender_pubkey_obj || !cJSON_IsString(sender_pubkey_obj)) {
|
||||
snprintf(error_message, error_size, "Missing sender pubkey");
|
||||
return -1;
|
||||
}
|
||||
const char* sender_pubkey = cJSON_GetStringValue(sender_pubkey_obj);
|
||||
|
||||
// Create the relay event based on kind
|
||||
cJSON* relay_event = NULL;
|
||||
switch (kind) {
|
||||
case 0: // User metadata
|
||||
relay_event = create_relay_metadata_event(event_data);
|
||||
break;
|
||||
case 10050: // DM relay list
|
||||
relay_event = create_relay_dm_list_event(event_data);
|
||||
break;
|
||||
case 10002: // Relay list
|
||||
relay_event = create_relay_list_event(event_data);
|
||||
break;
|
||||
default: {
|
||||
char response_content[256];
|
||||
snprintf(response_content, sizeof(response_content),
|
||||
"❌ Unsupported event kind: %d\n\nSupported kinds: 0 (metadata), 10050 (DM relays), 10002 (relays)",
|
||||
kind);
|
||||
return send_admin_response(sender_pubkey, response_content, request_id, error_message, error_size, wsi);
|
||||
}
|
||||
}
|
||||
|
||||
if (!relay_event) {
|
||||
char response_content[128];
|
||||
snprintf(response_content, sizeof(response_content),
|
||||
"❌ Failed to create relay event (kind %d)\n\nCheck relay logs for details.", kind);
|
||||
return send_admin_response(sender_pubkey, response_content, request_id, error_message, error_size, wsi);
|
||||
}
|
||||
|
||||
// Store the event in database
|
||||
int store_result = store_event(relay_event);
|
||||
if (store_result != 0) {
|
||||
cJSON_Delete(relay_event);
|
||||
char response_content[128];
|
||||
snprintf(response_content, sizeof(response_content),
|
||||
"❌ Failed to store relay event (kind %d) in database", kind);
|
||||
return send_admin_response(sender_pubkey, response_content, request_id, error_message, error_size, wsi);
|
||||
}
|
||||
|
||||
// Broadcast the event to connected clients
|
||||
broadcast_event_to_subscriptions(relay_event);
|
||||
|
||||
// Clean up
|
||||
cJSON_Delete(relay_event);
|
||||
|
||||
// Send success response (plain text like other admin commands)
|
||||
char response_content[256];
|
||||
const char* kind_name = (kind == 0) ? "metadata" : (kind == 10050) ? "DM relay list" : "relay list";
|
||||
snprintf(response_content, sizeof(response_content),
|
||||
"✅ Relay event created successfully\n\nKind: %d (%s)\n\nEvent has been stored and broadcast to subscribers.",
|
||||
kind, kind_name);
|
||||
|
||||
return send_admin_response(sender_pubkey, response_content, request_id, error_message, error_size, wsi);
|
||||
}
|
||||
|
||||
// Create a relay metadata event (kind 0)
|
||||
cJSON* create_relay_metadata_event(cJSON* metadata) {
|
||||
if (!metadata || !cJSON_IsObject(metadata)) {
|
||||
DEBUG_ERROR("Invalid metadata object for kind 0 event");
|
||||
return NULL;
|
||||
}
|
||||
|
||||
// Get relay keys
|
||||
const char* relay_pubkey = get_config_value("relay_pubkey");
|
||||
char* relay_privkey_hex = get_relay_private_key();
|
||||
if (!relay_pubkey || !relay_privkey_hex) {
|
||||
DEBUG_ERROR("Could not get relay keys for metadata event");
|
||||
if (relay_privkey_hex) free(relay_privkey_hex);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
// Convert relay private key to bytes
|
||||
unsigned char relay_privkey[32];
|
||||
if (nostr_hex_to_bytes(relay_privkey_hex, relay_privkey, sizeof(relay_privkey)) != 0) {
|
||||
free(relay_privkey_hex);
|
||||
DEBUG_ERROR("Failed to convert relay private key for metadata event");
|
||||
return NULL;
|
||||
}
|
||||
free(relay_privkey_hex);
|
||||
|
||||
// Create metadata content
|
||||
char* content = cJSON_Print(metadata);
|
||||
if (!content) {
|
||||
DEBUG_ERROR("Failed to serialize metadata for kind 0 event");
|
||||
return NULL;
|
||||
}
|
||||
|
||||
// Create and sign the event
|
||||
cJSON* signed_event = nostr_create_and_sign_event(
|
||||
0, // kind (metadata)
|
||||
content, // content
|
||||
NULL, // tags (none for kind 0)
|
||||
relay_privkey, // private key
|
||||
(time_t)time(NULL) // timestamp
|
||||
);
|
||||
|
||||
free(content);
|
||||
|
||||
if (!signed_event) {
|
||||
DEBUG_ERROR("Failed to create and sign metadata event");
|
||||
return NULL;
|
||||
}
|
||||
|
||||
DEBUG_LOG("Created relay metadata event (kind 0)");
|
||||
return signed_event;
|
||||
}
|
||||
|
||||
// Create a relay DM list event (kind 10050)
|
||||
cJSON* create_relay_dm_list_event(cJSON* dm_relays) {
|
||||
if (!dm_relays || !cJSON_IsObject(dm_relays)) {
|
||||
DEBUG_ERROR("Invalid DM relays object for kind 10050 event");
|
||||
return NULL;
|
||||
}
|
||||
|
||||
// Get relay keys
|
||||
const char* relay_pubkey = get_config_value("relay_pubkey");
|
||||
char* relay_privkey_hex = get_relay_private_key();
|
||||
if (!relay_pubkey || !relay_privkey_hex) {
|
||||
DEBUG_ERROR("Could not get relay keys for DM list event");
|
||||
if (relay_privkey_hex) free(relay_privkey_hex);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
// Convert relay private key to bytes
|
||||
unsigned char relay_privkey[32];
|
||||
if (nostr_hex_to_bytes(relay_privkey_hex, relay_privkey, sizeof(relay_privkey)) != 0) {
|
||||
free(relay_privkey_hex);
|
||||
DEBUG_ERROR("Failed to convert relay private key for DM list event");
|
||||
return NULL;
|
||||
}
|
||||
free(relay_privkey_hex);
|
||||
|
||||
// Create empty content for kind 10050
|
||||
const char* content = "";
|
||||
|
||||
// Create tags from relay list
|
||||
cJSON* tags = cJSON_CreateArray();
|
||||
if (!tags) {
|
||||
DEBUG_ERROR("Failed to create tags array for DM list event");
|
||||
return NULL;
|
||||
}
|
||||
|
||||
// Extract relays array
|
||||
cJSON* relays_array = cJSON_GetObjectItem(dm_relays, "relays");
|
||||
if (relays_array && cJSON_IsArray(relays_array)) {
|
||||
cJSON* relay_item = NULL;
|
||||
cJSON_ArrayForEach(relay_item, relays_array) {
|
||||
if (cJSON_IsString(relay_item)) {
|
||||
const char* relay_url = cJSON_GetStringValue(relay_item);
|
||||
if (relay_url && strlen(relay_url) > 0) {
|
||||
cJSON* tag = cJSON_CreateArray();
|
||||
cJSON_AddItemToArray(tag, cJSON_CreateString("relay"));
|
||||
cJSON_AddItemToArray(tag, cJSON_CreateString(relay_url));
|
||||
cJSON_AddItemToArray(tags, tag);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Create and sign the event
|
||||
cJSON* signed_event = nostr_create_and_sign_event(
|
||||
10050, // kind (DM relay list)
|
||||
content, // content (empty)
|
||||
tags, // tags
|
||||
relay_privkey, // private key
|
||||
(time_t)time(NULL) // timestamp
|
||||
);
|
||||
|
||||
cJSON_Delete(tags);
|
||||
|
||||
if (!signed_event) {
|
||||
DEBUG_ERROR("Failed to create and sign DM list event");
|
||||
return NULL;
|
||||
}
|
||||
|
||||
DEBUG_LOG("Created relay DM list event (kind 10050)");
|
||||
return signed_event;
|
||||
}
|
||||
|
||||
// Create a relay list event (kind 10002)
|
||||
cJSON* create_relay_list_event(cJSON* relays) {
|
||||
if (!relays || !cJSON_IsObject(relays)) {
|
||||
DEBUG_ERROR("Invalid relays object for kind 10002 event");
|
||||
return NULL;
|
||||
}
|
||||
|
||||
// Get relay keys
|
||||
const char* relay_pubkey = get_config_value("relay_pubkey");
|
||||
char* relay_privkey_hex = get_relay_private_key();
|
||||
if (!relay_pubkey || !relay_privkey_hex) {
|
||||
DEBUG_ERROR("Could not get relay keys for relay list event");
|
||||
if (relay_privkey_hex) free(relay_privkey_hex);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
// Convert relay private key to bytes
|
||||
unsigned char relay_privkey[32];
|
||||
if (nostr_hex_to_bytes(relay_privkey_hex, relay_privkey, sizeof(relay_privkey)) != 0) {
|
||||
free(relay_privkey_hex);
|
||||
DEBUG_ERROR("Failed to convert relay private key for relay list event");
|
||||
return NULL;
|
||||
}
|
||||
free(relay_privkey_hex);
|
||||
|
||||
// Create empty content for kind 10002
|
||||
const char* content = "";
|
||||
|
||||
// Create tags from relay list
|
||||
cJSON* tags = cJSON_CreateArray();
|
||||
if (!tags) {
|
||||
DEBUG_ERROR("Failed to create tags array for relay list event");
|
||||
return NULL;
|
||||
}
|
||||
|
||||
// Extract relays array
|
||||
cJSON* relays_array = cJSON_GetObjectItem(relays, "relays");
|
||||
if (relays_array && cJSON_IsArray(relays_array)) {
|
||||
cJSON* relay_item = NULL;
|
||||
cJSON_ArrayForEach(relay_item, relays_array) {
|
||||
if (cJSON_IsObject(relay_item)) {
|
||||
cJSON* url = cJSON_GetObjectItem(relay_item, "url");
|
||||
cJSON* read = cJSON_GetObjectItem(relay_item, "read");
|
||||
cJSON* write = cJSON_GetObjectItem(relay_item, "write");
|
||||
|
||||
if (url && cJSON_IsString(url)) {
|
||||
const char* relay_url = cJSON_GetStringValue(url);
|
||||
int read_flag = read && cJSON_IsBool(read) ? cJSON_IsTrue(read) : true;
|
||||
int write_flag = write && cJSON_IsBool(write) ? cJSON_IsTrue(write) : true;
|
||||
|
||||
// Create marker string
|
||||
const char* marker = NULL;
|
||||
if (read_flag && write_flag) {
|
||||
marker = ""; // No marker means both read and write
|
||||
} else if (read_flag) {
|
||||
marker = "read";
|
||||
} else if (write_flag) {
|
||||
marker = "write";
|
||||
} else {
|
||||
// Skip invalid entries
|
||||
continue;
|
||||
}
|
||||
|
||||
cJSON* tag = cJSON_CreateArray();
|
||||
cJSON_AddItemToArray(tag, cJSON_CreateString("r"));
|
||||
cJSON_AddItemToArray(tag, cJSON_CreateString(relay_url));
|
||||
if (marker[0] != '\0') {
|
||||
cJSON_AddItemToArray(tag, cJSON_CreateString(marker));
|
||||
}
|
||||
cJSON_AddItemToArray(tags, tag);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Create and sign the event
|
||||
cJSON* signed_event = nostr_create_and_sign_event(
|
||||
10002, // kind (relay list)
|
||||
content, // content (empty)
|
||||
tags, // tags
|
||||
relay_privkey, // private key
|
||||
(time_t)time(NULL) // timestamp
|
||||
);
|
||||
|
||||
cJSON_Delete(tags);
|
||||
|
||||
if (!signed_event) {
|
||||
DEBUG_ERROR("Failed to create and sign relay list event");
|
||||
return NULL;
|
||||
}
|
||||
|
||||
DEBUG_LOG("Created relay list event (kind 10002)");
|
||||
return signed_event;
|
||||
}
|
||||
|
||||
// Handle monitoring system admin commands
|
||||
int handle_monitoring_command(cJSON* event, const char* command, char* error_message, size_t error_size, struct lws* wsi) {
|
||||
if (!event || !command || !error_message) {
|
||||
@@ -2267,24 +2581,8 @@ int handle_monitoring_command(cJSON* event, const char* command, char* error_mes
|
||||
if (*p >= 'A' && *p <= 'Z') *p = *p + 32;
|
||||
}
|
||||
|
||||
// Handle commands
|
||||
if (strcmp(cmd, "enable_monitoring") == 0) {
|
||||
if (set_monitoring_enabled(1) == 0) {
|
||||
char* response_content = "✅ Monitoring enabled\n\nReal-time monitoring events will now be generated.";
|
||||
return send_admin_response(sender_pubkey, response_content, request_id, error_message, error_size, wsi);
|
||||
} else {
|
||||
char* response_content = "❌ Failed to enable monitoring";
|
||||
return send_admin_response(sender_pubkey, response_content, request_id, error_message, error_size, wsi);
|
||||
}
|
||||
} else if (strcmp(cmd, "disable_monitoring") == 0) {
|
||||
if (set_monitoring_enabled(0) == 0) {
|
||||
char* response_content = "✅ Monitoring disabled\n\nReal-time monitoring events will no longer be generated.";
|
||||
return send_admin_response(sender_pubkey, response_content, request_id, error_message, error_size, wsi);
|
||||
} else {
|
||||
char* response_content = "❌ Failed to disable monitoring";
|
||||
return send_admin_response(sender_pubkey, response_content, request_id, error_message, error_size, wsi);
|
||||
}
|
||||
} else if (strcmp(cmd, "set_monitoring_throttle") == 0) {
|
||||
// Handle set_monitoring_throttle command (only remaining monitoring command)
|
||||
if (strcmp(cmd, "set_monitoring_throttle") == 0) {
|
||||
if (arg[0] == '\0') {
|
||||
char* response_content = "❌ Missing throttle value\n\nUsage: set_monitoring_throttle <seconds>";
|
||||
return send_admin_response(sender_pubkey, response_content, request_id, error_message, error_size, wsi);
|
||||
@@ -2300,44 +2598,28 @@ int handle_monitoring_command(cJSON* event, const char* command, char* error_mes
|
||||
char throttle_str[16];
|
||||
snprintf(throttle_str, sizeof(throttle_str), "%ld", throttle_seconds);
|
||||
|
||||
if (update_config_in_table("kind_34567_reporting_throttling_sec", throttle_str) == 0) {
|
||||
if (update_config_in_table("kind_24567_reporting_throttle_sec", throttle_str) == 0) {
|
||||
char response_content[256];
|
||||
snprintf(response_content, sizeof(response_content),
|
||||
"✅ Monitoring throttle updated\n\nMinimum interval between monitoring events: %ld seconds", throttle_seconds);
|
||||
"✅ Monitoring throttle updated\n\n"
|
||||
"Minimum interval between monitoring events: %ld seconds\n\n"
|
||||
"ℹ️ Monitoring activates automatically when you subscribe to kind 24567 events.",
|
||||
throttle_seconds);
|
||||
return send_admin_response(sender_pubkey, response_content, request_id, error_message, error_size, wsi);
|
||||
} else {
|
||||
char* response_content = "❌ Failed to update monitoring throttle";
|
||||
return send_admin_response(sender_pubkey, response_content, request_id, error_message, error_size, wsi);
|
||||
}
|
||||
} else if (strcmp(cmd, "monitoring_status") == 0) {
|
||||
int enabled = is_monitoring_enabled();
|
||||
int throttle = get_monitoring_throttle_seconds();
|
||||
|
||||
char response_content[512];
|
||||
snprintf(response_content, sizeof(response_content),
|
||||
"📊 Monitoring Status\n"
|
||||
"━━━━━━━━━━━━━━━━━━━━\n"
|
||||
"\n"
|
||||
"Enabled: %s\n"
|
||||
"Throttle: %d seconds\n"
|
||||
"\n"
|
||||
"Commands:\n"
|
||||
"• enable_monitoring\n"
|
||||
"• disable_monitoring\n"
|
||||
"• set_monitoring_throttle <seconds>\n"
|
||||
"• monitoring_status",
|
||||
enabled ? "Yes" : "No", throttle);
|
||||
|
||||
return send_admin_response(sender_pubkey, response_content, request_id, error_message, error_size, wsi);
|
||||
} else {
|
||||
char response_content[256];
|
||||
char response_content[1024];
|
||||
snprintf(response_content, sizeof(response_content),
|
||||
"❌ Unknown monitoring command: %s\n\n"
|
||||
"Available commands:\n"
|
||||
"• enable_monitoring\n"
|
||||
"• disable_monitoring\n"
|
||||
"• set_monitoring_throttle <seconds>\n"
|
||||
"• monitoring_status", cmd);
|
||||
"Available command:\n"
|
||||
"• set_monitoring_throttle <seconds>\n\n"
|
||||
"ℹ️ Monitoring is now subscription-based:\n"
|
||||
"Subscribe to kind 24567 events to receive real-time monitoring data.\n"
|
||||
"Monitoring automatically activates when subscriptions exist and deactivates when they close.",
|
||||
cmd);
|
||||
return send_admin_response(sender_pubkey, response_content, request_id, error_message, error_size, wsi);
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -60,11 +60,8 @@ char* execute_sql_query(const char* query, const char* request_id, char* error_m
|
||||
int handle_sql_query_unified(cJSON* event, const char* query, char* error_message, size_t error_size, struct lws* wsi);
|
||||
|
||||
// Monitoring system functions
|
||||
int init_monitoring_system(void);
|
||||
void cleanup_monitoring_system(void);
|
||||
void monitoring_on_event_stored(void);
|
||||
int set_monitoring_enabled(int enabled);
|
||||
int is_monitoring_enabled(void);
|
||||
void monitoring_on_subscription_change(void);
|
||||
int get_monitoring_throttle_seconds(void);
|
||||
|
||||
#endif // API_H
|
||||
196
src/config.c
@@ -3,6 +3,19 @@
|
||||
#include "debug.h"
|
||||
#include "default_config_event.h"
|
||||
#include "dm_admin.h"
|
||||
|
||||
// Undefine VERSION macros before including nostr_core.h to avoid redefinition warnings
|
||||
// This must come AFTER default_config_event.h so that RELAY_VERSION macro expansion works correctly
|
||||
#ifdef VERSION
|
||||
#undef VERSION
|
||||
#endif
|
||||
#ifdef VERSION_MINOR
|
||||
#undef VERSION_MINOR
|
||||
#endif
|
||||
#ifdef VERSION_PATCH
|
||||
#undef VERSION_PATCH
|
||||
#endif
|
||||
|
||||
#include "../nostr_core_lib/nostr_core/nostr_core.h"
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
@@ -72,6 +85,7 @@ int migrate_config_from_events_to_table(void);
|
||||
int populate_config_table_from_event(const cJSON* event);
|
||||
int handle_config_query_unified(cJSON* event, const char* query_type, char* error_message, size_t error_size, struct lws* wsi);
|
||||
int handle_config_set_unified(cJSON* event, const char* config_key, const char* config_value, char* error_message, size_t error_size, struct lws* wsi);
|
||||
int handle_create_relay_event_unified(cJSON* event, const char* kind_str, const char* event_data_json, char* error_message, size_t error_size, struct lws* wsi);
|
||||
|
||||
// Forward declarations for tag parsing utilities
|
||||
const char* get_first_tag_name(cJSON* event);
|
||||
@@ -79,6 +93,7 @@ const char* get_tag_value(cJSON* event, const char* tag_name, int value_index);
|
||||
int parse_auth_query_parameters(cJSON* event, char** query_type, char** pattern_value);
|
||||
int handle_config_update_unified(cJSON* event, char* error_message, size_t error_size, struct lws* wsi);
|
||||
int handle_stats_query_unified(cJSON* event, char* error_message, size_t error_size, struct lws* wsi);
|
||||
int handle_sql_query_unified(cJSON* event, const char* query, char* error_message, size_t error_size, struct lws* wsi);
|
||||
|
||||
|
||||
// Current configuration cache
|
||||
@@ -801,7 +816,7 @@ int first_time_startup_sequence(const cli_options_t* cli_options, char* admin_pu
|
||||
return 0;
|
||||
}
|
||||
|
||||
int startup_existing_relay(const char* relay_pubkey, const cli_options_t* cli_options) {
|
||||
int startup_existing_relay(const char* relay_pubkey, const cli_options_t* cli_options __attribute__((unused))) {
|
||||
if (!relay_pubkey) {
|
||||
DEBUG_ERROR("Invalid relay pubkey for existing relay startup");
|
||||
return -1;
|
||||
@@ -824,26 +839,7 @@ int startup_existing_relay(const char* relay_pubkey, const cli_options_t* cli_op
|
||||
|
||||
// NOTE: Database is already initialized in main.c before calling this function
|
||||
// Config table should already exist with complete configuration
|
||||
|
||||
// Check if CLI overrides need to be applied
|
||||
int has_overrides = 0;
|
||||
if (cli_options) {
|
||||
if (cli_options->port_override > 0) has_overrides = 1;
|
||||
if (cli_options->admin_pubkey_override[0] != '\0') has_overrides = 1;
|
||||
if (cli_options->relay_privkey_override[0] != '\0') has_overrides = 1;
|
||||
}
|
||||
|
||||
if (has_overrides) {
|
||||
// Apply CLI overrides to existing database
|
||||
DEBUG_INFO("Applying CLI overrides to existing database");
|
||||
if (apply_cli_overrides_atomic(cli_options) != 0) {
|
||||
DEBUG_ERROR("Failed to apply CLI overrides to existing database");
|
||||
return -1;
|
||||
}
|
||||
} else {
|
||||
// No CLI overrides - config table is already available
|
||||
DEBUG_INFO("No CLI overrides - config table is already available");
|
||||
}
|
||||
// CLI overrides will be applied after this function returns in main.c
|
||||
|
||||
return 0;
|
||||
}
|
||||
@@ -1148,6 +1144,20 @@ static int validate_config_field(const char* key, const char* value, char* error
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
// NIP-59 Gift Wrap Timestamp Configuration
|
||||
if (strcmp(key, "nip59_timestamp_max_delay_sec") == 0) {
|
||||
if (!is_valid_non_negative_integer(value)) {
|
||||
snprintf(error_msg, error_size, "invalid nip59_timestamp_max_delay_sec '%s' (must be non-negative integer)", value);
|
||||
return -1;
|
||||
}
|
||||
long val = strtol(value, NULL, 10);
|
||||
if (val > 604800) { // Max 7 days
|
||||
snprintf(error_msg, error_size, "nip59_timestamp_max_delay_sec '%s' too large (max 604800 seconds = 7 days)", value);
|
||||
return -1;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (strcmp(key, "nip42_auth_required_kinds") == 0) {
|
||||
// Validate comma-separated list of kind numbers
|
||||
@@ -2542,7 +2552,7 @@ int handle_kind_23456_unified(cJSON* event, char* error_message, size_t error_si
|
||||
}
|
||||
|
||||
// Perform NIP-44 decryption (relay as recipient, admin as sender)
|
||||
char decrypted_text[4096]; // Buffer for decrypted content
|
||||
char decrypted_text[16384]; // Buffer for decrypted content (16KB)
|
||||
int decrypt_result = nostr_nip44_decrypt(relay_privkey_bytes, sender_pubkey_bytes, content, decrypted_text, sizeof(decrypted_text));
|
||||
|
||||
// Clean up private key immediately after use
|
||||
@@ -2555,51 +2565,17 @@ int handle_kind_23456_unified(cJSON* event, char* error_message, size_t error_si
|
||||
return -1;
|
||||
}
|
||||
|
||||
// Check if decrypted content is a direct command array (DM control system)
|
||||
cJSON* potential_command_array = cJSON_Parse(decrypted_text);
|
||||
|
||||
if (potential_command_array && cJSON_IsArray(potential_command_array)) {
|
||||
// Route to DM admin system
|
||||
int dm_result = process_dm_admin_command(potential_command_array, event, error_message, error_size, wsi);
|
||||
cJSON_Delete(potential_command_array);
|
||||
memset(decrypted_text, 0, sizeof(decrypted_text)); // Clear sensitive data
|
||||
return dm_result;
|
||||
}
|
||||
|
||||
// If not a direct command array, try parsing as inner event JSON (NIP-17)
|
||||
cJSON* inner_event = potential_command_array; // Reuse the parsed JSON
|
||||
|
||||
if (!inner_event || !cJSON_IsObject(inner_event)) {
|
||||
DEBUG_ERROR("error: decrypted content is not valid inner event JSON");
|
||||
cJSON_Delete(inner_event);
|
||||
snprintf(error_message, error_size, "error: decrypted content is not valid inner event JSON");
|
||||
return -1;
|
||||
}
|
||||
|
||||
// Extract content from inner event
|
||||
cJSON* inner_content_obj = cJSON_GetObjectItem(inner_event, "content");
|
||||
if (!inner_content_obj || !cJSON_IsString(inner_content_obj)) {
|
||||
DEBUG_ERROR("error: inner event missing content field");
|
||||
cJSON_Delete(inner_event);
|
||||
snprintf(error_message, error_size, "error: inner event missing content field");
|
||||
return -1;
|
||||
}
|
||||
|
||||
const char* inner_content = cJSON_GetStringValue(inner_content_obj);
|
||||
|
||||
// Parse inner content as JSON array (the command array)
|
||||
decrypted_content = cJSON_Parse(inner_content);
|
||||
// Parse decrypted content as command array directly (NOT as NIP-17 inner event)
|
||||
// Kind 23456 events contain direct command arrays: ["command_name", arg1, arg2, ...]
|
||||
decrypted_content = cJSON_Parse(decrypted_text);
|
||||
|
||||
if (!decrypted_content || !cJSON_IsArray(decrypted_content)) {
|
||||
DEBUG_ERROR("error: inner content is not valid JSON array");
|
||||
cJSON_Delete(inner_event);
|
||||
snprintf(error_message, error_size, "error: inner content is not valid JSON array");
|
||||
DEBUG_ERROR("error: decrypted content is not valid command array");
|
||||
cJSON_Delete(decrypted_content);
|
||||
snprintf(error_message, error_size, "error: decrypted content is not valid command array");
|
||||
return -1;
|
||||
}
|
||||
|
||||
// Clean up inner event
|
||||
cJSON_Delete(inner_event);
|
||||
|
||||
// Replace event content with decrypted command array for processing
|
||||
cJSON_DeleteItemFromObject(event, "content");
|
||||
cJSON_AddStringToObject(event, "content", "decrypted");
|
||||
@@ -2616,10 +2592,26 @@ int handle_kind_23456_unified(cJSON* event, char* error_message, size_t error_si
|
||||
cJSON_AddItemToArray(command_tag, cJSON_Duplicate(first_item, 1));
|
||||
|
||||
// Add remaining items as tag values
|
||||
// Convert non-string items (objects, arrays, numbers) to JSON strings
|
||||
for (int i = 1; i < cJSON_GetArraySize(decrypted_content); i++) {
|
||||
cJSON* item = cJSON_GetArrayItem(decrypted_content, i);
|
||||
if (item) {
|
||||
cJSON_AddItemToArray(command_tag, cJSON_Duplicate(item, 1));
|
||||
if (cJSON_IsString(item)) {
|
||||
// Keep strings as-is
|
||||
cJSON_AddItemToArray(command_tag, cJSON_Duplicate(item, 1));
|
||||
} else if (cJSON_IsNumber(item)) {
|
||||
// Convert numbers to strings
|
||||
char num_str[32];
|
||||
snprintf(num_str, sizeof(num_str), "%.0f", cJSON_GetNumberValue(item));
|
||||
cJSON_AddItemToArray(command_tag, cJSON_CreateString(num_str));
|
||||
} else if (cJSON_IsObject(item) || cJSON_IsArray(item)) {
|
||||
// Convert objects/arrays to JSON strings
|
||||
char* json_str = cJSON_PrintUnformatted(item);
|
||||
if (json_str) {
|
||||
cJSON_AddItemToArray(command_tag, cJSON_CreateString(json_str));
|
||||
free(json_str);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -2696,6 +2688,25 @@ int handle_kind_23456_unified(cJSON* event, char* error_message, size_t error_si
|
||||
else if (strcmp(action_type, "stats_query") == 0) {
|
||||
return handle_stats_query_unified(event, error_message, error_size, wsi);
|
||||
}
|
||||
else if (strcmp(action_type, "create_relay_event") == 0) {
|
||||
const char* kind_str = get_tag_value(event, action_type, 1);
|
||||
const char* event_data_json = get_tag_value(event, action_type, 2);
|
||||
if (!kind_str || !event_data_json) {
|
||||
DEBUG_ERROR("invalid: missing kind or event data");
|
||||
snprintf(error_message, error_size, "invalid: missing kind or event data");
|
||||
return -1;
|
||||
}
|
||||
return handle_create_relay_event_unified(event, kind_str, event_data_json, error_message, error_size, wsi);
|
||||
}
|
||||
else if (strcmp(action_type, "sql_query") == 0) {
|
||||
const char* query = get_tag_value(event, action_type, 1);
|
||||
if (!query) {
|
||||
DEBUG_ERROR("invalid: missing sql_query parameter");
|
||||
snprintf(error_message, error_size, "invalid: missing sql_query parameter");
|
||||
return -1;
|
||||
}
|
||||
return handle_sql_query_unified(event, query, error_message, error_size, wsi);
|
||||
}
|
||||
else if (strcmp(action_type, "whitelist") == 0 || strcmp(action_type, "blacklist") == 0) {
|
||||
// Handle auth rule modifications (existing logic from process_admin_auth_event)
|
||||
return handle_auth_rule_modification_unified(event, error_message, error_size, wsi);
|
||||
@@ -3495,6 +3506,41 @@ int handle_stats_query_unified(cJSON* event, char* error_message, size_t error_s
|
||||
return -1;
|
||||
}
|
||||
|
||||
// Unified create relay event handler
|
||||
int handle_create_relay_event_unified(cJSON* event, const char* kind_str, const char* event_data_json, char* error_message, size_t error_size, struct lws* wsi) {
|
||||
// Suppress unused parameter warning
|
||||
(void)wsi;
|
||||
|
||||
if (!event || !kind_str || !event_data_json) {
|
||||
snprintf(error_message, error_size, "invalid: missing parameters for create_relay_event");
|
||||
return -1;
|
||||
}
|
||||
|
||||
// Parse kind string to integer
|
||||
char* endptr;
|
||||
int kind = (int)strtol(kind_str, &endptr, 10);
|
||||
if (endptr == kind_str || *endptr != '\0') {
|
||||
snprintf(error_message, error_size, "invalid: kind must be a valid integer");
|
||||
return -1;
|
||||
}
|
||||
|
||||
// Parse event data JSON
|
||||
cJSON* event_data = cJSON_Parse(event_data_json);
|
||||
if (!event_data) {
|
||||
snprintf(error_message, error_size, "invalid: event_data must be valid JSON");
|
||||
return -1;
|
||||
}
|
||||
|
||||
// Call the existing implementation from api.c
|
||||
extern int handle_create_relay_event_command(cJSON* event, int kind, cJSON* event_data, char* error_message, size_t error_size, struct lws* wsi);
|
||||
int result = handle_create_relay_event_command(event, kind, event_data, error_message, error_size, wsi);
|
||||
|
||||
// Clean up
|
||||
cJSON_Delete(event_data);
|
||||
|
||||
return result;
|
||||
}
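// Illustrative sketch (not part of this change): the shape of the command array an admin
// client sends for the handler above - index 1 is the kind as a decimal string, index 2 is
// the new event's data as a JSON string. The concrete values here are hypothetical examples.
static cJSON* build_create_relay_event_command_example(void) {
    cJSON* cmd = cJSON_CreateArray();
    cJSON_AddItemToArray(cmd, cJSON_CreateString("create_relay_event"));
    cJSON_AddItemToArray(cmd, cJSON_CreateString("1"));                                     // kind 1 text note
    cJSON_AddItemToArray(cmd, cJSON_CreateString("{\"content\":\"relay status update\"}")); // event data JSON
    return cmd;  // the client serializes and encrypts this as the kind 23456 event content
}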
|
||||
|
||||
// Unified config update handler - handles multiple config objects in single atomic command
|
||||
int handle_config_update_unified(cJSON* event, char* error_message, size_t error_size, struct lws* wsi) {
|
||||
// Suppress unused parameter warning
|
||||
@@ -4099,32 +4145,18 @@ int populate_all_config_values_atomic(const char* admin_pubkey, const char* rela
|
||||
return -1;
|
||||
}
|
||||
|
||||
// Insert monitoring system config entries
|
||||
// Insert monitoring system config entry (ephemeral kind 24567)
|
||||
// Note: Monitoring is automatically activated when clients subscribe to kind 24567
|
||||
sqlite3_reset(stmt);
|
||||
sqlite3_bind_text(stmt, 1, "kind_34567_reporting_enabled", -1, SQLITE_STATIC);
|
||||
sqlite3_bind_text(stmt, 2, "false", -1, SQLITE_STATIC); // boolean, default false
|
||||
sqlite3_bind_text(stmt, 3, "boolean", -1, SQLITE_STATIC);
|
||||
sqlite3_bind_text(stmt, 4, "Enable real-time monitoring event generation", -1, SQLITE_STATIC);
|
||||
sqlite3_bind_text(stmt, 5, "monitoring", -1, SQLITE_STATIC);
|
||||
sqlite3_bind_int(stmt, 6, 0); // does not require restart
|
||||
rc = sqlite3_step(stmt);
|
||||
if (rc != SQLITE_DONE) {
|
||||
DEBUG_ERROR("Failed to insert kind_34567_reporting_enabled: %s", sqlite3_errmsg(g_db));
|
||||
sqlite3_finalize(stmt);
|
||||
sqlite3_exec(g_db, "ROLLBACK;", NULL, NULL, NULL);
|
||||
return -1;
|
||||
}
|
||||
|
||||
sqlite3_reset(stmt);
|
||||
sqlite3_bind_text(stmt, 1, "kind_34567_reporting_throttling_sec", -1, SQLITE_STATIC);
|
||||
sqlite3_bind_text(stmt, 1, "kind_24567_reporting_throttle_sec", -1, SQLITE_STATIC);
|
||||
sqlite3_bind_text(stmt, 2, "5", -1, SQLITE_STATIC); // integer, default 5 seconds
|
||||
sqlite3_bind_text(stmt, 3, "integer", -1, SQLITE_STATIC);
|
||||
sqlite3_bind_text(stmt, 4, "Minimum seconds between monitoring event reports", -1, SQLITE_STATIC);
|
||||
sqlite3_bind_text(stmt, 4, "Minimum seconds between monitoring event reports (ephemeral kind 24567)", -1, SQLITE_STATIC);
|
||||
sqlite3_bind_text(stmt, 5, "monitoring", -1, SQLITE_STATIC);
|
||||
sqlite3_bind_int(stmt, 6, 0); // does not require restart
|
||||
rc = sqlite3_step(stmt);
|
||||
if (rc != SQLITE_DONE) {
|
||||
DEBUG_ERROR("Failed to insert kind_34567_reporting_throttling_sec: %s", sqlite3_errmsg(g_db));
|
||||
DEBUG_ERROR("Failed to insert kind_24567_reporting_throttle_sec: %s", sqlite3_errmsg(g_db));
|
||||
sqlite3_finalize(stmt);
|
||||
sqlite3_exec(g_db, "ROLLBACK;", NULL, NULL, NULL);
|
||||
return -1;
|
||||
|
||||
@@ -72,7 +72,16 @@ static const struct {
|
||||
|
||||
// Performance Settings
|
||||
{"default_limit", "500"},
|
||||
{"max_limit", "5000"}
|
||||
{"max_limit", "5000"},
|
||||
|
||||
// Proxy Settings
|
||||
// Trust proxy headers (X-Forwarded-For, X-Real-IP) for accurate client IP detection
|
||||
// Safe for informational/debugging use. Only becomes a security concern if you implement
|
||||
// IP-based rate limiting or access control (which would require firewall protection anyway)
|
||||
{"trust_proxy_headers", "true"},
|
||||
|
||||
// NIP-59 Gift Wrap Timestamp Configuration
|
||||
{"nip59_timestamp_max_delay_sec", "0"}
|
||||
};
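// Illustrative sketch (not part of this change): how the trust_proxy_headers default above
// is typically honored when resolving the client IP - prefer the left-most entry of
// X-Forwarded-For, otherwise fall back to the socket peer address. The header string is
// assumed to have been read from the HTTP upgrade request already; names are hypothetical.
static void resolve_client_ip_sketch(const char* xff_header, const char* peer_ip,
                                     int trust_proxy, char* out, size_t out_size) {
    if (trust_proxy && xff_header && *xff_header) {
        // "X-Forwarded-For: client, proxy1, proxy2" - keep everything before the first comma
        size_t len = strcspn(xff_header, ",");
        while (len > 0 && (xff_header[len - 1] == ' ' || xff_header[len - 1] == '\t')) {
            len--;                                             // trim trailing whitespace
        }
        if (len >= out_size) len = out_size - 1;
        memcpy(out, xff_header, len);
        out[len] = '\0';
        return;
    }
    snprintf(out, out_size, "%s", peer_ip ? peer_ip : "unknown");
}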
|
||||
|
||||
// Number of default configuration values
|
||||
|
||||
BIN src/default_config_event.h.gch (new file; binary file not shown)
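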
@@ -80,6 +80,7 @@ extern int handle_sql_query_unified(cJSON* event, const char* query, char* error
|
||||
|
||||
// Process direct command arrays (DM control system)
|
||||
// This handles commands sent as direct JSON arrays, not wrapped in inner events
|
||||
// Note: create_relay_event is NOT supported via DMs - use Kind 23456 events only
|
||||
int process_dm_admin_command(cJSON* command_array, cJSON* event, char* error_message, size_t error_size, struct lws* wsi) {
|
||||
if (!command_array || !cJSON_IsArray(command_array) || !event) {
|
||||
DEBUG_ERROR("DM Admin: Invalid command array or event");
|
||||
@@ -231,19 +232,27 @@ cJSON* process_nip17_admin_message(cJSON* gift_wrap_event, char* error_message,
|
||||
return NULL;
|
||||
}
|
||||
|
||||
DEBUG_INFO("DM_ADMIN: Received potential NIP-17 gift wrap event for processing");
|
||||
|
||||
// Step 1: Validate it's addressed to us
|
||||
if (!is_nip17_gift_wrap_for_relay(gift_wrap_event)) {
|
||||
DEBUG_INFO("DM_ADMIN: Event is not a valid gift wrap for this relay - rejecting");
|
||||
strncpy(error_message, "NIP-17: Event is not a valid gift wrap for this relay", error_size - 1);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
DEBUG_INFO("DM_ADMIN: Valid NIP-17 gift wrap confirmed for this relay");
|
||||
|
||||
// Step 2: Get relay private key for decryption
|
||||
char* relay_privkey_hex = get_relay_private_key();
|
||||
if (!relay_privkey_hex) {
|
||||
DEBUG_INFO("DM_ADMIN: Could not get relay private key for decryption");
|
||||
strncpy(error_message, "NIP-17: Could not get relay private key for decryption", error_size - 1);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
DEBUG_INFO("DM_ADMIN: Retrieved relay private key for decryption");
|
||||
|
||||
// Convert hex private key to bytes
|
||||
unsigned char relay_privkey[32];
|
||||
if (nostr_hex_to_bytes(relay_privkey_hex, relay_privkey, sizeof(relay_privkey)) != 0) {
|
||||
@@ -254,10 +263,13 @@ cJSON* process_nip17_admin_message(cJSON* gift_wrap_event, char* error_message,
|
||||
}
|
||||
free(relay_privkey_hex);
|
||||
|
||||
DEBUG_INFO("DM_ADMIN: Converted relay private key to bytes successfully");
|
||||
|
||||
// Step 3: Decrypt and parse inner event using library function
|
||||
DEBUG_INFO("DM_ADMIN: Attempting to decrypt NIP-17 gift wrap using nostr_nip17_receive_dm");
|
||||
cJSON* inner_dm = nostr_nip17_receive_dm(gift_wrap_event, relay_privkey);
|
||||
if (!inner_dm) {
|
||||
DEBUG_ERROR("NIP-17: nostr_nip17_receive_dm returned NULL");
|
||||
DEBUG_INFO("DM_ADMIN: nostr_nip17_receive_dm returned NULL - decryption failed");
|
||||
// Debug: Print the gift wrap event
|
||||
char* gift_wrap_debug = cJSON_Print(gift_wrap_event);
|
||||
if (gift_wrap_debug) {
|
||||
@@ -273,12 +285,17 @@ cJSON* process_nip17_admin_message(cJSON* gift_wrap_event, char* error_message,
|
||||
}
|
||||
privkey_hex[64] = '\0';
|
||||
|
||||
DEBUG_INFO("DM_ADMIN: NIP-17 decryption failed - returning error");
|
||||
strncpy(error_message, "NIP-17: Failed to decrypt and parse inner DM event", error_size - 1);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
DEBUG_INFO("DM_ADMIN: Successfully decrypted NIP-17 gift wrap, processing inner DM");
|
||||
|
||||
// Step 4: Process admin command
|
||||
DEBUG_INFO("DM_ADMIN: Processing decrypted admin command");
|
||||
int result = process_nip17_admin_command(inner_dm, error_message, error_size, wsi);
|
||||
DEBUG_INFO("DM_ADMIN: Admin command processing completed with result: %d", result);
|
||||
|
||||
// Step 5: For plain text commands (stats/config), the response is already handled
|
||||
// Only create a generic response for other command types that don't handle their own responses
|
||||
@@ -351,13 +368,17 @@ cJSON* process_nip17_admin_message(cJSON* gift_wrap_event, char* error_message,
|
||||
|
||||
if (success_dm) {
|
||||
cJSON* success_gift_wraps[1];
|
||||
// Get timestamp delay configuration
|
||||
long max_delay_sec = get_config_int("nip59_timestamp_max_delay_sec", 0);
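// Illustrative sketch (not part of this change): the max_delay_sec value fetched above is
// handed to nostr_nip17_send_dm() below so the gift wrap's created_at can be pushed into
// the recent past, as NIP-59 suggests, making wrap timestamps harder to correlate with the
// inner DM. Assuming the library does something along these lines (not its actual code):
//
//     long now = (long)time(NULL);
//     long wrap_created_at = (max_delay_sec > 0)
//         ? now - (rand() % (max_delay_sec + 1))   // uniform offset in [0, max_delay_sec]
//         : now;                                    // 0 disables the offset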
|
||||
|
||||
int send_result = nostr_nip17_send_dm(
|
||||
success_dm, // dm_event
|
||||
(const char**)&sender_pubkey, // recipient_pubkeys
|
||||
1, // num_recipients
|
||||
relay_privkey, // sender_private_key
|
||||
success_gift_wraps, // gift_wraps_out
|
||||
1 // max_gift_wraps
|
||||
1, // max_gift_wraps
|
||||
max_delay_sec // max_delay_sec
|
||||
);
|
||||
|
||||
cJSON_Delete(success_dm);
|
||||
@@ -457,18 +478,23 @@ int process_nip17_admin_command(cJSON* dm_event, char* error_message, size_t err
|
||||
return -1;
|
||||
}
|
||||
|
||||
DEBUG_INFO("DM_ADMIN: Processing NIP-17 admin command from decrypted DM");
|
||||
|
||||
// Extract content from DM
|
||||
cJSON* content_obj = cJSON_GetObjectItem(dm_event, "content");
|
||||
if (!content_obj || !cJSON_IsString(content_obj)) {
|
||||
DEBUG_INFO("DM_ADMIN: DM missing content field");
|
||||
strncpy(error_message, "NIP-17: DM missing content", error_size - 1);
|
||||
return -1;
|
||||
}
|
||||
|
||||
const char* dm_content = cJSON_GetStringValue(content_obj);
|
||||
DEBUG_INFO("DM_ADMIN: Extracted DM content: %.100s%s", dm_content, strlen(dm_content) > 100 ? "..." : "");
|
||||
|
||||
// Check if sender is admin before processing any commands
|
||||
cJSON* sender_pubkey_obj = cJSON_GetObjectItem(dm_event, "pubkey");
|
||||
if (!sender_pubkey_obj || !cJSON_IsString(sender_pubkey_obj)) {
|
||||
DEBUG_INFO("DM_ADMIN: DM missing sender pubkey - treating as user DM");
|
||||
return 0; // Not an error, just treat as user DM
|
||||
}
|
||||
const char* sender_pubkey = cJSON_GetStringValue(sender_pubkey_obj);
|
||||
@@ -477,11 +503,16 @@ int process_nip17_admin_command(cJSON* dm_event, char* error_message, size_t err
|
||||
const char* admin_pubkey = get_config_value("admin_pubkey");
|
||||
int is_admin = admin_pubkey && strlen(admin_pubkey) > 0 && strcmp(sender_pubkey, admin_pubkey) == 0;
|
||||
|
||||
DEBUG_INFO("DM_ADMIN: Sender pubkey: %.16s... (admin: %s)", sender_pubkey, is_admin ? "YES" : "NO");
|
||||
|
||||
// Parse DM content as JSON array of commands
|
||||
DEBUG_INFO("DM_ADMIN: Attempting to parse DM content as JSON command array");
|
||||
cJSON* command_array = cJSON_Parse(dm_content);
|
||||
if (!command_array || !cJSON_IsArray(command_array)) {
|
||||
DEBUG_INFO("DM_ADMIN: Content is not a JSON array, checking for plain text commands");
|
||||
// If content is not a JSON array, check for plain text commands
|
||||
if (is_admin) {
|
||||
DEBUG_INFO("DM_ADMIN: Processing plain text admin command");
|
||||
// Convert content to lowercase for case-insensitive matching
|
||||
char content_lower[256];
|
||||
size_t content_len = strlen(dm_content);
|
||||
@@ -498,47 +529,84 @@ int process_nip17_admin_command(cJSON* dm_event, char* error_message, size_t err
|
||||
|
||||
// Check for stats commands
|
||||
if (strstr(content_lower, "stats") != NULL || strstr(content_lower, "statistics") != NULL) {
|
||||
DEBUG_INFO("DM_ADMIN: Processing stats command");
|
||||
char* stats_text = generate_stats_text();
|
||||
if (!stats_text) {
|
||||
DEBUG_INFO("DM_ADMIN: Failed to generate stats text");
|
||||
return -1;
|
||||
}
|
||||
|
||||
char error_msg[256];
|
||||
int result = send_nip17_response(sender_pubkey, stats_text, error_msg, sizeof(error_msg));
|
||||
free(stats_text);
|
||||
|
||||
|
||||
if (result != 0) {
|
||||
DEBUG_ERROR(error_msg);
|
||||
return -1;
|
||||
}
|
||||
|
||||
DEBUG_INFO("DM_ADMIN: Stats command processed successfully");
|
||||
return 0;
|
||||
}
|
||||
// Check for config commands
|
||||
else if (strstr(content_lower, "config") != NULL || strstr(content_lower, "configuration") != NULL) {
|
||||
DEBUG_INFO("DM_ADMIN: Processing config command");
|
||||
char* config_text = generate_config_text();
|
||||
if (!config_text) {
|
||||
DEBUG_INFO("DM_ADMIN: Failed to generate config text");
|
||||
return -1;
|
||||
}
|
||||
|
||||
char error_msg[256];
|
||||
int result = send_nip17_response(sender_pubkey, config_text, error_msg, sizeof(error_msg));
|
||||
free(config_text);
|
||||
|
||||
|
||||
if (result != 0) {
|
||||
DEBUG_ERROR(error_msg);
|
||||
return -1;
|
||||
}
|
||||
|
||||
DEBUG_INFO("DM_ADMIN: Config command processed successfully");
|
||||
return 0;
|
||||
}
|
||||
// Check for status commands
|
||||
else if (strstr(content_lower, "status") != NULL) {
|
||||
DEBUG_INFO("DM_ADMIN: Processing status command");
|
||||
|
||||
// Create synthetic event for system_command handler
|
||||
cJSON* synthetic_event = cJSON_CreateObject();
|
||||
cJSON_AddNumberToObject(synthetic_event, "kind", 23456);
|
||||
cJSON_AddStringToObject(synthetic_event, "pubkey", sender_pubkey);
|
||||
|
||||
// Create tags array with system_command
|
||||
cJSON* tags = cJSON_CreateArray();
|
||||
cJSON* cmd_tag = cJSON_CreateArray();
|
||||
cJSON_AddItemToArray(cmd_tag, cJSON_CreateString("system_command"));
|
||||
cJSON_AddItemToArray(cmd_tag, cJSON_CreateString("system_status"));
|
||||
cJSON_AddItemToArray(tags, cmd_tag);
|
||||
cJSON_AddItemToObject(synthetic_event, "tags", tags);
|
||||
|
||||
char error_msg[256];
|
||||
int result = handle_system_command_unified(synthetic_event, "system_status", error_msg, sizeof(error_msg), wsi);
|
||||
cJSON_Delete(synthetic_event);
|
||||
|
||||
if (result != 0) {
|
||||
DEBUG_ERROR(error_msg);
|
||||
return -1;
|
||||
}
|
||||
|
||||
DEBUG_INFO("DM_ADMIN: Status command processed successfully");
|
||||
return 0;
|
||||
}
|
||||
else {
|
||||
DEBUG_INFO("DM_ADMIN: Checking for confirmation or config change requests");
|
||||
// Check if it's a confirmation response (yes/no)
|
||||
int confirmation_result = handle_config_confirmation(sender_pubkey, dm_content);
|
||||
if (confirmation_result != 0) {
|
||||
if (confirmation_result > 0) {
|
||||
// Configuration confirmation processed successfully
|
||||
DEBUG_INFO("DM_ADMIN: Configuration confirmation processed successfully");
|
||||
} else if (confirmation_result == -2) {
|
||||
DEBUG_INFO("DM_ADMIN: No pending changes to confirm");
|
||||
// No pending changes
|
||||
char no_pending_msg[256];
|
||||
snprintf(no_pending_msg, sizeof(no_pending_msg),
|
||||
@@ -558,6 +626,7 @@ int process_nip17_admin_command(cJSON* dm_event, char* error_message, size_t err
|
||||
int config_result = process_config_change_request(sender_pubkey, dm_content);
|
||||
if (config_result != 0) {
|
||||
if (config_result > 0) {
|
||||
DEBUG_INFO("DM_ADMIN: Configuration change request processed successfully");
|
||||
return 1; // Return positive value to indicate response was handled
|
||||
} else {
|
||||
DEBUG_ERROR("NIP-17: Configuration change request failed");
|
||||
@@ -565,22 +634,28 @@ int process_nip17_admin_command(cJSON* dm_event, char* error_message, size_t err
|
||||
}
|
||||
}
|
||||
|
||||
DEBUG_INFO("DM_ADMIN: Unrecognized plain text admin command");
|
||||
return 0; // Admin sent unrecognized plain text, treat as user DM
|
||||
}
|
||||
} else {
|
||||
DEBUG_INFO("DM_ADMIN: Non-admin user sent plain text - treating as user DM");
|
||||
// Not admin, treat as user DM
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
DEBUG_INFO("DM_ADMIN: Successfully parsed JSON command array");
|
||||
|
||||
// Check if this is a "stats" command
|
||||
if (cJSON_GetArraySize(command_array) > 0) {
|
||||
cJSON* first_item = cJSON_GetArrayItem(command_array, 0);
|
||||
if (cJSON_IsString(first_item) && strcmp(cJSON_GetStringValue(first_item), "stats") == 0) {
|
||||
DEBUG_INFO("DM_ADMIN: Processing JSON stats command");
|
||||
// Get sender pubkey for response
|
||||
cJSON* sender_pubkey_obj = cJSON_GetObjectItem(dm_event, "pubkey");
|
||||
if (!sender_pubkey_obj || !cJSON_IsString(sender_pubkey_obj)) {
|
||||
cJSON_Delete(command_array);
|
||||
DEBUG_INFO("DM_ADMIN: DM missing sender pubkey for stats command");
|
||||
strncpy(error_message, "NIP-17: DM missing sender pubkey", error_size - 1);
|
||||
return -1;
|
||||
}
|
||||
@@ -590,6 +665,7 @@ int process_nip17_admin_command(cJSON* dm_event, char* error_message, size_t err
|
||||
char* stats_json = generate_stats_json();
|
||||
if (!stats_json) {
|
||||
cJSON_Delete(command_array);
|
||||
DEBUG_INFO("DM_ADMIN: Failed to generate stats JSON");
|
||||
strncpy(error_message, "NIP-17: Failed to generate stats", error_size - 1);
|
||||
return -1;
|
||||
}
|
||||
@@ -598,17 +674,19 @@ int process_nip17_admin_command(cJSON* dm_event, char* error_message, size_t err
|
||||
int result = send_nip17_response(sender_pubkey, stats_json, error_msg, sizeof(error_msg));
|
||||
free(stats_json);
|
||||
cJSON_Delete(command_array);
|
||||
|
||||
|
||||
if (result != 0) {
|
||||
DEBUG_ERROR(error_msg);
|
||||
strncpy(error_message, error_msg, error_size - 1);
|
||||
return -1;
|
||||
}
|
||||
|
||||
DEBUG_INFO("DM_ADMIN: JSON stats command processed successfully");
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
DEBUG_INFO("DM_ADMIN: Delegating to unified admin processing for command array");
|
||||
// For other commands, delegate to existing admin processing
|
||||
// Create a synthetic kind 23456 event with the DM content
|
||||
cJSON* synthetic_event = cJSON_CreateObject();
|
||||
@@ -628,10 +706,12 @@ int process_nip17_admin_command(cJSON* dm_event, char* error_message, size_t err
|
||||
}
|
||||
|
||||
// Process as regular admin event
|
||||
DEBUG_INFO("DM_ADMIN: Processing synthetic admin event");
|
||||
int result = process_admin_event_in_config(synthetic_event, error_message, error_size, wsi);
|
||||
|
||||
cJSON_Delete(synthetic_event);
|
||||
cJSON_Delete(command_array);
|
||||
|
||||
DEBUG_INFO("DM_ADMIN: Unified admin processing completed with result: %d", result);
|
||||
return result;
|
||||
}
|
||||
File diff suppressed because one or more lines are too long
src/main.c (405 lines changed)
@@ -95,7 +95,6 @@ void update_subscription_manager_config(void);
|
||||
void log_subscription_created(const subscription_t* sub);
|
||||
void log_subscription_closed(const char* sub_id, const char* client_ip, const char* reason);
|
||||
void log_subscription_disconnected(const char* client_ip);
|
||||
void log_event_broadcast(const char* event_id, const char* sub_id, const char* client_ip);
|
||||
void update_subscription_events_sent(const char* sub_id, int events_sent);
|
||||
|
||||
// Forward declarations for NIP-01 event handling
|
||||
@@ -148,10 +147,9 @@ int mark_event_as_deleted(const char* event_id, const char* deletion_event_id, c
|
||||
|
||||
// Forward declaration for database functions
|
||||
int store_event(cJSON* event);
|
||||
cJSON* retrieve_event(const char* event_id);
|
||||
|
||||
// Forward declarations for monitoring system
|
||||
void init_monitoring_system(void);
|
||||
void cleanup_monitoring_system(void);
|
||||
// Forward declaration for monitoring system
|
||||
void monitoring_on_event_stored(void);
|
||||
|
||||
// Forward declarations for NIP-11 relay information handling
|
||||
@@ -211,23 +209,21 @@ void signal_handler(int sig) {
|
||||
// Send NOTICE message to client (NIP-01)
|
||||
void send_notice_message(struct lws* wsi, const char* message) {
|
||||
if (!wsi || !message) return;
|
||||
|
||||
|
||||
cJSON* notice_msg = cJSON_CreateArray();
|
||||
cJSON_AddItemToArray(notice_msg, cJSON_CreateString("NOTICE"));
|
||||
cJSON_AddItemToArray(notice_msg, cJSON_CreateString(message));
|
||||
|
||||
|
||||
char* msg_str = cJSON_Print(notice_msg);
|
||||
if (msg_str) {
|
||||
size_t msg_len = strlen(msg_str);
|
||||
unsigned char* buf = malloc(LWS_PRE + msg_len);
|
||||
if (buf) {
|
||||
memcpy(buf + LWS_PRE, msg_str, msg_len);
|
||||
lws_write(wsi, buf + LWS_PRE, msg_len, LWS_WRITE_TEXT);
|
||||
free(buf);
|
||||
// Use proper message queue system instead of direct lws_write
|
||||
if (queue_message(wsi, NULL, msg_str, msg_len, LWS_WRITE_TEXT) != 0) {
|
||||
DEBUG_ERROR("Failed to queue NOTICE message");
|
||||
}
|
||||
free(msg_str);
|
||||
}
|
||||
|
||||
|
||||
cJSON_Delete(notice_msg);
|
||||
}
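// Illustrative sketch (not part of this change): the queue_message() calls introduced in
// this file follow the usual libwebsockets pattern - writes are only safe inside
// LWS_CALLBACK_SERVER_WRITEABLE, so outgoing text is buffered per connection and the socket
// is asked to become writable. A minimal version of that pattern, not the relay's actual
// queue implementation; msg_node_t and enqueue_text_sketch are hypothetical names.
typedef struct msg_node {
    unsigned char* buf;              // LWS_PRE bytes of headroom followed by the payload
    size_t len;
    struct msg_node* next;
} msg_node_t;

static int enqueue_text_sketch(struct lws* wsi, msg_node_t** queue, const char* text, size_t len) {
    msg_node_t* node = malloc(sizeof(*node));
    if (!node) return -1;
    node->buf = malloc(LWS_PRE + len);
    if (!node->buf) { free(node); return -1; }
    memcpy(node->buf + LWS_PRE, text, len);
    node->len = len;
    node->next = *queue;             // a real queue would append at the tail to keep ordering
    *queue = node;
    lws_callback_on_writable(wsi);   // request an LWS_CALLBACK_SERVER_WRITEABLE callback
    return 0;
}
// In the protocol callback, one queued message is then written per writeable event:
//   lws_write(wsi, node->buf + LWS_PRE, node->len, LWS_WRITE_TEXT);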
|
||||
|
||||
@@ -317,14 +313,35 @@ int init_database(const char* database_path_override) {
|
||||
if (g_debug_level >= DEBUG_LEVEL_DEBUG) {
|
||||
// Check config table row count immediately after database open
|
||||
sqlite3_stmt* stmt;
|
||||
if (sqlite3_prepare_v2(g_db, "SELECT COUNT(*) FROM config", -1, &stmt, NULL) == SQLITE_OK) {
|
||||
int rc = sqlite3_prepare_v2(g_db, "SELECT COUNT(*) FROM config", -1, &stmt, NULL);
|
||||
if (rc == SQLITE_OK) {
|
||||
if (sqlite3_step(stmt) == SQLITE_ROW) {
|
||||
int row_count = sqlite3_column_int(stmt, 0);
|
||||
DEBUG_LOG("Config table row count immediately after sqlite3_open(): %d", row_count);
|
||||
}
|
||||
sqlite3_finalize(stmt);
|
||||
} else {
|
||||
DEBUG_LOG("Config table does not exist yet (first-time startup)");
|
||||
// Capture and log the actual SQLite error instead of assuming table doesn't exist
|
||||
const char* err_msg = sqlite3_errmsg(g_db);
|
||||
DEBUG_LOG("Failed to prepare config table query: %s (error code: %d)", err_msg, rc);
|
||||
|
||||
// Check if it's actually a missing table vs other error
|
||||
if (rc == SQLITE_ERROR) {
|
||||
// Try to check if config table exists
|
||||
sqlite3_stmt* check_stmt;
|
||||
int check_rc = sqlite3_prepare_v2(g_db, "SELECT name FROM sqlite_master WHERE type='table' AND name='config'", -1, &check_stmt, NULL);
|
||||
if (check_rc == SQLITE_OK) {
|
||||
int has_table = (sqlite3_step(check_stmt) == SQLITE_ROW);
|
||||
sqlite3_finalize(check_stmt);
|
||||
if (has_table) {
|
||||
DEBUG_LOG("Config table EXISTS but query failed - possible database corruption or locking issue");
|
||||
} else {
|
||||
DEBUG_LOG("Config table does not exist yet (first-time startup)");
|
||||
}
|
||||
} else {
|
||||
DEBUG_LOG("Failed to check table existence: %s (error code: %d)", sqlite3_errmsg(g_db), check_rc);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
// DEBUG_GUARD_END
|
||||
@@ -571,93 +588,6 @@ const char* extract_d_tag_value(cJSON* tags) {
|
||||
return NULL;
|
||||
}
|
||||
|
||||
// Check and handle replaceable events according to NIP-01
|
||||
int check_and_handle_replaceable_event(int kind, const char* pubkey, long created_at) {
|
||||
if (!g_db || !pubkey) return 0;
|
||||
|
||||
const char* sql =
|
||||
"SELECT created_at FROM events WHERE kind = ? AND pubkey = ? ORDER BY created_at DESC LIMIT 1";
|
||||
|
||||
sqlite3_stmt* stmt;
|
||||
int rc = sqlite3_prepare_v2(g_db, sql, -1, &stmt, NULL);
|
||||
if (rc != SQLITE_OK) {
|
||||
return 0; // Allow storage on DB error
|
||||
}
|
||||
|
||||
sqlite3_bind_int(stmt, 1, kind);
|
||||
sqlite3_bind_text(stmt, 2, pubkey, -1, SQLITE_STATIC);
|
||||
|
||||
int result = 0;
|
||||
if (sqlite3_step(stmt) == SQLITE_ROW) {
|
||||
long existing_created_at = sqlite3_column_int64(stmt, 0);
|
||||
if (created_at <= existing_created_at) {
|
||||
result = -1; // Older or same timestamp, reject
|
||||
} else {
|
||||
// Delete older versions
|
||||
const char* delete_sql = "DELETE FROM events WHERE kind = ? AND pubkey = ? AND created_at < ?";
|
||||
sqlite3_stmt* delete_stmt;
|
||||
if (sqlite3_prepare_v2(g_db, delete_sql, -1, &delete_stmt, NULL) == SQLITE_OK) {
|
||||
sqlite3_bind_int(delete_stmt, 1, kind);
|
||||
sqlite3_bind_text(delete_stmt, 2, pubkey, -1, SQLITE_STATIC);
|
||||
sqlite3_bind_int64(delete_stmt, 3, created_at);
|
||||
sqlite3_step(delete_stmt);
|
||||
sqlite3_finalize(delete_stmt);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
sqlite3_finalize(stmt);
|
||||
return result;
|
||||
}
|
||||
|
||||
// Check and handle addressable events according to NIP-01
|
||||
int check_and_handle_addressable_event(int kind, const char* pubkey, const char* d_tag_value, long created_at) {
|
||||
if (!g_db || !pubkey) return 0;
|
||||
|
||||
// If no d tag, treat as regular replaceable
|
||||
if (!d_tag_value) {
|
||||
return check_and_handle_replaceable_event(kind, pubkey, created_at);
|
||||
}
|
||||
|
||||
const char* sql =
|
||||
"SELECT created_at FROM events WHERE kind = ? AND pubkey = ? AND json_extract(tags, '$[*][1]') = ? "
|
||||
"AND json_extract(tags, '$[*][0]') = 'd' ORDER BY created_at DESC LIMIT 1";
|
||||
|
||||
sqlite3_stmt* stmt;
|
||||
int rc = sqlite3_prepare_v2(g_db, sql, -1, &stmt, NULL);
|
||||
if (rc != SQLITE_OK) {
|
||||
return 0; // Allow storage on DB error
|
||||
}
|
||||
|
||||
sqlite3_bind_int(stmt, 1, kind);
|
||||
sqlite3_bind_text(stmt, 2, pubkey, -1, SQLITE_STATIC);
|
||||
sqlite3_bind_text(stmt, 3, d_tag_value, -1, SQLITE_STATIC);
|
||||
|
||||
int result = 0;
|
||||
if (sqlite3_step(stmt) == SQLITE_ROW) {
|
||||
long existing_created_at = sqlite3_column_int64(stmt, 0);
|
||||
if (created_at <= existing_created_at) {
|
||||
result = -1; // Older or same timestamp, reject
|
||||
} else {
|
||||
// Delete older versions with same kind, pubkey, and d tag
|
||||
const char* delete_sql =
|
||||
"DELETE FROM events WHERE kind = ? AND pubkey = ? AND created_at < ? "
|
||||
"AND json_extract(tags, '$[*][1]') = ? AND json_extract(tags, '$[*][0]') = 'd'";
|
||||
sqlite3_stmt* delete_stmt;
|
||||
if (sqlite3_prepare_v2(g_db, delete_sql, -1, &delete_stmt, NULL) == SQLITE_OK) {
|
||||
sqlite3_bind_int(delete_stmt, 1, kind);
|
||||
sqlite3_bind_text(delete_stmt, 2, pubkey, -1, SQLITE_STATIC);
|
||||
sqlite3_bind_int64(delete_stmt, 3, created_at);
|
||||
sqlite3_bind_text(delete_stmt, 4, d_tag_value, -1, SQLITE_STATIC);
|
||||
sqlite3_step(delete_stmt);
|
||||
sqlite3_finalize(delete_stmt);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
sqlite3_finalize(stmt);
|
||||
return result;
|
||||
}
|
||||
|
||||
// Store event in database
|
||||
int store_event(cJSON* event) {
|
||||
@@ -681,7 +611,14 @@ int store_event(cJSON* event) {
|
||||
|
||||
// Classify event type
|
||||
event_type_t type = classify_event_kind((int)cJSON_GetNumberValue(kind));
|
||||
|
||||
|
||||
// EPHEMERAL EVENTS (kinds 20000-29999) should NOT be stored
|
||||
if (type == EVENT_TYPE_EPHEMERAL) {
|
||||
DEBUG_LOG("Ephemeral event (kind %d) - broadcasting only, not storing",
|
||||
(int)cJSON_GetNumberValue(kind));
|
||||
return 0; // Success - event was handled but not stored
|
||||
}
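// Illustrative standalone sketch (not part of this change): the classify_event_kind() call
// above maps a kind onto the NIP-01 ranges; the ephemeral range (20000-29999) is what the
// early return skips storing. EVENT_TYPE_EPHEMERAL is the relay's constant; the EC_* names
// and this helper are illustrative only.
typedef enum { EC_REGULAR, EC_REPLACEABLE, EC_EPHEMERAL, EC_ADDRESSABLE } event_class_sketch_t;

static event_class_sketch_t classify_kind_sketch(int kind) {
    if (kind == 0 || kind == 3 || (kind >= 10000 && kind < 20000)) return EC_REPLACEABLE; // latest per pubkey wins
    if (kind >= 20000 && kind < 30000) return EC_EPHEMERAL;    // broadcast only, never stored
    if (kind >= 30000 && kind < 40000) return EC_ADDRESSABLE;  // replaceable per (kind, pubkey, d tag)
    return EC_REGULAR;                                         // everything else stored as-is here
}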
|
||||
|
||||
// Serialize tags to JSON (use empty array if no tags)
|
||||
char* tags_json = NULL;
|
||||
if (tags && cJSON_IsArray(tags)) {
|
||||
@@ -720,11 +657,36 @@ int store_event(cJSON* event) {
|
||||
|
||||
// Execute statement
|
||||
rc = sqlite3_step(stmt);
|
||||
if (rc != SQLITE_DONE) {
|
||||
const char* err_msg = sqlite3_errmsg(g_db);
|
||||
int extended_errcode = sqlite3_extended_errcode(g_db);
|
||||
DEBUG_ERROR("INSERT failed: rc=%d, extended_errcode=%d, msg=%s", rc, extended_errcode, err_msg);
|
||||
}
|
||||
sqlite3_finalize(stmt);
|
||||
|
||||
if (rc != SQLITE_DONE) {
|
||||
if (rc == SQLITE_CONSTRAINT) {
|
||||
DEBUG_WARN("Event already exists in database");
|
||||
|
||||
// Add TRACE level debug to show both events
|
||||
if (g_debug_level >= DEBUG_LEVEL_TRACE) {
|
||||
// Get the existing event from database
|
||||
cJSON* existing_event = retrieve_event(cJSON_GetStringValue(id));
|
||||
if (existing_event) {
|
||||
char* existing_json = cJSON_Print(existing_event);
|
||||
DEBUG_TRACE("EXISTING EVENT: %s", existing_json ? existing_json : "NULL");
|
||||
free(existing_json);
|
||||
cJSON_Delete(existing_event);
|
||||
} else {
|
||||
DEBUG_TRACE("EXISTING EVENT: Could not retrieve existing event");
|
||||
}
|
||||
|
||||
// Show the event we're trying to insert
|
||||
char* new_json = cJSON_Print(event);
|
||||
DEBUG_TRACE("NEW EVENT: %s", new_json ? new_json : "NULL");
|
||||
free(new_json);
|
||||
}
|
||||
|
||||
free(tags_json);
|
||||
return 0; // Not an error, just duplicate
|
||||
}
|
||||
@@ -916,12 +878,11 @@ int handle_req_message(const char* sub_id, cJSON* filters, struct lws *wsi, stru
|
||||
char* msg_str = cJSON_Print(event_msg);
|
||||
if (msg_str) {
|
||||
size_t msg_len = strlen(msg_str);
|
||||
unsigned char* buf = malloc(LWS_PRE + msg_len);
|
||||
if (buf) {
|
||||
memcpy(buf + LWS_PRE, msg_str, msg_len);
|
||||
lws_write(wsi, buf + LWS_PRE, msg_len, LWS_WRITE_TEXT);
|
||||
// Use proper message queue system instead of direct lws_write
|
||||
if (queue_message(wsi, NULL, msg_str, msg_len, LWS_WRITE_TEXT) != 0) {
|
||||
DEBUG_ERROR("Failed to queue config EVENT message");
|
||||
} else {
|
||||
config_events_sent++;
|
||||
free(buf);
|
||||
}
|
||||
free(msg_str);
|
||||
}
|
||||
@@ -959,11 +920,9 @@ int handle_req_message(const char* sub_id, cJSON* filters, struct lws *wsi, stru
|
||||
char* closed_str = cJSON_Print(closed_msg);
|
||||
if (closed_str) {
|
||||
size_t closed_len = strlen(closed_str);
|
||||
unsigned char* buf = malloc(LWS_PRE + closed_len);
|
||||
if (buf) {
|
||||
memcpy(buf + LWS_PRE, closed_str, closed_len);
|
||||
lws_write(wsi, buf + LWS_PRE, closed_len, LWS_WRITE_TEXT);
|
||||
free(buf);
|
||||
// Use proper message queue system instead of direct lws_write
|
||||
if (queue_message(wsi, pss, closed_str, closed_len, LWS_WRITE_TEXT) != 0) {
|
||||
DEBUG_ERROR("Failed to queue CLOSED message");
|
||||
}
|
||||
free(closed_str);
|
||||
}
|
||||
@@ -1289,19 +1248,17 @@ int handle_req_message(const char* sub_id, cJSON* filters, struct lws *wsi, stru
|
||||
cJSON_AddItemToArray(event_msg, cJSON_CreateString("EVENT"));
|
||||
cJSON_AddItemToArray(event_msg, cJSON_CreateString(sub_id));
|
||||
cJSON_AddItemToArray(event_msg, event);
|
||||
|
||||
|
||||
char* msg_str = cJSON_Print(event_msg);
|
||||
if (msg_str) {
|
||||
size_t msg_len = strlen(msg_str);
|
||||
unsigned char* buf = malloc(LWS_PRE + msg_len);
|
||||
if (buf) {
|
||||
memcpy(buf + LWS_PRE, msg_str, msg_len);
|
||||
lws_write(wsi, buf + LWS_PRE, msg_len, LWS_WRITE_TEXT);
|
||||
free(buf);
|
||||
// Use proper message queue system instead of direct lws_write
|
||||
if (queue_message(wsi, pss, msg_str, msg_len, LWS_WRITE_TEXT) != 0) {
|
||||
DEBUG_ERROR("Failed to queue EVENT message for sub=%s", sub_id);
|
||||
}
|
||||
free(msg_str);
|
||||
}
|
||||
|
||||
|
||||
cJSON_Delete(event_msg);
|
||||
events_sent++;
|
||||
}
|
||||
@@ -1437,7 +1394,7 @@ void print_usage(const char* program_name) {
|
||||
printf("Options:\n");
|
||||
printf(" -h, --help Show this help message\n");
|
||||
printf(" -v, --version Show version information\n");
|
||||
printf(" -p, --port PORT Override relay port (first-time startup only)\n");
|
||||
printf(" -p, --port PORT Override relay port (first-time startup and existing relay restarts)\n");
|
||||
printf(" --strict-port Fail if exact port is unavailable (no port increment)\n");
|
||||
printf(" -a, --admin-pubkey KEY Override admin public key (64-char hex or npub)\n");
|
||||
printf(" -r, --relay-privkey KEY Override relay private key (64-char hex or nsec)\n");
|
||||
@@ -1447,13 +1404,14 @@ void print_usage(const char* program_name) {
|
||||
printf("Configuration:\n");
|
||||
printf(" This relay uses event-based configuration stored in the database.\n");
|
||||
printf(" On first startup, keys are automatically generated and printed once.\n");
|
||||
printf(" Command line options like --port only apply during first-time setup.\n");
|
||||
printf(" Command line options like --port apply during first-time setup and existing relay restarts.\n");
|
||||
printf(" After initial setup, all configuration is managed via database events.\n");
|
||||
printf(" Database file: <relay_pubkey>.db (created automatically)\n");
|
||||
printf("\n");
|
||||
printf("Port Binding:\n");
|
||||
printf(" Default: Try up to 10 consecutive ports if requested port is busy\n");
|
||||
printf(" --strict-port: Fail immediately if exact requested port is unavailable\n");
|
||||
printf(" --strict-port works with any custom port specified via -p or --port\n");
|
||||
printf("\n");
|
||||
printf("Examples:\n");
|
||||
printf(" %s # Start relay (auto-configure on first run)\n", program_name);
|
||||
@@ -1702,70 +1660,7 @@ int main(int argc, char* argv[]) {
|
||||
return 1;
|
||||
}
|
||||
|
||||
// COMMENTED OUT: Old incremental config building code replaced by unified startup sequence
|
||||
// The new first_time_startup_sequence() function handles all config creation atomically
|
||||
/*
|
||||
// Handle configuration setup after database is initialized
|
||||
// Always populate defaults directly in config table (abandoning legacy event signing)
|
||||
|
||||
// Populate default config values in table
|
||||
if (populate_default_config_values() != 0) {
|
||||
DEBUG_ERROR("Failed to populate default config values");
|
||||
cleanup_configuration_system();
|
||||
nostr_cleanup();
|
||||
close_database();
|
||||
return 1;
|
||||
}
|
||||
|
||||
// DEBUG_GUARD_START
|
||||
if (g_debug_level >= DEBUG_LEVEL_DEBUG) {
|
||||
sqlite3_stmt* stmt;
|
||||
if (sqlite3_prepare_v2(g_db, "SELECT COUNT(*) FROM config", -1, &stmt, NULL) == SQLITE_OK) {
|
||||
if (sqlite3_step(stmt) == SQLITE_ROW) {
|
||||
int row_count = sqlite3_column_int(stmt, 0);
|
||||
DEBUG_LOG("Config table row count after populate_default_config_values(): %d", row_count);
|
||||
}
|
||||
sqlite3_finalize(stmt);
|
||||
}
|
||||
}
|
||||
// DEBUG_GUARD_END
|
||||
|
||||
// Apply CLI overrides now that database is available
|
||||
if (cli_options.port_override > 0) {
|
||||
char port_str[16];
|
||||
snprintf(port_str, sizeof(port_str), "%d", cli_options.port_override);
|
||||
if (update_config_in_table("relay_port", port_str) != 0) {
|
||||
DEBUG_ERROR("Failed to update relay port override in config table");
|
||||
cleanup_configuration_system();
|
||||
nostr_cleanup();
|
||||
close_database();
|
||||
return 1;
|
||||
}
|
||||
printf(" Port: %d (overriding default)\n", cli_options.port_override);
|
||||
}
|
||||
|
||||
// Add pubkeys to config table (single authoritative call)
|
||||
if (add_pubkeys_to_config_table() != 0) {
|
||||
DEBUG_ERROR("Failed to add pubkeys to config table");
|
||||
cleanup_configuration_system();
|
||||
nostr_cleanup();
|
||||
close_database();
|
||||
return 1;
|
||||
}
|
||||
|
||||
// DEBUG_GUARD_START
|
||||
if (g_debug_level >= DEBUG_LEVEL_DEBUG) {
|
||||
sqlite3_stmt* stmt;
|
||||
if (sqlite3_prepare_v2(g_db, "SELECT COUNT(*) FROM config", -1, &stmt, NULL) == SQLITE_OK) {
|
||||
if (sqlite3_step(stmt) == SQLITE_ROW) {
|
||||
int row_count = sqlite3_column_int(stmt, 0);
|
||||
DEBUG_LOG("Config table row count after add_pubkeys_to_config_table() (first-time): %d", row_count);
|
||||
}
|
||||
sqlite3_finalize(stmt);
|
||||
}
|
||||
}
|
||||
// DEBUG_GUARD_END
|
||||
*/
|
||||
} else {
|
||||
// Find existing database file
|
||||
char** existing_files = find_existing_db_files();
|
||||
@@ -1800,7 +1695,7 @@ int main(int argc, char* argv[]) {
|
||||
return 1;
|
||||
}
|
||||
|
||||
// Setup existing relay (sets database path and loads config)
|
||||
// Setup existing relay FIRST (sets database path)
|
||||
if (startup_existing_relay(relay_pubkey, &cli_options) != 0) {
|
||||
DEBUG_ERROR("Failed to setup existing relay");
|
||||
cleanup_configuration_system();
|
||||
@@ -1813,23 +1708,7 @@ int main(int argc, char* argv[]) {
|
||||
return 1;
|
||||
}
|
||||
|
||||
// Check config table row count before database initialization
|
||||
{
|
||||
sqlite3* temp_db = NULL;
|
||||
if (sqlite3_open(g_database_path, &temp_db) == SQLITE_OK) {
|
||||
sqlite3_stmt* stmt;
|
||||
if (sqlite3_prepare_v2(temp_db, "SELECT COUNT(*) FROM config", -1, &stmt, NULL) == SQLITE_OK) {
|
||||
if (sqlite3_step(stmt) == SQLITE_ROW) {
|
||||
int row_count = sqlite3_column_int(stmt, 0);
|
||||
printf(" Config table row count before database initialization: %d\n", row_count);
|
||||
}
|
||||
sqlite3_finalize(stmt);
|
||||
}
|
||||
sqlite3_close(temp_db);
|
||||
}
|
||||
}
|
||||
|
||||
// Initialize database with existing database path
|
||||
// Initialize database with the database path set by startup_existing_relay()
|
||||
DEBUG_TRACE("Initializing existing database");
|
||||
if (init_database(g_database_path) != 0) {
|
||||
DEBUG_ERROR("Failed to initialize existing database");
|
||||
@@ -1844,6 +1723,20 @@ int main(int argc, char* argv[]) {
|
||||
}
|
||||
DEBUG_LOG("Existing database initialized");
|
||||
|
||||
// Apply CLI overrides atomically (now that database is initialized)
|
||||
if (apply_cli_overrides_atomic(&cli_options) != 0) {
|
||||
DEBUG_ERROR("Failed to apply CLI overrides for existing relay");
|
||||
cleanup_configuration_system();
|
||||
free(relay_pubkey);
|
||||
for (int i = 0; existing_files[i]; i++) {
|
||||
free(existing_files[i]);
|
||||
}
|
||||
free(existing_files);
|
||||
nostr_cleanup();
|
||||
close_database();
|
||||
return 1;
|
||||
}
|
||||
|
||||
// DEBUG_GUARD_START
|
||||
if (g_debug_level >= DEBUG_LEVEL_DEBUG) {
|
||||
sqlite3_stmt* stmt;
|
||||
@@ -1855,103 +1748,7 @@ int main(int argc, char* argv[]) {
|
||||
sqlite3_finalize(stmt);
|
||||
}
|
||||
}
|
||||
// DEBUG_GUARD_END
|
||||
|
||||
// COMMENTED OUT: Old incremental config building code replaced by unified startup sequence
|
||||
// The new startup_existing_relay() function handles all config loading atomically
|
||||
/*
|
||||
// Ensure default configuration values are populated (for any missing keys)
|
||||
// This must be done AFTER database initialization
|
||||
// COMMENTED OUT: Don't modify existing database config on restart
|
||||
// if (populate_default_config_values() != 0) {
|
||||
// DEBUG_WARN("Failed to populate default config values for existing relay - continuing");
|
||||
// }
|
||||
|
||||
// Load configuration from database
|
||||
cJSON* config_event = load_config_event_from_database(relay_pubkey);
|
||||
if (config_event) {
|
||||
if (apply_configuration_from_event(config_event) != 0) {
|
||||
DEBUG_WARN("Failed to apply configuration from database");
|
||||
}
|
||||
cJSON_Delete(config_event);
|
||||
} else {
|
||||
// This is expected for relays using table-based configuration
|
||||
// No longer a warning - just informational
|
||||
}
|
||||
|
||||
// DEBUG_GUARD_START
|
||||
if (g_debug_level >= DEBUG_LEVEL_DEBUG) {
|
||||
sqlite3_stmt* stmt;
|
||||
if (sqlite3_prepare_v2(g_db, "SELECT COUNT(*) FROM config", -1, &stmt, NULL) == SQLITE_OK) {
|
||||
if (sqlite3_step(stmt) == SQLITE_ROW) {
|
||||
int row_count = sqlite3_column_int(stmt, 0);
|
||||
DEBUG_LOG("Config table row count before checking pubkeys: %d", row_count);
|
||||
}
|
||||
sqlite3_finalize(stmt);
|
||||
}
|
||||
}
|
||||
// DEBUG_GUARD_END
|
||||
|
||||
// Ensure pubkeys are in config table for existing relay
|
||||
// This handles migration from old event-based config to table-based config
|
||||
const char* admin_pubkey_from_table = get_config_value_from_table("admin_pubkey");
|
||||
const char* relay_pubkey_from_table = get_config_value_from_table("relay_pubkey");
|
||||
|
||||
int need_to_add_pubkeys = 0;
|
||||
|
||||
// Check if admin_pubkey is missing or invalid
|
||||
if (!admin_pubkey_from_table || strlen(admin_pubkey_from_table) != 64) {
|
||||
DEBUG_WARN("Admin pubkey missing or invalid in config table - will regenerate from cache");
|
||||
need_to_add_pubkeys = 1;
|
||||
}
|
||||
if (admin_pubkey_from_table) free((char*)admin_pubkey_from_table);
|
||||
|
||||
// Check if relay_pubkey is missing or invalid
|
||||
if (!relay_pubkey_from_table || strlen(relay_pubkey_from_table) != 64) {
|
||||
DEBUG_WARN("Relay pubkey missing or invalid in config table - will regenerate from cache");
|
||||
need_to_add_pubkeys = 1;
|
||||
}
|
||||
if (relay_pubkey_from_table) free((char*)relay_pubkey_from_table);
|
||||
|
||||
// If either pubkey is missing, call add_pubkeys_to_config_table to populate both
|
||||
if (need_to_add_pubkeys) {
|
||||
if (add_pubkeys_to_config_table() != 0) {
|
||||
DEBUG_ERROR("Failed to add pubkeys to config table for existing relay");
|
||||
cleanup_configuration_system();
|
||||
nostr_cleanup();
|
||||
close_database();
|
||||
return 1;
|
||||
}
|
||||
|
||||
// DEBUG_GUARD_START
|
||||
if (g_debug_level >= DEBUG_LEVEL_DEBUG) {
|
||||
sqlite3_stmt* stmt;
|
||||
if (sqlite3_prepare_v2(g_db, "SELECT COUNT(*) FROM config", -1, &stmt, NULL) == SQLITE_OK) {
|
||||
if (sqlite3_step(stmt) == SQLITE_ROW) {
|
||||
int row_count = sqlite3_column_int(stmt, 0);
|
||||
DEBUG_LOG("Config table row count after add_pubkeys_to_config_table(): %d", row_count);
|
||||
}
|
||||
sqlite3_finalize(stmt);
|
||||
}
|
||||
}
|
||||
// DEBUG_GUARD_END
|
||||
}
|
||||
|
||||
// Apply CLI overrides for existing relay (port override should work even for existing relays)
|
||||
if (cli_options.port_override > 0) {
|
||||
char port_str[16];
|
||||
snprintf(port_str, sizeof(port_str), "%d", cli_options.port_override);
|
||||
if (update_config_in_table("relay_port", port_str) != 0) {
|
||||
DEBUG_ERROR("Failed to update relay port override in config table for existing relay");
|
||||
cleanup_configuration_system();
|
||||
nostr_cleanup();
|
||||
close_database();
|
||||
return 1;
|
||||
}
|
||||
printf(" Port: %d (overriding configured port)\n", cli_options.port_override);
|
||||
}
|
||||
*/
|
||||
|
||||
|
||||
// Free memory
|
||||
free(relay_pubkey);
|
||||
for (int i = 0; existing_files[i]; i++) {
|
||||
@@ -1989,9 +1786,6 @@ int main(int argc, char* argv[]) {
|
||||
// Initialize NIP-40 expiration configuration
|
||||
init_expiration_config();
|
||||
|
||||
// Initialize monitoring system
|
||||
init_monitoring_system();
|
||||
|
||||
// Update subscription manager configuration
|
||||
update_subscription_manager_config();
|
||||
|
||||
@@ -2015,17 +1809,14 @@ int main(int argc, char* argv[]) {
|
||||
|
||||
|
||||
|
||||
// Start WebSocket Nostr relay server (port from configuration)
|
||||
int result = start_websocket_relay(-1, cli_options.strict_port); // Let config system determine port, pass strict_port flag
|
||||
// Start WebSocket Nostr relay server (port from CLI override or configuration)
|
||||
int result = start_websocket_relay(cli_options.port_override, cli_options.strict_port); // Use CLI port override if specified, otherwise config
|
||||
|
||||
// Cleanup
|
||||
cleanup_relay_info();
|
||||
ginxsom_request_validator_cleanup();
|
||||
cleanup_configuration_system();
|
||||
|
||||
// Cleanup monitoring system
|
||||
cleanup_monitoring_system();
|
||||
|
||||
// Cleanup subscription manager mutexes
|
||||
pthread_mutex_destroy(&g_subscription_manager.subscriptions_lock);
|
||||
pthread_mutex_destroy(&g_subscription_manager.ip_tracking_lock);
|
||||
|
||||
@@ -10,10 +10,10 @@
|
||||
#define MAIN_H
|
||||
|
||||
// Version information (auto-updated by build system)
|
||||
#define VERSION "v0.7.27"
|
||||
#define VERSION "v0.8.0"
|
||||
#define VERSION_MAJOR 0
|
||||
#define VERSION_MINOR 7
|
||||
#define VERSION_PATCH 27
|
||||
#define VERSION_MINOR 7
|
||||
#define VERSION_PATCH 44
|
||||
|
||||
// Relay metadata (authoritative source for NIP-11 information)
|
||||
#define RELAY_NAME "C-Relay"
|
||||
|
||||
src/nip042.c (27 lines changed)
@@ -12,6 +12,7 @@
|
||||
#include <string.h>
|
||||
#include <stdlib.h>
|
||||
#include <time.h>
|
||||
#include "websockets.h"
|
||||
|
||||
|
||||
// Forward declaration for notice message function
|
||||
@@ -22,23 +23,7 @@ int nostr_nip42_generate_challenge(char *challenge_buffer, size_t buffer_size);
|
||||
int nostr_nip42_verify_auth_event(cJSON *event, const char *challenge_id,
|
||||
const char *relay_url, int time_tolerance_seconds);
|
||||
|
||||
// Forward declaration for per_session_data struct (defined in main.c)
|
||||
struct per_session_data {
|
||||
int authenticated;
|
||||
void* subscriptions; // Head of this session's subscription list
|
||||
pthread_mutex_t session_lock; // Per-session thread safety
|
||||
char client_ip[41]; // Client IP for logging
|
||||
int subscription_count; // Number of subscriptions for this session
|
||||
|
||||
// NIP-42 Authentication State
|
||||
char authenticated_pubkey[65]; // Authenticated public key (64 hex + null)
|
||||
char active_challenge[65]; // Current challenge for this session (64 hex + null)
|
||||
time_t challenge_created; // When challenge was created
|
||||
time_t challenge_expires; // Challenge expiration time
|
||||
int nip42_auth_required_events; // Whether NIP-42 auth is required for EVENT submission
|
||||
int nip42_auth_required_subscriptions; // Whether NIP-42 auth is required for REQ operations
|
||||
int auth_challenge_sent; // Whether challenge has been sent (0/1)
|
||||
};
|
||||
// Forward declaration for per_session_data struct (defined in websockets.h)
|
||||
|
||||
|
||||
// Send NIP-42 authentication challenge to client
|
||||
@@ -70,11 +55,9 @@ void send_nip42_auth_challenge(struct lws* wsi, struct per_session_data* pss) {
|
||||
char* msg_str = cJSON_Print(auth_msg);
|
||||
if (msg_str) {
|
||||
size_t msg_len = strlen(msg_str);
|
||||
unsigned char* buf = malloc(LWS_PRE + msg_len);
|
||||
if (buf) {
|
||||
memcpy(buf + LWS_PRE, msg_str, msg_len);
|
||||
lws_write(wsi, buf + LWS_PRE, msg_len, LWS_WRITE_TEXT);
|
||||
free(buf);
|
||||
// Use proper message queue system instead of direct lws_write
|
||||
if (queue_message(wsi, pss, msg_str, msg_len, LWS_WRITE_TEXT) != 0) {
|
||||
DEBUG_ERROR("Failed to queue AUTH challenge message");
|
||||
}
|
||||
free(msg_str);
|
||||
}
|
||||
|
||||
@@ -1,12 +1,12 @@
|
||||
/* Embedded SQL Schema for C Nostr Relay
|
||||
* Generated from db/schema.sql - Do not edit manually
|
||||
* Schema Version: 7
|
||||
* Schema Version: 8
|
||||
*/
|
||||
#ifndef SQL_SCHEMA_H
|
||||
#define SQL_SCHEMA_H
|
||||
|
||||
/* Schema version constant */
|
||||
#define EMBEDDED_SCHEMA_VERSION "7"
|
||||
#define EMBEDDED_SCHEMA_VERSION "8"
|
||||
|
||||
/* Embedded SQL schema as C string literal */
|
||||
static const char* const EMBEDDED_SCHEMA_SQL =
|
||||
@@ -15,7 +15,7 @@ static const char* const EMBEDDED_SCHEMA_SQL =
|
||||
-- Configuration system using config table\n\
|
||||
\n\
|
||||
-- Schema version tracking\n\
|
||||
PRAGMA user_version = 7;\n\
|
||||
PRAGMA user_version = 8;\n\
|
||||
\n\
|
||||
-- Enable foreign key support\n\
|
||||
PRAGMA foreign_keys = ON;\n\
|
||||
@@ -58,8 +58,8 @@ CREATE TABLE schema_info (\n\
|
||||
\n\
|
||||
-- Insert schema metadata\n\
|
||||
INSERT INTO schema_info (key, value) VALUES\n\
|
||||
('version', '7'),\n\
|
||||
('description', 'Hybrid Nostr relay schema with event-based and table-based configuration'),\n\
|
||||
('version', '8'),\n\
|
||||
('description', 'Hybrid Nostr relay schema with subscription deduplication support'),\n\
|
||||
('created_at', strftime('%s', 'now'));\n\
|
||||
\n\
|
||||
-- Helper views for common queries\n\
|
||||
@@ -93,16 +93,6 @@ FROM events\n\
|
||||
WHERE kind = 33334\n\
|
||||
ORDER BY created_at DESC;\n\
|
||||
\n\
|
||||
-- Optimization: Trigger for automatic cleanup of ephemeral events older than 1 hour\n\
|
||||
CREATE TRIGGER cleanup_ephemeral_events\n\
|
||||
AFTER INSERT ON events\n\
|
||||
WHEN NEW.event_type = 'ephemeral'\n\
|
||||
BEGIN\n\
|
||||
DELETE FROM events \n\
|
||||
WHERE event_type = 'ephemeral' \n\
|
||||
AND first_seen < (strftime('%s', 'now') - 3600);\n\
|
||||
END;\n\
|
||||
\n\
|
||||
-- Replaceable event handling trigger\n\
|
||||
CREATE TRIGGER handle_replaceable_events\n\
|
||||
AFTER INSERT ON events\n\
|
||||
@@ -181,17 +171,19 @@ END;\n\
|
||||
-- Persistent Subscriptions Logging Tables (Phase 2)\n\
|
||||
-- Optional database logging for subscription analytics and debugging\n\
|
||||
\n\
|
||||
-- Subscription events log\n\
|
||||
CREATE TABLE subscription_events (\n\
|
||||
-- Subscriptions log (renamed from subscription_events for clarity)\n\
|
||||
CREATE TABLE subscriptions (\n\
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,\n\
|
||||
subscription_id TEXT NOT NULL, -- Subscription ID from client\n\
|
||||
wsi_pointer TEXT NOT NULL, -- WebSocket pointer address (hex string)\n\
|
||||
client_ip TEXT NOT NULL, -- Client IP address\n\
|
||||
event_type TEXT NOT NULL CHECK (event_type IN ('created', 'closed', 'expired', 'disconnected')),\n\
|
||||
filter_json TEXT, -- JSON representation of filters (for created events)\n\
|
||||
events_sent INTEGER DEFAULT 0, -- Number of events sent to this subscription\n\
|
||||
created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),\n\
|
||||
ended_at INTEGER, -- When subscription ended (for closed/expired/disconnected)\n\
|
||||
duration INTEGER -- Computed: ended_at - created_at\n\
|
||||
duration INTEGER, -- Computed: ended_at - created_at\n\
|
||||
UNIQUE(subscription_id, wsi_pointer) -- Prevent duplicate subscriptions per connection\n\
|
||||
);\n\
|
||||
\n\
|
||||
-- Subscription metrics summary\n\
|
||||
@@ -207,34 +199,23 @@ CREATE TABLE subscription_metrics (\n\
|
||||
UNIQUE(date)\n\
|
||||
);\n\
|
||||
\n\
|
||||
-- Event broadcasting log (optional, for detailed analytics)\n\
|
||||
CREATE TABLE event_broadcasts (\n\
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,\n\
|
||||
event_id TEXT NOT NULL, -- Event ID that was broadcast\n\
|
||||
subscription_id TEXT NOT NULL, -- Subscription that received it\n\
|
||||
client_ip TEXT NOT NULL, -- Client IP\n\
|
||||
broadcast_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),\n\
|
||||
FOREIGN KEY (event_id) REFERENCES events(id)\n\
|
||||
);\n\
|
||||
\n\
|
||||
-- Indexes for subscription logging performance\n\
|
||||
CREATE INDEX idx_subscription_events_id ON subscription_events(subscription_id);\n\
|
||||
CREATE INDEX idx_subscription_events_type ON subscription_events(event_type);\n\
|
||||
CREATE INDEX idx_subscription_events_created ON subscription_events(created_at DESC);\n\
|
||||
CREATE INDEX idx_subscription_events_client ON subscription_events(client_ip);\n\
|
||||
CREATE INDEX idx_subscriptions_id ON subscriptions(subscription_id);\n\
|
||||
CREATE INDEX idx_subscriptions_type ON subscriptions(event_type);\n\
|
||||
CREATE INDEX idx_subscriptions_created ON subscriptions(created_at DESC);\n\
|
||||
CREATE INDEX idx_subscriptions_client ON subscriptions(client_ip);\n\
|
||||
CREATE INDEX idx_subscriptions_wsi ON subscriptions(wsi_pointer);\n\
|
||||
\n\
|
||||
CREATE INDEX idx_subscription_metrics_date ON subscription_metrics(date DESC);\n\
|
||||
\n\
|
||||
CREATE INDEX idx_event_broadcasts_event ON event_broadcasts(event_id);\n\
|
||||
CREATE INDEX idx_event_broadcasts_sub ON event_broadcasts(subscription_id);\n\
|
||||
CREATE INDEX idx_event_broadcasts_time ON event_broadcasts(broadcast_at DESC);\n\
|
||||
\n\
|
||||
-- Trigger to update subscription duration when ended\n\
|
||||
CREATE TRIGGER update_subscription_duration\n\
|
||||
AFTER UPDATE OF ended_at ON subscription_events\n\
|
||||
AFTER UPDATE OF ended_at ON subscriptions\n\
|
||||
WHEN NEW.ended_at IS NOT NULL AND OLD.ended_at IS NULL\n\
|
||||
BEGIN\n\
|
||||
UPDATE subscription_events\n\
|
||||
UPDATE subscriptions\n\
|
||||
SET duration = NEW.ended_at - NEW.created_at\n\
|
||||
WHERE id = NEW.id;\n\
|
||||
END;\n\
|
||||
@@ -249,24 +230,26 @@ SELECT\n\
|
||||
MAX(events_sent) as max_events_sent,\n\
|
||||
AVG(events_sent) as avg_events_sent,\n\
|
||||
COUNT(DISTINCT client_ip) as unique_clients\n\
|
||||
FROM subscription_events\n\
|
||||
FROM subscriptions\n\
|
||||
GROUP BY date(created_at, 'unixepoch')\n\
|
||||
ORDER BY date DESC;\n\
|
||||
\n\
|
||||
-- View for current active subscriptions (from log perspective)\n\
|
||||
CREATE VIEW active_subscriptions_log AS\n\
|
||||
SELECT\n\
|
||||
subscription_id,\n\
|
||||
client_ip,\n\
|
||||
filter_json,\n\
|
||||
events_sent,\n\
|
||||
created_at,\n\
|
||||
(strftime('%s', 'now') - created_at) as duration_seconds\n\
|
||||
FROM subscription_events\n\
|
||||
WHERE event_type = 'created'\n\
|
||||
AND subscription_id NOT IN (\n\
|
||||
SELECT subscription_id FROM subscription_events\n\
|
||||
WHERE event_type IN ('closed', 'expired', 'disconnected')\n\
|
||||
s.subscription_id,\n\
|
||||
s.client_ip,\n\
|
||||
s.filter_json,\n\
|
||||
s.events_sent,\n\
|
||||
s.created_at,\n\
|
||||
(strftime('%s', 'now') - s.created_at) as duration_seconds\n\
|
||||
FROM subscriptions s\n\
|
||||
WHERE s.event_type = 'created'\n\
|
||||
AND NOT EXISTS (\n\
|
||||
SELECT 1 FROM subscriptions s2\n\
|
||||
WHERE s2.subscription_id = s.subscription_id\n\
|
||||
AND s2.wsi_pointer = s.wsi_pointer\n\
|
||||
AND s2.event_type IN ('closed', 'expired', 'disconnected')\n\
|
||||
);\n\
|
||||
\n\
|
||||
-- Database Statistics Views for Admin API\n\
|
||||
|
||||
@@ -25,6 +25,9 @@ int validate_timestamp_range(long since, long until, char* error_message, size_t
|
||||
int validate_numeric_limits(int limit, char* error_message, size_t error_size);
|
||||
int validate_search_term(const char* search_term, char* error_message, size_t error_size);
|
||||
|
||||
// Forward declaration for monitoring function
|
||||
void monitoring_on_subscription_change(void);
|
||||
|
||||
// Global database variable
|
||||
extern sqlite3* g_db;
|
||||
|
||||
@@ -238,27 +241,81 @@ void free_subscription(subscription_t* sub) {
// Add subscription to global manager (thread-safe)
int add_subscription_to_manager(subscription_t* sub) {
    if (!sub) return -1;

    pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);

    // Check global limits
    if (g_subscription_manager.total_subscriptions >= g_subscription_manager.max_total_subscriptions) {

    // Check for existing subscription with same ID and WebSocket connection
    // Remove it first to prevent duplicates (implements subscription replacement per NIP-01)
    subscription_t** current = &g_subscription_manager.active_subscriptions;
    int found_duplicate = 0;
    subscription_t* duplicate_old = NULL;

    while (*current) {
        subscription_t* existing = *current;

        // Match by subscription ID and WebSocket pointer
        if (strcmp(existing->id, sub->id) == 0 && existing->wsi == sub->wsi) {
            // Found duplicate: mark inactive and unlink from global list under lock
            existing->active = 0;
            *current = existing->next;
            g_subscription_manager.total_subscriptions--;
            found_duplicate = 1;
            duplicate_old = existing; // defer free until after per-session unlink
            break;
        }

        current = &(existing->next);
    }

    // Check global limits (only if not replacing an existing subscription)
    if (!found_duplicate && g_subscription_manager.total_subscriptions >= g_subscription_manager.max_total_subscriptions) {
        pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);
        DEBUG_ERROR("Maximum total subscriptions reached");
        return -1;
    }

    // Add to global list
    sub->next = g_subscription_manager.active_subscriptions;
    g_subscription_manager.active_subscriptions = sub;
    g_subscription_manager.total_subscriptions++;
    g_subscription_manager.total_created++;

    // Only increment total_created if this is a new subscription (not a replacement)
    if (!found_duplicate) {
        g_subscription_manager.total_created++;
    }

    pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);

    // Log subscription creation to database

    // If we replaced an existing subscription, unlink it from the per-session list before freeing
    if (duplicate_old) {
        // Obtain per-session data for this wsi
        struct per_session_data* pss = (struct per_session_data*) lws_wsi_user(duplicate_old->wsi);
        if (pss) {
            pthread_mutex_lock(&pss->session_lock);
            struct subscription** scur = &pss->subscriptions;
            while (*scur) {
                if (*scur == duplicate_old) {
                    // Unlink by pointer identity to avoid removing the newly-added one
                    *scur = duplicate_old->session_next;
                    if (pss->subscription_count > 0) {
                        pss->subscription_count--;
                    }
                    break;
                }
                scur = &((*scur)->session_next);
            }
            pthread_mutex_unlock(&pss->session_lock);
        }
        // Now safe to free the old subscription
        free_subscription(duplicate_old);
    }

    // Log subscription creation to database (INSERT OR REPLACE handles duplicates)
    log_subscription_created(sub);

    // Trigger monitoring update for subscription changes
    monitoring_on_subscription_change();

    return 0;
}

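For orientation, the duplicate handling above is the NIP-01 rule that re-using a subscription id on the same connection replaces the earlier subscription rather than adding a second one. Illustrative wire traffic only (not taken from this diff): a client that sends

["REQ","feed",{"kinds":[1],"limit":10}]
["REQ","feed",{"kinds":[1,7],"limit":10}]

on one socket ends up with a single active "feed" subscription using the second filter set; the first is unlinked and freed by the code above, and total_created is only bumped for genuinely new subscriptions.
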
@@ -306,6 +363,9 @@ int remove_subscription_from_manager(const char* sub_id, struct lws* wsi) {
    // Update events sent counter before freeing
    update_subscription_events_sent(sub_id_copy, events_sent_copy);

    // Trigger monitoring update for subscription changes
    monitoring_on_subscription_change();

    free_subscription(sub);
    return 0;
}
@@ -324,37 +384,52 @@ int remove_subscription_from_manager(const char* sub_id, struct lws* wsi) {

// Check if an event matches a subscription filter
int event_matches_filter(cJSON* event, subscription_filter_t* filter) {
    DEBUG_TRACE("Checking event against subscription filter");

    if (!event || !filter) {
        DEBUG_TRACE("Exiting event_matches_filter - null parameters");
        return 0;
    }

    // Debug: Log event details being tested
    cJSON* event_kind_obj = cJSON_GetObjectItem(event, "kind");
    cJSON* event_id_obj = cJSON_GetObjectItem(event, "id");
    cJSON* event_created_at_obj = cJSON_GetObjectItem(event, "created_at");

    DEBUG_TRACE("FILTER_MATCH: Testing event kind=%d id=%.8s created_at=%ld",
                event_kind_obj ? (int)cJSON_GetNumberValue(event_kind_obj) : -1,
                event_id_obj && cJSON_IsString(event_id_obj) ? cJSON_GetStringValue(event_id_obj) : "null",
                event_created_at_obj ? (long)cJSON_GetNumberValue(event_created_at_obj) : 0);

    // Check kinds filter
    if (filter->kinds && cJSON_IsArray(filter->kinds)) {
        DEBUG_TRACE("FILTER_MATCH: Checking kinds filter with %d kinds", cJSON_GetArraySize(filter->kinds));

        cJSON* event_kind = cJSON_GetObjectItem(event, "kind");
        if (!event_kind || !cJSON_IsNumber(event_kind)) {
            DEBUG_WARN("FILTER_MATCH: Event has no valid kind field");
            return 0;
        }

        int event_kind_val = (int)cJSON_GetNumberValue(event_kind);
        int kind_match = 0;
        DEBUG_TRACE("FILTER_MATCH: Event kind=%d", event_kind_val);

        int kind_match = 0;
        cJSON* kind_item = NULL;
        cJSON_ArrayForEach(kind_item, filter->kinds) {
            if (cJSON_IsNumber(kind_item)) {
                int filter_kind = (int)cJSON_GetNumberValue(kind_item);
                DEBUG_TRACE("FILTER_MATCH: Comparing event kind %d with filter kind %d", event_kind_val, filter_kind);
                if (filter_kind == event_kind_val) {
                    kind_match = 1;
                    DEBUG_TRACE("FILTER_MATCH: Kind matched!");
                    break;
                }
            }
        }

        if (!kind_match) {
            DEBUG_TRACE("FILTER_MATCH: No kind match, filter rejected");
            return 0;
        }
        DEBUG_TRACE("FILTER_MATCH: Kinds filter passed");
    }

    // Check authors filter
@@ -415,13 +490,19 @@ int event_matches_filter(cJSON* event, subscription_filter_t* filter) {
    if (filter->since > 0) {
        cJSON* event_created_at = cJSON_GetObjectItem(event, "created_at");
        if (!event_created_at || !cJSON_IsNumber(event_created_at)) {
            DEBUG_WARN("FILTER_MATCH: Event has no valid created_at field");
            return 0;
        }

        long event_timestamp = (long)cJSON_GetNumberValue(event_created_at);
        DEBUG_TRACE("FILTER_MATCH: Checking since filter: event_ts=%ld filter_since=%ld",
                    event_timestamp, filter->since);

        if (event_timestamp < filter->since) {
            DEBUG_TRACE("FILTER_MATCH: Event too old (before since), filter rejected");
            return 0;
        }
        DEBUG_TRACE("FILTER_MATCH: Since filter passed");
    }

    // Check until filter
@@ -503,7 +584,7 @@ int event_matches_filter(cJSON* event, subscription_filter_t* filter) {
        }
    }

    DEBUG_TRACE("Exiting event_matches_filter - match found");
    DEBUG_TRACE("FILTER_MATCH: All filters passed, event matches!");
    return 1; // All filters passed
}

@@ -513,23 +594,29 @@ int event_matches_subscription(cJSON* event, subscription_t* subscription) {
        return 0;
    }

    DEBUG_TRACE("SUB_MATCH: Testing subscription '%s'", subscription->id);

    int filter_num = 0;
    subscription_filter_t* filter = subscription->filters;
    while (filter) {
        filter_num++;
        DEBUG_TRACE("SUB_MATCH: Testing filter #%d", filter_num);

        if (event_matches_filter(event, filter)) {
            DEBUG_TRACE("SUB_MATCH: Filter #%d matched! Subscription '%s' matches",
                        filter_num, subscription->id);
            return 1; // Match found (OR logic)
        }
        filter = filter->next;
    }

    DEBUG_TRACE("SUB_MATCH: No filters matched for subscription '%s'", subscription->id);
    return 0; // No filters matched
}

// Broadcast event to all matching subscriptions (thread-safe)
int broadcast_event_to_subscriptions(cJSON* event) {
    DEBUG_TRACE("Broadcasting event to subscriptions");

    if (!event) {
        DEBUG_TRACE("Exiting broadcast_event_to_subscriptions - null event");
        return 0;
    }

@@ -545,7 +632,17 @@ int broadcast_event_to_subscriptions(cJSON* event) {
    }

    int broadcasts = 0;

    // Log event details
    cJSON* event_kind = cJSON_GetObjectItem(event, "kind");
    cJSON* event_id = cJSON_GetObjectItem(event, "id");
    cJSON* event_created_at = cJSON_GetObjectItem(event, "created_at");

    DEBUG_TRACE("BROADCAST: Event kind=%d id=%.8s created_at=%ld",
                event_kind ? (int)cJSON_GetNumberValue(event_kind) : -1,
                event_id && cJSON_IsString(event_id) ? cJSON_GetStringValue(event_id) : "null",
                event_created_at ? (long)cJSON_GetNumberValue(event_created_at) : 0);

    // Create a temporary list of matching subscriptions to avoid holding lock during I/O
    typedef struct temp_sub {
        struct lws* wsi;
@@ -553,13 +650,21 @@ int broadcast_event_to_subscriptions(cJSON* event) {
        char client_ip[CLIENT_IP_MAX_LENGTH];
        struct temp_sub* next;
    } temp_sub_t;

    temp_sub_t* matching_subs = NULL;
    int matching_count = 0;

    // First pass: collect matching subscriptions while holding lock
    pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);

    int total_subs = 0;
    subscription_t* count_sub = g_subscription_manager.active_subscriptions;
    while (count_sub) {
        total_subs++;
        count_sub = count_sub->next;
    }
    DEBUG_TRACE("BROADCAST: Checking %d active subscriptions", total_subs);

    subscription_t* sub = g_subscription_manager.active_subscriptions;
    while (sub) {
        if (sub->active && sub->wsi && event_matches_subscription(event, sub)) {
@@ -611,12 +716,19 @@ int broadcast_event_to_subscriptions(cJSON* event) {
        if (buf) {
            memcpy(buf + LWS_PRE, msg_str, msg_len);

            // Send to WebSocket connection with error checking
            // Note: lws_write can fail if connection is closed, but won't crash
            int write_result = lws_write(current_temp->wsi, buf + LWS_PRE, msg_len, LWS_WRITE_TEXT);
            if (write_result >= 0) {
            // DEBUG: Log WebSocket frame details before sending
            DEBUG_TRACE("WS_FRAME_SEND: type=EVENT sub=%s len=%zu data=%.100s%s",
                        current_temp->id,
                        msg_len,
                        msg_str,
                        msg_len > 100 ? "..." : "");

            // Queue message for proper libwebsockets pattern
            struct per_session_data* pss = (struct per_session_data*)lws_wsi_user(current_temp->wsi);
            if (queue_message(current_temp->wsi, pss, msg_str, msg_len, LWS_WRITE_TEXT) == 0) {
                // Message queued successfully
                broadcasts++;

                // Update events sent counter for this subscription
                pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);
                subscription_t* update_sub = g_subscription_manager.active_subscriptions;
@@ -630,12 +742,15 @@ int broadcast_event_to_subscriptions(cJSON* event) {
                    update_sub = update_sub->next;
                }
                pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);

                // Log event broadcast to database (optional - can be disabled for performance)
                cJSON* event_id_obj = cJSON_GetObjectItem(event, "id");
                if (event_id_obj && cJSON_IsString(event_id_obj)) {
                    log_event_broadcast(cJSON_GetStringValue(event_id_obj), current_temp->id, current_temp->client_ip);
                }
                // NOTE: event_broadcasts table removed due to FOREIGN KEY constraint issues
                // cJSON* event_id_obj = cJSON_GetObjectItem(event, "id");
                // if (event_id_obj && cJSON_IsString(event_id_obj)) {
                //     log_event_broadcast(cJSON_GetStringValue(event_id_obj), current_temp->id, current_temp->client_ip);
                // }
            } else {
                DEBUG_ERROR("Failed to queue EVENT message for sub=%s", current_temp->id);
            }

            free(buf);
@@ -660,10 +775,41 @@ int broadcast_event_to_subscriptions(cJSON* event) {
    pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);

    DEBUG_LOG("Event broadcast complete: %d subscriptions matched", broadcasts);
    DEBUG_TRACE("Exiting broadcast_event_to_subscriptions");
    return broadcasts;
}

// Check if any active subscription exists for a specific event kind (thread-safe)
int has_subscriptions_for_kind(int event_kind) {
    pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);

    subscription_t* sub = g_subscription_manager.active_subscriptions;
    while (sub) {
        if (sub->active && sub->filters) {
            subscription_filter_t* filter = sub->filters;
            while (filter) {
                // Check if this filter includes our event kind
                if (filter->kinds && cJSON_IsArray(filter->kinds)) {
                    cJSON* kind_item = NULL;
                    cJSON_ArrayForEach(kind_item, filter->kinds) {
                        if (cJSON_IsNumber(kind_item)) {
                            int filter_kind = (int)cJSON_GetNumberValue(kind_item);
                            if (filter_kind == event_kind) {
                                pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);
                                return 1; // Found matching subscription
                            }
                        }
                    }
                }
                filter = filter->next;
            }
        }
        sub = sub->next;
    }

    pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);
    return 0; // No matching subscriptions
}

/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
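A note on intended use: has_subscriptions_for_kind() only inspects explicit kinds arrays, so a filter without a kinds field is not counted even though it would match every kind. A hypothetical caller-side sketch (not from this diff) that uses it as a cheap gate before per-event work:

/* Sketch only: skip broadcast work for ephemeral kinds nobody explicitly asked for.
   The actual handler in this commit broadcasts ephemeral events unconditionally. */
if (event_kind >= 20000 && event_kind < 30000 && !has_subscriptions_for_kind(event_kind)) {
    DEBUG_TRACE("No explicit subscribers for ephemeral kind %d, skipping broadcast", event_kind);
} else {
    broadcast_event_to_subscriptions(event);
}
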
@@ -675,6 +821,10 @@ int broadcast_event_to_subscriptions(cJSON* event) {
void log_subscription_created(const subscription_t* sub) {
    if (!g_db || !sub) return;

    // Convert wsi pointer to string
    char wsi_str[32];
    snprintf(wsi_str, sizeof(wsi_str), "%p", (void*)sub->wsi);

    // Create filter JSON for logging
    char* filter_json = NULL;
    if (sub->filters) {
@@ -721,16 +871,18 @@ void log_subscription_created(const subscription_t* sub) {
        cJSON_Delete(filters_array);
    }

    // Use INSERT OR REPLACE to handle duplicates automatically
    const char* sql =
        "INSERT INTO subscription_events (subscription_id, client_ip, event_type, filter_json) "
        "VALUES (?, ?, 'created', ?)";
        "INSERT OR REPLACE INTO subscriptions (subscription_id, wsi_pointer, client_ip, event_type, filter_json) "
        "VALUES (?, ?, ?, 'created', ?)";

    sqlite3_stmt* stmt;
    int rc = sqlite3_prepare_v2(g_db, sql, -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
        sqlite3_bind_text(stmt, 1, sub->id, -1, SQLITE_STATIC);
        sqlite3_bind_text(stmt, 2, sub->client_ip, -1, SQLITE_STATIC);
        sqlite3_bind_text(stmt, 3, filter_json ? filter_json : "[]", -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(stmt, 2, wsi_str, -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(stmt, 3, sub->client_ip, -1, SQLITE_STATIC);
        sqlite3_bind_text(stmt, 4, filter_json ? filter_json : "[]", -1, SQLITE_TRANSIENT);

        sqlite3_step(stmt);
        sqlite3_finalize(stmt);
@@ -745,8 +897,8 @@ void log_subscription_closed(const char* sub_id, const char* client_ip, const ch
    if (!g_db || !sub_id) return;

    const char* sql =
        "INSERT INTO subscription_events (subscription_id, client_ip, event_type) "
        "VALUES (?, ?, 'closed')";
        "INSERT INTO subscriptions (subscription_id, wsi_pointer, client_ip, event_type) "
        "VALUES (?, '', ?, 'closed')";

    sqlite3_stmt* stmt;
    int rc = sqlite3_prepare_v2(g_db, sql, -1, &stmt, NULL);
@@ -760,7 +912,7 @@ void log_subscription_closed(const char* sub_id, const char* client_ip, const ch

    // Update the corresponding 'created' entry with end time and events sent
    const char* update_sql =
        "UPDATE subscription_events "
        "UPDATE subscriptions "
        "SET ended_at = strftime('%s', 'now') "
        "WHERE subscription_id = ? AND event_type = 'created' AND ended_at IS NULL";

@@ -778,7 +930,7 @@ void log_subscription_disconnected(const char* client_ip) {

    // Mark all active subscriptions for this client as disconnected
    const char* sql =
        "UPDATE subscription_events "
        "UPDATE subscriptions "
        "SET ended_at = strftime('%s', 'now') "
        "WHERE client_ip = ? AND event_type = 'created' AND ended_at IS NULL";

@@ -793,8 +945,8 @@ void log_subscription_disconnected(const char* client_ip) {
    if (changes > 0) {
        // Log a disconnection event
        const char* insert_sql =
            "INSERT INTO subscription_events (subscription_id, client_ip, event_type) "
            "VALUES ('disconnect', ?, 'disconnected')";
            "INSERT INTO subscriptions (subscription_id, wsi_pointer, client_ip, event_type) "
            "VALUES ('disconnect', '', ?, 'disconnected')";

        rc = sqlite3_prepare_v2(g_db, insert_sql, -1, &stmt, NULL);
        if (rc == SQLITE_OK) {
@@ -807,31 +959,32 @@ void log_subscription_disconnected(const char* client_ip) {
}

// Log event broadcast to database (optional, can be resource intensive)
void log_event_broadcast(const char* event_id, const char* sub_id, const char* client_ip) {
    if (!g_db || !event_id || !sub_id || !client_ip) return;

    const char* sql =
        "INSERT INTO event_broadcasts (event_id, subscription_id, client_ip) "
        "VALUES (?, ?, ?)";

    sqlite3_stmt* stmt;
    int rc = sqlite3_prepare_v2(g_db, sql, -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
        sqlite3_bind_text(stmt, 1, event_id, -1, SQLITE_STATIC);
        sqlite3_bind_text(stmt, 2, sub_id, -1, SQLITE_STATIC);
        sqlite3_bind_text(stmt, 3, client_ip, -1, SQLITE_STATIC);

        sqlite3_step(stmt);
        sqlite3_finalize(stmt);
    }
}
// REMOVED: event_broadcasts table removed due to FOREIGN KEY constraint issues
// void log_event_broadcast(const char* event_id, const char* sub_id, const char* client_ip) {
//     if (!g_db || !event_id || !sub_id || !client_ip) return;
//
//     const char* sql =
//         "INSERT INTO event_broadcasts (event_id, subscription_id, client_ip) "
//         "VALUES (?, ?, ?)";
//
//     sqlite3_stmt* stmt;
//     int rc = sqlite3_prepare_v2(g_db, sql, -1, &stmt, NULL);
//     if (rc == SQLITE_OK) {
//         sqlite3_bind_text(stmt, 1, event_id, -1, SQLITE_STATIC);
//         sqlite3_bind_text(stmt, 2, sub_id, -1, SQLITE_STATIC);
//         sqlite3_bind_text(stmt, 3, client_ip, -1, SQLITE_STATIC);
//
//         sqlite3_step(stmt);
//         sqlite3_finalize(stmt);
//     }
// }

// Update events sent counter for a subscription
void update_subscription_events_sent(const char* sub_id, int events_sent) {
    if (!g_db || !sub_id) return;

    const char* sql =
        "UPDATE subscription_events "
        "UPDATE subscriptions "
        "SET events_sent = ? "
        "WHERE subscription_id = ? AND event_type = 'created'";

@@ -115,7 +115,9 @@ int get_active_connections_for_ip(const char* client_ip);
void log_subscription_created(const subscription_t* sub);
void log_subscription_closed(const char* sub_id, const char* client_ip, const char* reason);
void log_subscription_disconnected(const char* client_ip);
void log_event_broadcast(const char* event_id, const char* sub_id, const char* client_ip);
void update_subscription_events_sent(const char* sub_id, int events_sent);

// Subscription query functions
int has_subscriptions_for_kind(int event_kind);

#endif // SUBSCRIPTIONS_H

src/websockets.c (351 lines changed)
@@ -108,6 +108,136 @@ struct subscription_manager g_subscription_manager;

// Message queue functions for proper libwebsockets pattern

/**
 * Queue a message for WebSocket writing following libwebsockets' proper pattern.
 * This function adds messages to a per-session queue and requests writeable callback.
 *
 * @param wsi WebSocket instance
 * @param pss Per-session data containing message queue
 * @param message Message string to write
 * @param length Length of message string
 * @param type LWS_WRITE_* type (LWS_WRITE_TEXT, etc.)
 * @return 0 on success, -1 on error
 */
int queue_message(struct lws* wsi, struct per_session_data* pss, const char* message, size_t length, enum lws_write_protocol type) {
    if (!wsi || !pss || !message || length == 0) {
        DEBUG_ERROR("queue_message: invalid parameters");
        return -1;
    }

    // Allocate message queue node
    struct message_queue_node* node = malloc(sizeof(struct message_queue_node));
    if (!node) {
        DEBUG_ERROR("queue_message: failed to allocate queue node");
        return -1;
    }

    // Allocate buffer with LWS_PRE space
    size_t buffer_size = LWS_PRE + length;
    unsigned char* buffer = malloc(buffer_size);
    if (!buffer) {
        DEBUG_ERROR("queue_message: failed to allocate message buffer");
        free(node);
        return -1;
    }

    // Copy message to buffer with LWS_PRE offset
    memcpy(buffer + LWS_PRE, message, length);

    // Initialize node
    node->data = buffer;
    node->length = length;
    node->type = type;
    node->next = NULL;

    // Add to queue (thread-safe)
    pthread_mutex_lock(&pss->session_lock);

    if (!pss->message_queue_head) {
        // Queue was empty
        pss->message_queue_head = node;
        pss->message_queue_tail = node;
    } else {
        // Add to end of queue
        pss->message_queue_tail->next = node;
        pss->message_queue_tail = node;
    }
    pss->message_queue_count++;

    pthread_mutex_unlock(&pss->session_lock);

    // Request writeable callback (only if not already requested)
    if (!pss->writeable_requested) {
        pss->writeable_requested = 1;
        lws_callback_on_writable(wsi);
    }

    DEBUG_TRACE("Queued message: len=%zu, queue_count=%d", length, pss->message_queue_count);
    return 0;
}

/**
 * Process message queue when the socket becomes writeable.
 * This function is called from LWS_CALLBACK_SERVER_WRITEABLE.
 *
 * @param wsi WebSocket instance
 * @param pss Per-session data containing message queue
 * @return 0 on success, -1 on error
 */
int process_message_queue(struct lws* wsi, struct per_session_data* pss) {
    if (!wsi || !pss) {
        DEBUG_ERROR("process_message_queue: invalid parameters");
        return -1;
    }

    // Get next message from queue (thread-safe)
    pthread_mutex_lock(&pss->session_lock);

    struct message_queue_node* node = pss->message_queue_head;
    if (!node) {
        // Queue is empty
        pss->writeable_requested = 0;
        pthread_mutex_unlock(&pss->session_lock);
        return 0;
    }

    // Remove from queue
    pss->message_queue_head = node->next;
    if (!pss->message_queue_head) {
        pss->message_queue_tail = NULL;
    }
    pss->message_queue_count--;

    pthread_mutex_unlock(&pss->session_lock);

    // Write message (libwebsockets handles partial writes internally)
    int write_result = lws_write(wsi, node->data + LWS_PRE, node->length, node->type);

    // Free node resources
    free(node->data);
    free(node);

    if (write_result < 0) {
        DEBUG_ERROR("process_message_queue: write failed, result=%d", write_result);
        return -1;
    }

    DEBUG_TRACE("Processed message: wrote %d bytes, remaining in queue: %d", write_result, pss->message_queue_count);

    // If queue not empty, request another callback
    pthread_mutex_lock(&pss->session_lock);
    if (pss->message_queue_head) {
        lws_callback_on_writable(wsi);
    } else {
        pss->writeable_requested = 0;
    }
    pthread_mutex_unlock(&pss->session_lock);

    return 0;
}

/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
// WEBSOCKET PROTOCOL
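Taken together, the pair splits sending into enqueue (queue_message, callable from any message handler) and drain (process_message_queue, called once per writeable event; it re-arms lws_callback_on_writable() itself while messages remain). A minimal sketch of the wiring; the full callback later in this diff does the same inside a much larger switch:

case LWS_CALLBACK_SERVER_WRITEABLE:
    // Drain at most one queued message per writeable callback
    if (pss) {
        process_message_queue(wsi, pss);
    }
    break;

and the various senders reduce to queue_message(wsi, pss, json_str, strlen(json_str), LWS_WRITE_TEXT). Writing only one message per writeable event keeps each callback short and lets libwebsockets interleave other connection work between frames.
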
@@ -247,7 +377,57 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
|
||||
|
||||
// Get real client IP address
|
||||
char client_ip[CLIENT_IP_MAX_LENGTH];
|
||||
lws_get_peer_simple(wsi, client_ip, sizeof(client_ip));
|
||||
memset(client_ip, 0, sizeof(client_ip));
|
||||
|
||||
// Check if we should trust proxy headers
|
||||
int trust_proxy = get_config_bool("trust_proxy_headers", 0);
|
||||
|
||||
if (trust_proxy) {
|
||||
// Try to get IP from X-Forwarded-For header first
|
||||
char x_forwarded_for[CLIENT_IP_MAX_LENGTH];
|
||||
int header_len = lws_hdr_copy(wsi, x_forwarded_for, sizeof(x_forwarded_for) - 1, WSI_TOKEN_X_FORWARDED_FOR);
|
||||
|
||||
if (header_len > 0) {
|
||||
x_forwarded_for[header_len] = '\0';
|
||||
// X-Forwarded-For can contain multiple IPs (client, proxy1, proxy2, ...)
|
||||
// We want the first (leftmost) IP which is the original client
|
||||
char* comma = strchr(x_forwarded_for, ',');
|
||||
if (comma) {
|
||||
*comma = '\0'; // Truncate at first comma
|
||||
}
|
||||
// Trim leading/trailing whitespace
|
||||
char* ip_start = x_forwarded_for;
|
||||
while (*ip_start == ' ' || *ip_start == '\t') ip_start++;
|
||||
size_t ip_len = strlen(ip_start);
|
||||
while (ip_len > 0 && (ip_start[ip_len-1] == ' ' || ip_start[ip_len-1] == '\t')) {
|
||||
ip_start[--ip_len] = '\0';
|
||||
}
|
||||
if (ip_len > 0 && ip_len < CLIENT_IP_MAX_LENGTH) {
|
||||
strncpy(client_ip, ip_start, CLIENT_IP_MAX_LENGTH - 1);
|
||||
client_ip[CLIENT_IP_MAX_LENGTH - 1] = '\0';
|
||||
DEBUG_TRACE("Using X-Forwarded-For IP: %s", client_ip);
|
||||
}
|
||||
}
|
||||
|
||||
// If X-Forwarded-For didn't work, try X-Real-IP
|
||||
if (client_ip[0] == '\0') {
|
||||
char x_real_ip[CLIENT_IP_MAX_LENGTH];
|
||||
header_len = lws_hdr_copy(wsi, x_real_ip, sizeof(x_real_ip) - 1, WSI_TOKEN_HTTP_X_REAL_IP);
|
||||
|
||||
if (header_len > 0) {
|
||||
x_real_ip[header_len] = '\0';
|
||||
strncpy(client_ip, x_real_ip, CLIENT_IP_MAX_LENGTH - 1);
|
||||
client_ip[CLIENT_IP_MAX_LENGTH - 1] = '\0';
|
||||
DEBUG_TRACE("Using X-Real-IP: %s", client_ip);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Fall back to direct connection IP if proxy headers not available or not trusted
|
||||
if (client_ip[0] == '\0') {
|
||||
lws_get_peer_simple(wsi, client_ip, sizeof(client_ip));
|
||||
DEBUG_TRACE("Using direct connection IP: %s", client_ip);
|
||||
}
|
||||
|
||||
// Ensure client_ip is null-terminated and copy safely
|
||||
client_ip[CLIENT_IP_MAX_LENGTH - 1] = '\0';
|
||||
@@ -382,11 +562,9 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
|
||||
char *error_str = cJSON_Print(error_response);
|
||||
if (error_str) {
|
||||
size_t error_len = strlen(error_str);
|
||||
unsigned char *buf = malloc(LWS_PRE + error_len);
|
||||
if (buf) {
|
||||
memcpy(buf + LWS_PRE, error_str, error_len);
|
||||
lws_write(wsi, buf + LWS_PRE, error_len, LWS_WRITE_TEXT);
|
||||
free(buf);
|
||||
// Use proper message queue system instead of direct lws_write
|
||||
if (queue_message(wsi, pss, error_str, error_len, LWS_WRITE_TEXT) != 0) {
|
||||
DEBUG_ERROR("Failed to queue error response message");
|
||||
}
|
||||
free(error_str);
|
||||
}
|
||||
@@ -628,16 +806,24 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
|
||||
}
|
||||
}
|
||||
} else {
|
||||
DEBUG_TRACE("Storing regular event in database");
|
||||
// Regular event - store in database and broadcast
|
||||
if (store_event(event) != 0) {
|
||||
DEBUG_ERROR("Failed to store event in database");
|
||||
result = -1;
|
||||
strncpy(error_message, "error: failed to store event", sizeof(error_message) - 1);
|
||||
} else {
|
||||
DEBUG_LOG("Event stored and broadcast (kind %d)", event_kind);
|
||||
// Broadcast event to matching persistent subscriptions
|
||||
// Check if this is an ephemeral event (kinds 20000-29999)
|
||||
// Per NIP-01: ephemeral events are broadcast but never stored
|
||||
if (event_kind >= 20000 && event_kind < 30000) {
|
||||
DEBUG_TRACE("Ephemeral event (kind %d) - broadcasting without storage", event_kind);
|
||||
// Broadcast directly to subscriptions without database storage
|
||||
broadcast_event_to_subscriptions(event);
|
||||
} else {
|
||||
DEBUG_TRACE("Storing regular event in database");
|
||||
// Regular event - store in database and broadcast
|
||||
if (store_event(event) != 0) {
|
||||
DEBUG_ERROR("Failed to store event in database");
|
||||
result = -1;
|
||||
strncpy(error_message, "error: failed to store event", sizeof(error_message) - 1);
|
||||
} else {
|
||||
DEBUG_LOG("Event stored and broadcast (kind %d)", event_kind);
|
||||
// Broadcast event to matching persistent subscriptions
|
||||
broadcast_event_to_subscriptions(event);
|
||||
}
|
||||
}
|
||||
}
|
||||
} else {
|
||||
@@ -661,16 +847,22 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
|
||||
cJSON_AddItemToArray(response, cJSON_CreateString(cJSON_GetStringValue(event_id)));
|
||||
cJSON_AddItemToArray(response, cJSON_CreateBool(result == 0));
|
||||
cJSON_AddItemToArray(response, cJSON_CreateString(strlen(error_message) > 0 ? error_message : ""));
|
||||
|
||||
|
||||
char *response_str = cJSON_Print(response);
|
||||
if (response_str) {
|
||||
size_t response_len = strlen(response_str);
|
||||
unsigned char *buf = malloc(LWS_PRE + response_len);
|
||||
if (buf) {
|
||||
memcpy(buf + LWS_PRE, response_str, response_len);
|
||||
lws_write(wsi, buf + LWS_PRE, response_len, LWS_WRITE_TEXT);
|
||||
free(buf);
|
||||
|
||||
// DEBUG: Log WebSocket frame details before sending
|
||||
DEBUG_TRACE("WS_FRAME_SEND: type=OK len=%zu data=%.100s%s",
|
||||
response_len,
|
||||
response_str,
|
||||
response_len > 100 ? "..." : "");
|
||||
|
||||
// Queue message for proper libwebsockets pattern
|
||||
if (queue_message(wsi, pss, response_str, response_len, LWS_WRITE_TEXT) != 0) {
|
||||
DEBUG_ERROR("Failed to queue OK response message");
|
||||
}
|
||||
|
||||
free(response_str);
|
||||
}
|
||||
cJSON_Delete(response);
|
||||
@@ -765,12 +957,18 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
|
||||
char *eose_str = cJSON_Print(eose_response);
|
||||
if (eose_str) {
|
||||
size_t eose_len = strlen(eose_str);
|
||||
unsigned char *buf = malloc(LWS_PRE + eose_len);
|
||||
if (buf) {
|
||||
memcpy(buf + LWS_PRE, eose_str, eose_len);
|
||||
lws_write(wsi, buf + LWS_PRE, eose_len, LWS_WRITE_TEXT);
|
||||
free(buf);
|
||||
|
||||
// DEBUG: Log WebSocket frame details before sending
|
||||
DEBUG_TRACE("WS_FRAME_SEND: type=EOSE len=%zu data=%.100s%s",
|
||||
eose_len,
|
||||
eose_str,
|
||||
eose_len > 100 ? "..." : "");
|
||||
|
||||
// Queue message for proper libwebsockets pattern
|
||||
if (queue_message(wsi, pss, eose_str, eose_len, LWS_WRITE_TEXT) != 0) {
|
||||
DEBUG_ERROR("Failed to queue EOSE message");
|
||||
}
|
||||
|
||||
free(eose_str);
|
||||
}
|
||||
cJSON_Delete(eose_response);
|
||||
@@ -850,9 +1048,22 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
|
||||
return 0;
|
||||
}
|
||||
|
||||
// CRITICAL FIX: Remove from session list FIRST (while holding lock)
|
||||
// to prevent race condition where global manager frees the subscription
|
||||
// while we're still iterating through the session list
|
||||
// CRITICAL FIX: Mark subscription as inactive in global manager FIRST
|
||||
// This prevents other threads from accessing it during removal
|
||||
pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);
|
||||
|
||||
subscription_t* target_sub = g_subscription_manager.active_subscriptions;
|
||||
while (target_sub) {
|
||||
if (strcmp(target_sub->id, subscription_id) == 0 && target_sub->wsi == wsi) {
|
||||
target_sub->active = 0; // Mark as inactive immediately
|
||||
break;
|
||||
}
|
||||
target_sub = target_sub->next;
|
||||
}
|
||||
|
||||
pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);
|
||||
|
||||
// Now safe to remove from session list
|
||||
if (pss) {
|
||||
pthread_mutex_lock(&pss->session_lock);
|
||||
|
||||
@@ -870,8 +1081,7 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
|
||||
pthread_mutex_unlock(&pss->session_lock);
|
||||
}
|
||||
|
||||
// Remove from global manager AFTER removing from session list
|
||||
// This prevents use-after-free when iterating session subscriptions
|
||||
// Finally remove from global manager (which will free it)
|
||||
remove_subscription_from_manager(subscription_id, wsi);
|
||||
|
||||
// Subscription closed
|
||||
@@ -914,6 +1124,13 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
|
||||
}
|
||||
break;
|
||||
|
||||
case LWS_CALLBACK_SERVER_WRITEABLE:
|
||||
// Handle message queue when socket becomes writeable
|
||||
if (pss) {
|
||||
process_message_queue(wsi, pss);
|
||||
}
|
||||
break;
|
||||
|
||||
case LWS_CALLBACK_CLOSED:
|
||||
DEBUG_TRACE("WebSocket connection closed");
|
||||
|
||||
@@ -947,20 +1164,66 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
|
||||
auth_status,
|
||||
reason);
|
||||
|
||||
// Clean up session subscriptions
|
||||
// Clean up message queue to prevent memory leaks
|
||||
while (pss->message_queue_head) {
|
||||
struct message_queue_node* node = pss->message_queue_head;
|
||||
pss->message_queue_head = node->next;
|
||||
free(node->data);
|
||||
free(node);
|
||||
}
|
||||
pss->message_queue_tail = NULL;
|
||||
pss->message_queue_count = 0;
|
||||
pss->writeable_requested = 0;
|
||||
|
||||
// Clean up session subscriptions - copy IDs first to avoid use-after-free
|
||||
pthread_mutex_lock(&pss->session_lock);
|
||||
|
||||
// First pass: collect subscription IDs safely
|
||||
typedef struct temp_sub_id {
|
||||
char id[SUBSCRIPTION_ID_MAX_LENGTH];
|
||||
struct temp_sub_id* next;
|
||||
} temp_sub_id_t;
|
||||
|
||||
temp_sub_id_t* temp_ids = NULL;
|
||||
temp_sub_id_t* temp_tail = NULL;
|
||||
int temp_count = 0;
|
||||
|
||||
struct subscription* sub = pss->subscriptions;
|
||||
while (sub) {
|
||||
struct subscription* next = sub->session_next;
|
||||
remove_subscription_from_manager(sub->id, wsi);
|
||||
sub = next;
|
||||
if (sub->active) { // Only process active subscriptions
|
||||
temp_sub_id_t* temp = malloc(sizeof(temp_sub_id_t));
|
||||
if (temp) {
|
||||
memcpy(temp->id, sub->id, SUBSCRIPTION_ID_MAX_LENGTH);
|
||||
temp->id[SUBSCRIPTION_ID_MAX_LENGTH - 1] = '\0';
|
||||
temp->next = NULL;
|
||||
|
||||
if (!temp_ids) {
|
||||
temp_ids = temp;
|
||||
temp_tail = temp;
|
||||
} else {
|
||||
temp_tail->next = temp;
|
||||
temp_tail = temp;
|
||||
}
|
||||
temp_count++;
|
||||
}
|
||||
}
|
||||
sub = sub->session_next;
|
||||
}
|
||||
|
||||
// Clear session list immediately
|
||||
pss->subscriptions = NULL;
|
||||
pss->subscription_count = 0;
|
||||
|
||||
pthread_mutex_unlock(&pss->session_lock);
|
||||
|
||||
// Second pass: remove from global manager using copied IDs
|
||||
temp_sub_id_t* current_temp = temp_ids;
|
||||
while (current_temp) {
|
||||
temp_sub_id_t* next_temp = current_temp->next;
|
||||
remove_subscription_from_manager(current_temp->id, wsi);
|
||||
free(current_temp);
|
||||
current_temp = next_temp;
|
||||
}
|
||||
pthread_mutex_destroy(&pss->session_lock);
|
||||
} else {
|
||||
DEBUG_LOG("WebSocket CLOSED: ip=unknown duration=0s subscriptions=0 authenticated=no reason=unknown");
|
||||
@@ -1249,7 +1512,7 @@ int process_dm_stats_command(cJSON* dm_event, char* error_message, size_t error_
|
||||
const char* encrypted_content = cJSON_GetStringValue(content_obj);
|
||||
|
||||
// Decrypt content
|
||||
char decrypted_content[4096];
|
||||
char decrypted_content[16384];
|
||||
int decrypt_result = nostr_nip44_decrypt(relay_privkey, sender_pubkey_bytes,
|
||||
encrypted_content, decrypted_content, sizeof(decrypted_content));
|
||||
|
||||
@@ -1627,12 +1890,18 @@ int handle_count_message(const char* sub_id, cJSON* filters, struct lws *wsi, st
|
||||
char *count_str = cJSON_Print(count_response);
|
||||
if (count_str) {
|
||||
size_t count_len = strlen(count_str);
|
||||
unsigned char *buf = malloc(LWS_PRE + count_len);
|
||||
if (buf) {
|
||||
memcpy(buf + LWS_PRE, count_str, count_len);
|
||||
lws_write(wsi, buf + LWS_PRE, count_len, LWS_WRITE_TEXT);
|
||||
free(buf);
|
||||
|
||||
// DEBUG: Log WebSocket frame details before sending
|
||||
DEBUG_TRACE("WS_FRAME_SEND: type=COUNT len=%zu data=%.100s%s",
|
||||
count_len,
|
||||
count_str,
|
||||
count_len > 100 ? "..." : "");
|
||||
|
||||
// Queue message for proper libwebsockets pattern
|
||||
if (queue_message(wsi, pss, count_str, count_len, LWS_WRITE_TEXT) != 0) {
|
||||
DEBUG_ERROR("Failed to queue COUNT message");
|
||||
}
|
||||
|
||||
free(count_str);
|
||||
}
|
||||
cJSON_Delete(count_response);
|
||||
|
||||
@@ -31,6 +31,14 @@
|
||||
#define MAX_SEARCH_LENGTH 256
|
||||
#define MAX_TAG_VALUE_LENGTH 1024
|
||||
|
||||
// Message queue node for proper libwebsockets pattern
|
||||
struct message_queue_node {
|
||||
unsigned char* data; // Message data (with LWS_PRE space)
|
||||
size_t length; // Message length (without LWS_PRE)
|
||||
enum lws_write_protocol type; // LWS_WRITE_TEXT, etc.
|
||||
struct message_queue_node* next; // Next node in queue
|
||||
};
|
||||
|
||||
// Enhanced per-session data with subscription management, NIP-42 authentication, and rate limiting
|
||||
struct per_session_data {
|
||||
int authenticated;
|
||||
@@ -59,6 +67,12 @@ struct per_session_data {
|
||||
int malformed_request_count; // Count of malformed requests in current hour
|
||||
time_t malformed_request_window_start; // Start of current hour window
|
||||
time_t malformed_request_blocked_until; // Time until blocked for malformed requests
|
||||
|
||||
// Message queue for proper libwebsockets pattern (replaces single buffer)
|
||||
struct message_queue_node* message_queue_head; // Head of message queue
|
||||
struct message_queue_node* message_queue_tail; // Tail of message queue
|
||||
int message_queue_count; // Number of messages in queue
|
||||
int writeable_requested; // Flag: 1 if writeable callback requested
|
||||
};
|
||||
|
||||
// NIP-11 HTTP session data structure for managing buffer lifetime
|
||||
@@ -73,6 +87,10 @@ struct nip11_session_data {
|
||||
// Function declarations
|
||||
int start_websocket_relay(int port_override, int strict_port);
|
||||
|
||||
// Message queue functions for proper libwebsockets pattern
|
||||
int queue_message(struct lws* wsi, struct per_session_data* pss, const char* message, size_t length, enum lws_write_protocol type);
|
||||
int process_message_queue(struct lws* wsi, struct per_session_data* pss);
|
||||
|
||||
// Auth rules checking function from request_validator.c
|
||||
int check_database_auth_rules(const char *pubkey, const char *operation, const char *resource_hash);
|
||||
|
||||
|
||||
40
systemd/c-relay-local.service
Normal file
40
systemd/c-relay-local.service
Normal file
@@ -0,0 +1,40 @@
|
||||
[Unit]
|
||||
Description=C Nostr Relay Server (Local Development)
|
||||
Documentation=https://github.com/your-repo/c-relay
|
||||
After=network.target
|
||||
Wants=network-online.target
|
||||
|
||||
[Service]
|
||||
Type=simple
|
||||
User=teknari
|
||||
WorkingDirectory=/home/teknari/Storage/c_relay
|
||||
Environment=DEBUG_LEVEL=0
|
||||
ExecStart=/home/teknari/Storage/c_relay/crelay --port 7777 --debug-level=$DEBUG_LEVEL
|
||||
Restart=always
|
||||
RestartSec=5
|
||||
StandardOutput=journal
|
||||
StandardError=journal
|
||||
SyslogIdentifier=c-relay-local
|
||||
|
||||
# Security settings (relaxed for local development)
|
||||
NoNewPrivileges=true
|
||||
ProtectSystem=strict
|
||||
ProtectHome=true
|
||||
ReadWritePaths=/home/teknari/Storage/c_relay
|
||||
PrivateTmp=true
|
||||
|
||||
# Network security
|
||||
PrivateNetwork=false
|
||||
RestrictAddressFamilies=AF_INET AF_INET6
|
||||
|
||||
# Resource limits
|
||||
LimitNOFILE=65536
|
||||
LimitNPROC=4096
|
||||
|
||||
# Event-based configuration system
|
||||
# No environment variables needed - all configuration is stored as Nostr events
|
||||
# Database files (<relay_pubkey>.db) are created automatically in WorkingDirectory
|
||||
# Admin keys are generated and displayed only during first startup
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
12
tests/debug.log
Normal file
12
tests/debug.log
Normal file
@@ -0,0 +1,12 @@
|
||||
|
||||
=== NOSTR WebSocket Debug Log Started ===
|
||||
[14:13:42.079] SEND localhost:8888: ["EVENT", {
|
||||
"pubkey": "e74e808f64b82fe4671b92cdf83f6dd5f5f44dbcb67fbd0e044f34a6193e0994",
|
||||
"created_at": 1761499244,
|
||||
"kind": 1059,
|
||||
"tags": [["p", "4f355bdcb7cc0af728ef3cceb9615d90684bb5b2ca5f859ab0f0b704075871aa"]],
|
||||
"content": "ApTb8y2oD3/TtVCV73Szhgfh5ODlluGd5zjsH44g5BBwaGB1NshOJ/5kF/XN0TfYJKQBe07UTpnOYMZ4l2ppU6SrR8Tor+ZEiAF/kpCpa/x6LDDIvf4mueQicDKjOf8Y6oEbsxYjtFrpuSC0LLMgLaVhcZjAgVD0YQTo+8nHOzHZD5RBr305vdnrxIe4ubEficAHCpnKq9L3A46AIyb+aHjjTbSYmB061cf6hzLSnmdh5xeACExjhxwsX9ivSvqGYcDNsH1JCM8EYQyRX9xAPDBYM1yuS8PpadqMluOcqOd/FFYyjYNpFrardblPsjUzZTz/TDSLyrYFDUKNa7pWIhW1asc1ZaY0ry0AoWnbl/QyMxqBjDFXd3mJfWccYsOI/Yrx3sxbZdL+ayRlQeQuDk/M9rQkH8GN/5+GE1aN5I6eVl0F37Axc/lLuIt/AIpoTwZYAEi9j/BYGLP6sYkjUp0foz91QximOTgu8evynu+nfAv330HVkipTIGOjEZea7QNSK0Fylxs8fanHlmiqWGyfyBeoWpxGslHZVu6K9k7GC8ABEIdNRa8vlqlphPfWPCS70Lnq3LgeKOj1C3sNF9ST8g7pth/0FEZgXruzhpx/EyjsasNbdLZg3iX1QwRS0P4L341Flrztovt8npyP9ytTiukkYIQzXCX8XuWjiaUuzXiLkVazjh0Nl03ikKKu2+7nuaBB92geBjbGT76zZ6HeXBgcmC7dWn7pHhzqu+QTonZK0oCl427Fs0eXiYsILjxFFQkmk7OHXgdZF9jquNXloz5lgwY9S3xj4JyRwLN/9xfh16awxLZNEFvX10X97bXsmNMRUDrJJPkKMTSxZpvuTbd+Lx2iB++4NyGZibNa6nOWOJG9d2LwEzIcIHS0uQpEIPl7Ccz6+rmkVh9kLbB2rda2fYp9GCOcn6XbfaXZZXJM+HAQwPJgrtDiuQex0tEIcQcB9CYCN4ze9HCt1kb23TUgEDAipz/RqYP4dOCYmRZ7vaYk/irJ+iRDfnvPK0Id1TrSeo5kaVc7py2zWZRVdndpTM8RvW0SLwdldXDIv+ym/mS0L7bchoaYjoNeuTNKQ6AOoc0E7f4ySr65FUKYd2FTvIsP2Avsa3S+D0za30ensxr733l80AQlVmUPrhsgOzzjEuOW1hGlGus38X+CDDEuMSJnq3hvz/CxVtAk71Zkbyr5lc1BPi758Y4rlZFQnhaKYKv5nSFJc7GtDykv+1cwxNGC6AxGKprnYMDVxuAIFYBztFitdO5BsjWvvKzAbleszewtGfjE2NgltIJk+gQlTpWvLNxd3gvb+qHarfEv7BPnPfsKktDpEfuNMKXdJPANyACq5gXj854o/X8iO2iLm7JSdMhEQgIIyHNyLCCQdLDnqDWIfcdyIzAfRilSCwImt3CVJBGD7HoXRbwGRR3vgEBcoVPmsYzaU9vr62I=",
|
||||
"id": "75c178ee47aac3ab9e984ddb85bdf9d8c68ade0d97e9cd86bb39e3110218a589",
|
||||
"sig": "aba8382cc8d6ba6bba467109d2ddc19718732fe803d71e73fd2db62c1cbbb1b4527447240906e01755139067a71c75d8c03271826ca5d0226c818cb7fb495fe2"
|
||||
}]
|
||||
[14:13:42.083] RECV localhost:8888: ["OK", "75c178ee47aac3ab9e984ddb85bdf9d8c68ade0d97e9cd86bb39e3110218a589", true, ""]
|
||||
35
tests/ephemeral_test.sh
Executable file
35
tests/ephemeral_test.sh
Executable file
@@ -0,0 +1,35 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Simplified Ephemeral Event Test
|
||||
# Tests that ephemeral events are broadcast to active subscriptions
|
||||
|
||||
echo "=== Generating Ephemeral Event (kind 20000) ==="
|
||||
event=$(nak event --kind 20000 --content "test ephemeral event")
|
||||
echo "$event"
|
||||
echo ""
|
||||
|
||||
echo "=== Testing Ephemeral Event Broadcast ==="
|
||||
subscription='["REQ","test_sub",{"kinds":[20000],"limit":10}]'
|
||||
echo "Subscription Filter:"
|
||||
echo "$subscription"
|
||||
echo ""
|
||||
|
||||
event_msg='["EVENT",'"$event"']'
|
||||
echo "Event Message:"
|
||||
echo "$event_msg"
|
||||
echo ""
|
||||
|
||||
echo "=== Relay Responses ==="
|
||||
(
|
||||
# Send subscription
|
||||
printf "%s\n" "$subscription"
|
||||
# Wait for subscription to establish
|
||||
sleep 1
|
||||
# Send ephemeral event on same connection
|
||||
printf "%s\n" "$event_msg"
|
||||
# Wait for responses
|
||||
sleep 2
|
||||
) | timeout 5 websocat ws://127.0.0.1:8888
|
||||
|
||||
echo ""
|
||||
echo "Test complete!"
|
||||
63
tests/large_event_test.sh
Executable file
63
tests/large_event_test.sh
Executable file
@@ -0,0 +1,63 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Test script for posting large events (>4KB) to test partial write handling
|
||||
# Uses nak to properly sign events with large content
|
||||
|
||||
RELAY_URL="ws://localhost:8888"
|
||||
|
||||
# Check if nak is installed
|
||||
if ! command -v nak &> /dev/null; then
|
||||
echo "Error: nak is not installed. Install with: go install github.com/fiatjaf/nak@latest"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Generate a test private key if not set
|
||||
if [ -z "$NOSTR_PRIVATE_KEY" ]; then
|
||||
echo "Generating temporary test key..."
|
||||
export NOSTR_PRIVATE_KEY=$(nak key generate)
|
||||
fi
|
||||
|
||||
echo "=== Large Event Test ==="
|
||||
echo "Testing partial write handling with events >4KB"
|
||||
echo "Relay: $RELAY_URL"
|
||||
echo ""
|
||||
|
||||
# Test 1: 5KB event
|
||||
echo "Test 1: Posting 5KB event..."
|
||||
CONTENT_5KB=$(python3 -c "print('A' * 5000)")
|
||||
echo "$CONTENT_5KB" | nak event -k 1 --content - $RELAY_URL
|
||||
sleep 1
|
||||
|
||||
# Test 2: 10KB event
|
||||
echo ""
|
||||
echo "Test 2: Posting 10KB event..."
|
||||
CONTENT_10KB=$(python3 -c "print('B' * 10000)")
|
||||
echo "$CONTENT_10KB" | nak event -k 1 --content - $RELAY_URL
|
||||
sleep 1
|
||||
|
||||
# Test 3: 20KB event
|
||||
echo ""
|
||||
echo "Test 3: Posting 20KB event..."
|
||||
CONTENT_20KB=$(python3 -c "print('C' * 20000)")
|
||||
echo "$CONTENT_20KB" | nak event -k 1 --content - $RELAY_URL
|
||||
sleep 1
|
||||
|
||||
# Test 4: 50KB event (very large)
|
||||
echo ""
|
||||
echo "Test 4: Posting 50KB event..."
|
||||
CONTENT_50KB=$(python3 -c "print('D' * 50000)")
|
||||
echo "$CONTENT_50KB" | nak event -k 1 --content - $RELAY_URL
|
||||
|
||||
echo ""
|
||||
echo "=== Test Complete ==="
|
||||
echo ""
|
||||
echo "Check relay.log for:"
|
||||
echo " - 'Queued partial write' messages (indicates buffering is working)"
|
||||
echo " - 'write completed' messages (indicates retry succeeded)"
|
||||
echo " - No 'Invalid frame header' errors"
|
||||
echo ""
|
||||
echo "To view logs in real-time:"
|
||||
echo " tail -f relay.log | grep -E '(partial|write completed|Invalid frame)'"
|
||||
echo ""
|
||||
echo "To check if events were stored:"
|
||||
echo " sqlite3 build/*.db 'SELECT id, length(content) as content_size FROM events ORDER BY created_at DESC LIMIT 4;'"
|
||||
@@ -3,6 +3,19 @@
|
||||
# Test script to post kind 1 events to the relay every second
|
||||
# Cycles through three different secret keys
|
||||
# Content includes current timestamp
|
||||
#
|
||||
# Usage: ./post_events.sh <relay_url>
|
||||
# Example: ./post_events.sh ws://localhost:8888
|
||||
# Example: ./post_events.sh wss://relay.laantungir.net
|
||||
|
||||
# Check if relay URL is provided
|
||||
if [ -z "$1" ]; then
|
||||
echo "Error: Relay URL is required"
|
||||
echo "Usage: $0 <relay_url>"
|
||||
echo "Example: $0 ws://localhost:8888"
|
||||
echo "Example: $0 wss://relay.laantungir.net"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Array of secret keys to cycle through
|
||||
SECRET_KEYS=(
|
||||
@@ -11,7 +24,7 @@ SECRET_KEYS=(
|
||||
"1618aaa21f5bd45c5ffede0d9a60556db67d4a046900e5f66b0bae5c01c801fb"
|
||||
)
|
||||
|
||||
RELAY_URL="ws://localhost:8888"
|
||||
RELAY_URL="$1"
|
||||
KEY_INDEX=0
|
||||
|
||||
echo "Starting event posting test to $RELAY_URL"
|
||||
@@ -36,5 +49,5 @@ while true; do
|
||||
KEY_INDEX=$(( (KEY_INDEX + 1) % ${#SECRET_KEYS[@]} ))
|
||||
|
||||
# Wait 1 second
|
||||
sleep 1
|
||||
sleep .2
|
||||
done
|
||||
@@ -1,203 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Rate Limiting Test Suite for C-Relay
|
||||
# Tests rate limiting and abuse prevention mechanisms
|
||||
|
||||
set -e
|
||||
|
||||
# Configuration
|
||||
RELAY_HOST="127.0.0.1"
|
||||
RELAY_PORT="8888"
|
||||
TEST_TIMEOUT=15
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Test counters
|
||||
TOTAL_TESTS=0
|
||||
PASSED_TESTS=0
|
||||
FAILED_TESTS=0
|
||||
|
||||
# Function to test rate limiting
|
||||
test_rate_limiting() {
|
||||
local description="$1"
|
||||
local message="$2"
|
||||
local burst_count="${3:-10}"
|
||||
local expected_limited="${4:-false}"
|
||||
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
|
||||
echo -n "Testing $description... "
|
||||
|
||||
local rate_limited=false
|
||||
local success_count=0
|
||||
local error_count=0
|
||||
|
||||
# Send burst of messages
|
||||
for i in $(seq 1 "$burst_count"); do
|
||||
local response
|
||||
response=$(echo "$message" | timeout 2 websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null | head -1 || echo 'TIMEOUT')
|
||||
|
||||
if [[ "$response" == *"rate limit"* ]] || [[ "$response" == *"too many"* ]] || [[ "$response" == *"TOO_MANY"* ]]; then
|
||||
rate_limited=true
|
||||
elif [[ "$response" == *"EOSE"* ]] || [[ "$response" == *"EVENT"* ]] || [[ "$response" == *"OK"* ]]; then
|
||||
((success_count++))
|
||||
else
|
||||
((error_count++))
|
||||
fi
|
||||
|
||||
# Small delay between requests
|
||||
sleep 0.05
|
||||
done
|
||||
|
||||
if [[ "$expected_limited" == "true" ]]; then
|
||||
if [[ "$rate_limited" == "true" ]]; then
|
||||
echo -e "${GREEN}PASSED${NC} - Rate limiting triggered as expected"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${RED}FAILED${NC} - Rate limiting not triggered (expected)"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
fi
|
||||
else
|
||||
if [[ "$rate_limited" == "false" ]]; then
|
||||
echo -e "${GREEN}PASSED${NC} - No rate limiting for normal traffic"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${YELLOW}UNCERTAIN${NC} - Unexpected rate limiting"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1)) # Count as passed since it's conservative
|
||||
return 0
|
||||
fi
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to test sustained load
|
||||
test_sustained_load() {
|
||||
local description="$1"
|
||||
local message="$2"
|
||||
local duration="${3:-10}"
|
||||
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
|
||||
echo -n "Testing $description... "
|
||||
|
||||
local start_time
|
||||
start_time=$(date +%s)
|
||||
local rate_limited=false
|
||||
local total_requests=0
|
||||
local successful_requests=0
|
||||
|
||||
while [[ $(($(date +%s) - start_time)) -lt duration ]]; do
|
||||
((total_requests++))
|
||||
local response
|
||||
response=$(echo "$message" | timeout 1 websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null | head -1 || echo 'TIMEOUT')
|
||||
|
||||
if [[ "$response" == *"rate limit"* ]] || [[ "$response" == *"too many"* ]] || [[ "$response" == *"TOO_MANY"* ]]; then
|
||||
rate_limited=true
|
||||
elif [[ "$response" == *"EOSE"* ]] || [[ "$response" == *"EVENT"* ]] || [[ "$response" == *"OK"* ]]; then
|
||||
((successful_requests++))
|
||||
fi
|
||||
|
||||
# Small delay to avoid overwhelming
|
||||
sleep 0.1
|
||||
done
|
||||
|
||||
local success_rate=0
|
||||
if [[ $total_requests -gt 0 ]]; then
|
||||
success_rate=$((successful_requests * 100 / total_requests))
|
||||
fi
|
||||
|
||||
if [[ "$rate_limited" == "true" ]]; then
|
||||
echo -e "${GREEN}PASSED${NC} - Rate limiting activated under sustained load (${success_rate}% success rate)"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${YELLOW}UNCERTAIN${NC} - No rate limiting detected (${success_rate}% success rate)"
|
||||
# This might be acceptable if rate limiting is very permissive
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
fi
|
||||
}
|
||||
|
||||
echo "=========================================="
|
||||
echo "C-Relay Rate Limiting Test Suite"
|
||||
echo "=========================================="
|
||||
echo "Testing rate limiting against relay at ws://$RELAY_HOST:$RELAY_PORT"
|
||||
echo ""
|
||||
|
||||
# Test basic connectivity first
|
||||
echo "=== Basic Connectivity Test ==="
|
||||
test_rate_limiting "Basic connectivity" '["REQ","rate_test",{}]' 1 false
|
||||
echo ""
|
||||
|
||||
echo "=== Burst Request Testing ==="
|
||||
# Test rapid succession of requests
|
||||
test_rate_limiting "Rapid REQ messages" '["REQ","burst_req_'$(date +%s%N)'",{}]' 20 true
|
||||
test_rate_limiting "Rapid COUNT messages" '["COUNT","burst_count_'$(date +%s%N)'",{}]' 20 true
|
||||
test_rate_limiting "Rapid CLOSE messages" '["CLOSE","burst_close"]' 20 true
|
||||
echo ""
|
||||
|
||||
echo "=== Malformed Message Rate Limiting ==="
|
||||
# Test if malformed messages trigger rate limiting faster
|
||||
test_rate_limiting "Malformed JSON burst" '["REQ","malformed"' 15 true
|
||||
test_rate_limiting "Invalid message type burst" '["INVALID","test",{}]' 15 true
|
||||
test_rate_limiting "Empty message burst" '[]' 15 true
|
||||
echo ""
|
||||
|
||||
echo "=== Sustained Load Testing ==="
|
||||
# Test sustained moderate load
|
||||
test_sustained_load "Sustained REQ load" '["REQ","sustained_'$(date +%s%N)'",{}]' 10
|
||||
test_sustained_load "Sustained COUNT load" '["COUNT","sustained_count_'$(date +%s%N)'",{}]' 10
|
||||
echo ""
|
||||
|
||||
echo "=== Filter Complexity Testing ==="
|
||||
# Test if complex filters trigger rate limiting
|
||||
test_rate_limiting "Complex filter burst" '["REQ","complex_'$(date +%s%N)'",{"authors":["a","b","c"],"kinds":[1,2,3],"#e":["x","y","z"],"#p":["m","n","o"],"since":1000000000,"until":2000000000,"limit":100}]' 10 true
|
||||
echo ""
|
||||
|
||||
echo "=== Subscription Management Testing ==="
|
||||
# Test subscription creation/deletion rate limiting
|
||||
echo -n "Testing subscription churn... "
|
||||
local churn_test_passed=true
|
||||
for i in $(seq 1 25); do
|
||||
# Create subscription
|
||||
echo "[\"REQ\",\"churn_${i}_$(date +%s%N)\",{}]" | timeout 1 websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1 || true
|
||||
|
||||
# Close subscription
|
||||
echo "[\"CLOSE\",\"churn_${i}_*\"]" | timeout 1 websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1 || true
|
||||
|
||||
sleep 0.05
|
||||
done
|
||||
|
||||
# Check if relay is still responsive
|
||||
if echo 'ping' | timeout 2 websocat -n1 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1; then
|
||||
echo -e "${GREEN}PASSED${NC} - Subscription churn handled"
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
else
|
||||
echo -e "${RED}FAILED${NC} - Relay unresponsive after subscription churn"
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
fi
|
||||
echo ""
|
||||
|
||||
echo "=== Test Results ==="
|
||||
echo "Total tests: $TOTAL_TESTS"
|
||||
echo -e "Passed: ${GREEN}$PASSED_TESTS${NC}"
|
||||
echo -e "Failed: ${RED}$FAILED_TESTS${NC}"
|
||||
|
||||
if [[ $FAILED_TESTS -eq 0 ]]; then
|
||||
echo -e "${GREEN}✓ All rate limiting tests passed!${NC}"
|
||||
echo "Rate limiting appears to be working correctly."
|
||||
exit 0
|
||||
else
|
||||
echo -e "${RED}✗ Some rate limiting tests failed!${NC}"
|
||||
echo "Rate limiting may not be properly configured."
|
||||
exit 1
|
||||
fi
|
||||
BIN
tests/sendDM
Executable file
BIN
tests/sendDM
Executable file
Binary file not shown.
296
tests/sendDM.c
Normal file
296
tests/sendDM.c
Normal file
@@ -0,0 +1,296 @@
|
||||
/*
|
||||
* NIP-17 Private Direct Messages - Command Line Application
|
||||
*
|
||||
* This example demonstrates how to send NIP-17 private direct messages
|
||||
* using the Nostr Core Library.
|
||||
*
|
||||
* Usage:
|
||||
* ./send_nip17_dm -r <recipient> -s <sender> [-R <relay>]... <message>
|
||||
*
|
||||
* Options:
|
||||
* -r <recipient>: The recipient's public key (npub or hex)
|
||||
* -s <sender>: The sender's private key (nsec or hex)
|
||||
* -R <relay>: Relay URL to send to (can be specified multiple times)
|
||||
* <message>: The message to send (must be the last argument)
|
||||
*
|
||||
* If no relays are specified, uses default relay.
|
||||
* If no sender key is provided, uses a default test key.
|
||||
*
|
||||
* Examples:
|
||||
* ./send_nip17_dm -r npub1example... -s nsec1test... -R wss://relay1.com "Hello from NIP-17!"
|
||||
* ./send_nip17_dm -r 4f355bdcb7cc0af728ef3cceb9615d90684bb5b2ca5f859ab0f0b704075871aa -s aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa -R ws://localhost:8888 "config"
|
||||
*/
|
||||
|
||||
#define _GNU_SOURCE
|
||||
#define _POSIX_C_SOURCE 200809L
|
||||
|
||||
#include "../nostr_core_lib/nostr_core/nostr_core.h"
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <string.h>
|
||||
#include <unistd.h>
|
||||
#include <getopt.h>
|
||||
|
||||
// Default test private key (for demonstration - DO NOT USE IN PRODUCTION)
|
||||
#define DEFAULT_SENDER_NSEC "nsec12kgt0dv2k2safv6s32w8f89z9uw27e68hjaa0d66c5xvk70ezpwqncd045"
|
||||
|
||||
// Default relay for sending DMs
|
||||
#define DEFAULT_RELAY "wss://relay.laantungir.net"
|
||||
|
||||
// Progress callback for publishing
|
||||
void publish_progress_callback(const char* relay_url, const char* status,
|
||||
const char* message, int success_count,
|
||||
int total_relays, int completed_relays, void* user_data) {
|
||||
(void)user_data;
|
||||
|
||||
if (relay_url) {
|
||||
printf("📡 [%s]: %s", relay_url, status);
|
||||
if (message) {
|
||||
printf(" - %s", message);
|
||||
}
|
||||
printf(" (%d/%d completed, %d successful)\n", completed_relays, total_relays, success_count);
|
||||
} else {
|
||||
printf("📡 PUBLISH COMPLETE: %d/%d successful\n", success_count, total_relays);
|
||||
}
|
||||
}
|
||||
|
||||
/**
 * Convert npub or hex pubkey to hex format
 */
int convert_pubkey_to_hex(const char* input_pubkey, char* output_hex) {
    // Check if it's already hex (64 characters)
    if (strlen(input_pubkey) == 64) {
        // Assume it's already hex
        strcpy(output_hex, input_pubkey);
        return 0;
    }

    // Check if it's an npub (starts with "npub1")
    if (strncmp(input_pubkey, "npub1", 5) == 0) {
        // Convert npub to hex
        unsigned char pubkey_bytes[32];
        if (nostr_decode_npub(input_pubkey, pubkey_bytes) != 0) {
            fprintf(stderr, "Error: Invalid npub format\n");
            return -1;
        }
        nostr_bytes_to_hex(pubkey_bytes, 32, output_hex);
        return 0;
    }

    fprintf(stderr, "Error: Public key must be 64-character hex or valid npub\n");
    return -1;
}

/**
 * Convert nsec to private key bytes if needed
 */
int convert_nsec_to_private_key(const char* input_nsec, unsigned char* private_key) {
    // Check if it's already hex (64 characters)
    if (strlen(input_nsec) == 64) {
        // Convert hex to bytes
        if (nostr_hex_to_bytes(input_nsec, private_key, 32) != 0) {
            fprintf(stderr, "Error: Invalid hex private key\n");
            return -1;
        }
        return 0;
    }

    // Check if it's an nsec (starts with "nsec1")
    if (strncmp(input_nsec, "nsec1", 5) == 0) {
        // Convert nsec directly to private key bytes
        if (nostr_decode_nsec(input_nsec, private_key) != 0) {
            fprintf(stderr, "Error: Invalid nsec format\n");
            return -1;
        }
        return 0;
    }

    fprintf(stderr, "Error: Private key must be 64-character hex or valid nsec\n");
    return -1;
}

/**
 * Main function
 */
int main(int argc, char* argv[]) {
    char* recipient_key = NULL;
    char* sender_key = NULL;
    char** relays = NULL;
    int relay_count = 0;
    char* message = NULL;

    // Parse command line options
    int opt;
    while ((opt = getopt(argc, argv, "r:s:R:")) != -1) {
        switch (opt) {
            case 'r':
                recipient_key = optarg;
                break;
            case 's':
                sender_key = optarg;
                break;
            case 'R':
                relays = realloc(relays, (relay_count + 1) * sizeof(char*));
                relays[relay_count] = optarg;
                relay_count++;
                break;
            default:
                fprintf(stderr, "Usage: %s -r <recipient> -s <sender> [-R <relay>]... <message>\n", argv[0]);
                fprintf(stderr, "Options:\n");
                fprintf(stderr, "  -r <recipient>: The recipient's public key (npub or hex)\n");
                fprintf(stderr, "  -s <sender>: The sender's private key (nsec or hex)\n");
                fprintf(stderr, "  -R <relay>: Relay URL to send to (can be specified multiple times)\n");
                fprintf(stderr, "  <message>: The message to send (must be the last argument)\n");
                return 1;
        }
    }

    // Check for required arguments
    if (!recipient_key) {
        fprintf(stderr, "Error: Recipient key (-r) is required\n");
        return 1;
    }

    // Get message from remaining arguments
    if (optind >= argc) {
        fprintf(stderr, "Error: Message is required\n");
        return 1;
    }
    message = argv[optind];

    // Use default values if not provided
    if (!sender_key) {
        sender_key = DEFAULT_SENDER_NSEC;
    }
    if (relay_count == 0) {
        relays = malloc(sizeof(char*));
        relays[0] = DEFAULT_RELAY;
        relay_count = 1;
    }

printf("🧪 NIP-17 Private Direct Message Sender\n");
|
||||
printf("======================================\n\n");
|
||||
|
||||
// Initialize crypto
|
||||
if (nostr_init() != NOSTR_SUCCESS) {
|
||||
fprintf(stderr, "Failed to initialize crypto\n");
|
||||
free(relays);
|
||||
return 1;
|
||||
}
|
||||
|
||||
// Convert recipient pubkey
|
||||
char recipient_pubkey_hex[65];
|
||||
if (convert_pubkey_to_hex(recipient_key, recipient_pubkey_hex) != 0) {
|
||||
free(relays);
|
||||
return 1;
|
||||
}
|
||||
|
||||
// Convert sender private key
|
||||
unsigned char sender_privkey[32];
|
||||
if (convert_nsec_to_private_key(sender_key, sender_privkey) != 0) {
|
||||
free(relays);
|
||||
return 1;
|
||||
}
|
||||
|
||||
// Derive sender public key for display
|
||||
unsigned char sender_pubkey_bytes[32];
|
||||
char sender_pubkey_hex[65];
|
||||
if (nostr_ec_public_key_from_private_key(sender_privkey, sender_pubkey_bytes) != 0) {
|
||||
fprintf(stderr, "Failed to derive sender public key\n");
|
||||
return 1;
|
||||
}
|
||||
nostr_bytes_to_hex(sender_pubkey_bytes, 32, sender_pubkey_hex);
|
||||
|
||||
printf("📤 Sender: %s\n", sender_pubkey_hex);
|
||||
printf("📥 Recipient: %s\n", recipient_pubkey_hex);
|
||||
printf("💬 Message: %s\n", message);
|
||||
printf("🌐 Relays: ");
|
||||
for (int i = 0; i < relay_count; i++) {
|
||||
printf("%s", relays[i]);
|
||||
if (i < relay_count - 1) printf(", ");
|
||||
}
|
||||
printf("\n\n");
|
||||
|
||||
    // Create DM event
    printf("💬 Creating DM event...\n");
    const char* recipient_pubkeys[] = {recipient_pubkey_hex};
    cJSON* dm_event = nostr_nip17_create_chat_event(
        message,
        recipient_pubkeys,
        1,
        "NIP-17 CLI",        // subject
        NULL,                // no reply
        relays[0],           // relay hint (use first relay)
        sender_pubkey_hex
    );

    if (!dm_event) {
        fprintf(stderr, "Failed to create DM event\n");
        return 1;
    }

    printf("✅ Created DM event (kind 14)\n");

    // Send DM (create gift wraps)
    printf("🎁 Creating gift wraps...\n");
    cJSON* gift_wraps[10];  // Max 10 gift wraps
    int gift_wrap_count = nostr_nip17_send_dm(
        dm_event,
        recipient_pubkeys,
        1,
        sender_privkey,
        gift_wraps,
        10
    );

    cJSON_Delete(dm_event);  // Original DM event no longer needed

    if (gift_wrap_count <= 0) {
        fprintf(stderr, "Failed to create gift wraps\n");
        return 1;
    }

    printf("✅ Created %d gift wrap(s)\n", gift_wrap_count);

    // Publish the gift wrap to relays
    printf("\n📤 Publishing gift wrap to %d relay(s)...\n", relay_count);

    int success_count = 0;
    publish_result_t* publish_results = synchronous_publish_event_with_progress(
        (const char**)relays,
        relay_count,
        gift_wraps[0],               // Send the first gift wrap
        &success_count,
        10,                          // 10 second timeout
        publish_progress_callback,
        NULL,                        // no user data
        0,                           // NIP-42 disabled
        NULL                         // no private key for auth
    );

    if (!publish_results || success_count == 0) {
        fprintf(stderr, "\n❌ Failed to publish gift wrap to any relay (success_count: %d/%d)\n", success_count, relay_count);
        // Clean up gift wraps
        for (int i = 0; i < gift_wrap_count; i++) {
            cJSON_Delete(gift_wraps[i]);
        }
        if (publish_results) free(publish_results);
        free(relays);
        return 1;
    }

    printf("\n✅ Successfully published NIP-17 DM to %d/%d relay(s)!\n", success_count, relay_count);

    // Clean up
    free(publish_results);
    for (int i = 0; i < gift_wrap_count; i++) {
        cJSON_Delete(gift_wraps[i]);
    }
    free(relays);

    nostr_cleanup();

    printf("\n🎉 DM sent successfully! The recipient can now decrypt it using their private key.\n");

    return 0;
}
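The header comment in the file already spells out the calling convention; as a quick, non-authoritative sketch, this is how the compiled tests/sendDM binary might be pointed at a locally running relay (the keys below are placeholders, and the port comes from the localhost example in that comment):

```bash
# Placeholders only: substitute the relay's real npub and the administrator's real nsec.
./tests/sendDM \
  -r npub1exampleexampleexampleexampleexampleexampleexampleexample \
  -s nsec1exampleexampleexampleexampleexampleexampleexampleexample \
  -R ws://localhost:8888 \
  "config"
```

As the inline comment notes, only the first gift wrap returned by nostr_nip17_send_dm is published by this tool.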
Submodule text_graph updated: 0762bfbd1e...bf1785f372