Compare commits

19 Commits

| Author | SHA1 | Date |
|---|---|---|
|  | 3dc09d55fd |  |
|  | 079fb1b0f5 |  |
|  | 17b2aa8111 |  |
|  | 78d484cfe0 |  |
|  | 182e12817d |  |
|  | 9179d57cc9 |  |
|  | 9cb9b746d8 |  |
|  | 57a0089664 |  |
|  | 53f7608872 |  |
|  | 838ce5b45a |  |
|  | e878b9557e |  |
|  | 6638d37d6f |  |
|  | 4c29e15329 |  |
|  | 48890a2121 |  |
|  | e312d7e18c |  |
|  | 6c38aaebf3 |  |
|  | 18b0ac44bf |  |
|  | b6749eff2f |  |
|  | c73a103280 |  |
.gitmodules (vendored, 3 changes)
```diff
@@ -4,3 +4,6 @@
 [submodule "c_utils_lib"]
 	path = c_utils_lib
 	url = ssh://git@git.laantungir.net:2222/laantungir/c_utils_lib.git
+[submodule "text_graph"]
+	path = text_graph
+	url = ssh://git@git.laantungir.net:2222/laantungir/text_graph.git
```
```diff
@@ -121,8 +121,8 @@ fuser -k 8888/tcp
 - Event filtering done at C level, not SQL level for NIP-40 expiration
 
 ### Configuration Override Behavior
-- CLI port override only affects first-time startup
-- After database creation, all config comes from events
+- CLI port override applies during first-time startup and existing relay restarts
+- After database creation, all config comes from events (but CLI overrides can still be applied)
 - Database path cannot be changed after initialization
 
 ## Non-Obvious Pitfalls
```
```diff
@@ -1,8 +1,13 @@
 # Alpine-based MUSL static binary builder for C-Relay
 # Produces truly portable binaries with zero runtime dependencies
 
+ARG DEBUG_BUILD=false
+
 FROM alpine:3.19 AS builder
 
+# Re-declare build argument in this stage
+ARG DEBUG_BUILD=false
+
 # Install build dependencies
 RUN apk add --no-cache \
     build-base \
@@ -98,9 +103,19 @@ RUN cd nostr_core_lib && \
 COPY src/ /build/src/
 COPY Makefile /build/Makefile
 
-# Build c-relay with full static linking and debug symbols (only rebuilds when src/ changes)
+# Build c-relay with full static linking (only rebuilds when src/ changes)
 # Disable fortification to avoid __*_chk symbols that don't exist in MUSL
-RUN gcc -static -g -O0 -DDEBUG -Wall -Wextra -std=c99 \
+# Use conditional compilation flags based on DEBUG_BUILD argument
+RUN if [ "$DEBUG_BUILD" = "true" ]; then \
+        CFLAGS="-g -O0 -DDEBUG"; \
+        STRIP_CMD=""; \
+        echo "Building with DEBUG symbols enabled"; \
+    else \
+        CFLAGS="-O2"; \
+        STRIP_CMD="strip /build/c_relay_static"; \
+        echo "Building optimized production binary"; \
+    fi && \
+    gcc -static $CFLAGS -Wall -Wextra -std=c99 \
     -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 \
     -I. -Ic_utils_lib/src -Inostr_core_lib -Inostr_core_lib/nostr_core \
     -Inostr_core_lib/cjson -Inostr_core_lib/nostr_websocket \
@@ -111,10 +126,8 @@ RUN gcc -static -g -O0 -DDEBUG -Wall -Wextra -std=c99 \
     c_utils_lib/libc_utils.a \
     nostr_core_lib/libnostr_core_x64.a \
     -lwebsockets -lssl -lcrypto -lsqlite3 -lsecp256k1 \
-    -lcurl -lz -lpthread -lm -ldl
-
-# DO NOT strip - we need debug symbols for debugging
-# RUN strip /build/c_relay_static
+    -lcurl -lz -lpthread -lm -ldl && \
+    eval "$STRIP_CMD"
 
 # Verify it's truly static
 RUN echo "=== Binary Information ===" && \
```
README.md (62 changes)
````diff
@@ -164,6 +164,8 @@ All commands are sent as NIP-44 encrypted JSON arrays in the event content. The
 | `system_clear_auth` | `["system_command", "clear_all_auth_rules"]` | Clear all auth rules |
 | `system_status` | `["system_command", "system_status"]` | Get system status |
 | `stats_query` | `["stats_query"]` | Get comprehensive database statistics |
+| **Database Queries** |
+| `sql_query` | `["sql_query", "SELECT * FROM events LIMIT 10"]` | Execute read-only SQL query against relay database |
 
 ### Available Configuration Keys
 
@@ -320,8 +322,68 @@ All admin commands return **signed EVENT responses** via WebSocket following sta
     ],
     "sig": "response_event_signature"
 }]
 ```
 
+**SQL Query Response:**
+```json
+["EVENT", "temp_sub_id", {
+    "id": "response_event_id",
+    "pubkey": "relay_public_key",
+    "created_at": 1234567890,
+    "kind": 23457,
+    "content": "nip44 encrypted:{\"query_type\": \"sql_query\", \"request_id\": \"request_event_id\", \"timestamp\": 1234567890, \"query\": \"SELECT * FROM events LIMIT 10\", \"execution_time_ms\": 45, \"row_count\": 10, \"columns\": [\"id\", \"pubkey\", \"created_at\", \"kind\", \"content\"], \"rows\": [[\"abc123...\", \"def456...\", 1234567890, 1, \"Hello world\"], ...]}",
+    "tags": [
+        ["p", "admin_public_key"],
+        ["e", "request_event_id"]
+    ],
+    "sig": "response_event_signature"
+}]
+```
+
+### SQL Query Command
+
+The `sql_query` command allows administrators to execute read-only SQL queries against the relay database. This provides powerful analytics and debugging capabilities through the admin API.
+
+**Request/Response Correlation:**
+- Each response includes the request event ID in both the `tags` array (`["e", "request_event_id"]`) and the decrypted content (`"request_id": "request_event_id"`)
+- This allows proper correlation when multiple queries are submitted concurrently
+- Frontend can track pending queries and match responses to requests
+
+**Security Features:**
+- Only SELECT statements allowed (INSERT, UPDATE, DELETE, DROP, etc. are blocked)
+- Query timeout: 5 seconds (configurable)
+- Result row limit: 1000 rows (configurable)
+- All queries logged with execution time
+
+**Available Tables and Views:**
+- `events` - All Nostr events
+- `config` - Configuration parameters
+- `auth_rules` - Authentication rules
+- `subscription_events` - Subscription lifecycle log
+- `event_broadcasts` - Event broadcast log
+- `recent_events` - Last 1000 events (view)
+- `event_stats` - Event statistics by type (view)
+- `subscription_analytics` - Subscription metrics (view)
+- `active_subscriptions_log` - Currently active subscriptions (view)
+- `event_kinds_view` - Event distribution by kind (view)
+- `top_pubkeys_view` - Top 10 pubkeys by event count (view)
+- `time_stats_view` - Time-based statistics (view)
+
+**Example Queries:**
+```sql
+-- Recent events
+SELECT id, pubkey, created_at, kind FROM events ORDER BY created_at DESC LIMIT 20
+
+-- Event distribution by kind
+SELECT * FROM event_kinds_view ORDER BY count DESC
+
+-- Active subscriptions
+SELECT * FROM active_subscriptions_log ORDER BY created_at DESC
+
+-- Database statistics
+SELECT
+    (SELECT COUNT(*) FROM events) as total_events,
+    (SELECT COUNT(*) FROM subscription_events) as total_subscriptions
+```
````
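To make the request/response correlation described above concrete, here is a minimal client-side sketch in JavaScript. It is illustrative only: `nip44Encrypt`, `nip44Decrypt`, and `signEvent` are placeholder helpers (not part of this repository), and `ADMIN_COMMAND_KIND` stands in for the request event kind, which this excerpt does not specify; only the 23457 response kind and the `e`-tag/`request_id` correlation scheme come from the README.

```javascript
// Hypothetical sketch of concurrent query correlation; helpers are placeholders.
const pending = new Map(); // request event id -> { resolve, reject }

async function sendSqlQuery(ws, sql, adminKeys, relayPubkey) {
  const command = JSON.stringify(["sql_query", sql]);
  const event = await signEvent({
    kind: ADMIN_COMMAND_KIND, // request kind not documented in this excerpt
    created_at: Math.floor(Date.now() / 1000),
    tags: [["p", relayPubkey]],
    content: await nip44Encrypt(command, adminKeys.secret, relayPubkey),
  }, adminKeys.secret);

  ws.send(JSON.stringify(["EVENT", event]));
  return new Promise((resolve, reject) => pending.set(event.id, { resolve, reject }));
}

// On each kind 23457 EVENT, match the response to its request via the "e" tag
// (or the decrypted request_id field), so concurrent queries resolve independently.
async function handleRelayMessage(msg, adminKeys, relayPubkey) {
  const [type, , event] = JSON.parse(msg);
  if (type !== "EVENT" || !event || event.kind !== 23457) return;
  const requestId = (event.tags.find(t => t[0] === "e") || [])[1];
  const entry = requestId && pending.get(requestId);
  if (!entry) return;
  pending.delete(requestId);
  entry.resolve(JSON.parse(await nip44Decrypt(event.content, adminKeys.secret, relayPubkey)));
}
```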
api/index.css (604 changes)
```diff
@@ -6,7 +6,7 @@
     --muted-color: #dddddd;
     --border-color: var(--muted-color);
     --font-family: "Courier New", Courier, monospace;
-    --border-radius: 15px;
+    --border-radius: 5px;
     --border-width: 1px;
 
     /* Floating Tab Variables (8) */
@@ -22,6 +22,23 @@
     --tab-border-opacity-logged-in: 0.1;
 }
 
+/* Dark Mode Overrides */
+body.dark-mode {
+    --primary-color: #ffffff;
+    --secondary-color: #000000;
+    --accent-color: #ff0000;
+    --muted-color: #222222;
+    --border-color: var(--muted-color);
+
+    --tab-bg-logged-out: #000000;
+    --tab-color-logged-out: #ffffff;
+    --tab-border-logged-out: #ffffff;
+    --tab-bg-logged-in: #000000;
+    --tab-color-logged-in: #ffffff;
+    --tab-border-logged-in: #00ffff;
+}
+
 * {
     margin: 0;
     padding: 0;
@@ -41,10 +58,8 @@ body {
 /* Header Styles */
 .main-header {
     background-color: var(--secondary-color);
-    border-bottom: var(--border-width) solid var(--border-color);
-
     padding: 15px 20px;
     position: sticky;
     top: 0;
     z-index: 100;
     max-width: 1200px;
     margin: 0 auto;
@@ -67,6 +82,94 @@ body {
     text-align: left;
 }
 
+.relay-info {
+    text-align: center;
+    flex: 1;
+    max-width: 150px;
+    margin: 0 auto;
+}
+
+.relay-name {
+    font-size: 14px;
+    font-weight: bold;
+    color: var(--primary-color);
+    margin-bottom: 2px;
+}
+
+.relay-pubkey-container {
+    border: 1px solid transparent;
+    border-radius: var(--border-radius);
+    padding: 4px;
+    margin-top: 4px;
+    cursor: pointer;
+    transition: border-color 0.2s ease;
+    background-color: var(--secondary-color);
+    display: inline-block;
+    width: fit-content;
+}
+
+.relay-pubkey-container:hover {
+    border-color: var(--border-color);
+}
+
+.relay-pubkey-container.copied {
+    border-color: var(--accent-color);
+    animation: flash-accent 0.5s ease-in-out;
+}
+
+.relay-pubkey {
+    font-size: 8px;
+    color: var(--primary-color);
+    font-family: "Courier New", Courier, monospace;
+    line-height: 1.2;
+    white-space: pre-line;
+    text-align: center;
+}
+
+@keyframes flash-accent {
+    0% { border-color: var(--accent-color); }
+    50% { border-color: var(--accent-color); }
+    100% { border-color: transparent; }
+}
+
+.relay-description {
+    font-size: 10px;
+    color: var(--primary-color);
+    margin-bottom: 0;
+    display: inline-block;
+    width: fit-content;
+    word-wrap: break-word;
+    overflow-wrap: break-word;
+}
+
+.header-title {
+    margin: 0;
+    font-size: 24px;
+    font-weight: bolder;
+    color: var(--primary-color);
+    border: none;
+    padding: 0;
+    text-align: left;
+    display: flex;
+    gap: 2px;
+}
+
+.relay-letter {
+    position: relative;
+    display: inline-block;
+    transition: all 0.05s ease;
+}
+
+.relay-letter.underlined::after {
+    content: '';
+    position: absolute;
+    bottom: -2px;
+    left: 0;
+    right: 0;
+    height: 2px;
+    background-color: var(--accent-color);
+}
+
 .header-user-name {
     display: block;
     font-weight: 500;
@@ -78,13 +181,22 @@ body {
 
 .profile-area {
     display: flex;
+    flex-direction: column;
     align-items: center;
     position: relative;
     cursor: pointer;
     padding: 8px 12px;
     border-radius: var(--border-radius);
     transition: background-color 0.2s ease;
-    margin-left: auto;
+    /* margin-left: auto; */
 }
 
+.admin-label {
+    font-size: 10px;
+    color: var(--primary-color);
+    font-weight: normal;
+    margin-bottom: 4px;
+    text-align: center;
+}
+
 .profile-container {
@@ -129,13 +241,13 @@ body {
 
 .logout-btn {
     width: 100%;
-    padding: 10px 15px;
+    padding: 5px 10px;
     background: none;
     border: none;
     color: var(--primary-color);
     text-align: left;
     cursor: pointer;
-    font-size: 14px;
+    font-size: 10px;
     font-family: var(--font-family);
     border-radius: var(--border-radius);
     transition: background-color 0.2s ease;
@@ -193,6 +305,8 @@ h2 {
     border-radius: var(--border-radius);
     padding: 20px;
     margin-bottom: 20px;
+    margin-left: 5px;
+    margin-right:5px;
 }
 
 .input-group {
@@ -255,10 +369,10 @@ button:active {
 }
 
 button:disabled {
-    background-color: #ccc;
-    color: var(--muted-color);
+    background-color: var(--muted-color);
+    color: var(--primary-color);
     cursor: not-allowed;
-    border-color: #ccc;
+    border-color: var(--muted-color);
 }
 
 /* Flash animation for refresh button */
@@ -269,7 +383,7 @@ button:disabled {
 }
 
 .flash-red {
-    animation: flash-red 0.5s ease-in-out;
+    animation: flash-red 1s ease-in-out;
 }
 
 /* Flash animation for updated statistics values */
@@ -280,7 +394,7 @@ button:disabled {
 }
 
 .flash-value {
-    animation: flash-value 0.5s ease-in-out;
+    animation: flash-value 1s ease-in-out;
 }
 
 /* Npub links styling */
@@ -326,23 +440,6 @@ button:disabled {
     border-color: var(--accent-color);
 }
 
-/* Authentication warning message */
-.auth-warning-message {
-    margin-bottom: 15px;
-    padding: 12px;
-    background-color: #fff3cd;
-    border: 1px solid #ffeaa7;
-    border-radius: var(--border-radius);
-    color: #856404;
-}
-
-.warning-content {
-    line-height: 1.4;
-}
-
-.warning-content strong {
-    color: #d68910;
-}
-
 .config-table {
     border: 1px solid var(--border-color);
@@ -363,6 +460,10 @@ button:disabled {
     font-size: 10px;
 }
 
+.config-table tbody tr:hover {
+    background-color: rgba(0, 0, 0, 0.05);
+}
+
 .config-table-container {
     overflow-x: auto;
     max-width: 100%;
@@ -370,12 +471,13 @@ button:disabled {
 
 .config-table th {
     font-weight: bold;
-    height: 40px; /* Double the default height */
-    line-height: 40px; /* Center text vertically */
+    height: 24px; /* Base height for tbody rows */
+    line-height: 24px; /* Center text vertically */
 }
 
-.config-table tr:hover {
-    background-color: var(--muted-color);
+.config-table td {
+    height: 16px; /* 50% taller than tbody rows would be */
+    line-height: 16px; /* Center text vertically */
 }
 
 /* Inline config value inputs - remove borders and padding to fit seamlessly in table cells */
@@ -453,6 +555,7 @@ button:disabled {
 .inline-buttons {
     display: flex;
     gap: 10px;
+    flex-wrap: nowrap;
 }
 
 .inline-buttons button {
@@ -563,9 +666,9 @@ button:disabled {
     display: flex;
     justify-content: space-between;
     align-items: center;
-    margin-bottom: 15px;
-    border-bottom: var(--border-width) solid var(--border-color);
-    padding-bottom: 10px;
+    /* margin-bottom: 15px; */
+    /* border-bottom: var(--border-width) solid var(--border-color); */
+    /* padding-bottom: 10px; */
 }
 
 .countdown-btn {
@@ -713,35 +816,414 @@ button:disabled {
     transition: all 0.2s ease;
 }
 
-/* Main Sections Wrapper */
-.main-sections-wrapper {
-    max-width: 1200px;
-    margin: 0 auto;
-    padding: 20px;
+/* SQL Query Interface Styles */
+.query-selector {
+    margin-bottom: 15px;
 }
 
+.query-selector select {
+    width: 100%;
+    padding: 8px;
+    background: var(--secondary-color);
+    color: var(--primary-color);
+    border: var(--border-width) solid var(--border-color);
+    border-radius: var(--border-radius);
+    font-family: var(--font-family);
+    font-size: 14px;
+    cursor: pointer;
+}
+
+.query-selector select:focus {
+    border-color: var(--accent-color);
+    outline: none;
+}
+
+.query-selector optgroup {
+    font-weight: bold;
+    color: var(--primary-color);
+}
+
+.query-selector option {
+    padding: 4px;
+    background: var(--secondary-color);
+    color: var(--primary-color);
+}
+
+.query-editor textarea {
+    width: 100%;
+    min-height: 120px;
+    resize: vertical;
+    font-family: "Courier New", Courier, monospace;
+    font-size: 12px;
+    line-height: 1.4;
+    tab-size: 4;
+    white-space: pre;
+}
+
+.query-actions {
     display: flex;
     flex-wrap: wrap;
-    gap: var(--border-width);
+    gap: 10px;
     margin-top: 10px;
 }
 
-.flex-section {
+.query-actions button {
     flex: 1;
-    min-width: 300px;
+    min-width: 120px;
 }
 
 @media (max-width: 700px) {
     body {
         padding: 10px;
     }
 
     .inline-buttons {
         flex-direction: column;
     }
 
     h1 {
         font-size: 20px;
     }
 
     h2 {
         font-size: 14px;
     }
 
+.primary-button {
+    background: var(--primary-color);
+    color: var(--secondary-color);
+    border-color: var(--primary-color);
+}
+
+.primary-button:hover {
+    background: var(--secondary-color);
+    color: var(--primary-color);
+    border-color: var(--accent-color);
+}
+
+.danger-button {
+    background: var(--accent-color);
+    color: var(--secondary-color);
+    border-color: var(--accent-color);
+}
+
+.danger-button:hover {
+    background: var(--secondary-color);
+    color: var(--primary-color);
+    border-color: var(--accent-color);
+}
+
+.query-info {
+    padding: 10px;
+    border: var(--border-width) solid var(--border-color);
+    border-radius: var(--border-radius);
+    margin: 10px 0;
+    font-family: var(--font-family);
+    font-size: 12px;
+    background-color: var(--secondary-color);
+}
+
+.query-info-success {
+    border-color: #4CAF50;
+    background-color: #E8F5E8;
+    color: #2E7D32;
+}
+
+.query-info-success span {
+    display: inline-block;
+    margin-right: 15px;
+}
+
+.request-id {
+    font-family: "Courier New", Courier, monospace;
+    font-size: 10px;
+    opacity: 0.7;
+}
+
+.error-message {
+    border-color: var(--accent-color);
+    background-color: #FFEBEE;
+    color: #C62828;
+    padding: 10px;
+    border-radius: var(--border-radius);
+    margin: 10px 0;
+    font-family: var(--font-family);
+    font-size: 12px;
+}
+
+.sql-results-table {
+    border: 1px solid var(--border-color);
+    border-radius: var(--border-radius);
+    width: 100%;
+    border-collapse: separate;
+    border-spacing: 0;
+    margin: 10px 0;
+    overflow: hidden;
+    font-size: 11px;
+}
+
+.sql-results-table th,
+.sql-results-table td {
+    border: 0.1px solid var(--muted-color);
+    padding: 6px 8px;
+    text-align: left;
+    font-family: var(--font-family);
+    white-space: nowrap;
+    min-width: 100px;
+}
+
+.sql-results-table th {
+    font-weight: bold;
+    background-color: rgba(0, 0, 0, 0.05);
+    position: sticky;
+    top: 0;
+    z-index: 10;
+}
+
+.sql-results-table tbody tr:hover {
+    background-color: rgba(0, 0, 0, 0.05);
+}
+
+.sql-results-table tbody tr:nth-child(even) {
+    background-color: rgba(0, 0, 0, 0.02);
+}
+
+.no-results {
+    text-align: center;
+    font-style: italic;
+    color: var(--muted-color);
+    padding: 20px;
+    font-family: var(--font-family);
+}
+
+.loading {
+    text-align: center;
+    font-style: italic;
+    color: var(--muted-color);
+    padding: 20px;
+    font-family: var(--font-family);
+}
+
+/* Dark mode adjustments for SQL interface */
+body.dark-mode .query-info-success {
+    border-color: #4CAF50;
+    background-color: rgba(76, 175, 80, 0.1);
+    color: #81C784;
+}
+
+body.dark-mode .error-message {
+    border-color: var(--accent-color);
+    background-color: rgba(244, 67, 54, 0.1);
+    color: #EF5350;
+}
+
+body.dark-mode .sql-results-table th {
+    background-color: rgba(255, 255, 255, 0.05);
+}
+
+body.dark-mode .sql-results-table tbody tr:hover {
+    background-color: rgba(255, 255, 255, 0.05);
+}
+
+body.dark-mode .sql-results-table tbody tr:nth-child(even) {
+    background-color: rgba(255, 255, 255, 0.02);
+}
+
+/* Config Toggle Button Styles */
+.config-toggle-btn {
+    width: 24px;
+    height: 24px;
+    padding: 0;
+    background: var(--secondary-color);
+    border: var(--border-width) solid var(--border-color);
+    border-radius: var(--border-radius);
+    font-family: var(--font-family);
+    font-size: 14px;
+    cursor: pointer;
+    margin-left: 10px;
+    font-weight: bold;
+    transition: all 0.2s ease;
+    display: flex;
+    align-items: center;
+    justify-content: center;
+}
+
+/* Toggle Button Styles */
+.toggle-btn {
+    width: auto;
+    min-width: 120px;
+    padding: 8px 12px;
+    background: var(--secondary-color);
+    color: var(--primary-color);
+    border: var(--border-width) solid var(--border-color);
+    border-radius: var(--border-radius);
+    font-family: var(--font-family);
+    font-size: 12px;
+    cursor: pointer;
+    transition: all 0.2s ease;
+    margin-left: auto;
+}
+
+.toggle-btn:hover {
+    border-color: var(--accent-color);
+}
+
+.toggle-btn:active {
+    background: var(--accent-color);
+    color: var(--secondary-color);
+}
+
+.config-toggle-btn:hover {
+    border-color: var(--accent-color);
+}
+
+.config-toggle-btn:active {
+    background: var(--accent-color);
+    color: var(--secondary-color);
+}
+
+.config-toggle-btn[data-state="true"] {
+    color: var(--accent-color);
+}
+
+.config-toggle-btn[data-state="false"] {
+    color: var(--primary-color);
+}
+
+.config-toggle-btn[data-state="indeterminate"] {
+    background-color: var(--muted-color);
+    color: var(--primary-color);
+    cursor: not-allowed;
+    border-color: var(--muted-color);
+}
+
+/* ================================
+   REAL-TIME EVENT RATE CHART
+   ================================ */
+
+.chart-container {
+    margin: 20px 0;
+    padding: 15px;
+    background: var(--secondary-color);
+    border: var(--border-width) solid var(--border-color);
+    border-radius: var(--border-radius);
+}
+
+#event-rate-chart {
+    font-family: var(--font-family);
+    font-size: 12px;
+    line-height: 1.2;
+    color: var(--primary-color);
+    background: var(--secondary-color);
+    padding: 20px;
+    overflow: hidden;
+    white-space: pre;
+    border: var(--border-width) solid var(--border-color);
+    border-radius: var(--border-radius);
+    box-sizing: border-box;
+}
+
+/* ================================
+   SIDE NAVIGATION MENU
+   ================================ */
+
+.side-nav {
+    position: fixed;
+    top: 0;
+    left: -300px;
+    width: 280px;
+    height: 100vh;
+    background: var(--secondary-color);
+    border-right: var(--border-width) solid var(--border-color);
+    z-index: 1000;
+    transition: left 0.3s ease;
+    overflow-y: auto;
+    padding-top: 80px;
+}
+
+.side-nav.open {
+    left: 0;
+}
+
+.side-nav-overlay {
+    position: fixed;
+    top: 0;
+    left: 0;
+    width: 100%;
+    height: 100%;
+    background: rgba(0, 0, 0, 0.5);
+    z-index: 999;
+    display: none;
+}
+
+.side-nav-overlay.show {
+    display: block;
+}
+
+.nav-menu {
+    list-style: none;
+    padding: 0;
+    margin: 0;
+}
+
+.nav-menu li {
+    border-bottom: var(--border-width) solid var(--muted-color);
+}
+
+.nav-menu li:last-child {
+    border-bottom: none;
+}
+
+.nav-item {
+    display: block;
+    padding: 15px 20px;
+    color: var(--primary-color);
+    text-decoration: none;
+    font-family: var(--font-family);
+    font-size: 16px;
+    font-weight: bold;
+    transition: all 0.2s ease;
+    cursor: pointer;
+    border: 2px solid var(--secondary-color);
+    background: none;
+    width: 100%;
+    text-align: left;
+}
+
+.nav-item:hover {
+    border: 2px solid var(--secondary-color);
+    background:var(--muted-color);
+    color: var(--accent-color);
+}
+
+.nav-item.active {
+    text-decoration: underline;
+    padding-left: 16px;
+}
+
+.nav-footer {
+    position: absolute;
+    bottom: 20px;
+    left: 0;
+    right: 0;
+    padding: 0 20px;
+}
+
+.nav-footer-btn {
+    display: block;
+    width: 100%;
+    padding: 12px 20px;
+    margin-bottom: 8px;
+    color: var(--primary-color);
+    border: 1px solid var(--border-color);
+    border-radius: 4px;
+    font-family: var(--font-family);
+    font-size: 14px;
+    font-weight: bold;
+    cursor: pointer;
+    transition: all 0.2s ease;
+}
+
+.nav-footer-btn:hover {
+    background:var(--muted-color);
+    border-color: var(--accent-color);
+}
+
+.nav-footer-btn:last-child {
+    margin-bottom: 0;
+}
+
+.header-title.clickable {
+    cursor: pointer;
+    transition: all 0.2s ease;
+}
+
+.header-title.clickable:hover {
+    opacity: 0.8;
+}
```
api/index.html (222 changes)
```diff
@@ -9,22 +9,55 @@
 </head>
 
 <body>
+    <!-- Side Navigation Menu -->
+    <nav class="side-nav" id="side-nav">
+        <ul class="nav-menu">
+            <li><button class="nav-item" data-page="statistics">Statistics</button></li>
+            <li><button class="nav-item" data-page="subscriptions">Subscriptions</button></li>
+            <li><button class="nav-item" data-page="configuration">Configuration</button></li>
+            <li><button class="nav-item" data-page="authorization">Authorization</button></li>
+            <li><button class="nav-item" data-page="dm">DM</button></li>
+            <li><button class="nav-item" data-page="database">Database Query</button></li>
+        </ul>
+        <div class="nav-footer">
+            <button class="nav-footer-btn" id="nav-dark-mode-btn">DARK MODE</button>
+            <button class="nav-footer-btn" id="nav-logout-btn">LOGOUT</button>
+        </div>
+    </nav>
+
+    <!-- Side Navigation Overlay -->
+    <div class="side-nav-overlay" id="side-nav-overlay"></div>
+
     <!-- Header with title and profile display -->
     <header class="main-header">
-        <div class="header-content">
-            <div class="header-title">RELAY</div>
-            <div class="profile-area" id="profile-area" style="display: none;">
-                <div class="profile-container">
-                    <img id="header-user-image" class="header-user-image" alt="Profile" style="display: none;">
-                    <span id="header-user-name" class="header-user-name">Loading...</span>
-                </div>
-                <!-- Logout dropdown -->
-                <div class="logout-dropdown" id="logout-dropdown" style="display: none;">
-                    <button type="button" id="logout-btn" class="logout-btn">LOGOUT</button>
+        <div class="section">
+
+            <div class="header-content">
+                <div class="header-title clickable" id="header-title">
+                    <span class="relay-letter" data-letter="R">R</span>
+                    <span class="relay-letter" data-letter="E">E</span>
+                    <span class="relay-letter" data-letter="L">L</span>
+                    <span class="relay-letter" data-letter="A">A</span>
+                    <span class="relay-letter" data-letter="Y">Y</span>
+                </div>
+                <div class="relay-info">
+                    <div id="relay-name" class="relay-name">C-Relay</div>
+                    <div id="relay-description" class="relay-description">Loading...</div>
+                    <div id="relay-pubkey-container" class="relay-pubkey-container">
+                        <div id="relay-pubkey" class="relay-pubkey">Loading...</div>
+                    </div>
+                </div>
+                <div class="profile-area" id="profile-area" style="display: none;">
+                    <div class="admin-label">admin</div>
+                    <div class="profile-container">
+                        <img id="header-user-image" class="header-user-image" alt="Profile" style="display: none;">
+                        <span id="header-user-name" class="header-user-name">Loading...</span>
+                    </div>
+                    <!-- Logout dropdown -->
+                    <!-- Dropdown menu removed - buttons moved to sidebar -->
                 </div>
             </div>
         </div>
     </header>
 
-    </div>
-
     <!-- Login Modal Overlay -->
     <div id="login-modal" class="login-modal-overlay" style="display: none;">
@@ -33,98 +66,63 @@
         </div>
     </div>
 
-    <!-- Main Sections Wrapper -->
-    <div class="main-sections-wrapper">
-
     <!-- Relay Connection Section -->
-    <div id="relay-connection-section" class="flex-section">
     <div class="section">
         <h2>RELAY CONNECTION</h2>
 
         <div class="input-group">
             <label for="relay-connection-url">Relay URL:</label>
             <input type="text" id="relay-connection-url" value=""
                 placeholder="ws://localhost:8888 or wss://relay.example.com">
         </div>
 
         <div class="input-group">
             <label for="relay-pubkey-manual">Relay Pubkey (if not available via NIP-11):</label>
             <input type="text" id="relay-pubkey-manual" placeholder="64-character hex pubkey"
                 pattern="[0-9a-fA-F]{64}" title="64-character hexadecimal public key">
         </div>
 
         <div class="inline-buttons">
             <button type="button" id="connect-relay-btn">CONNECT TO RELAY</button>
             <button type="button" id="disconnect-relay-btn" disabled>DISCONNECT</button>
+            <button type="button" id="restart-relay-btn" disabled>RESTART RELAY</button>
         </div>
 
         <div class="status disconnected" id="relay-connection-status">NOT CONNECTED</div>
 
         <!-- Relay Information Display -->
         <div id="relay-info-display" class="hidden">
             <h3>Relay Information (NIP-11)</h3>
             <table class="config-table" id="relay-info-table">
                 <thead>
                     <tr>
                         <th>Property</th>
                         <th>Value</th>
                     </tr>
                 </thead>
                 <tbody id="relay-info-table-body">
                 </tbody>
             </table>
         </div>
     </div>
-    </div>
-
-
-
-    </div> <!-- End Main Sections Wrapper -->
 
     <!-- DATABASE STATISTICS Section -->
     <div class="section flex-section" id="databaseStatisticsSection" style="display: none;">
         <div class="section-header">
             <h2>DATABASE STATISTICS</h2>
             <button type="button" id="refresh-stats-btn" class="countdown-btn"></button>
+            <!-- Monitoring is now subscription-based - no toggle button needed -->
+            <!-- Subscribe to kind 24567 events to receive real-time monitoring data -->
         </div>
 
+        <!-- Event Rate Graph Container -->
+        <div id="event-rate-chart"></div>
+
         <!-- Database Overview Table -->
         <div class="input-group">
             <label>Database Overview:</label>
             <div class="config-table-container">
                 <table class="config-table" id="stats-overview-table">
                     <thead>
                         <tr>
                             <th>Metric</th>
                             <th>Value</th>
                             <th>Description</th>
                         </tr>
                     </thead>
                     <tbody id="stats-overview-table-body">
                         <tr>
                             <td>Database Size</td>
                             <td id="db-size">-</td>
                             <td>Current database file size</td>
                         </tr>
                         <tr>
                             <td>Total Events</td>
                             <td id="total-events">-</td>
                             <td>Total number of events stored</td>
                         </tr>
                         <tr>
                             <td>Process ID</td>
                             <td id="process-id">-</td>
                         </tr>
                         <tr>
                             <td>Active Subscriptions</td>
                             <td id="active-subscriptions">-</td>
                         </tr>
                         <tr>
                             <td>Memory Usage</td>
                             <td id="memory-usage">-</td>
                         </tr>
                         <tr>
                             <td>CPU Core</td>
                             <td id="cpu-core">-</td>
                         </tr>
                         <tr>
                             <td>CPU Usage</td>
                             <td id="cpu-usage">-</td>
                         </tr>
                         <tr>
                             <td>Oldest Event</td>
                             <td id="oldest-event">-</td>
                             <td>Timestamp of oldest event</td>
                         </tr>
                         <tr>
                             <td>Newest Event</td>
                             <td id="newest-event">-</td>
                             <td>Timestamp of newest event</td>
                         </tr>
                     </tbody>
                 </table>
@@ -161,24 +159,20 @@
                         <tr>
                             <th>Period</th>
                             <th>Events</th>
                             <th>Description</th>
                         </tr>
                     </thead>
                     <tbody id="stats-time-table-body">
                         <tr>
                             <td>Last 24 Hours</td>
                             <td id="events-24h">-</td>
                             <td>Events in the last day</td>
                         </tr>
                         <tr>
                             <td>Last 7 Days</td>
                             <td id="events-7d">-</td>
                             <td>Events in the last week</td>
                         </tr>
                         <tr>
                             <td>Last 30 Days</td>
                             <td id="events-30d">-</td>
                             <td>Events in the last month</td>
                         </tr>
                     </tbody>
                 </table>
@@ -209,6 +203,34 @@
 
     </div>
 
+    <!-- SUBSCRIPTION DETAILS Section (Admin Only) -->
+    <div class="section flex-section" id="subscriptionDetailsSection" style="display: none;">
+        <div class="section-header">
+            <h2>ACTIVE SUBSCRIPTION DETAILS</h2>
+        </div>
+
+        <div class="input-group">
+            <div class="config-table-container">
+                <table class="config-table" id="subscription-details-table">
+                    <thead>
+                        <tr>
+                            <th>Subscription ID</th>
+                            <th>Client IP</th>
+                            <th>WSI Pointer</th>
+                            <th>Duration</th>
+                            <th>Filters</th>
+                        </tr>
+                    </thead>
+                    <tbody id="subscription-details-table-body">
+                        <tr>
+                            <td colspan="5" style="text-align: center; font-style: italic;">No subscriptions active</td>
+                        </tr>
+                    </tbody>
+                </table>
+            </div>
+        </div>
+    </div>
+
     <!-- Testing Section -->
     <div id="div_config" class="section flex-section" style="display: none;">
         <h2>RELAY CONFIGURATION</h2>
@@ -319,6 +341,54 @@
         </div>
     </div>
 
+    <!-- SQL QUERY Section -->
+    <div class="section" id="sqlQuerySection" style="display: none;">
+        <div class="section-header">
+            <h2>SQL QUERY CONSOLE</h2>
+        </div>
+
+        <!-- Query Selector -->
+        <div class="input-group">
+            <label for="query-dropdown">Quick Queries & History:</label>
+            <select id="query-dropdown" onchange="loadSelectedQuery()">
+                <option value="">-- Select a query --</option>
+                <optgroup label="Common Queries">
+                    <option value="recent_events">Recent Events</option>
+                    <option value="event_stats">Event Statistics</option>
+                    <option value="subscriptions">Active Subscriptions</option>
+                    <option value="top_pubkeys">Top Pubkeys</option>
+                    <option value="event_kinds">Event Kinds Distribution</option>
+                    <option value="time_stats">Time-based Statistics</option>
+                </optgroup>
+                <optgroup label="Query History" id="history-group">
+                    <!-- Dynamically populated from localStorage -->
+                </optgroup>
+            </select>
+        </div>
+
+        <!-- Query Editor -->
+        <div class="input-group">
+            <label for="sql-input">SQL Query:</label>
+            <textarea id="sql-input" rows="5" placeholder="SELECT * FROM events LIMIT 10"></textarea>
+        </div>
+
+        <!-- Query Actions -->
+        <div class="input-group">
+            <div class="inline-buttons">
+                <button type="button" id="execute-sql-btn">EXECUTE QUERY</button>
+                <button type="button" id="clear-sql-btn">CLEAR</button>
+                <button type="button" id="clear-history-btn">CLEAR HISTORY</button>
+            </div>
+        </div>
+
+        <!-- Query Results -->
+        <div class="input-group">
+            <label>Query Results:</label>
+            <div id="query-info" class="info-box"></div>
+            <div id="query-table" class="config-table-container"></div>
+        </div>
+    </div>
+
     <!-- Load the official nostr-tools bundle first -->
     <!-- <script src="https://laantungir.net/nostr-login-lite/nostr.bundle.js"></script> -->
     <script src="/api/nostr.bundle.js"></script>
@@ -326,6 +396,8 @@
     <!-- Load NOSTR_LOGIN_LITE main library -->
     <!-- <script src="https://laantungir.net/nostr-login-lite/nostr-lite.js"></script> -->
     <script src="/api/nostr-lite.js"></script>
+    <!-- Load text_graph library -->
+    <script src="/api/text_graph.js"></script>
```
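The "Query History" optgroup above is populated from localStorage at runtime. Since api/index.js is not shown in this comparison, the following is only a hedged sketch of one way that wiring could look: the element ids (`query-dropdown`, `history-group`, `clear-history-btn`) come from the markup above, while the storage key `sqlQueryHistory`, the 20-entry cap, and the function bodies are assumptions.

```javascript
// Hypothetical sketch: persist executed queries and rebuild the history optgroup.
const HISTORY_KEY = "sqlQueryHistory"; // assumed storage key, not confirmed by the source

function saveQueryToHistory(sql) {
  const history = JSON.parse(localStorage.getItem(HISTORY_KEY) || "[]");
  const updated = [sql, ...history.filter(q => q !== sql)].slice(0, 20); // dedupe, cap at 20
  localStorage.setItem(HISTORY_KEY, JSON.stringify(updated));
}

function renderQueryHistory() {
  const group = document.getElementById("history-group");
  group.innerHTML = "";
  for (const sql of JSON.parse(localStorage.getItem(HISTORY_KEY) || "[]")) {
    const option = document.createElement("option");
    option.value = sql;
    option.textContent = sql.length > 60 ? sql.slice(0, 57) + "..." : sql; // truncate long queries for display
    group.appendChild(option);
  }
}

document.getElementById("clear-history-btn").addEventListener("click", () => {
  localStorage.removeItem(HISTORY_KEY);
  renderQueryHistory();
});
```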
api/index.js (2283 changes): file diff suppressed because it is too large.

api/text_graph.js (463 changes, new file)
@@ -0,0 +1,463 @@
```javascript
/**
 * ASCIIBarChart - A dynamic ASCII-based vertical bar chart renderer
 *
 * Creates real-time animated bar charts using monospaced characters (X)
 * with automatic scaling, labels, and responsive font sizing.
 */
class ASCIIBarChart {
    /**
     * Create a new ASCII bar chart
     * @param {string} containerId - The ID of the HTML element to render the chart in
     * @param {Object} options - Configuration options
     * @param {number} [options.maxHeight=20] - Maximum height of the chart in rows
     * @param {number} [options.maxDataPoints=30] - Maximum number of data columns before scrolling
     * @param {string} [options.title=''] - Chart title (displayed centered at top)
     * @param {string} [options.xAxisLabel=''] - X-axis label (displayed centered at bottom)
     * @param {string} [options.yAxisLabel=''] - Y-axis label (displayed vertically on left)
     * @param {boolean} [options.autoFitWidth=true] - Automatically adjust font size to fit container width
     * @param {boolean} [options.useBinMode=true] - Enable time bin mode for data aggregation
     * @param {number} [options.binDuration=4000] - Duration of each time bin in milliseconds (4 seconds default)
     * @param {string} [options.xAxisLabelFormat='elapsed'] - X-axis label format: 'elapsed', 'bins', 'timestamps', 'ranges'
     * @param {boolean} [options.debug=false] - Enable debug logging
     */
    constructor(containerId, options = {}) {
        this.container = document.getElementById(containerId);
        this.data = [];
        this.maxHeight = options.maxHeight || 20;
        this.maxDataPoints = options.maxDataPoints || 30;
        this.totalDataPoints = 0; // Track total number of data points added
        this.title = options.title || '';
        this.xAxisLabel = options.xAxisLabel || '';
        this.yAxisLabel = options.yAxisLabel || '';
        this.autoFitWidth = options.autoFitWidth !== false; // Default to true
        this.debug = options.debug || false; // Debug logging option

        // Time bin configuration
        this.useBinMode = options.useBinMode !== false; // Default to true
        this.binDuration = options.binDuration || 4000; // 4 seconds default
        this.xAxisLabelFormat = options.xAxisLabelFormat || 'elapsed';

        // Time bin data structures
        this.bins = [];
        this.currentBinIndex = -1;
        this.binStartTime = null;
        this.binCheckInterval = null;
        this.chartStartTime = Date.now();

        // Set up resize observer if auto-fit is enabled
        if (this.autoFitWidth) {
            this.resizeObserver = new ResizeObserver(() => {
                this.adjustFontSize();
            });
            this.resizeObserver.observe(this.container);
        }

        // Initialize first bin if bin mode is enabled
        if (this.useBinMode) {
            this.initializeBins();
        }
    }

    /**
     * Add a new data point to the chart
     * @param {number} value - The numeric value to add
     */
    addValue(value) {
        // Time bin mode: add value to current active bin count
        this.checkBinRotation(); // Ensure we have an active bin
        this.bins[this.currentBinIndex].count += value; // Changed from ++ to += value
        this.totalDataPoints++;

        this.render();
        this.updateInfo();
    }

    /**
     * Clear all data from the chart
     */
    clear() {
        this.data = [];
        this.totalDataPoints = 0;

        if (this.useBinMode) {
            this.bins = [];
            this.currentBinIndex = -1;
            this.binStartTime = null;
            this.initializeBins();
        }

        this.render();
        this.updateInfo();
    }

    /**
     * Calculate the width of the chart in characters
     * @returns {number} The chart width in characters
     * @private
     */
    getChartWidth() {
        let dataLength = this.maxDataPoints; // Always use maxDataPoints for consistent width

        if (dataLength === 0) return 50; // Default width for empty chart

        const yAxisPadding = this.yAxisLabel ? 2 : 0;
        const yAxisNumbers = 3; // Width of Y-axis numbers
        const separator = 1; // The '|' character
        // const dataWidth = dataLength * 2; // Each column is 2 characters wide // TEMP: commented for no-space test
        const dataWidth = dataLength; // Each column is 1 character wide // TEMP: adjusted for no-space columns
        const padding = 1; // Extra padding

        const totalWidth = yAxisPadding + yAxisNumbers + separator + dataWidth + padding;

        // Only log when width changes
        if (this.debug && this.lastChartWidth !== totalWidth) {
            console.log('getChartWidth changed:', { dataLength, totalWidth, previous: this.lastChartWidth });
            this.lastChartWidth = totalWidth;
        }

        return totalWidth;
    }

    /**
     * Adjust font size to fit container width
     * @private
     */
    adjustFontSize() {
        if (!this.autoFitWidth) return;

        const containerWidth = this.container.clientWidth;
        const chartWidth = this.getChartWidth();

        if (chartWidth === 0) return;

        // Calculate optimal font size
        // For monospace fonts, character width is approximately 0.6 * font size
        // Use a slightly smaller ratio to fit more content
        const charWidthRatio = 0.7;
        const padding = 30; // Reduce padding to fit more content
        const availableWidth = containerWidth - padding;
        const optimalFontSize = Math.floor((availableWidth / chartWidth) / charWidthRatio);

        // Set reasonable bounds (min 4px, max 20px)
        const fontSize = Math.max(4, Math.min(20, optimalFontSize));

        // Only log when font size changes
        if (this.debug && this.lastFontSize !== fontSize) {
            console.log('fontSize changed:', { containerWidth, chartWidth, fontSize, previous: this.lastFontSize });
            this.lastFontSize = fontSize;
        }

        this.container.style.fontSize = fontSize + 'px';
        this.container.style.lineHeight = '1.0';
    }

    /**
     * Render the chart to the container
     * @private
     */
    render() {
        let dataToRender = [];
        let maxValue = 0;
        let minValue = 0;
        let valueRange = 0;

        if (this.useBinMode) {
            // Bin mode: render bin counts
            if (this.bins.length === 0) {
                this.container.textContent = 'No data yet. Click Start to begin.';
                return;
            }
            // Always create a fixed-length array filled with 0s, then overlay actual bin data
            dataToRender = new Array(this.maxDataPoints).fill(0);

            // Overlay actual bin data (most recent bins, reversed for left-to-right display)
            const startIndex = Math.max(0, this.bins.length - this.maxDataPoints);
            const recentBins = this.bins.slice(startIndex);

            // Reverse the bins so most recent is on the left, and overlay onto the fixed array
            recentBins.reverse().forEach((bin, index) => {
                if (index < this.maxDataPoints) {
                    dataToRender[index] = bin.count;
                }
            });

            if (this.debug) {
                console.log('render() dataToRender:', dataToRender, 'bins length:', this.bins.length);
            }
            maxValue = Math.max(...dataToRender);
            minValue = Math.min(...dataToRender);
            valueRange = maxValue - minValue;
        } else {
            // Legacy mode: render individual values
            if (this.data.length === 0) {
                this.container.textContent = 'No data yet. Click Start to begin.';
                return;
            }
            dataToRender = this.data;
            maxValue = Math.max(...this.data);
            minValue = Math.min(...this.data);
            valueRange = maxValue - minValue;
        }

        let output = '';
        const scale = this.maxHeight;

        // Calculate scaling factor: each X represents at least 1 count
        const maxCount = Math.max(...dataToRender);
        const scaleFactor = Math.max(1, Math.ceil(maxCount / scale)); // 1 X = scaleFactor counts
        const scaledMax = Math.ceil(maxCount / scaleFactor) * scaleFactor;

        // Calculate Y-axis label width (for vertical text)
        const yLabelWidth = this.yAxisLabel ? 2 : 0;
        const yAxisPadding = this.yAxisLabel ? '  ' : '';

        // Add title if provided (centered)
        if (this.title) {
            // const chartWidth = 4 + this.maxDataPoints * 2; // Y-axis numbers + data columns // TEMP: commented for no-space test
            const chartWidth = 4 + this.maxDataPoints; // Y-axis numbers + data columns // TEMP: adjusted for no-space columns
            const titlePadding = Math.floor((chartWidth - this.title.length) / 2);
            output += yAxisPadding + ' '.repeat(Math.max(0, titlePadding)) + this.title + '\n\n';
        }

        // Draw from top to bottom
        for (let row = scale; row > 0; row--) {
            let line = '';

            // Add vertical Y-axis label character
            if (this.yAxisLabel) {
                const L = this.yAxisLabel.length;
                const startRow = Math.floor((scale - L) / 2) + 1;
                const relativeRow = scale - row + 1; // 1 at top, scale at bottom
                if (relativeRow >= startRow && relativeRow < startRow + L) {
                    const labelIndex = relativeRow - startRow;
                    line += this.yAxisLabel[labelIndex] + ' ';
                } else {
                    line += '  ';
                }
            }

            // Calculate the actual count value this row represents (1 at bottom, increasing upward)
            const rowCount = (row - 1) * scaleFactor + 1;

            // Add Y-axis label (show actual count values)
            line += String(rowCount).padStart(3, ' ') + ' |';

            // Draw each column
            for (let i = 0; i < dataToRender.length; i++) {
                const count = dataToRender[i];
                const scaledHeight = Math.ceil(count / scaleFactor);

                if (scaledHeight >= row) {
                    // line += ' X'; // TEMP: commented out space between columns
                    line += 'X'; // TEMP: no space between columns
                } else {
                    // line += '  '; // TEMP: commented out space between columns
                    line += ' '; // TEMP: single space for empty columns
                }
            }

            output += line + '\n';
        }

        // Draw X-axis
        // output += yAxisPadding + '    +' + '-'.repeat(this.maxDataPoints * 2) + '\n'; // TEMP: commented out for no-space test
        output += yAxisPadding + '    +' + '-'.repeat(this.maxDataPoints) + '\n'; // TEMP: back to original length

        // Draw X-axis labels based on mode and format
        let xAxisLabels = yAxisPadding + '     '; // Initial padding to align with X-axis

        // Determine label interval (every 5 columns)
        const labelInterval = 5;

        // Generate all labels first and store in array
        let labels = [];
        for (let i = 0; i < this.maxDataPoints; i++) {
            if (i % labelInterval === 0) {
                let label = '';
                if (this.useBinMode) {
                    // For bin mode, show labels for all possible positions
                    // i=0 is leftmost (most recent), i=maxDataPoints-1 is rightmost (oldest)
                    const elapsedSec = (i * this.binDuration) / 1000;
                    // Format with appropriate precision for sub-second bins
                    if (this.binDuration < 1000) {
                        // Show decimal seconds for sub-second bins
                        label = elapsedSec.toFixed(1) + 's';
                    } else {
                        // Show whole seconds for 1+ second bins
                        label = String(Math.round(elapsedSec)) + 's';
                    }
                } else {
                    // For legacy mode, show data point numbers
                    const startIndex = Math.max(1, this.totalDataPoints - this.maxDataPoints + 1);
                    label = String(startIndex + i);
                }
                labels.push(label);
            }
        }

        // Build the label string with calculated spacing
        for (let i = 0; i < labels.length; i++) {
            const label = labels[i];
            xAxisLabels += label;

            // Add spacing: labelInterval - label.length (except for last label)
            if (i < labels.length - 1) {
                const spacing = labelInterval - label.length;
                xAxisLabels += ' '.repeat(spacing);
            }
        }

        // Ensure the label line extends to match the X-axis dash line length
        // The dash line is this.maxDataPoints characters long, starting after "    +"
        const dashLineLength = this.maxDataPoints;
        const minLabelLineLength = yAxisPadding.length + 4 + dashLineLength; // 4 for "    "
        if (xAxisLabels.length < minLabelLineLength) {
            xAxisLabels += ' '.repeat(minLabelLineLength - xAxisLabels.length);
        }
        output += xAxisLabels + '\n';

        // Add X-axis label if provided
        if (this.xAxisLabel) {
            // const labelPadding = Math.floor((this.maxDataPoints * 2 - this.xAxisLabel.length) / 2); // TEMP: commented for no-space test
            const labelPadding = Math.floor((this.maxDataPoints - this.xAxisLabel.length) / 2); // TEMP: adjusted for no-space columns
            output += '\n' + yAxisPadding + '    ' + ' '.repeat(Math.max(0, labelPadding)) + this.xAxisLabel + '\n';
        }

        this.container.textContent = output;

        // Adjust font size to fit width (only once at initialization)
        if (this.autoFitWidth) {
            this.adjustFontSize();
        }

        // Update the external info display
        if (this.useBinMode) {
            const binCounts = this.bins.map(bin => bin.count);
            const scaleFactor = Math.max(1, Math.ceil(maxValue / scale));
            document.getElementById('values').textContent = `[${dataToRender.join(', ')}]`;
            document.getElementById('max-value').textContent = maxValue;
            document.getElementById('scale').textContent = `Min: ${minValue}, Max: ${maxValue}, 1X=${scaleFactor} counts`;
        } else {
            document.getElementById('values').textContent = `[${this.data.join(', ')}]`;
            document.getElementById('max-value').textContent = maxValue;
            document.getElementById('scale').textContent = `Min: ${minValue}, Max: ${maxValue}, Height: ${scale}`;
        }
    }

    /**
     * Update the info display
     * @private
     */
    updateInfo() {
        if (this.useBinMode) {
            const totalCount = this.bins.reduce((sum, bin) => sum + bin.count, 0);
            document.getElementById('count').textContent = totalCount;
        } else {
            document.getElementById('count').textContent = this.data.length;
        }
    }

    /**
     * Initialize the bin system
     * @private
     */
    initializeBins() {
        this.bins = [];
        this.currentBinIndex = -1;
        this.binStartTime = null;
        this.chartStartTime = Date.now();

        // Create first bin
        this.rotateBin();

        // Set up automatic bin rotation check
        this.binCheckInterval = setInterval(() => {
            this.checkBinRotation();
        }, 100); // Check every 100ms for responsiveness
    }

    /**
     * Check if current bin should rotate and create new bin if needed
     * @private
     */
    checkBinRotation() {
        if (!this.useBinMode || !this.binStartTime) return;

        const now = Date.now();
        if ((now - this.binStartTime) >= this.binDuration) {
            this.rotateBin();
        }
    }

    /**
     * Rotate to a new bin, finalizing the current one
     */
    rotateBin() {
        // Finalize current bin if it exists
        if (this.currentBinIndex >= 0) {
            this.bins[this.currentBinIndex].isActive = false;
        }

        // Create new bin
        const newBin = {
            startTime: Date.now(),
            count: 0,
            isActive: true
        };

        this.bins.push(newBin);
        this.currentBinIndex = this.bins.length - 1;
        this.binStartTime = newBin.startTime;

        // Keep only the most recent bins
        if (this.bins.length > this.maxDataPoints) {
            this.bins.shift();
            this.currentBinIndex--;
        }

        // Ensure currentBinIndex points to the last bin (the active one)
        this.currentBinIndex = this.bins.length - 1;

        // Force a render to update the display immediately
        this.render();
        this.updateInfo();
    }

    /**
     * Format X-axis label for a bin based on the configured format
     * @param {number} binIndex - Index of the bin
     * @returns {string} Formatted label
     * @private
     */
    formatBinLabel(binIndex) {
        const bin = this.bins[binIndex];
        if (!bin) return ' ';

        switch (this.xAxisLabelFormat) {
            case 'bins':
                return String(binIndex + 1).padStart(2, ' ');

            case 'timestamps':
                const time = new Date(bin.startTime);
                return time.toLocaleTimeString('en-US', {
                    hour12: false,
                    hour: '2-digit',
                    minute: '2-digit',
                    second: '2-digit'
                }).replace(/:/g, '');

            case 'ranges':
                const startSec = Math.floor((bin.startTime - this.chartStartTime) / 1000);
                const endSec = startSec + Math.floor(this.binDuration / 1000);
                return `${startSec}-${endSec}`;

            case 'elapsed':
            default:
                // For elapsed time, always show time relative to the first bin (index 0)
                // This keeps the leftmost label as 0s and increases to the right
                const firstBinTime = this.bins[0] ? this.bins[0].startTime : this.chartStartTime;
                const elapsedSec = Math.floor((bin.startTime - firstBinTime) / 1000);
                return String(elapsedSec).padStart(2, ' ') + 's';
        }
    }
}
```
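Based on the constructor options and methods above, usage looks roughly like the sketch below. The option values chosen here are arbitrary, and note one coupling visible in the source: `render()` and `updateInfo()` write directly into elements with ids `values`, `max-value`, `scale`, and `count`, so those must exist on the page alongside the chart container.

```javascript
// Illustrative usage sketch; the page must contain #event-rate-chart plus
// #values, #max-value, #scale and #count (see render()/updateInfo() above).
const chart = new ASCIIBarChart('event-rate-chart', {
    maxHeight: 15,
    maxDataPoints: 30,
    title: 'EVENTS / 4s BIN',
    binDuration: 4000,           // each column aggregates 4 seconds of addValue() calls
    xAxisLabelFormat: 'elapsed', // leftmost label 0s, increasing to the right
});

// Each call adds to the currently active bin; bins rotate automatically on the
// 100 ms interval started by initializeBins().
setInterval(() => chart.addValue(1), 500); // simulate roughly 2 events per second

// chart.clear(); // resets bins, counters and the display
```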
```diff
@@ -9,11 +9,21 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 BUILD_DIR="$SCRIPT_DIR/build"
 DOCKERFILE="$SCRIPT_DIR/Dockerfile.alpine-musl"
 
-echo "=========================================="
-echo "C-Relay MUSL Static Binary Builder"
-echo "=========================================="
+# Parse command line arguments
+DEBUG_BUILD=false
+if [[ "$1" == "--debug" ]]; then
+    DEBUG_BUILD=true
+    echo "=========================================="
+    echo "C-Relay MUSL Static Binary Builder (DEBUG MODE)"
+    echo "=========================================="
+else
+    echo "=========================================="
+    echo "C-Relay MUSL Static Binary Builder (PRODUCTION MODE)"
+    echo "=========================================="
+fi
 echo "Project directory: $SCRIPT_DIR"
 echo "Build directory: $BUILD_DIR"
+echo "Debug build: $DEBUG_BUILD"
 echo ""
 
 # Create build directory
@@ -83,6 +93,7 @@ echo ""
 
 $DOCKER_CMD build \
     --platform "$PLATFORM" \
+    --build-arg DEBUG_BUILD=$DEBUG_BUILD \
    -f "$DOCKERFILE" \
    -t c-relay-musl-builder:latest \
    --progress=plain \
@@ -105,6 +116,7 @@ echo "=========================================="
 # Build the builder stage to extract the binary
 $DOCKER_CMD build \
     --platform "$PLATFORM" \
+    --build-arg DEBUG_BUILD=$DEBUG_BUILD \
     --target builder \
     -f "$DOCKERFILE" \
     -t c-relay-static-builder-stage:latest \
@@ -179,11 +191,16 @@ echo "=========================================="
 echo "Binary: $BUILD_DIR/$OUTPUT_NAME"
 echo "Size: $(du -h "$BUILD_DIR/$OUTPUT_NAME" | cut -f1)"
 echo "Platform: $PLATFORM"
+if [ "$DEBUG_BUILD" = true ]; then
+    echo "Build Type: DEBUG (with symbols, no optimization)"
+else
+    echo "Build Type: PRODUCTION (optimized, stripped)"
+fi
 if [ "$TRULY_STATIC" = true ]; then
-    echo "Type: Fully static binary (Alpine MUSL-based)"
+    echo "Linkage: Fully static binary (Alpine MUSL-based)"
     echo "Portability: Works on ANY Linux distribution"
 else
-    echo "Type: Static binary (may have minimal dependencies)"
+    echo "Linkage: Static binary (may have minimal dependencies)"
 fi
 echo ""
 echo "✓ Build complete!"
```
@@ -1,3 +1,19 @@
#!/bin/bash

# Copy the binary to the deployment location
cp build/c_relay_x86 ~/Storage/c_relay/crelay

# Copy the local service file to systemd
sudo cp systemd/c-relay-local.service /etc/systemd/system/

# Reload systemd daemon to pick up the new service
sudo systemctl daemon-reload

# Enable the service (if not already enabled)
sudo systemctl enable c-relay-local.service

# Restart the service
sudo systemctl restart c-relay-local.service

# Show service status
sudo systemctl status c-relay-local.service --no-pager -l

298 docs/libwebsockets_proper_pattern.md Normal file
@@ -0,0 +1,298 @@
# Libwebsockets Proper Pattern - Message Queue Design

## Problem Analysis

### Current Violation
We're calling `lws_write()` directly from multiple code paths:
1. **Event broadcast** (subscriptions.c:667) - when events arrive
2. **OK responses** (websockets.c:855) - when processing EVENT messages
3. **EOSE responses** (websockets.c:976) - when processing REQ messages
4. **COUNT responses** (websockets.c:1922) - when processing COUNT messages

This violates libwebsockets' design pattern, which requires:
- **`lws_write()` ONLY called from `LWS_CALLBACK_SERVER_WRITEABLE`**
- Application queues messages and requests a writeable callback
- Libwebsockets handles write timing and socket buffer management

### Consequences of Violation
1. Partial writes when socket buffer is full
2. Multiple concurrent write attempts before callback fires
3. "write already pending" errors with single buffer
4. Frame corruption from interleaved partial writes
5. "Invalid frame header" errors on client side

## Correct Architecture

### Message Queue Pattern

```
┌──────────────────────────────────────────────────────────────┐
│                      Application Layer                       │
├──────────────────────────────────────────────────────────────┤
│                                                              │
│  Event Arrives → Queue Message → Request Writeable Callback  │
│  REQ Received  → Queue EOSE    → Request Writeable Callback  │
│  EVENT Received→ Queue OK      → Request Writeable Callback  │
│  COUNT Received→ Queue COUNT   → Request Writeable Callback  │
│                                                              │
└──────────────────────────────────────────────────────────────┘
                               ↓
                 lws_callback_on_writable(wsi)
                               ↓
┌──────────────────────────────────────────────────────────────┐
│                LWS_CALLBACK_SERVER_WRITEABLE                 │
├──────────────────────────────────────────────────────────────┤
│                                                              │
│  1. Dequeue next message from queue                          │
│  2. Call lws_write() with message data                       │
│  3. If queue not empty, request another callback             │
│                                                              │
└──────────────────────────────────────────────────────────────┘
                               ↓
                    libwebsockets handles:
                    - Socket buffer management
                    - Partial write handling
                    - Frame atomicity
```

## Data Structures

### Message Queue Node
```c
typedef struct message_queue_node {
    unsigned char* data;               // Message data (with LWS_PRE space)
    size_t length;                     // Message length (without LWS_PRE)
    enum lws_write_protocol type;      // LWS_WRITE_TEXT, etc.
    struct message_queue_node* next;
} message_queue_node_t;
```

### Per-Session Data Updates
```c
struct per_session_data {
    // ... existing fields (including the pthread_mutex_t session_lock
    //     used by the functions below) ...

    // Message queue (replaces single buffer)
    message_queue_node_t* message_queue_head;
    message_queue_node_t* message_queue_tail;
    int message_queue_count;
    int writeable_requested;  // Flag to prevent duplicate requests
};
```

## Implementation Functions

### 1. Queue Message (Application Layer)
```c
int queue_message(struct lws* wsi, struct per_session_data* pss,
                  const char* message, size_t length,
                  enum lws_write_protocol type)
{
    // Allocate node
    message_queue_node_t* node = malloc(sizeof(message_queue_node_t));
    if (!node) return -1;

    // Allocate buffer with LWS_PRE space
    node->data = malloc(LWS_PRE + length);
    if (!node->data) {
        free(node);
        return -1;
    }
    memcpy(node->data + LWS_PRE, message, length);
    node->length = length;
    node->type = type;
    node->next = NULL;

    // Add to queue (FIFO)
    pthread_mutex_lock(&pss->session_lock);
    if (!pss->message_queue_head) {
        pss->message_queue_head = node;
        pss->message_queue_tail = node;
    } else {
        pss->message_queue_tail->next = node;
        pss->message_queue_tail = node;
    }
    pss->message_queue_count++;

    // Decide under the lock whether a writeable callback is needed;
    // checking the flag outside the lock would race with other threads
    int need_callback = !pss->writeable_requested;
    if (need_callback) {
        pss->writeable_requested = 1;
    }
    pthread_mutex_unlock(&pss->session_lock);

    // Request writeable callback (only if not already requested)
    if (need_callback) {
        lws_callback_on_writable(wsi);
    }

    return 0;
}
```

### 2. Process Queue (Writeable Callback)
```c
int process_message_queue(struct lws* wsi, struct per_session_data* pss)
{
    pthread_mutex_lock(&pss->session_lock);

    // Get next message from queue
    message_queue_node_t* node = pss->message_queue_head;
    if (!node) {
        pss->writeable_requested = 0;
        pthread_mutex_unlock(&pss->session_lock);
        return 0; // Queue empty
    }

    // Remove from queue
    pss->message_queue_head = node->next;
    if (!pss->message_queue_head) {
        pss->message_queue_tail = NULL;
    }
    pss->message_queue_count--;

    pthread_mutex_unlock(&pss->session_lock);

    // Write message (libwebsockets handles partial writes)
    int result = lws_write(wsi, node->data + LWS_PRE, node->length, node->type);

    // Free node
    free(node->data);
    free(node);

    // If queue not empty, request another callback
    pthread_mutex_lock(&pss->session_lock);
    if (pss->message_queue_head) {
        lws_callback_on_writable(wsi);
    } else {
        pss->writeable_requested = 0;
    }
    pthread_mutex_unlock(&pss->session_lock);

    return (result < 0) ? -1 : 0;
}
```

## Refactoring Changes

### Before (WRONG - Direct Write)
```c
// websockets.c:855 - OK response
int write_result = lws_write(wsi, buf + LWS_PRE, response_len, LWS_WRITE_TEXT);
if (write_result < 0) {
    DEBUG_ERROR("Write failed");
} else if ((size_t)write_result != response_len) {
    // Partial write - queue remaining data
    queue_websocket_write(wsi, pss, ...);
}
```

### After (CORRECT - Queue Message)
```c
// websockets.c:855 - OK response
queue_message(wsi, pss, response_str, response_len, LWS_WRITE_TEXT);
// That's it! Writeable callback will handle the actual write
```

### Before (WRONG - Direct Write in Broadcast)
```c
// subscriptions.c:667 - EVENT broadcast
int write_result = lws_write(current_temp->wsi, buf + LWS_PRE, msg_len, LWS_WRITE_TEXT);
if (write_result < 0) {
    DEBUG_ERROR("Write failed");
} else if ((size_t)write_result != msg_len) {
    queue_websocket_write(...);
}
```

### After (CORRECT - Queue Message)
```c
// subscriptions.c:667 - EVENT broadcast
struct per_session_data* pss = lws_wsi_user(current_temp->wsi);
queue_message(current_temp->wsi, pss, msg_str, msg_len, LWS_WRITE_TEXT);
// Writeable callback will handle the actual write
```

## Benefits of Correct Pattern

1. **No Partial Write Handling Needed**
   - Libwebsockets handles partial writes internally
   - We just queue complete messages

2. **No "Write Already Pending" Errors**
   - Queue can hold unlimited messages
   - Each processed sequentially from callback

3. **Thread Safety**
   - Queue operations protected by session lock
   - Write only from single callback thread

4. **Frame Atomicity**
   - Libwebsockets ensures complete frame transmission
   - No interleaved partial writes

5. **Simpler Code**
   - No complex partial write state machine
   - Just queue and forget

6. **Better Performance**
   - Libwebsockets optimizes write timing
   - Batches writes when socket ready

## Migration Steps

1. ✅ Identify all `lws_write()` call sites
2. ✅ Confirm violation of libwebsockets pattern
3. ⏳ Design message queue structure
4. ⏳ Implement `queue_message()` function
5. ⏳ Implement `process_message_queue()` function
6. ⏳ Update `per_session_data` structure
7. ⏳ Refactor OK response to use queue
8. ⏳ Refactor EOSE response to use queue
9. ⏳ Refactor COUNT response to use queue
10. ⏳ Refactor EVENT broadcast to use queue
11. ⏳ Update `LWS_CALLBACK_SERVER_WRITEABLE` handler (see the sketch after this list)
12. ⏳ Add queue cleanup in `LWS_CALLBACK_CLOSED`
13. ⏳ Remove old partial write code
14. ⏳ Test with rapid multiple events
15. ⏳ Test with large events (>4KB)
16. ⏳ Test under load
17. ⏳ Verify no frame errors

|
||||
|
||||
### Test 1: Multiple Rapid Events
|
||||
```bash
|
||||
# Send 10 events rapidly to same client
|
||||
for i in {1..10}; do
|
||||
echo '["EVENT",{"kind":1,"content":"test'$i'","created_at":'$(date +%s)',...}]' | \
|
||||
websocat ws://localhost:8888 &
|
||||
done
|
||||
```
|
||||
|
||||
**Expected**: All events queued and sent sequentially, no errors
|
||||
|
||||
### Test 2: Large Events
|
||||
```bash
|
||||
# Send event >4KB (forces multiple socket writes)
|
||||
nak event --content "$(head -c 5000 /dev/urandom | base64)" | \
|
||||
websocat ws://localhost:8888
|
||||
```
|
||||
|
||||
**Expected**: Event queued, libwebsockets handles partial writes internally
|
||||
|
||||
### Test 3: Concurrent Connections
|
||||
```bash
|
||||
# 100 concurrent connections, each sending events
|
||||
for i in {1..100}; do
|
||||
(echo '["REQ","sub'$i'",{}]'; sleep 1) | websocat ws://localhost:8888 &
|
||||
done
|
||||
```
|
||||
|
||||
**Expected**: All subscriptions work, events broadcast correctly
|
||||
|
||||
## Success Criteria
|
||||
|
||||
- ✅ No `lws_write()` calls outside `LWS_CALLBACK_SERVER_WRITEABLE`
|
||||
- ✅ No "write already pending" errors in logs
|
||||
- ✅ No "Invalid frame header" errors on client side
|
||||
- ✅ All messages delivered in correct order
|
||||
- ✅ Large events (>4KB) handled correctly
|
||||
- ✅ Multiple rapid events to same client work
|
||||
- ✅ Concurrent connections stable under load
|
||||
|
||||
## References
|
||||
|
||||
- [libwebsockets documentation](https://libwebsockets.org/lws-api-doc-main/html/index.html)
|
||||
- [LWS_CALLBACK_SERVER_WRITEABLE](https://libwebsockets.org/lws-api-doc-main/html/group__callback-when-writeable.html)
|
||||
- [lws_callback_on_writable()](https://libwebsockets.org/lws-api-doc-main/html/group__callback-when-writeable.html#ga96f3ad8e1e2c3e0c8e0b0e5e5e5e5e5e)
|
||||
601 docs/monitoring_simplified_plan.md Normal file
@@ -0,0 +1,601 @@
# Simplified Monitoring Implementation Plan
## Kind 34567 Event Kind Distribution Reporting

**Date:** 2025-10-16
**Status:** Implementation Ready

---

## Overview

Simplified real-time monitoring system that:
- Reports event kind distribution (which includes total event count)
- Uses kind 34567 addressable events with `d=event_kinds`
- Is controlled by two config variables
- Is enabled on-demand when the admin logs in
- Uses simple throttling to prevent performance impact

---

## Configuration Variables

### Database Config Table

Add two new configuration keys:

```sql
INSERT INTO config (key, value, data_type, description, category) VALUES
('kind_34567_reporting_enabled', 'false', 'boolean',
 'Enable/disable kind 34567 event kind distribution reporting', 'monitoring'),
('kind_34567_reporting_throttling_sec', '5', 'integer',
 'Minimum seconds between kind 34567 reports (throttling)', 'monitoring');
```

### Configuration Access

```c
// In src/monitoring.c or src/api.c
int is_monitoring_enabled(void) {
    return get_config_bool("kind_34567_reporting_enabled", 0);
}

int get_monitoring_throttle_seconds(void) {
    return get_config_int("kind_34567_reporting_throttling_sec", 5);
}
```

---

## Event Structure

### Kind 34567 Event Format

```json
{
  "id": "<event_id>",
  "pubkey": "<relay_pubkey>",
  "created_at": 1697123456,
  "kind": 34567,
  "content": "{\"data_type\":\"event_kinds\",\"timestamp\":1697123456,\"data\":{\"total_events\":125000,\"distribution\":[{\"kind\":1,\"count\":45000,\"percentage\":36.0},{\"kind\":3,\"count\":12500,\"percentage\":10.0}]}}",
  "tags": [
    ["d", "event_kinds"],
    ["relay", "<relay_pubkey>"]
  ],
  "sig": "<signature>"
}
```

### Content JSON Structure

```json
{
  "data_type": "event_kinds",
  "timestamp": 1697123456,
  "data": {
    "total_events": 125000,
    "distribution": [
      {
        "kind": 1,
        "count": 45000,
        "percentage": 36.0
      },
      {
        "kind": 3,
        "count": 12500,
        "percentage": 10.0
      }
    ]
  },
  "metadata": {
    "query_time_ms": 18
  }
}
```

---

## Implementation

### File Structure

```
src/
  monitoring.h    # New file - monitoring system header
  monitoring.c    # New file - monitoring implementation
  main.c          # Modified - add trigger hook
  config.c        # Modified - add config keys (or use migration)
```

### 1. Header File: `src/monitoring.h`

```c
#ifndef MONITORING_H
#define MONITORING_H

#include <time.h>
#include <cjson/cJSON.h>

// Initialize monitoring system
int init_monitoring_system(void);

// Cleanup monitoring system
void cleanup_monitoring_system(void);

// Called when an event is stored (from main.c)
void monitoring_on_event_stored(void);

// Enable/disable monitoring (called from admin API)
int set_monitoring_enabled(int enabled);

// Get monitoring status
int is_monitoring_enabled(void);

// Get throttle interval
int get_monitoring_throttle_seconds(void);

#endif /* MONITORING_H */
```

### 2. Implementation: `src/monitoring.c`

```c
#include "monitoring.h"
#include "config.h"
#include "debug.h"
#include "../nostr_core_lib/nostr_core/nostr_core.h"
#include <sqlite3.h>
#include <string.h>
#include <time.h>

// External references
extern sqlite3* g_db;
extern int broadcast_event_to_subscriptions(cJSON* event);
extern int store_event(cJSON* event);
extern const char* get_config_value(const char* key);
extern int get_config_bool(const char* key, int default_value);
extern int get_config_int(const char* key, int default_value);
extern char* get_relay_private_key(void);

// Throttling state
static time_t last_report_time = 0;

// Initialize monitoring system
int init_monitoring_system(void) {
    DEBUG_LOG("Monitoring system initialized");
    last_report_time = 0;
    return 0;
}

// Cleanup monitoring system
void cleanup_monitoring_system(void) {
    DEBUG_LOG("Monitoring system cleaned up");
}

// Check if monitoring is enabled
int is_monitoring_enabled(void) {
    return get_config_bool("kind_34567_reporting_enabled", 0);
}

// Get throttle interval
int get_monitoring_throttle_seconds(void) {
    return get_config_int("kind_34567_reporting_throttling_sec", 5);
}

// Enable/disable monitoring
int set_monitoring_enabled(int enabled) {
    // Update config table
    const char* value = enabled ? "true" : "false";

    // This would call update_config_in_table() or similar
    // For now, assume we have a function to update config
    extern int update_config_in_table(const char* key, const char* value);
    return update_config_in_table("kind_34567_reporting_enabled", value);
}

// Query event kind distribution from database
static char* query_event_kind_distribution(void) {
    if (!g_db) {
        DEBUG_ERROR("Database not available for monitoring query");
        return NULL;
    }

    struct timespec start_time;
    clock_gettime(CLOCK_MONOTONIC, &start_time);

    // Query total events
    sqlite3_stmt* stmt;
    int total_events = 0;

    if (sqlite3_prepare_v2(g_db, "SELECT COUNT(*) FROM events", -1, &stmt, NULL) == SQLITE_OK) {
        if (sqlite3_step(stmt) == SQLITE_ROW) {
            total_events = sqlite3_column_int(stmt, 0);
        }
        sqlite3_finalize(stmt);
    }

    // Query kind distribution
    cJSON* response = cJSON_CreateObject();
    cJSON_AddStringToObject(response, "data_type", "event_kinds");
    cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));

    cJSON* data = cJSON_CreateObject();
    cJSON_AddNumberToObject(data, "total_events", total_events);

    cJSON* distribution = cJSON_CreateArray();

    const char* sql =
        "SELECT kind, COUNT(*) as count, "
        "ROUND(COUNT(*) * 100.0 / (SELECT COUNT(*) FROM events), 2) as percentage "
        "FROM events GROUP BY kind ORDER BY count DESC";

    if (sqlite3_prepare_v2(g_db, sql, -1, &stmt, NULL) == SQLITE_OK) {
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            cJSON* kind_obj = cJSON_CreateObject();
            cJSON_AddNumberToObject(kind_obj, "kind", sqlite3_column_int(stmt, 0));
            cJSON_AddNumberToObject(kind_obj, "count", sqlite3_column_int64(stmt, 1));
            cJSON_AddNumberToObject(kind_obj, "percentage", sqlite3_column_double(stmt, 2));
            cJSON_AddItemToArray(distribution, kind_obj);
        }
        sqlite3_finalize(stmt);
    }

    cJSON_AddItemToObject(data, "distribution", distribution);
    cJSON_AddItemToObject(response, "data", data);

    // Calculate query time
    struct timespec end_time;
    clock_gettime(CLOCK_MONOTONIC, &end_time);
    double query_time_ms = (end_time.tv_sec - start_time.tv_sec) * 1000.0 +
                           (end_time.tv_nsec - start_time.tv_nsec) / 1000000.0;

    cJSON* metadata = cJSON_CreateObject();
    cJSON_AddNumberToObject(metadata, "query_time_ms", query_time_ms);
    cJSON_AddItemToObject(response, "metadata", metadata);

    char* json_string = cJSON_Print(response);
    cJSON_Delete(response);

    return json_string;
}

// Generate and broadcast kind 34567 event
static int generate_monitoring_event(const char* json_content) {
    if (!json_content) return -1;

    // Get relay keys
    const char* relay_pubkey = get_config_value("relay_pubkey");
    char* relay_privkey_hex = get_relay_private_key();
    if (!relay_pubkey || !relay_privkey_hex) {
        if (relay_privkey_hex) free(relay_privkey_hex);
        DEBUG_ERROR("Could not get relay keys for monitoring event");
        return -1;
    }

    // Convert relay private key to bytes
    unsigned char relay_privkey[32];
    if (nostr_hex_to_bytes(relay_privkey_hex, relay_privkey, sizeof(relay_privkey)) != 0) {
        free(relay_privkey_hex);
        DEBUG_ERROR("Failed to convert relay private key");
        return -1;
    }
    free(relay_privkey_hex);

    // Create tags array
    cJSON* tags = cJSON_CreateArray();

    // d tag for addressable event
    cJSON* d_tag = cJSON_CreateArray();
    cJSON_AddItemToArray(d_tag, cJSON_CreateString("d"));
    cJSON_AddItemToArray(d_tag, cJSON_CreateString("event_kinds"));
    cJSON_AddItemToArray(tags, d_tag);

    // relay tag
    cJSON* relay_tag = cJSON_CreateArray();
    cJSON_AddItemToArray(relay_tag, cJSON_CreateString("relay"));
    cJSON_AddItemToArray(relay_tag, cJSON_CreateString(relay_pubkey));
    cJSON_AddItemToArray(tags, relay_tag);

    // Create and sign event
    cJSON* event = nostr_create_and_sign_event(
        34567,          // kind
        json_content,   // content
        tags,           // tags
        relay_privkey,  // private key
        time(NULL)      // timestamp
    );

    if (!event) {
        DEBUG_ERROR("Failed to create and sign monitoring event");
        return -1;
    }

    // Broadcast to subscriptions
    broadcast_event_to_subscriptions(event);

    // Store in database
    int result = store_event(event);

    cJSON_Delete(event);

    return result;
}

// Called when an event is stored
void monitoring_on_event_stored(void) {
    // Check if monitoring is enabled
    if (!is_monitoring_enabled()) {
        return;
    }

    // Check throttling
    time_t now = time(NULL);
    int throttle_seconds = get_monitoring_throttle_seconds();

    if (now - last_report_time < throttle_seconds) {
        return; // Too soon, skip this update
    }

    // Query event kind distribution
    char* json_content = query_event_kind_distribution();
    if (!json_content) {
        DEBUG_ERROR("Failed to query event kind distribution");
        return;
    }

    // Generate and broadcast monitoring event
    int result = generate_monitoring_event(json_content);
    free(json_content);

    if (result == 0) {
        last_report_time = now;
        DEBUG_LOG("Generated kind 34567 monitoring event");
    } else {
        DEBUG_ERROR("Failed to generate monitoring event");
    }
}
```

### 3. Integration: Modify `src/main.c`

Add monitoring hook to event storage:

```c
// At top of file
#include "monitoring.h"

// In main() function, after init_database()
if (init_monitoring_system() != 0) {
    DEBUG_WARN("Failed to initialize monitoring system");
    // Continue anyway - monitoring is optional
}

// In store_event() function, after successful storage
int store_event(cJSON* event) {
    // ... existing code ...

    if (rc != SQLITE_DONE) {
        // ... error handling ...
    }

    free(tags_json);

    // Trigger monitoring update
    monitoring_on_event_stored();

    return 0;
}

// In cleanup section of main()
cleanup_monitoring_system();
```

### 4. Admin API: Enable/Disable Monitoring

Add admin commands to enable/disable monitoring (in `src/dm_admin.c` or `src/api.c`):

```c
// Handle admin command to enable monitoring
if (strcmp(command, "enable_monitoring") == 0) {
    set_monitoring_enabled(1);
    send_nip17_response(sender_pubkey,
                        "✅ Kind 34567 monitoring enabled",
                        error_msg, sizeof(error_msg));
    return 0;
}

// Handle admin command to disable monitoring
if (strcmp(command, "disable_monitoring") == 0) {
    set_monitoring_enabled(0);
    send_nip17_response(sender_pubkey,
                        "🔴 Kind 34567 monitoring disabled",
                        error_msg, sizeof(error_msg));
    return 0;
}

// Handle admin command to set throttle interval
if (strncmp(command, "set_monitoring_throttle ", 24) == 0) {
    int seconds = atoi(command + 24);
    if (seconds >= 1 && seconds <= 3600) {
        char value[16];
        snprintf(value, sizeof(value), "%d", seconds);
        update_config_in_table("kind_34567_reporting_throttling_sec", value);

        char response[128];
        snprintf(response, sizeof(response),
                 "✅ Monitoring throttle set to %d seconds", seconds);
        send_nip17_response(sender_pubkey, response, error_msg, sizeof(error_msg));
    }
    return 0;
}
```

---

## Frontend Integration

### Admin Dashboard Subscription

```javascript
// When admin logs in to dashboard
async function enableMonitoring() {
    // Send admin command to enable monitoring
    await sendAdminCommand(['enable_monitoring']);

    // Subscribe to kind 34567 events
    const subscription = {
        kinds: [34567],
        authors: [relayPubkey],
        "#d": ["event_kinds"]
    };

    relay.subscribe([subscription], {
        onevent: (event) => {
            handleMonitoringEvent(event);
        }
    });
}

// Handle incoming monitoring events
function handleMonitoringEvent(event) {
    const content = JSON.parse(event.content);

    if (content.data_type === 'event_kinds') {
        updateEventKindsChart(content.data);
        updateTotalEventsDisplay(content.data.total_events);
    }
}

// When admin logs out or closes dashboard
async function disableMonitoring() {
    await sendAdminCommand(['disable_monitoring']);
}
```

### Display Event Kind Distribution

```javascript
function updateEventKindsChart(data) {
    const { total_events, distribution } = data;

    // Update total events display
    document.getElementById('total-events').textContent =
        total_events.toLocaleString();

    // Update chart/table with distribution
    const tableBody = document.getElementById('kind-distribution-table');
    tableBody.innerHTML = '';

    distribution.forEach(item => {
        const row = document.createElement('tr');
        row.innerHTML = `
            <td>Kind ${item.kind}</td>
            <td>${item.count.toLocaleString()}</td>
            <td>${item.percentage}%</td>
        `;
        tableBody.appendChild(row);
    });
}
```

---

## Configuration Migration

### Add to Schema or Migration Script

```sql
-- Add monitoring configuration
INSERT INTO config (key, value, data_type, description, category) VALUES
('kind_34567_reporting_enabled', 'false', 'boolean',
 'Enable/disable kind 34567 event kind distribution reporting', 'monitoring'),
('kind_34567_reporting_throttling_sec', '5', 'integer',
 'Minimum seconds between kind 34567 reports (throttling)', 'monitoring');
```

Or add to existing config initialization in `src/config.c`.

---

## Testing

### 1. Enable Monitoring

```bash
# Via admin command (NIP-17 DM)
echo '["enable_monitoring"]' | nak event --kind 14 --content - ws://localhost:8888
```

### 2. Subscribe to Monitoring Events

```bash
# Subscribe to kind 34567 events
nak req --kinds 34567 --authors <relay_pubkey> ws://localhost:8888
```

### 3. Generate Events

```bash
# Send some test events to trigger monitoring
for i in {1..10}; do
    nak event -c "Test event $i" ws://localhost:8888
    sleep 1
done
```

### 4. Verify Monitoring Events

You should see kind 34567 events every 5 seconds (or the configured throttle interval) with the event kind distribution.

---

## Performance Impact

### With 3 events/second (relay.damus.io scale)

**Query execution**:
- Frequency: Every 5 seconds (throttled)
- Query time: ~700ms (for 1M events)
- Overhead: 700ms / 5000ms = 14% (acceptable)

**Per-event overhead**:
- Check if enabled: < 0.01ms
- Check throttle: < 0.01ms
- Total: < 0.02ms per event (negligible)

**Overall impact**: < 1% on event processing, 14% on query thread (separate from event processing)

---

## Future Enhancements

Once this is working, it's easy to add:

1. **More data types**: Add `d=connections`, `d=subscriptions`, etc.
2. **Materialized counters**: Optimize queries for very large databases
3. **Historical data**: Store monitoring events for trending
4. **Alerts**: Trigger on thresholds (e.g., > 90% capacity)

---

## Summary

This simplified plan provides:

✅ **Single data type**: Event kind distribution (includes total events)
✅ **Two config variables**: Enable/disable and throttle control
✅ **On-demand activation**: Enabled when admin logs in
✅ **Simple throttling**: Prevents performance impact
✅ **Clean implementation**: ~200 lines of code
✅ **Easy to extend**: Add more data types later

**Estimated implementation time**: 4-6 hours

**Files to create/modify**:
- Create: `src/monitoring.h` (~30 lines)
- Create: `src/monitoring.c` (~200 lines)
- Modify: `src/main.c` (~10 lines)
- Modify: `src/config.c` or migration (~5 lines)
- Modify: `src/dm_admin.c` or `src/api.c` (~30 lines)
- Create: `api/monitoring.js` (frontend, ~100 lines)

**Total new code**: ~375 lines
1189 docs/realtime_monitoring_design.md Normal file
File diff suppressed because it is too large
325 docs/relay_traffic_measurement.md Normal file
@@ -0,0 +1,325 @@
# Relay Traffic Measurement Guide

## Measuring Real-World Relay Traffic

To validate our performance assumptions, here are commands to measure actual event rates from live relays.

---

## Command: Count Events Over 1 Minute

### Basic Command

```bash
# Count events from relay.damus.io over 60 seconds
timeout 60 nak req -s $(date +%s) --stream wss://relay.damus.io | wc -l
```

This will:
1. Subscribe to all new events (`-s $(date +%s)` = since now)
2. Stream for 60 seconds (`timeout 60`)
3. Count the lines (each line = 1 event)

### With Event Rate Display

```bash
# Show events per second in real-time
timeout 60 nak req -s $(date +%s) --stream wss://relay.damus.io | \
    pv -l -i 1 -r > /dev/null
```

This displays:
- Total events received
- Current rate (events/second)
- Average rate

### With Detailed Statistics

```bash
# Count events and calculate statistics
echo "Measuring relay traffic for 60 seconds..."
START=$(date +%s)
COUNT=$(timeout 60 nak req -s $START --stream wss://relay.damus.io | wc -l)
END=$(date +%s)
DURATION=$((END - START))

echo "Results:"
echo "  Total events: $COUNT"
echo "  Duration: ${DURATION}s"
echo "  Events/second: $(echo "scale=2; $COUNT / $DURATION" | bc)"
echo "  Events/minute: $COUNT"
```

### With Event Kind Distribution

```bash
# Count events by kind over 60 seconds
timeout 60 nak req -s $(date +%s) --stream wss://relay.damus.io | \
    jq -r '.kind' | \
    sort | uniq -c | sort -rn
```

Output example:
```
  45 1    # 45 text notes
  12 3    # 12 contact lists
   8 7    # 8 reactions
   3 6    # 3 reposts
```

### With Timestamp Analysis

```bash
# Show event timestamps and calculate intervals
timeout 60 nak req -s $(date +%s) --stream wss://relay.damus.io | \
    jq -r '.created_at' | \
    awk 'NR>1 {print $1-prev} {prev=$1}' | \
    awk '{sum+=$1; count++} END {
        print "Average interval:", sum/count, "seconds"
        print "Events per second:", count/sum
    }'
```

---

## Testing Multiple Relays

### Compare Traffic Across Relays

```bash
#!/bin/bash
# test_relay_traffic.sh

RELAYS=(
    "wss://relay.damus.io"
    "wss://nos.lol"
    "wss://relay.nostr.band"
    "wss://nostr.wine"
)

DURATION=60

echo "Measuring relay traffic for ${DURATION} seconds..."
echo ""

for relay in "${RELAYS[@]}"; do
    echo "Testing: $relay"
    count=$(timeout $DURATION nak req -s $(date +%s) --stream "$relay" 2>/dev/null | wc -l)
    rate=$(echo "scale=2; $count / $DURATION" | bc)
    echo "  Events: $count"
    echo "  Rate: ${rate}/sec"
    echo ""
done
```

---

## Expected Results (Based on Real Measurements)

### relay.damus.io (Large Public Relay)
- **Expected rate**: 0.5-2 events/second
- **60-second count**: 30-120 events
- **Peak times**: Higher during US daytime hours

### nos.lol (Medium Public Relay)
- **Expected rate**: 0.2-0.8 events/second
- **60-second count**: 12-48 events

### Personal/Small Relays
- **Expected rate**: 0.01-0.1 events/second
- **60-second count**: 1-6 events

---

## Using Results to Validate Performance Assumptions

After measuring your relay's traffic:

1. **Calculate average events/second**:
   ```
   events_per_second = total_events / 60
   ```

2. **Estimate query overhead**:
   ```
   # For 100k event database:
   query_time = 70ms
   overhead_percentage = (query_time * events_per_second) / 1000 * 100

   # Example: 0.5 events/sec
   overhead = (70 * 0.5) / 1000 * 100 = 3.5%
   ```

3. **Determine if optimization is needed**:
   - < 5% overhead: No optimization needed
   - 5-20% overhead: Consider 1-second throttling
   - > 20% overhead: Use materialized counters

---

## Real-Time Monitoring During Development

### Monitor Your Own Relay

```bash
# Watch events in real-time with count
nak req -s $(date +%s) --stream ws://localhost:8888 | \
    awk '{count++; print count, $0}'
```

### Monitor with Event Details

```bash
# Show event kind and pubkey for each event
nak req -s $(date +%s) --stream ws://localhost:8888 | \
    jq -r '"[\(.kind)] \(.pubkey[0:8])... \(.content[0:50])"'
```

### Continuous Traffic Monitoring

```bash
# Monitor traffic in 10-second windows
while true; do
    echo "=== $(date) ==="
    count=$(timeout 10 nak req -s $(date +%s) --stream ws://localhost:8888 | wc -l)
    rate=$(echo "scale=2; $count / 10" | bc)
    echo "Events: $count (${rate}/sec)"
    sleep 1
done
```

---

## Performance Testing Commands

### Simulate Load

```bash
# Send test events to measure query performance
for i in {1..100}; do
    nak event -c "Test event $i" ws://localhost:8888
    sleep 0.1  # 10 events/second
done
```

### Measure Query Response Time

```bash
# Time how long queries take with current database
time sqlite3 your_relay.db "SELECT COUNT(*) FROM events"
time sqlite3 your_relay.db "SELECT kind, COUNT(*) FROM events GROUP BY kind"
```

---

## Automated Traffic Analysis Script

Save this as `analyze_relay_traffic.sh`:

```bash
#!/bin/bash
# Comprehensive relay traffic analysis

RELAY="${1:-ws://localhost:8888}"
DURATION="${2:-60}"

echo "Analyzing relay: $RELAY"
echo "Duration: ${DURATION} seconds"
echo ""

# Collect events
TMPFILE=$(mktemp)
timeout $DURATION nak req -s $(date +%s) --stream "$RELAY" > "$TMPFILE" 2>/dev/null

# Calculate statistics
TOTAL=$(wc -l < "$TMPFILE")
RATE=$(echo "scale=2; $TOTAL / $DURATION" | bc)

echo "=== Traffic Statistics ==="
echo "Total events: $TOTAL"
echo "Events/second: $RATE"
echo "Events/minute: $(echo "$TOTAL * 60 / $DURATION" | bc)"
echo ""

echo "=== Event Kind Distribution ==="
jq -r '.kind' "$TMPFILE" | sort | uniq -c | sort -rn | head -10
echo ""

echo "=== Top Publishers ==="
jq -r '.pubkey[0:16]' "$TMPFILE" | sort | uniq -c | sort -rn | head -5
echo ""

echo "=== Performance Estimate ==="
echo "For 100k event database:"
echo "  Query time: ~70ms"
echo "  Overhead: $(echo "scale=2; 70 * $RATE / 10" | bc)%"
echo ""

# Cleanup
rm "$TMPFILE"
```

Usage:
```bash
chmod +x analyze_relay_traffic.sh
./analyze_relay_traffic.sh wss://relay.damus.io 60
```

---

## Interpreting Results

### Low Traffic (< 0.1 events/sec)
- **Typical for**: Personal relays, small communities
- **Recommendation**: Trigger on every event, no optimization
- **Expected overhead**: < 1%

### Medium Traffic (0.1-0.5 events/sec)
- **Typical for**: Medium public relays
- **Recommendation**: Trigger on every event, consider throttling if database > 100k
- **Expected overhead**: 1-5%

### High Traffic (0.5-2 events/sec)
- **Typical for**: Large public relays
- **Recommendation**: Use 1-second throttling
- **Expected overhead**: 5-20% without throttling, < 1% with throttling

### Very High Traffic (> 2 events/sec)
- **Typical for**: Major public relays (rare)
- **Recommendation**: Use materialized counters
- **Expected overhead**: > 20% without optimization

---

## Continuous Monitoring in Production

### Add to Relay Startup

```bash
# In your relay startup script
echo "Starting traffic monitoring..."
nohup bash -c 'while true; do
    count=$(timeout 60 nak req -s $(date +%s) --stream ws://localhost:8888 2>/dev/null | wc -l)
    echo "$(date +%Y-%m-%d\ %H:%M:%S) - Events/min: $count" >> traffic.log
done' &
```

### Analyze Historical Traffic

```bash
# View traffic trends (field 5 of each log line is the per-minute count)
cat traffic.log | awk '{print $5}' | \
    awk '{sum+=$1; count++} END {print "Average:", sum/count, "events/min"}'
```

---

## Conclusion

Use these commands to:
1. ✅ Measure real-world traffic on your relay
2. ✅ Validate performance assumptions
3. ✅ Determine if optimization is needed
4. ✅ Monitor traffic trends over time

**Remember**: Most relays will measure < 1 event/second, making the simple "trigger on every event" approach perfectly viable.
630 docs/sql_query_admin_api.md Normal file
@@ -0,0 +1,630 @@
# SQL Query Admin API Design

## Overview

This document describes the design for a general-purpose SQL query interface for the C-Relay admin API. This allows administrators to execute read-only SQL queries against the relay database through cryptographically signed kind 23456 events with NIP-44 encrypted command arrays.

## Security Model

### Authentication
- All queries must be sent as kind 23456 events with NIP-44 encrypted content
- Events must be signed by the admin's private key
- Admin pubkey verified against `config.admin_pubkey`
- Follows the same authentication pattern as existing admin commands

### Query Restrictions
While authentication is cryptographically secure, we implement defensive safeguards:

1. **Read-Only Enforcement**
   - Only SELECT statements allowed
   - Block: INSERT, UPDATE, DELETE, DROP, CREATE, ALTER, PRAGMA (write operations)
   - Allow: SELECT, WITH (for CTEs)

2. **Resource Limits**
   - Query timeout: 5 seconds (configurable)
   - Result row limit: 1000 rows (configurable)
   - Result size limit: 1MB (configurable)

3. **Query Logging**
   - All queries logged with timestamp, admin pubkey, execution time
   - Failed queries logged with error message

## Command Format

### Admin Event Structure (Kind 23456)
```json
{
  "id": "event_id",
  "pubkey": "admin_public_key",
  "created_at": 1234567890,
  "kind": 23456,
  "content": "AqHBUgcM7dXFYLQuDVzGwMST1G8jtWYyVvYxXhVGEu4nAb4LVw...",
  "tags": [
    ["p", "relay_public_key"]
  ],
  "sig": "event_signature"
}
```

The `content` field contains a NIP-44 encrypted JSON array:
```json
["sql_query", "SELECT * FROM events LIMIT 10"]
```

### Response Format (Kind 23457)
```json
["EVENT", "temp_sub_id", {
  "id": "response_event_id",
  "pubkey": "relay_public_key",
  "created_at": 1234567890,
  "kind": 23457,
  "content": "nip44_encrypted_content",
  "tags": [
    ["p", "admin_public_key"],
    ["e", "request_event_id"]
  ],
  "sig": "response_event_signature"
}]
```

The `content` field contains NIP-44 encrypted JSON:
```json
{
  "query_type": "sql_query",
  "request_id": "request_event_id",
  "timestamp": 1234567890,
  "query": "SELECT * FROM events LIMIT 10",
  "execution_time_ms": 45,
  "row_count": 10,
  "columns": ["id", "pubkey", "created_at", "kind", "content"],
  "rows": [
    ["abc123...", "def456...", 1234567890, 1, "Hello world"],
    ...
  ]
}
```

**Note:** The response includes the request event ID in two places:
1. **In tags**: `["e", "request_event_id"]` - Standard Nostr convention for event references
2. **In content**: `"request_id": "request_event_id"` - For easy access after decryption

### Error Response Format (Kind 23457)
```json
["EVENT", "temp_sub_id", {
  "id": "response_event_id",
  "pubkey": "relay_public_key",
  "created_at": 1234567890,
  "kind": 23457,
  "content": "nip44_encrypted_content",
  "tags": [
    ["p", "admin_public_key"],
    ["e", "request_event_id"]
  ],
  "sig": "response_event_signature"
}]
```

The `content` field contains NIP-44 encrypted JSON:
```json
{
  "query_type": "sql_query",
  "request_id": "request_event_id",
  "timestamp": 1234567890,
  "query": "DELETE FROM events",
  "status": "error",
  "error": "Query blocked: DELETE statements not allowed",
  "error_type": "blocked_statement"
}
```

## Available Database Tables and Views

### Core Tables
- **events** - All Nostr events (id, pubkey, created_at, kind, content, tags, sig)
- **config** - Configuration key-value pairs
- **auth_rules** - Authentication and authorization rules
- **subscription_events** - Subscription lifecycle events
- **event_broadcasts** - Event broadcast log

### Useful Views
- **recent_events** - Last 1000 events
- **event_stats** - Event statistics by type
- **configuration_events** - Kind 33334 configuration events
- **subscription_analytics** - Subscription metrics by date
- **active_subscriptions_log** - Currently active subscriptions
- **event_kinds_view** - Event distribution by kind
- **top_pubkeys_view** - Top 10 pubkeys by event count
- **time_stats_view** - Time-based statistics (24h, 7d, 30d)

## Implementation Plan

### Backend (dm_admin.c)

#### 1. Query Validation Function
```c
int validate_sql_query(const char* query, char* error_msg, size_t error_size);
```
- Check for blocked keywords (case-insensitive)
- Validate query syntax (basic checks)
- Return 0 on success, -1 on failure
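
The function is only declared above; as a concrete illustration, here is a minimal sketch of the read-only check, using the blocked-keyword list from the Security Model section. The word-boundary helper is an addition so that column names such as `created_at` do not trip the `CREATE` filter; a production version would also strip string literals and comments before scanning:

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

// Returns 1 if `word` occurs in `text` as a standalone token,
// so "CREATE" does not match the "CREATED_AT" column name
static int contains_word(const char* text, const char* word)
{
    size_t wlen = strlen(word);
    const char* p = text;
    while ((p = strstr(p, word)) != NULL) {
        int start_ok = (p == text) ||
                       (!isalnum((unsigned char)p[-1]) && p[-1] != '_');
        char after = p[wlen];
        int end_ok = (!isalnum((unsigned char)after) && after != '_');
        if (start_ok && end_ok) {
            return 1;
        }
        p++;
    }
    return 0;
}

// Minimal validation sketch; returns 0 on success, -1 on failure
int validate_sql_query(const char* query, char* error_msg, size_t error_size)
{
    static const char* blocked[] = {
        "INSERT", "UPDATE", "DELETE", "DROP", "CREATE",
        "ALTER", "PRAGMA", "ATTACH", "DETACH", "VACUUM", NULL
    };

    // Uppercase working copy for case-insensitive matching
    char upper[4096];
    size_t len = strlen(query);
    if (len >= sizeof(upper)) {
        snprintf(error_msg, error_size, "Query too long");
        return -1;
    }
    for (size_t i = 0; i <= len; i++) {
        upper[i] = (char)toupper((unsigned char)query[i]);
    }

    // Must start with SELECT, or WITH (for CTEs)
    const char* p = upper;
    while (*p && isspace((unsigned char)*p)) {
        p++;
    }
    if (strncmp(p, "SELECT", 6) != 0 && strncmp(p, "WITH", 4) != 0) {
        snprintf(error_msg, error_size, "Only SELECT statements allowed");
        return -1;
    }

    // Reject write keywords anywhere in the statement
    for (int i = 0; blocked[i] != NULL; i++) {
        if (contains_word(upper, blocked[i])) {
            snprintf(error_msg, error_size,
                     "Query blocked: %s statements not allowed", blocked[i]);
            return -1;
        }
    }

    return 0;
}
```
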
#### 2. Query Execution Function
```c
char* execute_sql_query(const char* query, const char* request_id,
                        char* error_msg, size_t error_size);
```
- Set query timeout using sqlite3_busy_timeout()
- Execute query with row/size limits
- Build JSON response with results, tagged with the request event ID
- Log query execution
- Return JSON string or NULL on error
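
Likewise, a compact sketch of the executor described by these bullets, assuming `g_db` and the cJSON conventions used elsewhere in this codebase. The 1000-row cap is hard-coded here where a real implementation would read the configurable limits; `execution_time_ms` and query logging are omitted for brevity:

```c
#include <sqlite3.h>
#include <time.h>

char* execute_sql_query(const char* query, const char* request_id,
                        char* error_msg, size_t error_size)
{
    extern sqlite3* g_db;
    sqlite3_stmt* stmt = NULL;

    sqlite3_busy_timeout(g_db, 5000);  // 5-second timeout

    if (sqlite3_prepare_v2(g_db, query, -1, &stmt, NULL) != SQLITE_OK) {
        snprintf(error_msg, error_size, "SQL error: %s", sqlite3_errmsg(g_db));
        return NULL;
    }

    cJSON* response = cJSON_CreateObject();
    cJSON_AddStringToObject(response, "query_type", "sql_query");
    cJSON_AddStringToObject(response, "request_id", request_id);
    cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
    cJSON_AddStringToObject(response, "query", query);

    // Column names
    int ncols = sqlite3_column_count(stmt);
    cJSON* columns = cJSON_CreateArray();
    for (int i = 0; i < ncols; i++) {
        cJSON_AddItemToArray(columns, cJSON_CreateString(sqlite3_column_name(stmt, i)));
    }
    cJSON_AddItemToObject(response, "columns", columns);

    // Rows, capped at the configured row limit
    cJSON* rows = cJSON_CreateArray();
    int row_count = 0;
    while (sqlite3_step(stmt) == SQLITE_ROW && row_count < 1000) {
        cJSON* row = cJSON_CreateArray();
        for (int i = 0; i < ncols; i++) {
            switch (sqlite3_column_type(stmt, i)) {
            case SQLITE_INTEGER:
                cJSON_AddItemToArray(row, cJSON_CreateNumber((double)sqlite3_column_int64(stmt, i)));
                break;
            case SQLITE_FLOAT:
                cJSON_AddItemToArray(row, cJSON_CreateNumber(sqlite3_column_double(stmt, i)));
                break;
            case SQLITE_NULL:
                cJSON_AddItemToArray(row, cJSON_CreateNull());
                break;
            default: {  // TEXT and BLOB rendered as text
                const unsigned char* txt = sqlite3_column_text(stmt, i);
                cJSON_AddItemToArray(row, cJSON_CreateString(txt ? (const char*)txt : ""));
                break;
            }
            }
        }
        cJSON_AddItemToArray(rows, row);
        row_count++;
    }
    sqlite3_finalize(stmt);

    cJSON_AddItemToObject(response, "rows", rows);
    cJSON_AddNumberToObject(response, "row_count", row_count);

    char* json = cJSON_PrintUnformatted(response);
    cJSON_Delete(response);
    return json;
}
```
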
#### 3. Command Handler Integration
|
||||
Add to `process_dm_admin_command()` in [`dm_admin.c`](src/dm_admin.c:131):
|
||||
```c
|
||||
else if (strcmp(command_type, "sql_query") == 0) {
|
||||
const char* query = get_tag_value(event, "sql_query", 1);
|
||||
if (!query) {
|
||||
DEBUG_ERROR("DM Admin: Missing sql_query parameter");
|
||||
snprintf(error_message, error_size, "invalid: missing SQL query");
|
||||
} else {
|
||||
result = handle_sql_query_unified(event, query, error_message, error_size, wsi);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Add unified handler function:
|
||||
```c
|
||||
int handle_sql_query_unified(cJSON* event, const char* query,
|
||||
char* error_message, size_t error_size,
|
||||
struct lws* wsi) {
|
||||
// Get request event ID for response correlation
|
||||
cJSON* request_id_obj = cJSON_GetObjectItem(event, "id");
|
||||
if (!request_id_obj || !cJSON_IsString(request_id_obj)) {
|
||||
snprintf(error_message, error_size, "Missing request event ID");
|
||||
return -1;
|
||||
}
|
||||
const char* request_id = cJSON_GetStringValue(request_id_obj);
|
||||
|
||||
// Validate query
|
||||
if (!validate_sql_query(query, error_message, error_size)) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
// Execute query and include request_id in result
|
||||
char* result_json = execute_sql_query(query, request_id, error_message, error_size);
|
||||
if (!result_json) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
// Send response as kind 23457 event with request ID in tags
|
||||
cJSON* sender_pubkey_obj = cJSON_GetObjectItem(event, "pubkey");
|
||||
if (!sender_pubkey_obj || !cJSON_IsString(sender_pubkey_obj)) {
|
||||
free(result_json);
|
||||
snprintf(error_message, error_size, "Missing sender pubkey");
|
||||
return -1;
|
||||
}
|
||||
|
||||
const char* sender_pubkey = cJSON_GetStringValue(sender_pubkey_obj);
|
||||
int send_result = send_admin_response(sender_pubkey, result_json, request_id,
|
||||
error_message, error_size, wsi);
|
||||
free(result_json);
|
||||
|
||||
return send_result;
|
||||
}
|
||||
```
|
||||
|
||||
### Frontend (api/index.html)
|
||||
|
||||
#### SQL Query Section UI
|
||||
Add to [`api/index.html`](api/index.html:1):
|
||||
```html
|
||||
<section id="sql-query-section" class="admin-section">
|
||||
<h2>SQL Query Console</h2>
|
||||
|
||||
<div class="query-selector">
|
||||
<label for="query-dropdown">Quick Queries & History:</label>
|
||||
<select id="query-dropdown" onchange="loadSelectedQuery()">
|
||||
<option value="">-- Select a query --</option>
|
||||
<optgroup label="Common Queries">
|
||||
<option value="recent_events">Recent Events</option>
|
||||
<option value="event_stats">Event Statistics</option>
|
||||
<option value="subscriptions">Active Subscriptions</option>
|
||||
<option value="top_pubkeys">Top Pubkeys</option>
|
||||
<option value="event_kinds">Event Kinds Distribution</option>
|
||||
<option value="time_stats">Time-based Statistics</option>
|
||||
</optgroup>
|
||||
<optgroup label="Query History" id="history-group">
|
||||
<!-- Dynamically populated from localStorage -->
|
||||
</optgroup>
|
||||
</select>
|
||||
</div>
|
||||
|
||||
<div class="query-editor">
|
||||
<label for="sql-input">SQL Query:</label>
|
||||
<textarea id="sql-input" rows="5" placeholder="SELECT * FROM events LIMIT 10"></textarea>
|
||||
<div class="query-actions">
|
||||
<button onclick="executeSqlQuery()" class="primary-button">Execute Query</button>
|
||||
<button onclick="clearSqlQuery()">Clear</button>
|
||||
<button onclick="clearQueryHistory()" class="danger-button">Clear History</button>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="query-results">
|
||||
<h3>Results</h3>
|
||||
<div id="query-info" class="info-box"></div>
|
||||
<div id="query-table" class="table-container"></div>
|
||||
</div>
|
||||
</section>
|
||||
```
|
||||
|
||||
#### JavaScript Functions (api/index.js)
|
||||
Add to [`api/index.js`](api/index.js:1):
|
||||
```javascript
|
||||
// Predefined query templates
|
||||
const SQL_QUERY_TEMPLATES = {
|
||||
recent_events: "SELECT id, pubkey, created_at, kind, substr(content, 1, 50) as content FROM events ORDER BY created_at DESC LIMIT 20",
|
||||
event_stats: "SELECT * FROM event_stats",
|
||||
subscriptions: "SELECT * FROM active_subscriptions_log ORDER BY created_at DESC",
|
||||
top_pubkeys: "SELECT * FROM top_pubkeys_view",
|
||||
event_kinds: "SELECT * FROM event_kinds_view ORDER BY count DESC",
|
||||
time_stats: "SELECT * FROM time_stats_view"
|
||||
};
|
||||
|
||||
// Query history management (localStorage)
|
||||
const QUERY_HISTORY_KEY = 'c_relay_sql_history';
|
||||
const MAX_HISTORY_ITEMS = 20;
|
||||
|
||||
// Load query history from localStorage
|
||||
function loadQueryHistory() {
|
||||
try {
|
||||
const history = localStorage.getItem(QUERY_HISTORY_KEY);
|
||||
return history ? JSON.parse(history) : [];
|
||||
} catch (e) {
|
||||
console.error('Failed to load query history:', e);
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
// Save query to history
|
||||
function saveQueryToHistory(query) {
|
||||
if (!query || query.trim().length === 0) return;
|
||||
|
||||
try {
|
||||
let history = loadQueryHistory();
|
||||
|
||||
// Remove duplicate if exists
|
||||
history = history.filter(q => q !== query);
|
||||
|
||||
// Add to beginning
|
||||
history.unshift(query);
|
||||
|
||||
// Limit size
|
||||
if (history.length > MAX_HISTORY_ITEMS) {
|
||||
history = history.slice(0, MAX_HISTORY_ITEMS);
|
||||
}
|
||||
|
||||
localStorage.setItem(QUERY_HISTORY_KEY, JSON.stringify(history));
|
||||
updateQueryDropdown();
|
||||
} catch (e) {
|
||||
console.error('Failed to save query history:', e);
|
||||
}
|
||||
}
|
||||
|
||||
// Clear query history
|
||||
function clearQueryHistory() {
|
||||
if (confirm('Clear all query history?')) {
|
||||
localStorage.removeItem(QUERY_HISTORY_KEY);
|
||||
updateQueryDropdown();
|
||||
}
|
||||
}
|
||||
|
||||
// Update dropdown with history
|
||||
function updateQueryDropdown() {
|
||||
const historyGroup = document.getElementById('history-group');
|
||||
if (!historyGroup) return;
|
||||
|
||||
// Clear existing history options
|
||||
historyGroup.innerHTML = '';
|
||||
|
||||
const history = loadQueryHistory();
|
||||
if (history.length === 0) {
|
||||
const option = document.createElement('option');
|
||||
option.value = '';
|
||||
option.textContent = '(no history)';
|
||||
        option.disabled = true;
        historyGroup.appendChild(option);
        return;
    }

    history.forEach((query, index) => {
        const option = document.createElement('option');
        option.value = `history_${index}`;
        // Truncate long queries for display
        const displayQuery = query.length > 60 ? query.substring(0, 60) + '...' : query;
        option.textContent = displayQuery;
        option.dataset.query = query;
        historyGroup.appendChild(option);
    });
}

// Load selected query from dropdown
function loadSelectedQuery() {
    const dropdown = document.getElementById('query-dropdown');
    const selectedValue = dropdown.value;

    if (!selectedValue) return;

    let query = '';

    // Check if it's a template
    if (SQL_QUERY_TEMPLATES[selectedValue]) {
        query = SQL_QUERY_TEMPLATES[selectedValue];
    }
    // Check if it's from history
    else if (selectedValue.startsWith('history_')) {
        const selectedOption = dropdown.options[dropdown.selectedIndex];
        query = selectedOption.dataset.query;
    }

    if (query) {
        document.getElementById('sql-input').value = query;
    }

    // Reset dropdown to placeholder
    dropdown.value = '';
}

// Initialize query history on page load
document.addEventListener('DOMContentLoaded', function() {
    updateQueryDropdown();
});

// Clear the SQL query input
function clearSqlQuery() {
    document.getElementById('sql-input').value = '';
    document.getElementById('query-info').innerHTML = '';
    document.getElementById('query-table').innerHTML = '';
}

// Track pending SQL queries by request ID
const pendingSqlQueries = new Map();

// Execute SQL query via admin API
async function executeSqlQuery() {
    const query = document.getElementById('sql-input').value;
    if (!query.trim()) {
        showError('Please enter a SQL query');
        return;
    }

    try {
        // Show loading state
        document.getElementById('query-info').innerHTML = '<div class="loading">Executing query...</div>';
        document.getElementById('query-table').innerHTML = '';

        // Save to history (before execution, so it's saved even if query fails)
        saveQueryToHistory(query.trim());

        // Send query as kind 23456 admin command
        const command = ["sql_query", query];
        const requestEvent = await sendAdminCommand(command);

        // Store query info for when response arrives
        if (requestEvent && requestEvent.id) {
            pendingSqlQueries.set(requestEvent.id, {
                query: query,
                timestamp: Date.now()
            });
        }

        // Note: Response will be handled by the event listener
        // which will call displaySqlQueryResults() when response arrives
    } catch (error) {
        showError('Failed to execute query: ' + error.message);
    }
}

// Handle SQL query response (called by event listener)
function handleSqlQueryResponse(response) {
    // Check if this is a response to one of our queries
    if (response.request_id && pendingSqlQueries.has(response.request_id)) {
        const queryInfo = pendingSqlQueries.get(response.request_id);
        pendingSqlQueries.delete(response.request_id);

        // Display results
        displaySqlQueryResults(response);
    }
}

// Display SQL query results
function displaySqlQueryResults(response) {
    const infoDiv = document.getElementById('query-info');
    const tableDiv = document.getElementById('query-table');

    if (response.status === 'error' || response.error) {
        infoDiv.innerHTML = `<div class="error-message">❌ ${response.error || 'Query failed'}</div>`;
        tableDiv.innerHTML = '';
        return;
    }

    // Show query info with request ID for debugging
    const rowCount = response.row_count || 0;
    const execTime = response.execution_time_ms || 0;
    const requestId = response.request_id ? response.request_id.substring(0, 8) + '...' : 'unknown';
    infoDiv.innerHTML = `
        <div class="query-info-success">
            <span>✅ Query executed successfully</span>
            <span>Rows: ${rowCount}</span>
            <span>Execution Time: ${execTime}ms</span>
            <span class="request-id" title="${response.request_id || ''}">Request: ${requestId}</span>
        </div>
    `;

    // Build results table
    if (response.rows && response.rows.length > 0) {
        let html = '<table class="sql-results-table"><thead><tr>';
        response.columns.forEach(col => {
            html += `<th>${escapeHtml(col)}</th>`;
        });
        html += '</tr></thead><tbody>';

        response.rows.forEach(row => {
            html += '<tr>';
            row.forEach(cell => {
                const cellValue = cell === null ? '<em>NULL</em>' : escapeHtml(String(cell));
                html += `<td>${cellValue}</td>`;
            });
            html += '</tr>';
        });

        html += '</tbody></table>';
        tableDiv.innerHTML = html;
    } else {
        tableDiv.innerHTML = '<p class="no-results">No results returned</p>';
    }
}

// Helper function to escape HTML
function escapeHtml(text) {
    const div = document.createElement('div');
    div.textContent = text;
    return div.innerHTML;
}
```

## Example Queries

### Subscription Statistics
```sql
SELECT
    date,
    subscriptions_created,
    subscriptions_ended,
    avg_duration_seconds,
    unique_clients
FROM subscription_analytics
ORDER BY date DESC
LIMIT 7;
```

### Event Distribution by Kind
```sql
SELECT kind, count, percentage
FROM event_kinds_view
ORDER BY count DESC;
```

### Recent Events by Specific Pubkey
```sql
SELECT id, created_at, kind, content
FROM events
WHERE pubkey = 'abc123...'
ORDER BY created_at DESC
LIMIT 20;
```

### Active Subscriptions with Details
```sql
SELECT
    subscription_id,
    client_ip,
    events_sent,
    duration_seconds,
    filter_json
FROM active_subscriptions_log
ORDER BY created_at DESC;
```

### Database Size and Event Count
```sql
SELECT
    (SELECT COUNT(*) FROM events) AS total_events,
    (SELECT COUNT(*) FROM subscription_events) AS total_subscriptions,
    (SELECT COUNT(*) FROM auth_rules WHERE active = 1) AS active_rules;
```

## Configuration Options

Add to config table:
```sql
INSERT INTO config (key, value, data_type, description, category) VALUES
('sql_query_enabled', 'true', 'boolean', 'Enable SQL query admin API', 'admin'),
('sql_query_timeout', '5', 'integer', 'Query timeout in seconds', 'admin'),
('sql_query_row_limit', '1000', 'integer', 'Maximum rows per query', 'admin'),
('sql_query_size_limit', '1048576', 'integer', 'Maximum result size in bytes', 'admin'),
('sql_query_log_enabled', 'true', 'boolean', 'Log all SQL queries', 'admin');
```
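
How the relay reads these values back follows from the config table layout. A minimal sketch under that assumption, reusing the global `g_db` handle seen elsewhere in the codebase; the helper name `get_config_int` is illustrative, not the relay's actual API:

```c
#include <sqlite3.h>

extern sqlite3* g_db;   /* global database handle used throughout the relay */

/* Illustrative helper: fetch an integer config value by key, falling back
 * to a default when the key is missing or the lookup fails. */
static int get_config_int(const char* key, int fallback) {
    sqlite3_stmt* stmt = NULL;
    int value = fallback;
    if (sqlite3_prepare_v2(g_db, "SELECT value FROM config WHERE key = ?1",
                           -1, &stmt, NULL) == SQLITE_OK) {
        sqlite3_bind_text(stmt, 1, key, -1, SQLITE_STATIC);
        if (sqlite3_step(stmt) == SQLITE_ROW) {
            value = sqlite3_column_int(stmt, 0);   /* TEXT is coerced to int */
        }
        sqlite3_finalize(stmt);
    }
    return value;
}

/* Usage: int row_limit = get_config_int("sql_query_row_limit", 1000); */
```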

## Security Considerations

### What This Protects Against
1. **Unauthorized Access** - Only the admin can execute queries (cryptographic verification)
2. **Data Modification** - Read-only enforcement prevents accidental or malicious changes
3. **Resource Exhaustion** - Timeouts and limits prevent DoS
4. **Audit Trail** - All queries are logged for security review

### What This Does NOT Protect Against
1. **Admin Compromise** - If the admin private key is stolen, the attacker gains full read access
2. **Information Disclosure** - The admin can read all data (by design)
3. **Complex Attacks** - Sophisticated SQL injection might bypass simple keyword blocking (see the sketch below)
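
To make the limitation in point 3 concrete, here is a rough sketch of what simple keyword blocking can look like; the function shape and keyword list are assumptions for illustration, not the relay's actual validator:

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>
#include <strings.h>   /* strncasecmp */

/* Token-based check: a plain substring match on "CREATE" would falsely
 * reject column names like created_at, so scan identifier tokens instead.
 * Comments and string literals are NOT handled here - exactly the kind of
 * gap a sophisticated attack could exploit. */
static int is_blocked_keyword(const char* tok, size_t len) {
    static const char* blocked[] = { "INSERT", "UPDATE", "DELETE", "DROP",
                                     "CREATE", "ALTER", "ATTACH", "PRAGMA",
                                     "REPLACE", "VACUUM" };
    for (size_t i = 0; i < sizeof(blocked) / sizeof(blocked[0]); i++) {
        if (strlen(blocked[i]) == len && strncasecmp(tok, blocked[i], len) == 0)
            return 1;
    }
    return 0;
}

static int validate_sql_query_sketch(const char* query, char* err, size_t err_size) {
    const char* p = query;
    while (*p && isspace((unsigned char)*p)) p++;
    if (strncasecmp(p, "SELECT", 6) != 0) {
        snprintf(err, err_size, "only SELECT statements are allowed");
        return -1;
    }
    while (*p) {
        while (*p && !isalpha((unsigned char)*p)) p++;      /* skip delimiters */
        const char* start = p;
        while (*p && (isalnum((unsigned char)*p) || *p == '_')) p++;
        if (p > start && is_blocked_keyword(start, (size_t)(p - start))) {
            snprintf(err, err_size, "blocked keyword in query");
            return -1;
        }
    }
    return 0;
}
```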

### Recommendations
1. **Secure the Admin Key** - Store the admin private key securely; never commit it to git
2. **Monitor Query Logs** - Review query logs regularly for suspicious activity
3. **Back Up the Database** - Take regular backups in case of issues
4. **Test Queries** - Test complex queries on a development relay first

## Testing Plan

### Unit Tests
1. Query validation (blocked keywords, syntax)
2. Result formatting (JSON structure)
3. Error handling (timeouts, limits)

### Integration Tests
1. Execute queries through NIP-17 DM
2. Verify authentication (admin vs non-admin)
3. Test resource limits (timeout, row limit)
4. Test error responses

### Security Tests
1. Attempt blocked statements (INSERT, DELETE, etc.)
2. Attempt SQL injection patterns
3. Test query timeout with slow queries
4. Test row limit with large result sets

## Future Enhancements

1. **Query History** - Store recent queries for quick re-execution
2. **Query Favorites** - Save frequently used queries
3. **Export Results** - Download results as CSV/JSON
4. **Query Builder** - Visual query builder for common operations
5. **Real-time Updates** - WebSocket updates for live data
6. **Query Sharing** - Share queries with other admins (if multi-admin support is added)

## Migration Path

### Phase 1: Backend Implementation
1. Add query validation function
2. Add query execution function
3. Integrate with NIP-17 command handler
4. Add configuration options
5. Add query logging

### Phase 2: Frontend Implementation
1. Add SQL query section to index.html
2. Add query execution JavaScript
3. Add predefined query templates
4. Add results display formatting

### Phase 3: Testing and Documentation
1. Write unit tests
2. Write integration tests
3. Update user documentation
4. Create query examples guide

### Phase 4: Enhancement
1. Add query history
2. Add export functionality
3. Optimize performance
4. Add more predefined templates
258 docs/sql_test_design.md Normal file
@@ -0,0 +1,258 @@

# SQL Query Test Script Design

## Overview

Test script for validating the SQL query admin API functionality. It covers query validation, execution, error handling, and security features.

## Script: tests/sql_test.sh

### Test Categories

#### 1. Query Validation Tests
- ✅ Valid SELECT queries accepted
- ❌ INSERT statements blocked
- ❌ UPDATE statements blocked
- ❌ DELETE statements blocked
- ❌ DROP statements blocked
- ❌ CREATE statements blocked
- ❌ ALTER statements blocked
- ❌ PRAGMA write operations blocked

#### 2. Query Execution Tests
- ✅ Simple SELECT query
- ✅ SELECT with WHERE clause
- ✅ SELECT with JOIN
- ✅ SELECT with ORDER BY and LIMIT
- ✅ Query against views
- ✅ Query with aggregate functions (COUNT, SUM, AVG)

#### 3. Response Format Tests
- ✅ Response includes request_id
- ✅ Response includes query_type
- ✅ Response includes columns array
- ✅ Response includes rows array
- ✅ Response includes row_count
- ✅ Response includes execution_time_ms

#### 4. Error Handling Tests
- ❌ Invalid SQL syntax
- ❌ Non-existent table
- ❌ Non-existent column
- ❌ Query timeout (if configurable)

#### 5. Security Tests
- ❌ SQL injection attempts blocked
- ❌ Nested query attacks blocked
- ❌ Comment-based attacks blocked

#### 6. Concurrent Query Tests
- ✅ Multiple queries in parallel
- ✅ Responses correctly correlated to requests
## Script Structure

```bash
#!/bin/bash

# SQL Query Admin API Test Script
# Tests the sql_query command functionality

set -e

RELAY_URL="${RELAY_URL:-ws://localhost:8888}"
ADMIN_PRIVKEY="${ADMIN_PRIVKEY:-}"
RELAY_PUBKEY="${RELAY_PUBKEY:-}"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Test counters
TESTS_RUN=0
TESTS_PASSED=0
TESTS_FAILED=0

# Helper functions
print_test() {
    echo -e "${YELLOW}TEST: $1${NC}"
    TESTS_RUN=$((TESTS_RUN + 1))
}

print_pass() {
    echo -e "${GREEN}✓ PASS: $1${NC}"
    TESTS_PASSED=$((TESTS_PASSED + 1))
}

print_fail() {
    echo -e "${RED}✗ FAIL: $1${NC}"
    TESTS_FAILED=$((TESTS_FAILED + 1))
}

# Send SQL query command
send_sql_query() {
    local query="$1"
    # Implementation using nostr CLI tools or curl
    # Returns response JSON
}

# Test functions
test_valid_select() {
    print_test "Valid SELECT query"
    local response=$(send_sql_query "SELECT * FROM events LIMIT 1")
    if echo "$response" | grep -q '"query_type":"sql_query"'; then
        print_pass "Valid SELECT accepted"
    else
        print_fail "Valid SELECT rejected"
    fi
}

test_blocked_insert() {
    print_test "INSERT statement blocked"
    local response=$(send_sql_query "INSERT INTO events VALUES (...)")
    if echo "$response" | grep -q '"error"'; then
        print_pass "INSERT correctly blocked"
    else
        print_fail "INSERT not blocked"
    fi
}

# ... more test functions ...

# Main test execution
main() {
    echo "================================"
    echo "SQL Query Admin API Tests"
    echo "================================"
    echo ""

    # Check prerequisites
    if [ -z "$ADMIN_PRIVKEY" ]; then
        echo "Error: ADMIN_PRIVKEY not set"
        exit 1
    fi

    # Run test suites
    echo "1. Query Validation Tests"
    test_valid_select
    test_blocked_insert
    test_blocked_update
    test_blocked_delete
    test_blocked_drop

    echo ""
    echo "2. Query Execution Tests"
    test_simple_select
    test_select_with_where
    test_select_with_join
    test_select_views

    echo ""
    echo "3. Response Format Tests"
    test_response_format
    test_request_id_correlation

    echo ""
    echo "4. Error Handling Tests"
    test_invalid_syntax
    test_nonexistent_table

    echo ""
    echo "5. Security Tests"
    test_sql_injection

    echo ""
    echo "6. Concurrent Query Tests"
    test_concurrent_queries

    # Print summary
    echo ""
    echo "================================"
    echo "Test Summary"
    echo "================================"
    echo "Tests Run: $TESTS_RUN"
    echo "Tests Passed: $TESTS_PASSED"
    echo "Tests Failed: $TESTS_FAILED"

    if [ $TESTS_FAILED -eq 0 ]; then
        echo -e "${GREEN}All tests passed!${NC}"
        exit 0
    else
        echo -e "${RED}Some tests failed${NC}"
        exit 1
    fi
}

main "$@"
```

## Test Data Setup

The script should work with the existing relay database without requiring special test data, using:
- Existing events table
- Existing views (event_stats, recent_events, etc.)
- Existing config table

## Usage

```bash
# Set environment variables
export ADMIN_PRIVKEY="your_admin_private_key_hex"
export RELAY_PUBKEY="relay_public_key_hex"
export RELAY_URL="ws://localhost:8888"

# Run tests
./tests/sql_test.sh

# Run specific test category
./tests/sql_test.sh validation
./tests/sql_test.sh security
```

## Integration with CI/CD

The script should:
- Return exit code 0 on success, 1 on failure
- Output TAP (Test Anything Protocol) format for CI integration
- Be runnable in automated test pipelines
- Not require manual intervention

## Dependencies

- `bash` (version 4+)
- `curl` or `websocat` for WebSocket communication
- `jq` for JSON parsing
- Nostr CLI tools (optional, for event signing)
- Running c-relay instance

## Example Output
```
================================
SQL Query Admin API Tests
================================

1. Query Validation Tests
TEST: Valid SELECT query
✓ PASS: Valid SELECT accepted
TEST: INSERT statement blocked
✓ PASS: INSERT correctly blocked
TEST: UPDATE statement blocked
✓ PASS: UPDATE correctly blocked

2. Query Execution Tests
TEST: Simple SELECT query
✓ PASS: Query executed successfully
TEST: SELECT with WHERE clause
✓ PASS: WHERE clause works correctly

...

================================
Test Summary
================================
Tests Run: 24
Tests Passed: 24
Tests Failed: 0
All tests passed!
```
209 docs/subscription_matching_debug_plan.md Normal file
@@ -0,0 +1,209 @@

# Subscription Matching Debug Plan

## Problem

The relay is not matching kind 1059 (NIP-17 gift wrap) events to subscriptions, even though a subscription exists with a `kinds:[1059]` filter. The log shows:
```
Event broadcast complete: 0 subscriptions matched
```

But we have this subscription:
```
sub:3 146.70.187.119 0x78edc9b43210 8m 27s kinds:[1059], since:10/23/2025, 4:27:59 PM, limit:50
```
## Investigation Strategy

### 1. Add Debug Output to `event_matches_filter()` (lines 386-564)
Add debug logging at each filter check to trace the matching logic:

- **Entry point**: Log the event kind and filter being tested
- **Kinds filter check** (lines 392-415): Log whether a kinds filter exists, the event kind value, and each filter kind being compared
- **Authors filter check** (lines 417-442): Log if an authors filter exists and matching results
- **IDs filter check** (lines 444-469): Log if an IDs filter exists and matching results
- **Since filter check** (lines 471-482): Log the event timestamp vs the filter's since value
- **Until filter check** (lines 484-495): Log the event timestamp vs the filter's until value
- **Tag filters check** (lines 497-561): Log tag filter matching details
- **Exit point**: Log whether the overall filter matched

### 2. Add Debug Output to `event_matches_subscription()` (lines 567-581)
Add logging to show:
- How many filters are in the subscription
- Which filter (if any) matched
- Overall subscription match result

### 3. Add Debug Output to `broadcast_event_to_subscriptions()` (lines 584-726)
Add logging to show:
- The event being broadcast (kind, id, created_at)
- Total number of active subscriptions being checked
- How many subscriptions matched after the first pass

### 4. Key Areas to Focus On

Based on the code analysis, the most likely issues are:

1. **Kind matching logic** (lines 392-415): The event kind might not be extracted correctly, or the comparison might be failing
2. **Since timestamp** (lines 471-482): The subscription has a `since` filter - if the event timestamp is before this, it won't match
3. **Event structure**: The event JSON might not have the expected structure

### 5. Specific Debug Additions

#### In `event_matches_filter()` at line 386:
```c
// Add at start of function
cJSON* event_kind_obj = cJSON_GetObjectItem(event, "kind");
cJSON* event_id_obj = cJSON_GetObjectItem(event, "id");
cJSON* event_created_at_obj = cJSON_GetObjectItem(event, "created_at");

DEBUG_TRACE("FILTER_MATCH: Testing event kind=%d id=%.8s created_at=%ld",
            event_kind_obj ? (int)cJSON_GetNumberValue(event_kind_obj) : -1,
            event_id_obj && cJSON_IsString(event_id_obj) ? cJSON_GetStringValue(event_id_obj) : "null",
            event_created_at_obj ? (long)cJSON_GetNumberValue(event_created_at_obj) : 0);
```

#### In kinds filter check (after line 392):
```c
if (filter->kinds && cJSON_IsArray(filter->kinds)) {
    DEBUG_TRACE("FILTER_MATCH: Checking kinds filter with %d kinds", cJSON_GetArraySize(filter->kinds));

    cJSON* event_kind = cJSON_GetObjectItem(event, "kind");
    if (!event_kind || !cJSON_IsNumber(event_kind)) {
        DEBUG_WARN("FILTER_MATCH: Event has no valid kind field");
        return 0;
    }

    int event_kind_val = (int)cJSON_GetNumberValue(event_kind);
    DEBUG_TRACE("FILTER_MATCH: Event kind=%d", event_kind_val);

    int kind_match = 0;
    cJSON* kind_item = NULL;
    cJSON_ArrayForEach(kind_item, filter->kinds) {
        if (cJSON_IsNumber(kind_item)) {
            int filter_kind = (int)cJSON_GetNumberValue(kind_item);
            DEBUG_TRACE("FILTER_MATCH: Comparing event kind %d with filter kind %d", event_kind_val, filter_kind);
            if (filter_kind == event_kind_val) {
                kind_match = 1;
                DEBUG_TRACE("FILTER_MATCH: Kind matched!");
                break;
            }
        }
    }

    if (!kind_match) {
        DEBUG_TRACE("FILTER_MATCH: No kind match, filter rejected");
        return 0;
    }
    DEBUG_TRACE("FILTER_MATCH: Kinds filter passed");
}
```

#### In since filter check (after line 472):
```c
if (filter->since > 0) {
    cJSON* event_created_at = cJSON_GetObjectItem(event, "created_at");
    if (!event_created_at || !cJSON_IsNumber(event_created_at)) {
        DEBUG_WARN("FILTER_MATCH: Event has no valid created_at field");
        return 0;
    }

    long event_timestamp = (long)cJSON_GetNumberValue(event_created_at);
    DEBUG_TRACE("FILTER_MATCH: Checking since filter: event_ts=%ld filter_since=%ld",
                event_timestamp, filter->since);

    if (event_timestamp < filter->since) {
        DEBUG_TRACE("FILTER_MATCH: Event too old (before since), filter rejected");
        return 0;
    }
    DEBUG_TRACE("FILTER_MATCH: Since filter passed");
}
```

#### At end of `event_matches_filter()` (before line 563):
```c
DEBUG_TRACE("FILTER_MATCH: All filters passed, event matches!");
return 1; // All filters passed
```

#### In `event_matches_subscription()` at line 567:
```c
int event_matches_subscription(cJSON* event, subscription_t* subscription) {
    if (!event || !subscription || !subscription->filters) {
        return 0;
    }

    DEBUG_TRACE("SUB_MATCH: Testing subscription '%s'", subscription->id);

    int filter_num = 0;
    subscription_filter_t* filter = subscription->filters;
    while (filter) {
        filter_num++;
        DEBUG_TRACE("SUB_MATCH: Testing filter #%d", filter_num);

        if (event_matches_filter(event, filter)) {
            DEBUG_TRACE("SUB_MATCH: Filter #%d matched! Subscription '%s' matches",
                        filter_num, subscription->id);
            return 1; // Match found (OR logic)
        }
        filter = filter->next;
    }

    DEBUG_TRACE("SUB_MATCH: No filters matched for subscription '%s'", subscription->id);
    return 0; // No filters matched
}
```

#### In `broadcast_event_to_subscriptions()` at line 584:
```c
int broadcast_event_to_subscriptions(cJSON* event) {
    if (!event) {
        return 0;
    }

    // Log event details
    cJSON* event_kind = cJSON_GetObjectItem(event, "kind");
    cJSON* event_id = cJSON_GetObjectItem(event, "id");
    cJSON* event_created_at = cJSON_GetObjectItem(event, "created_at");

    DEBUG_TRACE("BROADCAST: Event kind=%d id=%.8s created_at=%ld",
                event_kind ? (int)cJSON_GetNumberValue(event_kind) : -1,
                event_id && cJSON_IsString(event_id) ? cJSON_GetStringValue(event_id) : "null",
                event_created_at ? (long)cJSON_GetNumberValue(event_created_at) : 0);

    // ... existing expiration check code ...

    // After line 611 (before pthread_mutex_lock):
    pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);

    int total_subs = 0;
    subscription_t* count_sub = g_subscription_manager.active_subscriptions;
    while (count_sub) {
        total_subs++;
        count_sub = count_sub->next;
    }
    DEBUG_TRACE("BROADCAST: Checking %d active subscriptions", total_subs);

    subscription_t* sub = g_subscription_manager.active_subscriptions;
    // ... rest of matching logic ...
```

## Expected Outcome

With these debug additions, we should see output like:
```
BROADCAST: Event kind=1059 id=abc12345 created_at=1729712279
BROADCAST: Checking 1 active subscriptions
SUB_MATCH: Testing subscription 'sub:3'
SUB_MATCH: Testing filter #1
FILTER_MATCH: Testing event kind=1059 id=abc12345 created_at=1729712279
FILTER_MATCH: Checking kinds filter with 1 kinds
FILTER_MATCH: Event kind=1059
FILTER_MATCH: Comparing event kind 1059 with filter kind 1059
FILTER_MATCH: Kind matched!
FILTER_MATCH: Kinds filter passed
FILTER_MATCH: Checking since filter: event_ts=1729712279 filter_since=1729708079
FILTER_MATCH: Since filter passed
FILTER_MATCH: All filters passed, event matches!
SUB_MATCH: Filter #1 matched! Subscription 'sub:3' matches
Event broadcast complete: 1 subscriptions matched
```

This will help us identify exactly where the matching is failing.
200 docs/websocket_write_queue_design.md Normal file
@@ -0,0 +1,200 @@

# WebSocket Write Queue Design

## Problem Statement

The current partial-write handling implementation uses a single buffer per session, which fails when multiple events need to be sent to the same client in rapid succession. This causes:

1. First event gets a partial write → queued successfully
2. Second event tries to write → **FAILS** with "write already pending"
3. Subsequent events fail similarly, causing data loss

### Server Log Evidence
```
[WARN] WS_FRAME_PARTIAL: EVENT partial write, sub=1 sent=3210 expected=5333
[TRACE] Queued partial write: len=2123
[WARN] WS_FRAME_PARTIAL: EVENT partial write, sub=1 sent=3210 expected=5333
[WARN] queue_websocket_write: write already pending, cannot queue new write
[ERROR] Failed to queue partial EVENT write for sub=1
```

## Root Cause

WebSocket frames must be sent **atomically** - you cannot interleave multiple frames. The current single-buffer approach correctly enforces this, but it rejects new writes instead of queuing them.

## Solution: Write Queue Architecture

### Design Principles

1. **Frame Atomicity**: Complete one WebSocket frame before starting the next
2. **Sequential Processing**: Process queued writes in FIFO order
3. **Memory Safety**: Proper cleanup on connection close or errors
4. **Thread Safety**: Protect queue operations with the existing session lock

### Data Structures

#### Write Queue Node
```c
struct write_queue_node {
    unsigned char* buffer;              // Buffer with LWS_PRE space
    size_t total_len;                   // Total length of data to write
    size_t offset;                      // How much has been written so far
    int write_type;                     // LWS_WRITE_TEXT, etc.
    struct write_queue_node* next;      // Next node in queue
};
```

#### Per-Session Write Queue
```c
struct per_session_data {
    // ... existing fields ...

    // Write queue for handling multiple pending writes
    struct write_queue_node* write_queue_head;  // First item to write
    struct write_queue_node* write_queue_tail;  // Last item in queue
    int write_queue_length;                     // Number of items in queue
    int write_in_progress;                      // Flag: 1 if currently writing
};
```

### Algorithm Flow

#### 1. Enqueue Write (`queue_websocket_write`)

```
IF write_queue is empty AND no write in progress:
    - Attempt immediate write with lws_write()
    - IF complete:
        - Return success
    - ELSE (partial write):
        - Create queue node with remaining data
        - Add to queue
        - Set write_in_progress flag
        - Request LWS_CALLBACK_SERVER_WRITEABLE
ELSE:
    - Create queue node with full data
    - Append to queue tail
    - IF no write in progress:
        - Request LWS_CALLBACK_SERVER_WRITEABLE
```

#### 2. Process Queue (`process_pending_write`)

```
WHILE write_queue is not empty:
    - Get head node
    - Calculate remaining data (total_len - offset)
    - Attempt write with lws_write()

    IF write fails (< 0):
        - Log error
        - Remove and free head node
        - Continue to next node

    ELSE IF partial write (< remaining):
        - Update offset
        - Request LWS_CALLBACK_SERVER_WRITEABLE
        - Break (wait for next callback)

    ELSE (complete write):
        - Remove and free head node
        - Continue to next node

IF queue is empty:
    - Clear write_in_progress flag
```

#### 3. Cleanup (`LWS_CALLBACK_CLOSED`)

```
WHILE write_queue is not empty:
    - Get head node
    - Free buffer
    - Free node
    - Move to next
Clear queue pointers
```
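
A minimal sketch of the two queue helpers implied by these flows, assuming the `write_queue_node` and `per_session_data` definitions from the Data Structures section and a `session_lock` mutex on the session; error paths are abbreviated:

```c
#include <stdlib.h>
#include <string.h>
#include <pthread.h>
#include <libwebsockets.h>

#define MAX_WRITE_QUEUE_LEN 100   /* backpressure limit from the design above */

/* Append one complete frame to the tail of the session's write queue.
 * Returns 0 on success, -1 on overflow or allocation failure. */
static int enqueue_write(struct per_session_data* pss,
                         const unsigned char* data, size_t len, int write_type)
{
    struct write_queue_node* node = calloc(1, sizeof(*node));
    if (!node) return -1;

    node->buffer = malloc(LWS_PRE + len);
    if (!node->buffer) { free(node); return -1; }

    memcpy(node->buffer + LWS_PRE, data, len);
    node->total_len = len;
    node->write_type = write_type;

    pthread_mutex_lock(&pss->session_lock);
    if (pss->write_queue_length >= MAX_WRITE_QUEUE_LEN) {
        pthread_mutex_unlock(&pss->session_lock);
        free(node->buffer);
        free(node);
        return -1;   /* caller should close the connection with a NOTICE */
    }
    if (pss->write_queue_tail) pss->write_queue_tail->next = node;
    else                       pss->write_queue_head = node;
    pss->write_queue_tail = node;
    pss->write_queue_length++;
    pthread_mutex_unlock(&pss->session_lock);
    return 0;
}

/* Detach the head node; caller must hold session_lock and free the node. */
static struct write_queue_node* dequeue_write(struct per_session_data* pss)
{
    struct write_queue_node* node = pss->write_queue_head;
    if (!node) return NULL;
    pss->write_queue_head = node->next;
    if (!pss->write_queue_head) pss->write_queue_tail = NULL;
    pss->write_queue_length--;
    return node;
}
```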

### Memory Management

1. **Allocation**: Each queue node allocates its buffer with `LWS_PRE + data_len`
2. **Ownership**: The queue owns all buffers until the write completes or the connection closes
3. **Deallocation**: Free buffer and node when:
   - Write completes successfully
   - Write fails with error
   - Connection closes

### Thread Safety

- Use the existing `pss->session_lock` to protect queue operations
- Lock during:
  - Enqueue operations
  - Dequeue operations
  - Queue traversal for cleanup

### Performance Considerations

1. **Queue Length Limit**: Implement a maximum queue length (e.g., 100 items) to prevent memory exhaustion
2. **Memory Pressure**: Monitor total queued bytes per session
3. **Backpressure**: If the queue exceeds the limit, close the connection with a NOTICE

### Error Handling

1. **Allocation Failure**: Return error, log, send NOTICE to client
2. **Write Failure**: Remove the failed frame, continue with the next
3. **Queue Overflow**: Close the connection with an appropriate NOTICE
## Implementation Plan

### Phase 1: Data Structure Changes
1. Add the `write_queue_node` structure to `websockets.h`
2. Update `per_session_data` with queue fields
3. Remove the old single-buffer fields

### Phase 2: Queue Operations
1. Implement `enqueue_write()` helper
2. Implement `dequeue_write()` helper
3. Update `queue_websocket_write()` to use the queue
4. Update `process_pending_write()` to process the queue

### Phase 3: Integration
1. Update all `lws_write()` call sites
2. Update `LWS_CALLBACK_CLOSED` cleanup
3. Add queue length monitoring

### Phase 4: Testing
1. Test with rapid multiple events to the same client
2. Test with large events (>4KB)
3. Test under load with concurrent connections
4. Verify no "Invalid frame header" errors
## Expected Outcomes

1. **No More Rejections**: All writes queued successfully
2. **Frame Integrity**: Complete frames sent atomically
3. **Memory Safety**: Proper cleanup on all paths
4. **Performance**: Minimal overhead for queue management

## Metrics to Monitor

1. Average queue length per session
2. Maximum queue length observed
3. Queue overflow events (if a limit is implemented)
4. Write completion rate
5. Partial write frequency

## Alternative Approaches Considered

### 1. Larger Single Buffer
**Rejected**: Doesn't solve the fundamental problem of multiple concurrent writes

### 2. Immediate Write Retry
**Rejected**: Could cause busy-waiting and wasted CPU

### 3. Drop Frames on Conflict
**Rejected**: Violates reliability requirements

## References

- libwebsockets documentation on `lws_write()` and `LWS_CALLBACK_SERVER_WRITEABLE`
- WebSocket RFC 6455 on frame structure
- Nostr NIP-01 on relay-to-client communication

@@ -121,10 +121,43 @@ increment_version() {
    print_status "Current version: $LATEST_TAG"
    print_status "New version: $NEW_VERSION"

    # Update version in src/main.h
    update_version_in_header "$NEW_VERSION" "$MAJOR" "${NEW_MINOR:-$MINOR}" "${NEW_PATCH:-$PATCH}"

    # Export for use in other functions
    export NEW_VERSION
}

# Function to update version macros in src/main.h
update_version_in_header() {
    local new_version="$1"
    local major="$2"
    local minor="$3"
    local patch="$4"

    print_status "Updating version in src/main.h..."

    # Check if src/main.h exists
    if [[ ! -f "src/main.h" ]]; then
        print_error "src/main.h not found"
        exit 1
    fi

    # Update VERSION macro
    sed -i "s/#define VERSION \".*\"/#define VERSION \"$new_version\"/" src/main.h

    # Update VERSION_MAJOR macro
    sed -i "s/#define VERSION_MAJOR [0-9]\+/#define VERSION_MAJOR $major/" src/main.h

    # Update VERSION_MINOR macro
    sed -i "s/#define VERSION_MINOR .*/#define VERSION_MINOR $minor/" src/main.h

    # Update VERSION_PATCH macro
    sed -i "s/#define VERSION_PATCH [0-9]\+/#define VERSION_PATCH $patch/" src/main.h

    print_success "Updated version in src/main.h to $new_version"
}

# Function to commit and push changes
git_commit_and_push() {
    print_status "Preparing git commit..."

@@ -133,6 +133,11 @@ if [ -n "$PORT_OVERRIDE" ]; then
    fi
fi

# Warn when test keys are used without a port override (test keys always imply --strict-port)
if [ "$USE_TEST_KEYS" = true ] && [ -z "$PORT_OVERRIDE" ]; then
    echo "WARNING: --strict-port is always used with test keys. Consider specifying a custom port with -p."
fi

# Validate debug level if provided
if [ -n "$DEBUG_LEVEL" ]; then
    if ! [[ "$DEBUG_LEVEL" =~ ^[0-5]$ ]]; then
@@ -163,6 +168,8 @@ if [ "$HELP" = true ]; then
    echo "  $0                                    # Fresh start with random keys"
    echo "  $0 -a <admin-hex> -r <relay-hex>      # Use custom keys"
    echo "  $0 -a <admin-hex> -p 9000             # Custom admin key on port 9000"
    echo "  $0 -p 7777 --strict-port              # Fail if port 7777 unavailable (no fallback)"
    echo "  $0 -p 8080 --strict-port -d=3         # Custom port with strict binding and debug"
    echo "  $0 --debug-level=3                    # Start with debug level 3 (info)"
    echo "  $0 -d=5                               # Start with debug level 5 (trace)"
    echo "  $0 --preserve-database                # Preserve existing database and keys"

51 notes.txt
@@ -39,6 +39,53 @@ Even simpler: Use this one-liner
cd /usr/local/bin/c_relay
sudo -u c-relay ./c_relay --debug-level=5 & sleep 2 && sudo gdb -p $(pgrep c_relay)

Once gdb attaches, type continue and wait for the crash. This way the relay starts normally and gdb just monitors it.
Inside gdb, after attaching:
(gdb) continue
Or shorter:
(gdb) c


How to View the Logs
Check systemd journal:
# View all c-relay logs
sudo journalctl -u c-relay

# View recent logs (last 50 lines)
sudo journalctl -u c-relay -n 50

# Follow logs in real-time
sudo journalctl -u c-relay -f

# View logs since last boot
sudo journalctl -u c-relay -b

Check if service is running:



To immediately trim the syslog file size:

Safe Syslog Truncation
Stop syslog service first:
sudo systemctl stop rsyslog

Truncate the syslog file:
sudo truncate -s 0 /var/log/syslog

Restart syslog service:
sudo systemctl start rsyslog
sudo systemctl status rsyslog


sudo -u c-relay ./c_relay --debug-level=5 -r 85d0b37e2ae822966dcadd06b2dc9368cde73865f90ea4d44f8b57d47ef0820a -a 1ec454734dcbf6fe54901ce25c0c7c6bca5edd89443416761fadc321d38df139

./c_relay_static_x86_64 -p 7889 --debug-level=5 -r 85d0b37e2ae822966dcadd06b2dc9368cde73865f90ea4d44f8b57d47ef0820a -a 1ec454734dcbf6fe54901ce25c0c7c6bca5edd89443416761fadc321d38df139


sudo ufw allow 8888/tcp
sudo ufw delete allow 8888/tcp

lsof -i :7777
kill $(lsof -t -i :7777)
kill -9 $(lsof -t -i :7777)
46 src/api.h
@@ -1,8 +1,9 @@
// API module for serving embedded web content
// API module for serving embedded web content and admin API functions
#ifndef API_H
#define API_H

#include <libwebsockets.h>
#include <cjson/cJSON.h>

// Embedded file session data structure for managing buffer lifetime
struct embedded_file_session_data {
@@ -14,10 +15,53 @@ struct embedded_file_session_data {
    int body_sent;
};

// Configuration change pending structure
typedef struct pending_config_change {
    char admin_pubkey[65];                 // Who requested the change
    char config_key[128];                  // What config to change
    char old_value[256];                   // Current value
    char new_value[256];                   // Requested new value
    time_t timestamp;                      // When requested
    char change_id[33];                    // Unique ID for this change (first 32 chars of hash)
    struct pending_config_change* next;    // Linked list for concurrent changes
} pending_config_change_t;

// Handle HTTP request for embedded API files
int handle_embedded_file_request(struct lws* wsi, const char* requested_uri);

// Generate stats JSON from database queries
char* generate_stats_json(void);

// Generate human-readable stats text
char* generate_stats_text(void);

// Generate config text from database
char* generate_config_text(void);

// Send admin response with request ID correlation
int send_admin_response(const char* sender_pubkey, const char* response_content, const char* request_id,
                        char* error_message, size_t error_size, struct lws* wsi);

// Configuration change system functions
int parse_config_command(const char* message, char* key, char* value);
int validate_config_change(const char* key, const char* value);
char* store_pending_config_change(const char* admin_pubkey, const char* key,
                                  const char* old_value, const char* new_value);
pending_config_change_t* find_pending_change(const char* admin_pubkey, const char* change_id);
int apply_config_change(const char* key, const char* value);
void cleanup_expired_pending_changes(void);
int handle_config_confirmation(const char* admin_pubkey, const char* response);
char* generate_config_change_confirmation(const char* key, const char* old_value, const char* new_value);
int process_config_change_request(const char* admin_pubkey, const char* message);

// SQL query functions
int validate_sql_query(const char* query, char* error_message, size_t error_size);
char* execute_sql_query(const char* query, const char* request_id, char* error_message, size_t error_size);
int handle_sql_query_unified(cJSON* event, const char* query, char* error_message, size_t error_size, struct lws* wsi);

// Monitoring system functions
void monitoring_on_event_stored(void);
void monitoring_on_subscription_change(void);
int get_monitoring_throttle_seconds(void);

#endif // API_H
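
Given the `pending_config_change_t` layout declared above, `find_pending_change()` is essentially a linked-list walk. A minimal sketch under that assumption; the module-level list head is illustrative:

```c
#include <string.h>

static pending_config_change_t* g_pending_changes = NULL;   /* illustrative list head */

/* Walk the pending-change list and return the entry matching both the
 * requesting admin and the change ID, or NULL when none is found. */
pending_config_change_t* find_pending_change(const char* admin_pubkey,
                                             const char* change_id) {
    for (pending_config_change_t* c = g_pending_changes; c; c = c->next) {
        if (strcmp(c->admin_pubkey, admin_pubkey) == 0 &&
            strcmp(c->change_id, change_id) == 0) {
            return c;
        }
    }
    return NULL;
}
```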
53 src/config.c
@@ -3,6 +3,19 @@
#include "debug.h"
#include "default_config_event.h"
#include "dm_admin.h"

// Undefine VERSION macros before including nostr_core.h to avoid redefinition warnings
// This must come AFTER default_config_event.h so that RELAY_VERSION macro expansion works correctly
#ifdef VERSION
#undef VERSION
#endif
#ifdef VERSION_MINOR
#undef VERSION_MINOR
#endif
#ifdef VERSION_PATCH
#undef VERSION_PATCH
#endif

#include "../nostr_core_lib/nostr_core/nostr_core.h"
#include <stdio.h>
#include <stdlib.h>
@@ -801,7 +814,7 @@ int first_time_startup_sequence(const cli_options_t* cli_options, char* admin_pu
    return 0;
}

int startup_existing_relay(const char* relay_pubkey, const cli_options_t* cli_options) {
int startup_existing_relay(const char* relay_pubkey, const cli_options_t* cli_options __attribute__((unused))) {
    if (!relay_pubkey) {
        DEBUG_ERROR("Invalid relay pubkey for existing relay startup");
        return -1;
@@ -824,26 +837,7 @@ int startup_existing_relay(const char* relay_pubkey, const cli_op

    // NOTE: Database is already initialized in main.c before calling this function
    // Config table should already exist with complete configuration

    // Check if CLI overrides need to be applied
    int has_overrides = 0;
    if (cli_options) {
        if (cli_options->port_override > 0) has_overrides = 1;
        if (cli_options->admin_pubkey_override[0] != '\0') has_overrides = 1;
        if (cli_options->relay_privkey_override[0] != '\0') has_overrides = 1;
    }

    if (has_overrides) {
        // Apply CLI overrides to existing database
        DEBUG_INFO("Applying CLI overrides to existing database");
        if (apply_cli_overrides_atomic(cli_options) != 0) {
            DEBUG_ERROR("Failed to apply CLI overrides to existing database");
            return -1;
        }
    } else {
        // No CLI overrides - config table is already available
        DEBUG_INFO("No CLI overrides - config table is already available");
    }
    // CLI overrides will be applied after this function returns in main.c

    return 0;
}
@@ -4099,6 +4093,23 @@ int populate_all_config_values_atomic(const char* admin_pubkey, const char* rela
        return -1;
    }

    // Insert monitoring system config entry (ephemeral kind 24567)
    // Note: Monitoring is automatically activated when clients subscribe to kind 24567
    sqlite3_reset(stmt);
    sqlite3_bind_text(stmt, 1, "kind_24567_reporting_throttle_sec", -1, SQLITE_STATIC);
    sqlite3_bind_text(stmt, 2, "5", -1, SQLITE_STATIC); // integer, default 5 seconds
    sqlite3_bind_text(stmt, 3, "integer", -1, SQLITE_STATIC);
    sqlite3_bind_text(stmt, 4, "Minimum seconds between monitoring event reports (ephemeral kind 24567)", -1, SQLITE_STATIC);
    sqlite3_bind_text(stmt, 5, "monitoring", -1, SQLITE_STATIC);
    sqlite3_bind_int(stmt, 6, 0); // does not require restart
    rc = sqlite3_step(stmt);
    if (rc != SQLITE_DONE) {
        DEBUG_ERROR("Failed to insert kind_24567_reporting_throttle_sec: %s", sqlite3_errmsg(g_db));
        sqlite3_finalize(stmt);
        sqlite3_exec(g_db, "ROLLBACK;", NULL, NULL, NULL);
        return -1;
    }

    sqlite3_finalize(stmt);

    // Commit transaction

@@ -72,7 +72,13 @@ static const struct {

    // Performance Settings
    {"default_limit", "500"},
    {"max_limit", "5000"}
    {"max_limit", "5000"},

    // Proxy Settings
    // Trust proxy headers (X-Forwarded-For, X-Real-IP) for accurate client IP detection
    // Safe for informational/debugging use. Only becomes a security concern if you implement
    // IP-based rate limiting or access control (which would require firewall protection anyway)
    {"trust_proxy_headers", "true"}
};

// Number of default configuration values

1165 src/dm_admin.c
File diff suppressed because it is too large
@@ -1,3 +1,5 @@
// Note to assistants: dm_admin is only for functions relating to direct messaging

#ifndef DM_ADMIN_H
#define DM_ADMIN_H

@@ -24,4 +26,11 @@ int send_nip17_response(const char* sender_pubkey, const char* response_content,
char* generate_config_text(void);
char* generate_stats_text(void);

// SQL query admin functions
int validate_sql_query(const char* query, char* error_message, size_t error_size);
char* execute_sql_query(const char* query, const char* request_id, char* error_message, size_t error_size);
int handle_sql_query_unified(cJSON* event, const char* query, char* error_message, size_t error_size, struct lws* wsi);
int send_admin_response(const char* sender_pubkey, const char* response_content, const char* request_id,
                        char* error_message, size_t error_size, struct lws* wsi);

#endif // DM_ADMIN_H
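
For context, the `request_id` parameter threaded through these declarations is what lets the web UI correlate a response with its pending query. A minimal sketch of assembling such a response payload with cJSON before it is NIP-44 encrypted; the field names mirror the frontend code earlier in this diff but are otherwise an assumption:

```c
#include <cjson/cJSON.h>

/* Build a response body carrying the request ID so the web UI can match it
 * against its pendingSqlQueries map. Takes ownership of columns/rows. */
static char* build_sql_response_json(const char* request_id,
                                     cJSON* columns, cJSON* rows,
                                     int row_count, double exec_ms) {
    cJSON* resp = cJSON_CreateObject();
    if (!resp) return NULL;
    cJSON_AddStringToObject(resp, "query_type", "sql_query");
    cJSON_AddStringToObject(resp, "request_id", request_id);
    cJSON_AddItemToObject(resp, "columns", columns);
    cJSON_AddItemToObject(resp, "rows", rows);
    cJSON_AddNumberToObject(resp, "row_count", row_count);
    cJSON_AddNumberToObject(resp, "execution_time_ms", exec_ms);
    char* out = cJSON_PrintUnformatted(resp);   /* caller frees */
    cJSON_Delete(resp);                         /* frees columns/rows too */
    return out;
}
```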
File diff suppressed because one or more lines are too long
109 src/main.c
@@ -149,6 +149,9 @@ int mark_event_as_deleted(const char* event_id, const char* deletion_event_id, c
// Forward declaration for database functions
int store_event(cJSON* event);

// Forward declaration for monitoring system
void monitoring_on_event_stored(void);

// Forward declarations for NIP-11 relay information handling
void init_relay_info();
void cleanup_relay_info();
@@ -214,11 +217,9 @@ void send_notice_message(struct lws* wsi, const char* message) {
    char* msg_str = cJSON_Print(notice_msg);
    if (msg_str) {
        size_t msg_len = strlen(msg_str);
        unsigned char* buf = malloc(LWS_PRE + msg_len);
        if (buf) {
            memcpy(buf + LWS_PRE, msg_str, msg_len);
            lws_write(wsi, buf + LWS_PRE, msg_len, LWS_WRITE_TEXT);
            free(buf);
        // Use proper message queue system instead of direct lws_write
        if (queue_message(wsi, NULL, msg_str, msg_len, LWS_WRITE_TEXT) != 0) {
            DEBUG_ERROR("Failed to queue NOTICE message");
        }
        free(msg_str);
    }
@@ -312,14 +313,35 @@ int init_database(const char* database_path_override) {
    if (g_debug_level >= DEBUG_LEVEL_DEBUG) {
        // Check config table row count immediately after database open
        sqlite3_stmt* stmt;
        if (sqlite3_prepare_v2(g_db, "SELECT COUNT(*) FROM config", -1, &stmt, NULL) == SQLITE_OK) {
        int rc = sqlite3_prepare_v2(g_db, "SELECT COUNT(*) FROM config", -1, &stmt, NULL);
        if (rc == SQLITE_OK) {
            if (sqlite3_step(stmt) == SQLITE_ROW) {
                int row_count = sqlite3_column_int(stmt, 0);
                DEBUG_LOG("Config table row count immediately after sqlite3_open(): %d", row_count);
            }
            sqlite3_finalize(stmt);
        } else {
            DEBUG_LOG("Config table does not exist yet (first-time startup)");
            // Capture and log the actual SQLite error instead of assuming table doesn't exist
            const char* err_msg = sqlite3_errmsg(g_db);
            DEBUG_LOG("Failed to prepare config table query: %s (error code: %d)", err_msg, rc);

            // Check if it's actually a missing table vs other error
            if (rc == SQLITE_ERROR) {
                // Try to check if config table exists
                sqlite3_stmt* check_stmt;
                int check_rc = sqlite3_prepare_v2(g_db, "SELECT name FROM sqlite_master WHERE type='table' AND name='config'", -1, &check_stmt, NULL);
                if (check_rc == SQLITE_OK) {
                    int has_table = (sqlite3_step(check_stmt) == SQLITE_ROW);
                    sqlite3_finalize(check_stmt);
                    if (has_table) {
                        DEBUG_LOG("Config table EXISTS but query failed - possible database corruption or locking issue");
                    } else {
                        DEBUG_LOG("Config table does not exist yet (first-time startup)");
                    }
                } else {
                    DEBUG_LOG("Failed to check table existence: %s (error code: %d)", sqlite3_errmsg(g_db), check_rc);
                }
            }
        }
    }
    // DEBUG_GUARD_END
@@ -731,6 +753,10 @@ int store_event(cJSON* event) {
    }

    free(tags_json);

    // Call monitoring hook after successful event storage
    monitoring_on_event_stored();

    return 0;
}
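
The hook added above can stay cheap by throttling on the `kind_24567_reporting_throttle_sec` config value. A minimal sketch of that idea, assuming `get_monitoring_throttle_seconds()` from api.h; the actual implementation may differ:

```c
#include <time.h>

extern int get_monitoring_throttle_seconds(void);   /* declared in api.h */

/* Illustrative throttle: emit at most one ephemeral kind 24567 monitoring
 * report per configured interval, no matter how many events are stored. */
static time_t g_last_monitoring_report = 0;

void monitoring_on_event_stored(void) {
    time_t now = time(NULL);
    if (now - g_last_monitoring_report >= get_monitoring_throttle_seconds()) {
        g_last_monitoring_report = now;
        /* build and broadcast the kind 24567 report here (omitted in sketch) */
    }
}
```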

@@ -907,12 +933,11 @@ int handle_req_message(const char* sub_id, cJSON* filters, struct lws *wsi, stru
    char* msg_str = cJSON_Print(event_msg);
    if (msg_str) {
        size_t msg_len = strlen(msg_str);
        unsigned char* buf = malloc(LWS_PRE + msg_len);
        if (buf) {
            memcpy(buf + LWS_PRE, msg_str, msg_len);
            lws_write(wsi, buf + LWS_PRE, msg_len, LWS_WRITE_TEXT);
            free(buf);
        // Use proper message queue system instead of direct lws_write
        if (queue_message(wsi, NULL, msg_str, msg_len, LWS_WRITE_TEXT) != 0) {
            DEBUG_ERROR("Failed to queue config EVENT message");
        } else {
            config_events_sent++;
        }
        free(msg_str);
    }
@@ -950,11 +975,9 @@ int handle_req_message(const char* sub_id, cJSON* filters, struct lws *wsi, stru
    char* closed_str = cJSON_Print(closed_msg);
    if (closed_str) {
        size_t closed_len = strlen(closed_str);
        unsigned char* buf = malloc(LWS_PRE + closed_len);
        if (buf) {
            memcpy(buf + LWS_PRE, closed_str, closed_len);
            lws_write(wsi, buf + LWS_PRE, closed_len, LWS_WRITE_TEXT);
            free(buf);
        // Use proper message queue system instead of direct lws_write
        if (queue_message(wsi, pss, closed_str, closed_len, LWS_WRITE_TEXT) != 0) {
            DEBUG_ERROR("Failed to queue CLOSED message");
        }
        free(closed_str);
    }
@@ -1284,11 +1307,9 @@ int handle_req_message(const char* sub_id, cJSON* filters, struct lws *wsi, stru
    char* msg_str = cJSON_Print(event_msg);
    if (msg_str) {
        size_t msg_len = strlen(msg_str);
        unsigned char* buf = malloc(LWS_PRE + msg_len);
        if (buf) {
            memcpy(buf + LWS_PRE, msg_str, msg_len);
            lws_write(wsi, buf + LWS_PRE, msg_len, LWS_WRITE_TEXT);
            free(buf);
        // Use proper message queue system instead of direct lws_write
        if (queue_message(wsi, pss, msg_str, msg_len, LWS_WRITE_TEXT) != 0) {
            DEBUG_ERROR("Failed to queue EVENT message for sub=%s", sub_id);
        }
        free(msg_str);
    }
@@ -1428,7 +1449,7 @@ void print_usage(const char* program_name) {
    printf("Options:\n");
    printf("  -h, --help               Show this help message\n");
    printf("  -v, --version            Show version information\n");
    printf("  -p, --port PORT          Override relay port (first-time startup only)\n");
    printf("  -p, --port PORT          Override relay port (first-time startup and existing relay restarts)\n");
    printf("  --strict-port            Fail if exact port is unavailable (no port increment)\n");
    printf("  -a, --admin-pubkey KEY   Override admin public key (64-char hex or npub)\n");
    printf("  -r, --relay-privkey KEY  Override relay private key (64-char hex or nsec)\n");
@@ -1438,13 +1459,14 @@ void print_usage(const char* program_name) {
    printf("Configuration:\n");
    printf("  This relay uses event-based configuration stored in the database.\n");
    printf("  On first startup, keys are automatically generated and printed once.\n");
    printf("  Command line options like --port only apply during first-time setup.\n");
    printf("  Command line options like --port apply during first-time setup and existing relay restarts.\n");
    printf("  After initial setup, all configuration is managed via database events.\n");
    printf("  Database file: <relay_pubkey>.db (created automatically)\n");
    printf("\n");
    printf("Port Binding:\n");
    printf("  Default: Try up to 10 consecutive ports if requested port is busy\n");
    printf("  --strict-port: Fail immediately if exact requested port is unavailable\n");
    printf("  --strict-port works with any custom port specified via -p or --port\n");
    printf("\n");
    printf("Examples:\n");
    printf("  %s                       # Start relay (auto-configure on first run)\n", program_name);
@@ -1791,7 +1813,7 @@ int main(int argc, char* argv[]) {
        return 1;
    }

    // Setup existing relay (sets database path and loads config)
    // Setup existing relay FIRST (sets database path)
    if (startup_existing_relay(relay_pubkey, &cli_options) != 0) {
        DEBUG_ERROR("Failed to setup existing relay");
        cleanup_configuration_system();
@@ -1804,23 +1826,7 @@ int main(int argc, char* argv[]) {
        return 1;
    }

    // Check config table row count before database initialization
    {
        sqlite3* temp_db = NULL;
        if (sqlite3_open(g_database_path, &temp_db) == SQLITE_OK) {
            sqlite3_stmt* stmt;
            if (sqlite3_prepare_v2(temp_db, "SELECT COUNT(*) FROM config", -1, &stmt, NULL) == SQLITE_OK) {
                if (sqlite3_step(stmt) == SQLITE_ROW) {
                    int row_count = sqlite3_column_int(stmt, 0);
                    printf("  Config table row count before database initialization: %d\n", row_count);
                }
                sqlite3_finalize(stmt);
            }
            sqlite3_close(temp_db);
        }
    }

    // Initialize database with existing database path
    // Initialize database with the database path set by startup_existing_relay()
    DEBUG_TRACE("Initializing existing database");
    if (init_database(g_database_path) != 0) {
        DEBUG_ERROR("Failed to initialize existing database");
@@ -1835,6 +1841,20 @@ int main(int argc, char* argv[]) {
    }
    DEBUG_LOG("Existing database initialized");

    // Apply CLI overrides atomically (now that database is initialized)
    if (apply_cli_overrides_atomic(&cli_options) != 0) {
        DEBUG_ERROR("Failed to apply CLI overrides for existing relay");
        cleanup_configuration_system();
        free(relay_pubkey);
        for (int i = 0; existing_files[i]; i++) {
            free(existing_files[i]);
        }
        free(existing_files);
        nostr_cleanup();
        close_database();
        return 1;
    }

    // DEBUG_GUARD_START
    if (g_debug_level >= DEBUG_LEVEL_DEBUG) {
        sqlite3_stmt* stmt;
@@ -1979,6 +1999,7 @@ int main(int argc, char* argv[]) {

    // Initialize NIP-40 expiration configuration
    init_expiration_config();

    // Update subscription manager configuration
    update_subscription_manager_config();

@@ -2002,8 +2023,8 @@ int main(int argc, char* argv[]) {


    // Start WebSocket Nostr relay server (port from configuration)
    int result = start_websocket_relay(-1, cli_options.strict_port); // Let config system determine port, pass strict_port flag
    // Start WebSocket Nostr relay server (port from CLI override or configuration)
    int result = start_websocket_relay(cli_options.port_override, cli_options.strict_port); // Use CLI port override if specified, otherwise config

    // Cleanup
    cleanup_relay_info();

@@ -10,10 +10,10 @@
#define MAIN_H

// Version information (auto-updated by build system)
#define VERSION "v0.4.6"
#define VERSION "v0.7.39"
#define VERSION_MAJOR 0
#define VERSION_MINOR 4
#define VERSION_PATCH 6
#define VERSION_MINOR 7
#define VERSION_PATCH 39

// Relay metadata (authoritative source for NIP-11 information)
#define RELAY_NAME "C-Relay"

@@ -70,11 +70,9 @@ void send_nip42_auth_challenge(struct lws* wsi, struct per_session_data* pss) {
    char* msg_str = cJSON_Print(auth_msg);
    if (msg_str) {
        size_t msg_len = strlen(msg_str);
        unsigned char* buf = malloc(LWS_PRE + msg_len);
        if (buf) {
            memcpy(buf + LWS_PRE, msg_str, msg_len);
            lws_write(wsi, buf + LWS_PRE, msg_len, LWS_WRITE_TEXT);
            free(buf);
        // Use proper message queue system instead of direct lws_write
        if (queue_message(wsi, pss, msg_str, msg_len, LWS_WRITE_TEXT) != 0) {
            DEBUG_ERROR("Failed to queue AUTH challenge message");
        }
        free(msg_str);
    }

@@ -1,12 +1,12 @@
/* Embedded SQL Schema for C Nostr Relay
 * Generated from db/schema.sql - Do not edit manually
 * Schema Version: 7
 * Schema Version: 8
 */
#ifndef SQL_SCHEMA_H
#define SQL_SCHEMA_H

/* Schema version constant */
#define EMBEDDED_SCHEMA_VERSION "7"
#define EMBEDDED_SCHEMA_VERSION "8"

/* Embedded SQL schema as C string literal */
static const char* const EMBEDDED_SCHEMA_SQL =
@@ -15,7 +15,7 @@ static const char* const EMBEDDED_SCHEMA_SQL =
-- Configuration system using config table\n\
\n\
-- Schema version tracking\n\
PRAGMA user_version = 7;\n\
PRAGMA user_version = 8;\n\
\n\
-- Enable foreign key support\n\
PRAGMA foreign_keys = ON;\n\
@@ -58,8 +58,8 @@ CREATE TABLE schema_info (\n\
\n\
-- Insert schema metadata\n\
INSERT INTO schema_info (key, value) VALUES\n\
('version', '7'),\n\
('description', 'Hybrid Nostr relay schema with event-based and table-based configuration'),\n\
('version', '8'),\n\
('description', 'Hybrid Nostr relay schema with subscription deduplication support'),\n\
('created_at', strftime('%s', 'now'));\n\
\n\
-- Helper views for common queries\n\
@@ -181,17 +181,19 @@ END;\n\
-- Persistent Subscriptions Logging Tables (Phase 2)\n\
-- Optional database logging for subscription analytics and debugging\n\
\n\
-- Subscription events log\n\
CREATE TABLE subscription_events (\n\
-- Subscriptions log (renamed from subscription_events for clarity)\n\
CREATE TABLE subscriptions (\n\
    id INTEGER PRIMARY KEY AUTOINCREMENT,\n\
    subscription_id TEXT NOT NULL,          -- Subscription ID from client\n\
    wsi_pointer TEXT NOT NULL,              -- WebSocket pointer address (hex string)\n\
    client_ip TEXT NOT NULL,                -- Client IP address\n\
    event_type TEXT NOT NULL CHECK (event_type IN ('created', 'closed', 'expired', 'disconnected')),\n\
    filter_json TEXT,                       -- JSON representation of filters (for created events)\n\
    events_sent INTEGER DEFAULT 0,          -- Number of events sent to this subscription\n\
    created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),\n\
    ended_at INTEGER,                       -- When subscription ended (for closed/expired/disconnected)\n\
    duration INTEGER                        -- Computed: ended_at - created_at\n\
    duration INTEGER,                       -- Computed: ended_at - created_at\n\
    UNIQUE(subscription_id, wsi_pointer)    -- Prevent duplicate subscriptions per connection\n\
);\n\
\n\
-- Subscription metrics summary\n\
@@ -218,10 +220,11 @@ CREATE TABLE event_broadcasts (\n\
|
||||
);\n\
|
||||
\n\
|
||||
-- Indexes for subscription logging performance\n\
|
||||
CREATE INDEX idx_subscription_events_id ON subscription_events(subscription_id);\n\
|
||||
CREATE INDEX idx_subscription_events_type ON subscription_events(event_type);\n\
|
||||
CREATE INDEX idx_subscription_events_created ON subscription_events(created_at DESC);\n\
|
||||
CREATE INDEX idx_subscription_events_client ON subscription_events(client_ip);\n\
|
||||
CREATE INDEX idx_subscriptions_id ON subscriptions(subscription_id);\n\
|
||||
CREATE INDEX idx_subscriptions_type ON subscriptions(event_type);\n\
|
||||
CREATE INDEX idx_subscriptions_created ON subscriptions(created_at DESC);\n\
|
||||
CREATE INDEX idx_subscriptions_client ON subscriptions(client_ip);\n\
|
||||
CREATE INDEX idx_subscriptions_wsi ON subscriptions(wsi_pointer);\n\
|
||||
\n\
|
||||
CREATE INDEX idx_subscription_metrics_date ON subscription_metrics(date DESC);\n\
|
||||
\n\
|
||||
@@ -231,10 +234,10 @@ CREATE INDEX idx_event_broadcasts_time ON event_broadcasts(broadcast_at DESC);\n
|
||||
\n\
|
||||
-- Trigger to update subscription duration when ended\n\
|
||||
CREATE TRIGGER update_subscription_duration\n\
|
||||
AFTER UPDATE OF ended_at ON subscription_events\n\
|
||||
AFTER UPDATE OF ended_at ON subscriptions\n\
|
||||
WHEN NEW.ended_at IS NOT NULL AND OLD.ended_at IS NULL\n\
|
||||
BEGIN\n\
|
||||
UPDATE subscription_events\n\
|
||||
UPDATE subscriptions\n\
|
||||
SET duration = NEW.ended_at - NEW.created_at\n\
|
||||
WHERE id = NEW.id;\n\
|
||||
END;\n\
|
||||
@@ -249,7 +252,7 @@ SELECT\n\
|
||||
MAX(events_sent) as max_events_sent,\n\
|
||||
AVG(events_sent) as avg_events_sent,\n\
|
||||
COUNT(DISTINCT client_ip) as unique_clients\n\
|
||||
FROM subscription_events\n\
|
||||
FROM subscriptions\n\
|
||||
GROUP BY date(created_at, 'unixepoch')\n\
|
||||
ORDER BY date DESC;\n\
|
||||
\n\
|
||||
@@ -262,10 +265,10 @@ SELECT\n\
|
||||
events_sent,\n\
|
||||
created_at,\n\
|
||||
(strftime('%s', 'now') - created_at) as duration_seconds\n\
|
||||
FROM subscription_events\n\
|
||||
FROM subscriptions\n\
|
||||
WHERE event_type = 'created'\n\
|
||||
AND subscription_id NOT IN (\n\
|
||||
SELECT subscription_id FROM subscription_events\n\
|
||||
SELECT subscription_id FROM subscriptions\n\
|
||||
WHERE event_type IN ('closed', 'expired', 'disconnected')\n\
|
||||
);\n\
|
||||
\n\
|
||||
|
||||
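The UNIQUE(subscription_id, wsi_pointer) constraint introduced in schema version 8 is what makes the INSERT OR REPLACE logging later in this changeset idempotent: a repeated REQ with the same subscription ID on the same connection overwrites the earlier log row instead of accumulating duplicates. A minimal standalone sketch of that behavior (illustrative values, trimmed columns; not part of this changeset):

#include <sqlite3.h>
#include <stdio.h>

int main(void) {
    sqlite3* db;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db,
        "CREATE TABLE subscriptions ("
        "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
        "  subscription_id TEXT NOT NULL,"
        "  wsi_pointer TEXT NOT NULL,"
        "  event_type TEXT NOT NULL,"
        "  UNIQUE(subscription_id, wsi_pointer));",
        NULL, NULL, NULL);
    /* Same subscription ID and wsi twice: the second write replaces the first row. */
    sqlite3_exec(db, "INSERT OR REPLACE INTO subscriptions "
                     "(subscription_id, wsi_pointer, event_type) "
                     "VALUES ('feed', '0x7f00', 'created');", NULL, NULL, NULL);
    sqlite3_exec(db, "INSERT OR REPLACE INTO subscriptions "
                     "(subscription_id, wsi_pointer, event_type) "
                     "VALUES ('feed', '0x7f00', 'created');", NULL, NULL, NULL);
    sqlite3_stmt* stmt;
    sqlite3_prepare_v2(db, "SELECT COUNT(*) FROM subscriptions;", -1, &stmt, NULL);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        printf("rows: %d\n", sqlite3_column_int(stmt, 0)); /* prints: rows: 1 */
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}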
@@ -25,6 +25,9 @@ int validate_timestamp_range(long since, long until, char* error_message, size_t
int validate_numeric_limits(int limit, char* error_message, size_t error_size);
int validate_search_term(const char* search_term, char* error_message, size_t error_size);

// Forward declaration for monitoring function
void monitoring_on_subscription_change(void);

// Global database variable
extern sqlite3* g_db;

@@ -123,7 +126,7 @@ void free_subscription_filter(subscription_filter_t* filter) {
}

// Validate subscription ID format and length
static int validate_subscription_id(const char* sub_id) {
int validate_subscription_id(const char* sub_id) {
if (!sub_id) {
return 0; // NULL pointer
}
@@ -133,11 +136,11 @@ static int validate_subscription_id(const char* sub_id) {
return 0; // Empty or too long
}

// Check for valid characters (alphanumeric, underscore, hyphen, colon)
// Check for valid characters (alphanumeric, underscore, hyphen, colon, comma)
for (size_t i = 0; i < len; i++) {
char c = sub_id[i];
if (!((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
(c >= '0' && c <= '9') || c == '_' || c == '-' || c == ':')) {
(c >= '0' && c <= '9') || c == '_' || c == '-' || c == ':' || c == ',')) {
return 0; // Invalid character
}
}
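The validator is now exported (the static qualifier is dropped) and the accepted alphabet gains the comma, so both REQ and CLOSE paths can share one check. A hedged usage sketch of the contract after this change (the assertions are illustrative, linked against the relay's subscriptions header):

#include <assert.h>

extern int validate_subscription_id(const char* sub_id);

void example(void) {
    assert(validate_subscription_id("feed-1"));        /* hyphen: accepted */
    assert(validate_subscription_id("app:feed,main")); /* colon and comma: accepted */
    assert(!validate_subscription_id("bad id"));       /* space: rejected */
    assert(!validate_subscription_id(""));             /* empty: rejected */
}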
@@ -241,8 +244,31 @@ int add_subscription_to_manager(subscription_t* sub) {

pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);

// Check global limits
if (g_subscription_manager.total_subscriptions >= g_subscription_manager.max_total_subscriptions) {
// Check for existing subscription with same ID and WebSocket connection
// Remove it first to prevent duplicates (implements subscription replacement per NIP-01)
subscription_t** current = &g_subscription_manager.active_subscriptions;
int found_duplicate = 0;
subscription_t* duplicate_old = NULL;

while (*current) {
subscription_t* existing = *current;

// Match by subscription ID and WebSocket pointer
if (strcmp(existing->id, sub->id) == 0 && existing->wsi == sub->wsi) {
// Found duplicate: mark inactive and unlink from global list under lock
existing->active = 0;
*current = existing->next;
g_subscription_manager.total_subscriptions--;
found_duplicate = 1;
duplicate_old = existing; // defer free until after per-session unlink
break;
}

current = &(existing->next);
}

// Check global limits (only if not replacing an existing subscription)
if (!found_duplicate && g_subscription_manager.total_subscriptions >= g_subscription_manager.max_total_subscriptions) {
pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);
DEBUG_ERROR("Maximum total subscriptions reached");
return -1;
@@ -252,13 +278,44 @@
sub->next = g_subscription_manager.active_subscriptions;
g_subscription_manager.active_subscriptions = sub;
g_subscription_manager.total_subscriptions++;
g_subscription_manager.total_created++;

// Only increment total_created if this is a new subscription (not a replacement)
if (!found_duplicate) {
g_subscription_manager.total_created++;
}

pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);

// Log subscription creation to database
// If we replaced an existing subscription, unlink it from the per-session list before freeing
if (duplicate_old) {
// Obtain per-session data for this wsi
struct per_session_data* pss = (struct per_session_data*) lws_wsi_user(duplicate_old->wsi);
if (pss) {
pthread_mutex_lock(&pss->session_lock);
struct subscription** scur = &pss->subscriptions;
while (*scur) {
if (*scur == duplicate_old) {
// Unlink by pointer identity to avoid removing the newly-added one
*scur = duplicate_old->session_next;
if (pss->subscription_count > 0) {
pss->subscription_count--;
}
break;
}
scur = &((*scur)->session_next);
}
pthread_mutex_unlock(&pss->session_lock);
}
// Now safe to free the old subscription
free_subscription(duplicate_old);
}

// Log subscription creation to database (INSERT OR REPLACE handles duplicates)
log_subscription_created(sub);

// Trigger monitoring update for subscription changes
monitoring_on_subscription_change();

return 0;
}
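Both unlink loops above rely on the pointer-to-pointer traversal idiom, which removes a node from a singly linked list without tracking a separate "previous" pointer and handles the head uniformly. A minimal standalone sketch of just that idiom (hypothetical node type, not relay code; the relay applies the same shape to subscription_t):

#include <stdlib.h>

/* Hypothetical node type for illustration. */
struct node { int key; struct node* next; };

/* 'cur' always points at the link that would have to change,
 * whether that is the head pointer or some node's next pointer. */
struct node* unlink_key(struct node** head, int key) {
    struct node** cur = head;
    while (*cur) {
        if ((*cur)->key == key) {
            struct node* removed = *cur;
            *cur = removed->next;  /* rewrites head or a next field uniformly */
            return removed;        /* caller decides when freeing is safe */
        }
        cur = &((*cur)->next);
    }
    return NULL;
}

Deferring the free (as the diff does with duplicate_old) matters here: the old node may still be reachable through the per-session list, so it is unlinked everywhere first and freed last.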
@@ -306,6 +363,9 @@ int remove_subscription_from_manager(const char* sub_id, struct lws* wsi) {
// Update events sent counter before freeing
update_subscription_events_sent(sub_id_copy, events_sent_copy);

// Trigger monitoring update for subscription changes
monitoring_on_subscription_change();

free_subscription(sub);
return 0;
}
@@ -324,37 +384,52 @@ int remove_subscription_from_manager(const char* sub_id, struct lws* wsi) {

// Check if an event matches a subscription filter
int event_matches_filter(cJSON* event, subscription_filter_t* filter) {
DEBUG_TRACE("Checking event against subscription filter");

if (!event || !filter) {
DEBUG_TRACE("Exiting event_matches_filter - null parameters");
return 0;
}

// Debug: Log event details being tested
cJSON* event_kind_obj = cJSON_GetObjectItem(event, "kind");
cJSON* event_id_obj = cJSON_GetObjectItem(event, "id");
cJSON* event_created_at_obj = cJSON_GetObjectItem(event, "created_at");

DEBUG_TRACE("FILTER_MATCH: Testing event kind=%d id=%.8s created_at=%ld",
event_kind_obj ? (int)cJSON_GetNumberValue(event_kind_obj) : -1,
event_id_obj && cJSON_IsString(event_id_obj) ? cJSON_GetStringValue(event_id_obj) : "null",
event_created_at_obj ? (long)cJSON_GetNumberValue(event_created_at_obj) : 0);

// Check kinds filter
if (filter->kinds && cJSON_IsArray(filter->kinds)) {
DEBUG_TRACE("FILTER_MATCH: Checking kinds filter with %d kinds", cJSON_GetArraySize(filter->kinds));

cJSON* event_kind = cJSON_GetObjectItem(event, "kind");
if (!event_kind || !cJSON_IsNumber(event_kind)) {
DEBUG_WARN("FILTER_MATCH: Event has no valid kind field");
return 0;
}

int event_kind_val = (int)cJSON_GetNumberValue(event_kind);
int kind_match = 0;
DEBUG_TRACE("FILTER_MATCH: Event kind=%d", event_kind_val);

int kind_match = 0;
cJSON* kind_item = NULL;
cJSON_ArrayForEach(kind_item, filter->kinds) {
if (cJSON_IsNumber(kind_item)) {
int filter_kind = (int)cJSON_GetNumberValue(kind_item);
DEBUG_TRACE("FILTER_MATCH: Comparing event kind %d with filter kind %d", event_kind_val, filter_kind);
if (filter_kind == event_kind_val) {
kind_match = 1;
DEBUG_TRACE("FILTER_MATCH: Kind matched!");
break;
}
}
}

if (!kind_match) {
DEBUG_TRACE("FILTER_MATCH: No kind match, filter rejected");
return 0;
}
DEBUG_TRACE("FILTER_MATCH: Kinds filter passed");
}

// Check authors filter
@@ -415,13 +490,19 @@ int event_matches_filter(cJSON* event, subscription_filter_t* filter) {
if (filter->since > 0) {
cJSON* event_created_at = cJSON_GetObjectItem(event, "created_at");
if (!event_created_at || !cJSON_IsNumber(event_created_at)) {
DEBUG_WARN("FILTER_MATCH: Event has no valid created_at field");
return 0;
}

long event_timestamp = (long)cJSON_GetNumberValue(event_created_at);
DEBUG_TRACE("FILTER_MATCH: Checking since filter: event_ts=%ld filter_since=%ld",
event_timestamp, filter->since);

if (event_timestamp < filter->since) {
DEBUG_TRACE("FILTER_MATCH: Event too old (before since), filter rejected");
return 0;
}
DEBUG_TRACE("FILTER_MATCH: Since filter passed");
}

// Check until filter
@@ -503,7 +584,7 @@ int event_matches_filter(cJSON* event, subscription_filter_t* filter) {
}
}

DEBUG_TRACE("Exiting event_matches_filter - match found");
DEBUG_TRACE("FILTER_MATCH: All filters passed, event matches!");
return 1; // All filters passed
}

@@ -513,23 +594,29 @@ int event_matches_subscription(cJSON* event, subscription_t* subscription) {
return 0;
}

DEBUG_TRACE("SUB_MATCH: Testing subscription '%s'", subscription->id);

int filter_num = 0;
subscription_filter_t* filter = subscription->filters;
while (filter) {
filter_num++;
DEBUG_TRACE("SUB_MATCH: Testing filter #%d", filter_num);

if (event_matches_filter(event, filter)) {
DEBUG_TRACE("SUB_MATCH: Filter #%d matched! Subscription '%s' matches",
filter_num, subscription->id);
return 1; // Match found (OR logic)
}
filter = filter->next;
}

DEBUG_TRACE("SUB_MATCH: No filters matched for subscription '%s'", subscription->id);
return 0; // No filters matched
}

// Broadcast event to all matching subscriptions (thread-safe)
int broadcast_event_to_subscriptions(cJSON* event) {
DEBUG_TRACE("Broadcasting event to subscriptions");

if (!event) {
DEBUG_TRACE("Exiting broadcast_event_to_subscriptions - null event");
return 0;
}

@@ -546,6 +633,16 @@ int broadcast_event_to_subscriptions(cJSON* event) {

int broadcasts = 0;

// Log event details
cJSON* event_kind = cJSON_GetObjectItem(event, "kind");
cJSON* event_id = cJSON_GetObjectItem(event, "id");
cJSON* event_created_at = cJSON_GetObjectItem(event, "created_at");

DEBUG_TRACE("BROADCAST: Event kind=%d id=%.8s created_at=%ld",
event_kind ? (int)cJSON_GetNumberValue(event_kind) : -1,
event_id && cJSON_IsString(event_id) ? cJSON_GetStringValue(event_id) : "null",
event_created_at ? (long)cJSON_GetNumberValue(event_created_at) : 0);

// Create a temporary list of matching subscriptions to avoid holding lock during I/O
typedef struct temp_sub {
struct lws* wsi;
@@ -560,6 +657,14 @@ int broadcast_event_to_subscriptions(cJSON* event) {
// First pass: collect matching subscriptions while holding lock
pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);

int total_subs = 0;
subscription_t* count_sub = g_subscription_manager.active_subscriptions;
while (count_sub) {
total_subs++;
count_sub = count_sub->next;
}
DEBUG_TRACE("BROADCAST: Checking %d active subscriptions", total_subs);

subscription_t* sub = g_subscription_manager.active_subscriptions;
while (sub) {
if (sub->active && sub->wsi && event_matches_subscription(event, sub)) {
@@ -611,10 +716,17 @@ int broadcast_event_to_subscriptions(cJSON* event) {
if (buf) {
memcpy(buf + LWS_PRE, msg_str, msg_len);

// Send to WebSocket connection with error checking
// Note: lws_write can fail if connection is closed, but won't crash
int write_result = lws_write(current_temp->wsi, buf + LWS_PRE, msg_len, LWS_WRITE_TEXT);
if (write_result >= 0) {
// DEBUG: Log WebSocket frame details before sending
DEBUG_TRACE("WS_FRAME_SEND: type=EVENT sub=%s len=%zu data=%.100s%s",
current_temp->id,
msg_len,
msg_str,
msg_len > 100 ? "..." : "");

// Queue message for proper libwebsockets pattern
struct per_session_data* pss = (struct per_session_data*)lws_wsi_user(current_temp->wsi);
if (queue_message(current_temp->wsi, pss, msg_str, msg_len, LWS_WRITE_TEXT) == 0) {
// Message queued successfully
broadcasts++;

// Update events sent counter for this subscription
@@ -636,6 +748,8 @@ int broadcast_event_to_subscriptions(cJSON* event) {
if (event_id_obj && cJSON_IsString(event_id_obj)) {
log_event_broadcast(cJSON_GetStringValue(event_id_obj), current_temp->id, current_temp->client_ip);
}
} else {
DEBUG_ERROR("Failed to queue EVENT message for sub=%s", current_temp->id);
}

free(buf);
@@ -660,10 +774,41 @@ int broadcast_event_to_subscriptions(cJSON* event) {
pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);

DEBUG_LOG("Event broadcast complete: %d subscriptions matched", broadcasts);
DEBUG_TRACE("Exiting broadcast_event_to_subscriptions");
return broadcasts;
}

// Check if any active subscription exists for a specific event kind (thread-safe)
int has_subscriptions_for_kind(int event_kind) {
pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);

subscription_t* sub = g_subscription_manager.active_subscriptions;
while (sub) {
if (sub->active && sub->filters) {
subscription_filter_t* filter = sub->filters;
while (filter) {
// Check if this filter includes our event kind
if (filter->kinds && cJSON_IsArray(filter->kinds)) {
cJSON* kind_item = NULL;
cJSON_ArrayForEach(kind_item, filter->kinds) {
if (cJSON_IsNumber(kind_item)) {
int filter_kind = (int)cJSON_GetNumberValue(kind_item);
if (filter_kind == event_kind) {
pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);
return 1; // Found matching subscription
}
}
}
}
filter = filter->next;
}
}
sub = sub->next;
}

pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);
return 0; // No matching subscriptions
}
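has_subscriptions_for_kind() only reports filters that explicitly list the kind, so it works as a cheap pre-check where an event is worthless unless someone explicitly asked for it, such as the ephemeral path added later in this changeset. A hedged usage sketch (maybe_broadcast_ephemeral is illustrative, not relay code):

/* Sketch: skip serialization work for ephemeral events nobody asked for.
 * Caveat (by design of the check above): filters without a "kinds" array
 * are not counted, so this gating is only safe for kinds that clients
 * must subscribe to explicitly. */
void maybe_broadcast_ephemeral(cJSON* event) {
    cJSON* kind = cJSON_GetObjectItem(event, "kind");
    if (!kind || !cJSON_IsNumber(kind)) return;

    if (!has_subscriptions_for_kind((int)cJSON_GetNumberValue(kind)))
        return;  /* no explicit listener; avoid the broadcast path entirely */

    broadcast_event_to_subscriptions(event);
}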
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
@@ -675,6 +820,10 @@ int broadcast_event_to_subscriptions(cJSON* event) {
void log_subscription_created(const subscription_t* sub) {
if (!g_db || !sub) return;

// Convert wsi pointer to string
char wsi_str[32];
snprintf(wsi_str, sizeof(wsi_str), "%p", (void*)sub->wsi);

// Create filter JSON for logging
char* filter_json = NULL;
if (sub->filters) {
@@ -721,16 +870,18 @@ void log_subscription_created(const subscription_t* sub) {
cJSON_Delete(filters_array);
}

// Use INSERT OR REPLACE to handle duplicates automatically
const char* sql =
"INSERT INTO subscription_events (subscription_id, client_ip, event_type, filter_json) "
"VALUES (?, ?, 'created', ?)";
"INSERT OR REPLACE INTO subscriptions (subscription_id, wsi_pointer, client_ip, event_type, filter_json) "
"VALUES (?, ?, ?, 'created', ?)";

sqlite3_stmt* stmt;
int rc = sqlite3_prepare_v2(g_db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, sub->id, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, sub->client_ip, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 3, filter_json ? filter_json : "[]", -1, SQLITE_TRANSIENT);
sqlite3_bind_text(stmt, 2, wsi_str, -1, SQLITE_TRANSIENT);
sqlite3_bind_text(stmt, 3, sub->client_ip, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 4, filter_json ? filter_json : "[]", -1, SQLITE_TRANSIENT);

sqlite3_step(stmt);
sqlite3_finalize(stmt);
@@ -745,8 +896,8 @@ void log_subscription_closed(const char* sub_id, const char* client_ip, const ch
if (!g_db || !sub_id) return;

const char* sql =
"INSERT INTO subscription_events (subscription_id, client_ip, event_type) "
"VALUES (?, ?, 'closed')";
"INSERT INTO subscriptions (subscription_id, wsi_pointer, client_ip, event_type) "
"VALUES (?, '', ?, 'closed')";

sqlite3_stmt* stmt;
int rc = sqlite3_prepare_v2(g_db, sql, -1, &stmt, NULL);
@@ -760,7 +911,7 @@ void log_subscription_closed(const char* sub_id, const char* client_ip, const ch

// Update the corresponding 'created' entry with end time and events sent
const char* update_sql =
"UPDATE subscription_events "
"UPDATE subscriptions "
"SET ended_at = strftime('%s', 'now') "
"WHERE subscription_id = ? AND event_type = 'created' AND ended_at IS NULL";

@@ -778,7 +929,7 @@ void log_subscription_disconnected(const char* client_ip) {

// Mark all active subscriptions for this client as disconnected
const char* sql =
"UPDATE subscription_events "
"UPDATE subscriptions "
"SET ended_at = strftime('%s', 'now') "
"WHERE client_ip = ? AND event_type = 'created' AND ended_at IS NULL";

@@ -793,8 +944,8 @@ void log_subscription_disconnected(const char* client_ip) {
if (changes > 0) {
// Log a disconnection event
const char* insert_sql =
"INSERT INTO subscription_events (subscription_id, client_ip, event_type) "
"VALUES ('disconnect', ?, 'disconnected')";
"INSERT INTO subscriptions (subscription_id, wsi_pointer, client_ip, event_type) "
"VALUES ('disconnect', '', ?, 'disconnected')";

rc = sqlite3_prepare_v2(g_db, insert_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
@@ -831,7 +982,7 @@ void update_subscription_events_sent(const char* sub_id, int events_sent) {
if (!g_db || !sub_id) return;

const char* sql =
"UPDATE subscription_events "
"UPDATE subscriptions "
"SET events_sent = ? "
"WHERE subscription_id = ? AND event_type = 'created'";

@@ -93,6 +93,7 @@ struct subscription_manager {
};

// Function declarations
int validate_subscription_id(const char* sub_id);
subscription_filter_t* create_subscription_filter(cJSON* filter_json);
void free_subscription_filter(subscription_filter_t* filter);
subscription_t* create_subscription(const char* sub_id, struct lws* wsi, cJSON* filters_array, const char* client_ip);
@@ -117,4 +118,7 @@ void log_subscription_disconnected(const char* client_ip);
void log_event_broadcast(const char* event_id, const char* sub_id, const char* client_ip);
void update_subscription_events_sent(const char* sub_id, int events_sent);

// Subscription query functions
int has_subscriptions_for_kind(int event_kind);

#endif // SUBSCRIPTIONS_H
436
src/websockets.c
@@ -108,6 +108,136 @@ struct subscription_manager g_subscription_manager;

// Message queue functions for proper libwebsockets pattern

/**
 * Queue a message for WebSocket writing following libwebsockets' proper pattern.
 * This function adds messages to a per-session queue and requests writeable callback.
 *
 * @param wsi WebSocket instance
 * @param pss Per-session data containing message queue
 * @param message Message string to write
 * @param length Length of message string
 * @param type LWS_WRITE_* type (LWS_WRITE_TEXT, etc.)
 * @return 0 on success, -1 on error
 */
int queue_message(struct lws* wsi, struct per_session_data* pss, const char* message, size_t length, enum lws_write_protocol type) {
if (!wsi || !pss || !message || length == 0) {
DEBUG_ERROR("queue_message: invalid parameters");
return -1;
}

// Allocate message queue node
struct message_queue_node* node = malloc(sizeof(struct message_queue_node));
if (!node) {
DEBUG_ERROR("queue_message: failed to allocate queue node");
return -1;
}

// Allocate buffer with LWS_PRE space
size_t buffer_size = LWS_PRE + length;
unsigned char* buffer = malloc(buffer_size);
if (!buffer) {
DEBUG_ERROR("queue_message: failed to allocate message buffer");
free(node);
return -1;
}

// Copy message to buffer with LWS_PRE offset
memcpy(buffer + LWS_PRE, message, length);

// Initialize node
node->data = buffer;
node->length = length;
node->type = type;
node->next = NULL;

// Add to queue (thread-safe)
pthread_mutex_lock(&pss->session_lock);

if (!pss->message_queue_head) {
// Queue was empty
pss->message_queue_head = node;
pss->message_queue_tail = node;
} else {
// Add to end of queue
pss->message_queue_tail->next = node;
pss->message_queue_tail = node;
}
pss->message_queue_count++;

pthread_mutex_unlock(&pss->session_lock);

// Request writeable callback (only if not already requested)
if (!pss->writeable_requested) {
pss->writeable_requested = 1;
lws_callback_on_writable(wsi);
}

DEBUG_TRACE("Queued message: len=%zu, queue_count=%d", length, pss->message_queue_count);
return 0;
}

/**
 * Process message queue when the socket becomes writeable.
 * This function is called from LWS_CALLBACK_SERVER_WRITEABLE.
 *
 * @param wsi WebSocket instance
 * @param pss Per-session data containing message queue
 * @return 0 on success, -1 on error
 */
int process_message_queue(struct lws* wsi, struct per_session_data* pss) {
if (!wsi || !pss) {
DEBUG_ERROR("process_message_queue: invalid parameters");
return -1;
}

// Get next message from queue (thread-safe)
pthread_mutex_lock(&pss->session_lock);

struct message_queue_node* node = pss->message_queue_head;
if (!node) {
// Queue is empty
pss->writeable_requested = 0;
pthread_mutex_unlock(&pss->session_lock);
return 0;
}

// Remove from queue
pss->message_queue_head = node->next;
if (!pss->message_queue_head) {
pss->message_queue_tail = NULL;
}
pss->message_queue_count--;

pthread_mutex_unlock(&pss->session_lock);

// Write message (libwebsockets handles partial writes internally)
int write_result = lws_write(wsi, node->data + LWS_PRE, node->length, node->type);

// Free node resources
free(node->data);
free(node);

if (write_result < 0) {
DEBUG_ERROR("process_message_queue: write failed, result=%d", write_result);
return -1;
}

DEBUG_TRACE("Processed message: wrote %d bytes, remaining in queue: %d", write_result, pss->message_queue_count);

// If queue not empty, request another callback
pthread_mutex_lock(&pss->session_lock);
if (pss->message_queue_head) {
lws_callback_on_writable(wsi);
} else {
pss->writeable_requested = 0;
}
pthread_mutex_unlock(&pss->session_lock);

return 0;
}
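The pattern is: producers never call lws_write() directly; they enqueue and request a writeable callback, and process_message_queue() drains one frame per LWS_CALLBACK_SERVER_WRITEABLE. Because queue_message() copies into its own LWS_PRE buffer, the caller can free its serialized string immediately. A hedged caller sketch (send_notice_sketch is illustrative, not part of this changeset):

static void send_notice_sketch(struct lws* wsi, struct per_session_data* pss,
                               const char* text) {
    cJSON* msg = cJSON_CreateArray();
    cJSON_AddItemToArray(msg, cJSON_CreateString("NOTICE"));
    cJSON_AddItemToArray(msg, cJSON_CreateString(text));

    char* s = cJSON_PrintUnformatted(msg);
    if (s) {
        /* queue_message copies the bytes; the actual lws_write happens later
         * in LWS_CALLBACK_SERVER_WRITEABLE via process_message_queue. */
        if (queue_message(wsi, pss, s, strlen(s), LWS_WRITE_TEXT) != 0)
            DEBUG_ERROR("Failed to queue NOTICE");
        free(s);
    }
    cJSON_Delete(msg);
}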
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
// WEBSOCKET PROTOCOL
@@ -247,7 +377,57 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso

// Get real client IP address
char client_ip[CLIENT_IP_MAX_LENGTH];
lws_get_peer_simple(wsi, client_ip, sizeof(client_ip));
memset(client_ip, 0, sizeof(client_ip));

// Check if we should trust proxy headers
int trust_proxy = get_config_bool("trust_proxy_headers", 0);

if (trust_proxy) {
// Try to get IP from X-Forwarded-For header first
char x_forwarded_for[CLIENT_IP_MAX_LENGTH];
int header_len = lws_hdr_copy(wsi, x_forwarded_for, sizeof(x_forwarded_for) - 1, WSI_TOKEN_X_FORWARDED_FOR);

if (header_len > 0) {
x_forwarded_for[header_len] = '\0';
// X-Forwarded-For can contain multiple IPs (client, proxy1, proxy2, ...)
// We want the first (leftmost) IP which is the original client
char* comma = strchr(x_forwarded_for, ',');
if (comma) {
*comma = '\0'; // Truncate at first comma
}
// Trim leading/trailing whitespace
char* ip_start = x_forwarded_for;
while (*ip_start == ' ' || *ip_start == '\t') ip_start++;
size_t ip_len = strlen(ip_start);
while (ip_len > 0 && (ip_start[ip_len-1] == ' ' || ip_start[ip_len-1] == '\t')) {
ip_start[--ip_len] = '\0';
}
if (ip_len > 0 && ip_len < CLIENT_IP_MAX_LENGTH) {
strncpy(client_ip, ip_start, CLIENT_IP_MAX_LENGTH - 1);
client_ip[CLIENT_IP_MAX_LENGTH - 1] = '\0';
DEBUG_TRACE("Using X-Forwarded-For IP: %s", client_ip);
}
}

// If X-Forwarded-For didn't work, try X-Real-IP
if (client_ip[0] == '\0') {
char x_real_ip[CLIENT_IP_MAX_LENGTH];
header_len = lws_hdr_copy(wsi, x_real_ip, sizeof(x_real_ip) - 1, WSI_TOKEN_HTTP_X_REAL_IP);

if (header_len > 0) {
x_real_ip[header_len] = '\0';
strncpy(client_ip, x_real_ip, CLIENT_IP_MAX_LENGTH - 1);
client_ip[CLIENT_IP_MAX_LENGTH - 1] = '\0';
DEBUG_TRACE("Using X-Real-IP: %s", client_ip);
}
}
}

// Fall back to direct connection IP if proxy headers not available or not trusted
if (client_ip[0] == '\0') {
lws_get_peer_simple(wsi, client_ip, sizeof(client_ip));
DEBUG_TRACE("Using direct connection IP: %s", client_ip);
}

// Ensure client_ip is null-terminated and copy safely
client_ip[CLIENT_IP_MAX_LENGTH - 1] = '\0';
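The parsing rule above (leftmost X-Forwarded-For entry wins, whitespace trimmed) can be verified in isolation. A standalone demo of just that rule, with an example header value (not relay code):

#include <stdio.h>
#include <string.h>

int main(void) {
    char header[] = " 203.0.113.7 , 10.0.0.1, 172.16.0.2";  /* example value */

    char* comma = strchr(header, ',');
    if (comma) *comma = '\0';               /* keep only the first hop */

    char* ip = header;
    while (*ip == ' ' || *ip == '\t') ip++; /* trim leading whitespace */
    size_t len = strlen(ip);
    while (len > 0 && (ip[len - 1] == ' ' || ip[len - 1] == '\t'))
        ip[--len] = '\0';                   /* trim trailing whitespace */

    printf("client ip: %s\n", ip);          /* prints: client ip: 203.0.113.7 */
    return 0;
}

Note the guard in the relay: proxy headers are only consulted when trust_proxy_headers is enabled, since X-Forwarded-For is client-forgeable unless a trusted proxy sets it.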
@@ -256,6 +436,9 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
memcpy(pss->client_ip, client_ip, copy_len);
pss->client_ip[copy_len] = '\0';

// Record connection establishment time for duration tracking
pss->connection_established = time(NULL);

DEBUG_LOG("WebSocket connection established from %s", pss->client_ip);

// Initialize NIP-42 authentication state
@@ -379,11 +562,9 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
char *error_str = cJSON_Print(error_response);
if (error_str) {
size_t error_len = strlen(error_str);
unsigned char *buf = malloc(LWS_PRE + error_len);
if (buf) {
memcpy(buf + LWS_PRE, error_str, error_len);
lws_write(wsi, buf + LWS_PRE, error_len, LWS_WRITE_TEXT);
free(buf);
// Use proper message queue system instead of direct lws_write
if (queue_message(wsi, pss, error_str, error_len, LWS_WRITE_TEXT) != 0) {
DEBUG_ERROR("Failed to queue error response message");
}
free(error_str);
}
@@ -625,16 +806,24 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
}
}
} else {
DEBUG_TRACE("Storing regular event in database");
// Regular event - store in database and broadcast
if (store_event(event) != 0) {
DEBUG_ERROR("Failed to store event in database");
result = -1;
strncpy(error_message, "error: failed to store event", sizeof(error_message) - 1);
} else {
DEBUG_LOG("Event stored and broadcast (kind %d)", event_kind);
// Broadcast event to matching persistent subscriptions
// Check if this is an ephemeral event (kinds 20000-29999)
// Per NIP-01: ephemeral events are broadcast but never stored
if (event_kind >= 20000 && event_kind < 30000) {
DEBUG_TRACE("Ephemeral event (kind %d) - broadcasting without storage", event_kind);
// Broadcast directly to subscriptions without database storage
broadcast_event_to_subscriptions(event);
} else {
DEBUG_TRACE("Storing regular event in database");
// Regular event - store in database and broadcast
if (store_event(event) != 0) {
DEBUG_ERROR("Failed to store event in database");
result = -1;
strncpy(error_message, "error: failed to store event", sizeof(error_message) - 1);
} else {
DEBUG_LOG("Event stored and broadcast (kind %d)", event_kind);
// Broadcast event to matching persistent subscriptions
broadcast_event_to_subscriptions(event);
}
}
}
} else {
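The ephemeral branch hinges on one range check from NIP-01 (the diff's own comment: kinds 20000-29999 are broadcast but never stored). A hypothetical helper equivalent to the inline test, shown only to make the boundary values explicit:

/* Hypothetical helper, not in this changeset; mirrors the inline check above.
 * NIP-01 ephemeral range: 20000 <= kind < 30000 (so 19999 and 30000 are not
 * ephemeral, and 20000 and 29999 are). */
static inline int is_ephemeral_kind(int kind) {
    return kind >= 20000 && kind < 30000;
}

With the helper, the branch reads: if is_ephemeral_kind(event_kind), broadcast only; otherwise store_event() first, then broadcast. tests/ephemeral_test.sh later in this changeset exercises exactly this path with a kind 20000 event.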
@@ -662,12 +851,18 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
char *response_str = cJSON_Print(response);
if (response_str) {
size_t response_len = strlen(response_str);
unsigned char *buf = malloc(LWS_PRE + response_len);
if (buf) {
memcpy(buf + LWS_PRE, response_str, response_len);
lws_write(wsi, buf + LWS_PRE, response_len, LWS_WRITE_TEXT);
free(buf);

// DEBUG: Log WebSocket frame details before sending
DEBUG_TRACE("WS_FRAME_SEND: type=OK len=%zu data=%.100s%s",
response_len,
response_str,
response_len > 100 ? "..." : "");

// Queue message for proper libwebsockets pattern
if (queue_message(wsi, pss, response_str, response_len, LWS_WRITE_TEXT) != 0) {
DEBUG_ERROR("Failed to queue OK response message");
}

free(response_str);
}
cJSON_Delete(response);
@@ -707,38 +902,10 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
return 0;
}

// Check subscription ID format and length
size_t id_len = strlen(subscription_id);
if (id_len == 0 || id_len >= SUBSCRIPTION_ID_MAX_LENGTH) {
send_notice_message(wsi, "error: subscription ID too long or empty");
DEBUG_WARN("REQ rejected: invalid subscription ID length");
cJSON_Delete(json);
free(message);
return 0;
}

// Validate characters in subscription ID
int valid_id = 1;
char invalid_char = '\0';
size_t invalid_pos = 0;
for (size_t i = 0; i < id_len; i++) {
char c = subscription_id[i];
if (!((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
(c >= '0' && c <= '9') || c == '_' || c == '-' || c == ':')) {
valid_id = 0;
invalid_char = c;
invalid_pos = i;
break;
}
}

if (!valid_id) {
char debug_msg[512];
snprintf(debug_msg, sizeof(debug_msg),
"REQ rejected: invalid character '%c' (0x%02X) at position %zu in subscription ID: '%s'",
invalid_char, (unsigned char)invalid_char, invalid_pos, subscription_id);
DEBUG_WARN(debug_msg);
send_notice_message(wsi, "error: invalid characters in subscription ID");
// Validate subscription ID
if (!validate_subscription_id(subscription_id)) {
send_notice_message(wsi, "error: invalid subscription ID");
DEBUG_WARN("REQ rejected: invalid subscription ID");
cJSON_Delete(json);
free(message);
return 0;
@@ -790,12 +957,18 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
char *eose_str = cJSON_Print(eose_response);
if (eose_str) {
size_t eose_len = strlen(eose_str);
unsigned char *buf = malloc(LWS_PRE + eose_len);
if (buf) {
memcpy(buf + LWS_PRE, eose_str, eose_len);
lws_write(wsi, buf + LWS_PRE, eose_len, LWS_WRITE_TEXT);
free(buf);

// DEBUG: Log WebSocket frame details before sending
DEBUG_TRACE("WS_FRAME_SEND: type=EOSE len=%zu data=%.100s%s",
eose_len,
eose_str,
eose_len > 100 ? "..." : "");

// Queue message for proper libwebsockets pattern
if (queue_message(wsi, pss, eose_str, eose_len, LWS_WRITE_TEXT) != 0) {
DEBUG_ERROR("Failed to queue EOSE message");
}

free(eose_str);
}
cJSON_Delete(eose_response);
@@ -866,38 +1039,31 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
return 0;
}

// Check subscription ID format and length
size_t id_len = strlen(subscription_id);
if (id_len == 0 || id_len >= SUBSCRIPTION_ID_MAX_LENGTH) {
send_notice_message(wsi, "error: subscription ID too long or empty in CLOSE");
DEBUG_WARN("CLOSE rejected: invalid subscription ID length");
// Validate subscription ID
if (!validate_subscription_id(subscription_id)) {
send_notice_message(wsi, "error: invalid subscription ID in CLOSE");
DEBUG_WARN("CLOSE rejected: invalid subscription ID");
cJSON_Delete(json);
free(message);
return 0;
}

// Validate characters in subscription ID
int valid_id = 1;
for (size_t i = 0; i < id_len; i++) {
char c = subscription_id[i];
if (!((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
(c >= '0' && c <= '9') || c == '_' || c == '-' || c == ':')) {
valid_id = 0;
// CRITICAL FIX: Mark subscription as inactive in global manager FIRST
// This prevents other threads from accessing it during removal
pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);

subscription_t* target_sub = g_subscription_manager.active_subscriptions;
while (target_sub) {
if (strcmp(target_sub->id, subscription_id) == 0 && target_sub->wsi == wsi) {
target_sub->active = 0; // Mark as inactive immediately
break;
}
target_sub = target_sub->next;
}

if (!valid_id) {
send_notice_message(wsi, "error: invalid characters in subscription ID for CLOSE");
DEBUG_WARN("CLOSE rejected: invalid characters in subscription ID");
cJSON_Delete(json);
free(message);
return 0;
}
pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);

// CRITICAL FIX: Remove from session list FIRST (while holding lock)
// to prevent race condition where global manager frees the subscription
// while we're still iterating through the session list
// Now safe to remove from session list
if (pss) {
pthread_mutex_lock(&pss->session_lock);

@@ -915,8 +1081,7 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
pthread_mutex_unlock(&pss->session_lock);
}

// Remove from global manager AFTER removing from session list
// This prevents use-after-free when iterating session subscriptions
// Finally remove from global manager (which will free it)
remove_subscription_from_manager(subscription_id, wsi);

// Subscription closed
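The CLOSE path above encodes a strict three-step teardown ordering, which is the actual race fix. A hedged outline that compresses just that ordering into one place (not a drop-in; names as they appear in this diff):

/* Sketch of the teardown ordering:
 * 1) flag inactive under the global lock, so matcher threads stop using it;
 * 2) unlink it from the per-session list, so no dangling session reference;
 * 3) only then let the global manager remove and free it. */
static void close_subscription_sketch(const char* sub_id, struct lws* wsi,
                                      struct per_session_data* pss) {
    pthread_mutex_lock(&g_subscription_manager.subscriptions_lock);
    for (subscription_t* s = g_subscription_manager.active_subscriptions; s; s = s->next) {
        if (strcmp(s->id, sub_id) == 0 && s->wsi == wsi) { s->active = 0; break; }
    }
    pthread_mutex_unlock(&g_subscription_manager.subscriptions_lock);

    if (pss) {
        pthread_mutex_lock(&pss->session_lock);
        /* ... unlink sub_id from pss->subscriptions here ... */
        pthread_mutex_unlock(&pss->session_lock);
    }

    remove_subscription_from_manager(sub_id, wsi);  /* frees it last */
}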
@@ -959,26 +1124,109 @@ static int nostr_relay_callback(struct lws *wsi, enum lws_callback_reasons reaso
}
break;

case LWS_CALLBACK_SERVER_WRITEABLE:
// Handle message queue when socket becomes writeable
if (pss) {
process_message_queue(wsi, pss);
}
break;

case LWS_CALLBACK_CLOSED:
DEBUG_TRACE("WebSocket connection closed");
DEBUG_LOG("WebSocket connection closed from %s", pss ? pss->client_ip : "unknown");

// Clean up session subscriptions
// Enhanced closure logging with detailed diagnostics
if (pss) {
// Calculate connection duration
time_t now = time(NULL);
long duration = (pss->connection_established > 0) ?
(long)(now - pss->connection_established) : 0;

// Determine closure reason
const char* reason = "client_disconnect";
if (g_shutdown_flag || !g_server_running) {
reason = "server_shutdown";
}

// Format authentication status
char auth_status[80];
if (pss->authenticated && strlen(pss->authenticated_pubkey) > 0) {
// Show first 8 chars of pubkey for identification
snprintf(auth_status, sizeof(auth_status), "yes(%.8s...)", pss->authenticated_pubkey);
} else {
snprintf(auth_status, sizeof(auth_status), "no");
}

// Log comprehensive closure information
DEBUG_LOG("WebSocket CLOSED: ip=%s duration=%lds subscriptions=%d authenticated=%s reason=%s",
pss->client_ip,
duration,
pss->subscription_count,
auth_status,
reason);

// Clean up message queue to prevent memory leaks
while (pss->message_queue_head) {
struct message_queue_node* node = pss->message_queue_head;
pss->message_queue_head = node->next;
free(node->data);
free(node);
}
pss->message_queue_tail = NULL;
pss->message_queue_count = 0;
pss->writeable_requested = 0;

// Clean up session subscriptions - copy IDs first to avoid use-after-free
pthread_mutex_lock(&pss->session_lock);

// First pass: collect subscription IDs safely
typedef struct temp_sub_id {
char id[SUBSCRIPTION_ID_MAX_LENGTH];
struct temp_sub_id* next;
} temp_sub_id_t;

temp_sub_id_t* temp_ids = NULL;
temp_sub_id_t* temp_tail = NULL;
int temp_count = 0;

struct subscription* sub = pss->subscriptions;
while (sub) {
struct subscription* next = sub->session_next;
remove_subscription_from_manager(sub->id, wsi);
sub = next;
if (sub->active) { // Only process active subscriptions
temp_sub_id_t* temp = malloc(sizeof(temp_sub_id_t));
if (temp) {
memcpy(temp->id, sub->id, SUBSCRIPTION_ID_MAX_LENGTH);
temp->id[SUBSCRIPTION_ID_MAX_LENGTH - 1] = '\0';
temp->next = NULL;

if (!temp_ids) {
temp_ids = temp;
temp_tail = temp;
} else {
temp_tail->next = temp;
temp_tail = temp;
}
temp_count++;
}
}
sub = sub->session_next;
}

// Clear session list immediately
pss->subscriptions = NULL;
pss->subscription_count = 0;

pthread_mutex_unlock(&pss->session_lock);

// Second pass: remove from global manager using copied IDs
temp_sub_id_t* current_temp = temp_ids;
while (current_temp) {
temp_sub_id_t* next_temp = current_temp->next;
remove_subscription_from_manager(current_temp->id, wsi);
free(current_temp);
current_temp = next_temp;
}
pthread_mutex_destroy(&pss->session_lock);
} else {
DEBUG_LOG("WebSocket CLOSED: ip=unknown duration=0s subscriptions=0 authenticated=no reason=unknown");
}
DEBUG_TRACE("WebSocket connection cleanup complete");
break;
@@ -1642,12 +1890,18 @@ int handle_count_message(const char* sub_id, cJSON* filters, struct lws *wsi, st
char *count_str = cJSON_Print(count_response);
if (count_str) {
size_t count_len = strlen(count_str);
unsigned char *buf = malloc(LWS_PRE + count_len);
if (buf) {
memcpy(buf + LWS_PRE, count_str, count_len);
lws_write(wsi, buf + LWS_PRE, count_len, LWS_WRITE_TEXT);
free(buf);

// DEBUG: Log WebSocket frame details before sending
DEBUG_TRACE("WS_FRAME_SEND: type=COUNT len=%zu data=%.100s%s",
count_len,
count_str,
count_len > 100 ? "..." : "");

// Queue message for proper libwebsockets pattern
if (queue_message(wsi, pss, count_str, count_len, LWS_WRITE_TEXT) != 0) {
DEBUG_ERROR("Failed to queue COUNT message");
}

free(count_str);
}
cJSON_Delete(count_response);

@@ -31,6 +31,14 @@
#define MAX_SEARCH_LENGTH 256
#define MAX_TAG_VALUE_LENGTH 1024

// Message queue node for proper libwebsockets pattern
struct message_queue_node {
unsigned char* data; // Message data (with LWS_PRE space)
size_t length; // Message length (without LWS_PRE)
enum lws_write_protocol type; // LWS_WRITE_TEXT, etc.
struct message_queue_node* next; // Next node in queue
};

// Enhanced per-session data with subscription management, NIP-42 authentication, and rate limiting
struct per_session_data {
int authenticated;
@@ -38,6 +46,7 @@ struct per_session_data {
pthread_mutex_t session_lock; // Per-session thread safety
char client_ip[CLIENT_IP_MAX_LENGTH]; // Client IP for logging
int subscription_count; // Number of subscriptions for this session
time_t connection_established; // When WebSocket connection was established

// NIP-42 Authentication State
char authenticated_pubkey[65]; // Authenticated public key (64 hex + null)
@@ -58,6 +67,12 @@ struct per_session_data {
int malformed_request_count; // Count of malformed requests in current hour
time_t malformed_request_window_start; // Start of current hour window
time_t malformed_request_blocked_until; // Time until blocked for malformed requests

// Message queue for proper libwebsockets pattern (replaces single buffer)
struct message_queue_node* message_queue_head; // Head of message queue
struct message_queue_node* message_queue_tail; // Tail of message queue
int message_queue_count; // Number of messages in queue
int writeable_requested; // Flag: 1 if writeable callback requested
};

// NIP-11 HTTP session data structure for managing buffer lifetime
@@ -72,6 +87,10 @@ struct nip11_session_data {
// Function declarations
int start_websocket_relay(int port_override, int strict_port);

// Message queue functions for proper libwebsockets pattern
int queue_message(struct lws* wsi, struct per_session_data* pss, const char* message, size_t length, enum lws_write_protocol type);
int process_message_queue(struct lws* wsi, struct per_session_data* pss);

// Auth rules checking function from request_validator.c
int check_database_auth_rules(const char *pubkey, const char *operation, const char *resource_hash);
40
systemd/c-relay-local.service
Normal file
@@ -0,0 +1,40 @@
[Unit]
Description=C Nostr Relay Server (Local Development)
Documentation=https://github.com/your-repo/c-relay
After=network.target
Wants=network-online.target

[Service]
Type=simple
User=teknari
WorkingDirectory=/home/teknari/Storage/c_relay
Environment=DEBUG_LEVEL=0
ExecStart=/home/teknari/Storage/c_relay/crelay --port 7777 --debug-level=$DEBUG_LEVEL
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=c-relay-local

# Security settings (relaxed for local development)
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/home/teknari/Storage/c_relay
PrivateTmp=true

# Network security
PrivateNetwork=false
RestrictAddressFamilies=AF_INET AF_INET6

# Resource limits
LimitNOFILE=65536
LimitNPROC=4096

# Event-based configuration system
# No environment variables needed - all configuration is stored as Nostr events
# Database files (<relay_pubkey>.db) are created automatically in WorkingDirectory
# Admin keys are generated and displayed only during first startup

[Install]
WantedBy=multi-user.target
35
tests/ephemeral_test.sh
Executable file
@@ -0,0 +1,35 @@
#!/bin/bash

# Simplified Ephemeral Event Test
# Tests that ephemeral events are broadcast to active subscriptions

echo "=== Generating Ephemeral Event (kind 20000) ==="
event=$(nak event --kind 20000 --content "test ephemeral event")
echo "$event"
echo ""

echo "=== Testing Ephemeral Event Broadcast ==="
subscription='["REQ","test_sub",{"kinds":[20000],"limit":10}]'
echo "Subscription Filter:"
echo "$subscription"
echo ""

event_msg='["EVENT",'"$event"']'
echo "Event Message:"
echo "$event_msg"
echo ""

echo "=== Relay Responses ==="
(
# Send subscription
printf "%s\n" "$subscription"
# Wait for subscription to establish
sleep 1
# Send ephemeral event on same connection
printf "%s\n" "$event_msg"
# Wait for responses
sleep 2
) | timeout 5 websocat ws://127.0.0.1:8888

echo ""
echo "Test complete!"
63
tests/large_event_test.sh
Executable file
@@ -0,0 +1,63 @@
#!/bin/bash

# Test script for posting large events (>4KB) to test partial write handling
# Uses nak to properly sign events with large content

RELAY_URL="ws://localhost:8888"

# Check if nak is installed
if ! command -v nak &> /dev/null; then
echo "Error: nak is not installed. Install with: go install github.com/fiatjaf/nak@latest"
exit 1
fi

# Generate a test private key if not set
if [ -z "$NOSTR_PRIVATE_KEY" ]; then
echo "Generating temporary test key..."
export NOSTR_PRIVATE_KEY=$(nak key generate)
fi

echo "=== Large Event Test ==="
echo "Testing partial write handling with events >4KB"
echo "Relay: $RELAY_URL"
echo ""

# Test 1: 5KB event
echo "Test 1: Posting 5KB event..."
CONTENT_5KB=$(python3 -c "print('A' * 5000)")
echo "$CONTENT_5KB" | nak event -k 1 --content - $RELAY_URL
sleep 1

# Test 2: 10KB event
echo ""
echo "Test 2: Posting 10KB event..."
CONTENT_10KB=$(python3 -c "print('B' * 10000)")
echo "$CONTENT_10KB" | nak event -k 1 --content - $RELAY_URL
sleep 1

# Test 3: 20KB event
echo ""
echo "Test 3: Posting 20KB event..."
CONTENT_20KB=$(python3 -c "print('C' * 20000)")
echo "$CONTENT_20KB" | nak event -k 1 --content - $RELAY_URL
sleep 1

# Test 4: 50KB event (very large)
echo ""
echo "Test 4: Posting 50KB event..."
CONTENT_50KB=$(python3 -c "print('D' * 50000)")
echo "$CONTENT_50KB" | nak event -k 1 --content - $RELAY_URL

echo ""
echo "=== Test Complete ==="
echo ""
echo "Check relay.log for:"
echo " - 'Queued partial write' messages (indicates buffering is working)"
echo " - 'write completed' messages (indicates retry succeeded)"
echo " - No 'Invalid frame header' errors"
echo ""
echo "To view logs in real-time:"
echo " tail -f relay.log | grep -E '(partial|write completed|Invalid frame)'"
echo ""
echo "To check if events were stored:"
echo " sqlite3 build/*.db 'SELECT id, length(content) as content_size FROM events ORDER BY created_at DESC LIMIT 4;'"
@@ -3,6 +3,19 @@
# Test script to post kind 1 events to the relay every second
# Cycles through three different secret keys
# Content includes current timestamp
#
# Usage: ./post_events.sh <relay_url>
# Example: ./post_events.sh ws://localhost:8888
# Example: ./post_events.sh wss://relay.laantungir.net

# Check if relay URL is provided
if [ -z "$1" ]; then
echo "Error: Relay URL is required"
echo "Usage: $0 <relay_url>"
echo "Example: $0 ws://localhost:8888"
echo "Example: $0 wss://relay.laantungir.net"
exit 1
fi

# Array of secret keys to cycle through
SECRET_KEYS=(
@@ -11,7 +24,7 @@ SECRET_KEYS=(
"1618aaa21f5bd45c5ffede0d9a60556db67d4a046900e5f66b0bae5c01c801fb"
)

RELAY_URL="ws://localhost:8888"
RELAY_URL="$1"
KEY_INDEX=0

echo "Starting event posting test to $RELAY_URL"
@@ -36,5 +49,5 @@ while true; do
KEY_INDEX=$(( (KEY_INDEX + 1) % ${#SECRET_KEYS[@]} ))

# Wait 1 second
sleep 1
sleep .2
done
@@ -1,203 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Rate Limiting Test Suite for C-Relay
|
||||
# Tests rate limiting and abuse prevention mechanisms
|
||||
|
||||
set -e
|
||||
|
||||
# Configuration
|
||||
RELAY_HOST="127.0.0.1"
|
||||
RELAY_PORT="8888"
|
||||
TEST_TIMEOUT=15
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Test counters
|
||||
TOTAL_TESTS=0
|
||||
PASSED_TESTS=0
|
||||
FAILED_TESTS=0
|
||||
|
||||
# Function to test rate limiting
|
||||
test_rate_limiting() {
|
||||
local description="$1"
|
||||
local message="$2"
|
||||
local burst_count="${3:-10}"
|
||||
local expected_limited="${4:-false}"
|
||||
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
|
||||
echo -n "Testing $description... "
|
||||
|
||||
local rate_limited=false
|
||||
local success_count=0
|
||||
local error_count=0
|
||||
|
||||
# Send burst of messages
|
||||
for i in $(seq 1 "$burst_count"); do
|
||||
local response
|
||||
response=$(echo "$message" | timeout 2 websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null | head -1 || echo 'TIMEOUT')
|
||||
|
||||
if [[ "$response" == *"rate limit"* ]] || [[ "$response" == *"too many"* ]] || [[ "$response" == *"TOO_MANY"* ]]; then
|
||||
rate_limited=true
|
||||
elif [[ "$response" == *"EOSE"* ]] || [[ "$response" == *"EVENT"* ]] || [[ "$response" == *"OK"* ]]; then
|
||||
((success_count++))
|
||||
else
|
||||
((error_count++))
|
||||
fi
|
||||
|
||||
# Small delay between requests
|
||||
sleep 0.05
|
||||
done
|
||||
|
||||
if [[ "$expected_limited" == "true" ]]; then
|
||||
if [[ "$rate_limited" == "true" ]]; then
|
||||
echo -e "${GREEN}PASSED${NC} - Rate limiting triggered as expected"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${RED}FAILED${NC} - Rate limiting not triggered (expected)"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
fi
|
||||
else
|
||||
if [[ "$rate_limited" == "false" ]]; then
|
||||
echo -e "${GREEN}PASSED${NC} - No rate limiting for normal traffic"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${YELLOW}UNCERTAIN${NC} - Unexpected rate limiting"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1)) # Count as passed since it's conservative
|
||||
return 0
|
||||
fi
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to test sustained load
|
||||
test_sustained_load() {
|
||||
local description="$1"
|
||||
local message="$2"
|
||||
local duration="${3:-10}"
|
||||
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
|
||||
echo -n "Testing $description... "
|
||||
|
||||
local start_time
|
||||
start_time=$(date +%s)
|
||||
local rate_limited=false
|
||||
local total_requests=0
|
||||
local successful_requests=0
|
||||
|
||||
while [[ $(($(date +%s) - start_time)) -lt duration ]]; do
|
||||
((total_requests++))
|
||||
local response
|
||||
response=$(echo "$message" | timeout 1 websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null | head -1 || echo 'TIMEOUT')
|
||||
|
||||
if [[ "$response" == *"rate limit"* ]] || [[ "$response" == *"too many"* ]] || [[ "$response" == *"TOO_MANY"* ]]; then
|
||||
rate_limited=true
|
||||
elif [[ "$response" == *"EOSE"* ]] || [[ "$response" == *"EVENT"* ]] || [[ "$response" == *"OK"* ]]; then
|
||||
((successful_requests++))
|
||||
fi
|
||||
|
||||
# Small delay to avoid overwhelming
|
||||
sleep 0.1
|
||||
done
|
||||
|
||||
local success_rate=0
|
||||
if [[ $total_requests -gt 0 ]]; then
|
||||
success_rate=$((successful_requests * 100 / total_requests))
|
||||
fi
|
||||
|
||||
if [[ "$rate_limited" == "true" ]]; then
|
||||
echo -e "${GREEN}PASSED${NC} - Rate limiting activated under sustained load (${success_rate}% success rate)"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${YELLOW}UNCERTAIN${NC} - No rate limiting detected (${success_rate}% success rate)"
|
||||
# This might be acceptable if rate limiting is very permissive
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
fi
|
||||
}

echo "=========================================="
echo "C-Relay Rate Limiting Test Suite"
echo "=========================================="
echo "Testing rate limiting against relay at ws://$RELAY_HOST:$RELAY_PORT"
echo ""

# Test basic connectivity first
echo "=== Basic Connectivity Test ==="
test_rate_limiting "Basic connectivity" '["REQ","rate_test",{}]' 1 false
echo ""

echo "=== Burst Request Testing ==="
# Test rapid succession of requests
test_rate_limiting "Rapid REQ messages" '["REQ","burst_req_'$(date +%s%N)'",{}]' 20 true
test_rate_limiting "Rapid COUNT messages" '["COUNT","burst_count_'$(date +%s%N)'",{}]' 20 true
test_rate_limiting "Rapid CLOSE messages" '["CLOSE","burst_close"]' 20 true
echo ""

echo "=== Malformed Message Rate Limiting ==="
# Test if malformed messages trigger rate limiting faster
test_rate_limiting "Malformed JSON burst" '["REQ","malformed"' 15 true
test_rate_limiting "Invalid message type burst" '["INVALID","test",{}]' 15 true
test_rate_limiting "Empty message burst" '[]' 15 true
echo ""

echo "=== Sustained Load Testing ==="
# Test sustained moderate load
test_sustained_load "Sustained REQ load" '["REQ","sustained_'$(date +%s%N)'",{}]' 10
test_sustained_load "Sustained COUNT load" '["COUNT","sustained_count_'$(date +%s%N)'",{}]' 10
echo ""

echo "=== Filter Complexity Testing ==="
# Test if complex filters trigger rate limiting
test_rate_limiting "Complex filter burst" '["REQ","complex_'$(date +%s%N)'",{"authors":["a","b","c"],"kinds":[1,2,3],"#e":["x","y","z"],"#p":["m","n","o"],"since":1000000000,"until":2000000000,"limit":100}]' 10 true
echo ""

echo "=== Subscription Management Testing ==="
# Test subscription creation/deletion rate limiting
echo -n "Testing subscription churn... "
churn_test_passed=true # plain assignment: 'local' is only valid inside a function
for i in $(seq 1 25); do
    # Create subscription
    echo "[\"REQ\",\"churn_${i}_$(date +%s%N)\",{}]" | timeout 1 websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1 || true

    # Close subscription
    echo "[\"CLOSE\",\"churn_${i}_*\"]" | timeout 1 websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1 || true

    sleep 0.05
done

# Check if relay is still responsive
if echo 'ping' | timeout 2 websocat -n1 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1; then
    echo -e "${GREEN}PASSED${NC} - Subscription churn handled"
    TOTAL_TESTS=$((TOTAL_TESTS + 1))
    PASSED_TESTS=$((PASSED_TESTS + 1))
else
    echo -e "${RED}FAILED${NC} - Relay unresponsive after subscription churn"
    TOTAL_TESTS=$((TOTAL_TESTS + 1))
    FAILED_TESTS=$((FAILED_TESTS + 1))
fi
echo ""

echo "=== Test Results ==="
echo "Total tests: $TOTAL_TESTS"
echo -e "Passed: ${GREEN}$PASSED_TESTS${NC}"
echo -e "Failed: ${RED}$FAILED_TESTS${NC}"

if [[ $FAILED_TESTS -eq 0 ]]; then
    echo -e "${GREEN}✓ All rate limiting tests passed!${NC}"
    echo "Rate limiting appears to be working correctly."
    exit 0
else
    echo -e "${RED}✗ Some rate limiting tests failed!${NC}"
    echo "Rate limiting may not be properly configured."
    exit 1
fi
448
tests/sql_test.sh
Executable file
448
tests/sql_test.sh
Executable file
@@ -0,0 +1,448 @@
#!/bin/bash

# SQL Query Admin API Test Script
# Tests the sql_query command functionality

set -e

# Configuration
RELAY_URL="ws://localhost:8888"
ADMIN_PRIVKEY="aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
ADMIN_PUBKEY="6a04ab98d9e4774ad806e302dddeb63bea16b5cb5f223ee77478e861bb583eb3"
RELAY_PUBKEY="4f355bdcb7cc0af728ef3cceb9615d90684bb5b2ca5f859ab0f0b704075871aa"
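
# Hedged note: the pubkeys above are kept in sync with the private key
# by hand. If the nak build in use supports `nak key public`, they
# could be derived instead (left commented out as an assumption):
# ADMIN_PUBKEY=$(nak key public "$ADMIN_PRIVKEY")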

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Test counters
TOTAL_TESTS=0
PASSED_TESTS=0
FAILED_TESTS=0

# Helper functions
print_test() {
    echo -e "${YELLOW}TEST: $1${NC}"
    TOTAL_TESTS=$((TOTAL_TESTS + 1))
}

print_pass() {
    echo -e "${GREEN}✓ PASS: $1${NC}"
    PASSED_TESTS=$((PASSED_TESTS + 1))
}

print_fail() {
    echo -e "${RED}✗ FAIL: $1${NC}"
    FAILED_TESTS=$((FAILED_TESTS + 1))
}

# Check if nak is installed
check_nak() {
    if ! command -v nak &> /dev/null; then
        echo -e "${RED}ERROR: nak command not found. Please install nak first.${NC}"
        echo -e "${RED}Visit: https://github.com/fiatjaf/nak${NC}"
        exit 1
    fi
    echo -e "${GREEN}✓ nak is available${NC}"
}

# Send SQL query command via WebSocket using nak
send_sql_query() {
    local query="$1"
    local description="$2"

    # Diagnostics go to stderr so that callers capturing stdout with
    # $(...) still see them and only receive the relay response.
    echo -n "Testing $description... " >&2

    # Create the admin command
    COMMAND="[\"sql_query\", \"$query\"]"

    # Encrypt the command using NIP-44
    ENCRYPTED_COMMAND=$(nak encrypt "$COMMAND" \
        --sec "$ADMIN_PRIVKEY" \
        --recipient-pubkey "$RELAY_PUBKEY" 2>/dev/null)

    if [ -z "$ENCRYPTED_COMMAND" ]; then
        echo -e "${RED}FAILED${NC} - Failed to encrypt admin command" >&2
        return 1
    fi

    # Create admin event
    ADMIN_EVENT=$(nak event \
        --kind 23456 \
        --content "$ENCRYPTED_COMMAND" \
        --sec "$ADMIN_PRIVKEY" \
        --tag "p=$RELAY_PUBKEY" 2>/dev/null)

    if [ -z "$ADMIN_EVENT" ]; then
        echo -e "${RED}FAILED${NC} - Failed to create admin event" >&2
        return 1
    fi

    echo "=== SENT EVENT ===" >&2
    echo "$ADMIN_EVENT" >&2
    echo "===================" >&2

    # Send SQL query event via WebSocket
    local response
    response=$(echo "$ADMIN_EVENT" | timeout 10 websocat -B 1048576 "$RELAY_URL" 2>/dev/null | head -3 || echo 'TIMEOUT')

    echo "=== RECEIVED RESPONSE ===" >&2
    echo "$response" >&2
    echo "==========================" >&2

    if [[ "$response" == *"TIMEOUT"* ]]; then
        echo -e "${RED}FAILED${NC} - Connection timeout" >&2
        echo 'TIMEOUT' # keep the sentinel on stdout for callers that check it
        return 1
    fi

    echo "$response" # Return the response for further processing
}
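
# --- Hedged sketch (not part of the original suite) ---
# The tests below grep the raw relay frames. If jq is available, and
# assuming the result JSON rides in the content field of an EVENT
# frame (not verified here), a field can be pulled out directly:
extract_row_count() {
    local frame="$1"
    echo "$frame" | jq -r 'select(.[0] == "EVENT") | .[2].content | fromjson | .row_count' 2>/dev/null
}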

# Test functions
test_valid_select() {
    print_test "Valid SELECT query"
    local response=$(send_sql_query "SELECT * FROM events LIMIT 1" "valid SELECT query")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"query_type":"sql_query"' && echo "$response" | grep -q '"row_count"'; then
        print_pass "Valid SELECT accepted and executed"
    else
        print_fail "Valid SELECT failed: $response"
    fi
}

test_select_count() {
    print_test "SELECT COUNT(*) query"
    local response=$(send_sql_query "SELECT COUNT(*) FROM events" "COUNT query")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"query_type":"sql_query"' && echo "$response" | grep -q '"row_count"'; then
        print_pass "COUNT query executed successfully"
    else
        print_fail "COUNT query failed: $response"
    fi
}

test_blocked_insert() {
    print_test "INSERT statement blocked"
    local response=$(send_sql_query "INSERT INTO events VALUES ('id', 'pubkey', 1234567890, 1, 'content', 'sig')" "INSERT blocking")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"status":"error"' && echo "$response" | grep -q '"error_type":"blocked_statement"'; then
        print_pass "INSERT correctly blocked"
    else
        print_fail "INSERT not blocked: $response"
    fi
}

test_blocked_update() {
    print_test "UPDATE statement blocked"
    local response=$(send_sql_query "UPDATE events SET content = 'test' WHERE id = 'abc123'" "UPDATE blocking")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"status":"error"' && echo "$response" | grep -q '"error_type":"blocked_statement"'; then
        print_pass "UPDATE correctly blocked"
    else
        print_fail "UPDATE not blocked: $response"
    fi
}

test_blocked_delete() {
    print_test "DELETE statement blocked"
    local response=$(send_sql_query "DELETE FROM events WHERE id = 'abc123'" "DELETE blocking")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"status":"error"' && echo "$response" | grep -q '"error_type":"blocked_statement"'; then
        print_pass "DELETE correctly blocked"
    else
        print_fail "DELETE not blocked: $response"
    fi
}

test_blocked_drop() {
    print_test "DROP statement blocked"
    local response=$(send_sql_query "DROP TABLE events" "DROP blocking")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"status":"error"' && echo "$response" | grep -q '"error_type":"blocked_statement"'; then
        print_pass "DROP correctly blocked"
    else
        print_fail "DROP not blocked: $response"
    fi
}

test_blocked_create() {
    print_test "CREATE statement blocked"
    local response=$(send_sql_query "CREATE TABLE test (id TEXT)" "CREATE blocking")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"status":"error"' && echo "$response" | grep -q '"error_type":"blocked_statement"'; then
        print_pass "CREATE correctly blocked"
    else
        print_fail "CREATE not blocked: $response"
    fi
}

test_blocked_alter() {
    print_test "ALTER statement blocked"
    local response=$(send_sql_query "ALTER TABLE events ADD COLUMN test TEXT" "ALTER blocking")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"status":"error"' && echo "$response" | grep -q '"error_type":"blocked_statement"'; then
        print_pass "ALTER correctly blocked"
    else
        print_fail "ALTER not blocked: $response"
    fi
}

test_blocked_pragma() {
    print_test "PRAGMA statement blocked"
    local response=$(send_sql_query "PRAGMA table_info(events)" "PRAGMA blocking")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"status":"error"' && echo "$response" | grep -q '"error_type":"blocked_statement"'; then
        print_pass "PRAGMA correctly blocked"
    else
        print_fail "PRAGMA not blocked: $response"
    fi
}
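
# --- Hedged sketch (not part of the original suite) ---
# The eight blocked-statement tests above share one shape. A
# table-driven variant would keep them in sync if the blocked list
# grows; it is not wired into the suite.
test_blocked_statements_table() {
    local stmt
    for stmt in \
        "INSERT INTO events VALUES ('x')" \
        "UPDATE events SET content = 'x'" \
        "DELETE FROM events" \
        "DROP TABLE events" \
        "CREATE TABLE t (id TEXT)" \
        "ALTER TABLE events ADD COLUMN t TEXT" \
        "PRAGMA table_info(events)"; do
        print_test "Blocked statement: ${stmt%% *}"
        local response=$(send_sql_query "$stmt" "blocked statement")
        if echo "$response" | grep -q '"error_type":"blocked_statement"'; then
            print_pass "${stmt%% *} correctly blocked"
        else
            print_fail "${stmt%% *} not blocked: $response"
        fi
    done
}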

test_select_with_where() {
    print_test "SELECT with WHERE clause"
    local response=$(send_sql_query "SELECT id, kind FROM events WHERE kind = 1 LIMIT 5" "WHERE clause query")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"query_type":"sql_query"'; then
        print_pass "WHERE clause query executed"
    else
        print_fail "WHERE clause query failed: $response"
    fi
}

test_select_with_join() {
    print_test "SELECT with JOIN"
    local response=$(send_sql_query "SELECT e.id, e.kind, s.events_sent FROM events e LEFT JOIN active_subscriptions_log s ON e.id = s.subscription_id LIMIT 3" "JOIN query")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"query_type":"sql_query"'; then
        print_pass "JOIN query executed"
    else
        print_fail "JOIN query failed: $response"
    fi
}

test_select_views() {
    print_test "SELECT from views"
    local response=$(send_sql_query "SELECT * FROM event_kinds_view LIMIT 5" "view query")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"query_type":"sql_query"'; then
        print_pass "View query executed"
    else
        print_fail "View query failed: $response"
    fi
}

test_nonexistent_table() {
    print_test "Query nonexistent table"
    local response=$(send_sql_query "SELECT * FROM nonexistent_table" "nonexistent table")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"status":"error"'; then
        print_pass "Nonexistent table error handled correctly"
    else
        print_fail "Nonexistent table error not handled: $response"
    fi
}

test_invalid_syntax() {
    print_test "Invalid SQL syntax"
    local response=$(send_sql_query "SELECT * FROM events WHERE" "invalid syntax")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"status":"error"'; then
        print_pass "Invalid syntax error handled"
    else
        print_fail "Invalid syntax not handled: $response"
    fi
}

test_request_id_correlation() {
    print_test "Request ID correlation"
    local response=$(send_sql_query "SELECT * FROM events LIMIT 1" "request ID correlation")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"request_id"'; then
        print_pass "Request ID included in response"
    else
        print_fail "Request ID missing from response: $response"
    fi
}

test_response_format() {
    print_test "Response format validation"
    local response=$(send_sql_query "SELECT * FROM events LIMIT 1" "response format")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"query_type":"sql_query"' &&
       echo "$response" | grep -q '"timestamp"' &&
       echo "$response" | grep -q '"execution_time_ms"' &&
       echo "$response" | grep -q '"row_count"' &&
       echo "$response" | grep -q '"columns"' &&
       echo "$response" | grep -q '"rows"'; then
        print_pass "Response format is valid"
    else
        print_fail "Response format invalid: $response"
    fi
}
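
# --- Hedged sketch (not part of the original suite) ---
# grep -q passes even when a key only appears inside row data. Given
# the decoded result object (same content-field assumption as the
# extract_row_count sketch above), a stricter jq shape check would
# look roughly like this:
validate_response_shape() {
    echo "$1" | jq -e 'has("query_type") and has("timestamp")
        and has("execution_time_ms") and has("row_count")
        and has("columns") and has("rows")' >/dev/null 2>&1
}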

test_empty_result() {
    print_test "Empty result set"
    local response=$(send_sql_query "SELECT * FROM events WHERE kind = 99999" "empty result")

    if [[ "$response" == *"TIMEOUT"* ]]; then
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi

    if echo "$response" | grep -q '"query_type":"sql_query"'; then
        print_pass "Empty result handled correctly"
    else
        print_fail "Empty result not handled: $response"
    fi
}

echo "=========================================="
echo "C-Relay SQL Query Admin API Testing Suite"
echo "=========================================="
echo "Testing SQL query functionality at $RELAY_URL"
echo ""

# Check prerequisites
check_nak

# Test basic connectivity first
echo "=== Basic Connectivity Test ==="
print_test "Basic connectivity"
response=$(send_sql_query "SELECT 1" "basic connectivity") || true # || true keeps set -e from aborting on a failed probe

if [[ "$response" == *"TIMEOUT"* ]]; then
    echo -e "${RED}FAILED${NC} - Cannot connect to relay at $RELAY_URL"
    echo "Make sure the relay is running and accessible."
    exit 1
else
    print_pass "Relay connection established"
fi
echo ""

# Run test suites
echo "=== Query Validation Tests ==="
test_valid_select
test_select_count
test_blocked_insert
test_blocked_update
test_blocked_delete
test_blocked_drop
test_blocked_create
test_blocked_alter
test_blocked_pragma
echo ""

echo "=== Query Execution Tests ==="
test_select_with_where
test_select_with_join
test_select_views
test_empty_result
echo ""

echo "=== Error Handling Tests ==="
test_nonexistent_table
test_invalid_syntax
echo ""

echo "=== Response Format Tests ==="
test_request_id_correlation
test_response_format
echo ""

echo "=== Test Results ==="
echo "Total tests: $TOTAL_TESTS"
echo -e "Passed: ${GREEN}$PASSED_TESTS${NC}"
echo -e "Failed: ${RED}$FAILED_TESTS${NC}"

if [[ $FAILED_TESTS -eq 0 ]]; then
    echo -e "${GREEN}✓ All SQL query tests passed!${NC}"
    echo "SQL query admin API is working correctly."
    exit 0
else
    echo -e "${RED}✗ Some SQL query tests failed!${NC}"
    echo "SQL query admin API may have issues."
    exit 1
fi
1
text_graph
Submodule
1
text_graph
Submodule
Submodule text_graph added at bf1785f372