5 Commits

Author SHA1 Message Date
Your Name 840a5bbf5f v0.1.24 - . 2025-12-21 09:17:43 -04:00
Your Name 0f420fc6d0 v0.1.23 - Fixed auth rules on web page 2025-12-16 09:18:37 -04:00
Your Name 29e2421771 v0.1.22 - Cleaned out legacy websocket code in index.js 2025-12-16 07:50:16 -04:00
Your Name cce1f2f0fd v0.1.21 - Fixed some web page errors. About to clean out websocket functions in index.js. Last push before we do this. 2025-12-16 07:17:19 -04:00
Your Name 281c686fde v0.1.20 - Fixed auth white and black lists 2025-12-16 06:54:26 -04:00
22 changed files with 40896 additions and 18460 deletions

View File

@@ -431,6 +431,13 @@ All commands are sent as NIP-44 encrypted JSON arrays in the event content:
| `storage_stats` | `["storage_stats"]` | Get detailed storage statistics |
| `mirror_status` | `["mirror_status"]` | Get status of mirroring operations |
| `report_query` | `["report_query", "all"]` | Query content reports (BUD-09) |
| **Authorization Rules Management** |
| `auth_add_blacklist` | `["blacklist", "pubkey", "abc123..."]` | Add pubkey to blacklist |
| `auth_add_whitelist` | `["whitelist", "pubkey", "def456..."]` | Add pubkey to whitelist |
| `auth_delete_rule` | `["delete_auth_rule", "blacklist", "pubkey", "abc123..."]` | Delete specific auth rule |
| `auth_query_all` | `["auth_query", "all"]` | Query all auth rules |
| `auth_query_type` | `["auth_query", "whitelist"]` | Query specific rule type |
| `auth_query_pattern` | `["auth_query", "pattern", "abc123..."]` | Query specific pattern |
| **Database Queries** |
| `sql_query` | `["sql_query", "SELECT * FROM blobs LIMIT 10"]` | Execute read-only SQL query |
@@ -448,10 +455,16 @@ All commands are sent as NIP-44 encrypted JSON arrays in the event content:
- `kind_10002_tags`: Relay list JSON array
**Authentication Settings:**
- `auth_enabled`: Enable auth rules system
- `auth_rules_enabled`: Enable auth rules system
- `require_auth_upload`: Require authentication for uploads
- `require_auth_delete`: Require authentication for deletes
**Authorization Rules:**
- `rule_type`: Type of rule (`pubkey_blacklist`, `pubkey_whitelist`, `hash_blacklist`, `mime_blacklist`, `mime_whitelist`)
- `pattern_type`: Pattern matching type (`pubkey`, `hash`, `mime`)
- `pattern_value`: The actual value to match (64-char hex for pubkey/hash, MIME type string for mime)
- `active`: Whether rule is active (1) or disabled (0)
**Limits:**
- `max_blobs_per_user`: Per-user blob limit
- `rate_limit_uploads`: Uploads per minute
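
The rows added above use the same plaintext command-array shape as the other admin commands. A minimal sketch, assuming the cJSON library already used in this repository (the include path and the pubkey value are placeholders), of building the `blacklist` payload before NIP-44 encryption:

```c
#include <stdio.h>
#include <stdlib.h>
#include <cjson/cJSON.h>   /* include path may differ in this repo */

/* Build the plaintext ["blacklist", "pubkey", "<hex>"] command array.
 * The JSON string would then be NIP-44 encrypted into the admin event
 * content; encryption is not shown here. */
static char *build_blacklist_command(const char *pubkey_hex) {
    cJSON *cmd = cJSON_CreateArray();
    cJSON_AddItemToArray(cmd, cJSON_CreateString("blacklist"));
    cJSON_AddItemToArray(cmd, cJSON_CreateString("pubkey"));
    cJSON_AddItemToArray(cmd, cJSON_CreateString(pubkey_hex));
    char *json = cJSON_PrintUnformatted(cmd);   /* caller frees with free() */
    cJSON_Delete(cmd);
    return json;
}

int main(void) {
    char *payload = build_blacklist_command("abc123...");   /* placeholder pubkey */
    printf("%s\n", payload);   /* ["blacklist","pubkey","abc123..."] */
    free(payload);
    return 0;
}
```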

View File

@@ -4,7 +4,7 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Blossom Admin 2</title>
<title>Blossom Admin</title>
<link rel="stylesheet" href="/api/index.css">
</head>
@@ -252,7 +252,7 @@ AUTH RULES MANAGEMENT
</div>
<!-- Auth Rules Table -->
<div id="authRulesTableContainer" style="display: none;">
<div id="authRulesTableContainer" class="config-table-container">
<table class="config-table" id="authRulesTable">
<thead>
<tr>
@@ -264,6 +264,9 @@ AUTH RULES MANAGEMENT
</tr>
</thead>
<tbody id="authRulesTableBody">
<tr>
<td colspan="5" style="text-align: center; font-style: italic;">Loading auth rules...</td>
</tr>
</tbody>
</table>
</div>
@@ -275,8 +278,8 @@ AUTH RULES MANAGEMENT
<div class="input-group">
<label for="authRulePubkey">Pubkey (nsec or hex):</label>
<input type="text" id="authRulePubkey" placeholder="nsec1... or 64-character hex pubkey">
<label for="authRulePubkey">Pubkey (npub or hex):</label>
<input type="text" id="authRulePubkey" placeholder="npub1... or 64-character hex pubkey">
</div>
<div id="whitelistWarning" class="warning-box" style="display: none;">

File diff suppressed because it is too large

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

debug.log (27138 lines)

File diff suppressed because it is too large

docs/AUTH_RULES_STATUS.md (new file, 302 lines)
View File

@@ -0,0 +1,302 @@
# Auth Rules Management System - Current Status
## Executive Summary
The auth rules management system is **fully implemented** with a database schema that differs from c-relay. This document outlines the current state and proposes alignment with c-relay's schema.
## Current Database Schema
### Ginxsom Schema (Current)
```sql
CREATE TABLE auth_rules (
id INTEGER PRIMARY KEY AUTOINCREMENT,
rule_type TEXT NOT NULL, -- 'pubkey_blacklist', 'pubkey_whitelist', etc.
rule_target TEXT NOT NULL, -- The pubkey, hash, or MIME type to match
operation TEXT NOT NULL DEFAULT '*', -- 'upload', 'delete', 'list', or '*'
enabled INTEGER NOT NULL DEFAULT 1, -- 1 = enabled, 0 = disabled
priority INTEGER NOT NULL DEFAULT 100,-- Lower number = higher priority
description TEXT, -- Human-readable description
created_by TEXT, -- Admin pubkey who created the rule
created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
updated_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
CHECK (rule_type IN ('pubkey_blacklist', 'pubkey_whitelist',
'hash_blacklist', 'mime_blacklist', 'mime_whitelist')),
CHECK (operation IN ('upload', 'delete', 'list', '*')),
CHECK (enabled IN (0, 1)),
CHECK (priority >= 0),
UNIQUE(rule_type, rule_target, operation)
);
```
### C-Relay Schema (Target)
```sql
CREATE TABLE auth_rules (
id INTEGER PRIMARY KEY AUTOINCREMENT,
rule_type TEXT NOT NULL,
pattern_type TEXT NOT NULL,
pattern_value TEXT NOT NULL,
active INTEGER NOT NULL DEFAULT 1,
created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
updated_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now'))
);
```
## Schema Differences
| Field | Ginxsom | C-Relay | Notes |
|-------|---------|---------|-------|
| `id` | ✅ | ✅ | Same |
| `rule_type` | ✅ | ✅ | Same |
| `rule_target` | ✅ | ❌ | Ginxsom-specific |
| `pattern_type` | ❌ | ✅ | C-relay-specific |
| `pattern_value` | ❌ | ✅ | C-relay-specific |
| `operation` | ✅ | ❌ | Ginxsom-specific |
| `enabled` | ✅ (1/0) | ❌ | Ginxsom uses `enabled` |
| `active` | ❌ | ✅ (1/0) | C-relay uses `active` |
| `priority` | ✅ | ❌ | Ginxsom-specific |
| `description` | ✅ | ❌ | Ginxsom-specific |
| `created_by` | ✅ | ❌ | Ginxsom-specific |
| `created_at` | ✅ | ✅ | Same |
| `updated_at` | ✅ | ✅ | Same |
## What Has Been Implemented
### ✅ Database Layer
- **Schema Created**: [`auth_rules`](../db/52e366edfa4e9cc6a6d4653828e51ccf828a2f5a05227d7a768f33b5a198681a.db) table exists with full schema
- **Indexes**: 5 indexes for performance optimization
- **Constraints**: CHECK constraints for data validation
- **Unique Constraint**: Prevents duplicate rules
### ✅ Rule Evaluation Engine
Location: [`src/request_validator.c:1318-1592`](../src/request_validator.c#L1318-L1592)
**Implemented Features:**
1. **Pubkey Blacklist** (Priority 1) - Lines 1346-1377
2. **Hash Blacklist** (Priority 2) - Lines 1382-1420
3. **MIME Blacklist** (Priority 3) - Lines 1423-1462
4. **Pubkey Whitelist** (Priority 4) - Lines 1464-1491
5. **MIME Whitelist** (Priority 5) - Lines 1493-1526
6. **Whitelist Default Denial** (Priority 6) - Lines 1528-1591
**Features:**
- ✅ Priority-based rule evaluation
- ✅ Wildcard operation matching (`*`)
- ✅ MIME type pattern matching (`image/*`)
- ✅ Whitelist default-deny behavior
- ✅ Detailed violation tracking
- ✅ Performance-optimized queries
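
A minimal sketch of the evaluation order listed above, with the database lookups abstracted behind illustrative helpers (the helper names and the "allow when no whitelist rules exist" fallback are assumptions; the real logic lives in the SQL queries in `request_validator.c`):

```c
#include <fnmatch.h>   /* POSIX fnmatch() for wildcard MIME patterns such as "image/*" */

/* Illustrative signatures only; in ginxsom each check is a SQL query
 * against the auth_rules table. */
int pubkey_blacklisted(const char *pubkey);
int hash_blacklisted(const char *hash);
int mime_blacklisted(const char *mime);
int pubkey_whitelisted(const char *pubkey);
int mime_whitelisted(const char *mime);
int any_whitelist_rules_exist(void);

/* Wildcard MIME match, e.g. pattern "image/*" matches "image/png". */
int mime_pattern_matches(const char *pattern, const char *mime) {
    return fnmatch(pattern, mime, 0) == 0;
}

/* Returns 1 = allow, 0 = deny, following priorities 1-6 above.
 * The blacklist-only fallback is inferred from the default-deny
 * description, not verified against the code. */
int evaluate_auth_rules(const char *pubkey, const char *hash, const char *mime) {
    if (pubkey_blacklisted(pubkey))        return 0;   /* Priority 1 */
    if (hash && hash_blacklisted(hash))    return 0;   /* Priority 2 */
    if (mime && mime_blacklisted(mime))    return 0;   /* Priority 3 */
    if (pubkey_whitelisted(pubkey))        return 1;   /* Priority 4 */
    if (mime && mime_whitelisted(mime))    return 1;   /* Priority 5 */
    if (any_whitelist_rules_exist())       return 0;   /* Priority 6: default deny */
    return 1;                                          /* blacklist-only mode */
}
```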
### ✅ Admin API Commands
Location: [`src/admin_commands.c`](../src/admin_commands.c)
**Implemented Commands:**
- `config_query` - Query configuration values
- `config_update` - Update configuration
- `stats_query` - Get system statistics (includes auth_rules count)
- `system_status` - System health check
- `blob_list` - List stored blobs
- `storage_stats` - Storage statistics
- `sql_query` - Direct SQL queries (read-only)
**Note:** The `stats_query` command already queries `auth_rules`:
```c
// Line 390-395
sql = "SELECT COUNT(*) FROM auth_rules WHERE enabled = 1";
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK && sqlite3_step(stmt) == SQLITE_ROW) {
cJSON_AddNumberToObject(stats, "active_auth_rules", sqlite3_column_int(stmt, 0));
}
```
### ❌ Missing Admin API Endpoints
The following endpoints from [`docs/AUTH_RULES_IMPLEMENTATION_PLAN.md`](../docs/AUTH_RULES_IMPLEMENTATION_PLAN.md) are **NOT implemented**:
1. **GET /api/rules** - List authentication rules
2. **POST /api/rules** - Create new rule
3. **PUT /api/rules/:id** - Update existing rule
4. **DELETE /api/rules/:id** - Delete rule
5. **POST /api/rules/clear-cache** - Clear auth cache
6. **GET /api/rules/test** - Test rule evaluation
### ✅ Configuration System
- ✅ `auth_rules_enabled` config flag (checked in [`reload_auth_config()`](../src/request_validator.c#L1049-L1145))
- ✅ Cache system with 5-minute TTL
- ✅ Environment variable support (`GINX_NO_CACHE`, `GINX_CACHE_TIMEOUT`)
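
A minimal sketch of the TTL check those settings imply, assuming a 300-second default; the struct and function names here are illustrative, not taken from the source:

```c
#include <stdlib.h>
#include <time.h>

#define DEFAULT_CACHE_TTL_SECONDS 300   /* 5-minute TTL described above */

typedef struct {
    time_t loaded_at;           /* when the auth config was last reloaded */
    int    auth_rules_enabled;  /* cached copy of the config flag */
} cached_auth_config_t;

/* Returns 1 if the cached config should be reloaded from the database. */
static int cache_expired(const cached_auth_config_t *cfg) {
    if (getenv("GINX_NO_CACHE"))            /* bypass caching entirely */
        return 1;

    long ttl = DEFAULT_CACHE_TTL_SECONDS;
    const char *override = getenv("GINX_CACHE_TIMEOUT");
    if (override)                            /* override TTL in seconds */
        ttl = strtol(override, NULL, 10);

    return (time(NULL) - cfg->loaded_at) >= ttl;
}
```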
### ✅ Documentation
- ✅ [`docs/AUTH_API.md`](../docs/AUTH_API.md) - Complete authentication flow
- ✅ [`docs/AUTH_RULES_IMPLEMENTATION_PLAN.md`](../docs/AUTH_RULES_IMPLEMENTATION_PLAN.md) - Implementation plan
- ✅ Flow diagrams and performance metrics
## Proposed Schema Migration to C-Relay Format
### Option 1: Minimal Changes (Recommended)
Keep Ginxsom's richer schema but rename fields for compatibility:
```sql
ALTER TABLE auth_rules RENAME COLUMN enabled TO active;
ALTER TABLE auth_rules ADD COLUMN pattern_type TEXT;
ALTER TABLE auth_rules ADD COLUMN pattern_value TEXT;
-- Populate new fields from existing data
UPDATE auth_rules SET
pattern_type = CASE
WHEN rule_type LIKE '%pubkey%' THEN 'pubkey'
WHEN rule_type LIKE '%hash%' THEN 'hash'
WHEN rule_type LIKE '%mime%' THEN 'mime'
END,
pattern_value = rule_target;
```
**Pros:**
- Maintains all Ginxsom features (operation, priority, description)
- Adds c-relay compatibility fields
- No data loss
- Backward compatible
**Cons:**
- Redundant fields (`rule_target` + `pattern_value`)
- Larger schema
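
One way to tolerate the redundant columns during the transition is to have queries accept either field; a sketch under that assumption (not code from this commit), using SQLite's `COALESCE`:

```c
#include <sqlite3.h>

/* Pubkey blacklist lookup that accepts rows written with either the legacy
 * rule_target column or the new pattern_value column (Option 1 keeps both). */
static const char *TRANSITIONAL_BLACKLIST_SQL =
    "SELECT rule_type FROM auth_rules "
    "WHERE rule_type = 'pubkey_blacklist' "
    "  AND COALESCE(pattern_value, rule_target) = ? "
    "  AND active = 1 "
    "LIMIT 1";

int pubkey_is_blacklisted(sqlite3 *db, const char *pubkey_hex) {
    sqlite3_stmt *stmt = NULL;
    int hit = 0;
    if (sqlite3_prepare_v2(db, TRANSITIONAL_BLACKLIST_SQL, -1, &stmt, NULL) == SQLITE_OK) {
        sqlite3_bind_text(stmt, 1, pubkey_hex, -1, SQLITE_STATIC);
        hit = (sqlite3_step(stmt) == SQLITE_ROW);
    }
    sqlite3_finalize(stmt);   /* no-op if stmt is NULL */
    return hit;
}
```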
### Option 2: Full Migration to C-Relay Schema
Drop Ginxsom-specific fields and adopt c-relay schema:
```sql
-- Create new table with c-relay schema
CREATE TABLE auth_rules_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
rule_type TEXT NOT NULL,
pattern_type TEXT NOT NULL,
pattern_value TEXT NOT NULL,
active INTEGER NOT NULL DEFAULT 1,
created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
updated_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now'))
);
-- Migrate data
INSERT INTO auth_rules_new (id, rule_type, pattern_type, pattern_value, active, created_at, updated_at)
SELECT
id,
rule_type,
CASE
WHEN rule_type LIKE '%pubkey%' THEN 'pubkey'
WHEN rule_type LIKE '%hash%' THEN 'hash'
WHEN rule_type LIKE '%mime%' THEN 'mime'
END as pattern_type,
rule_target as pattern_value,
enabled as active,
created_at,
updated_at
FROM auth_rules;
-- Replace old table
DROP TABLE auth_rules;
ALTER TABLE auth_rules_new RENAME TO auth_rules;
```
**Pros:**
- Full c-relay compatibility
- Simpler schema
- Smaller database
**Cons:**
- **Loss of operation-specific rules** (upload/delete/list)
- **Loss of priority system**
- **Loss of description and created_by tracking**
- **Breaking change** - requires code updates in [`request_validator.c`](../src/request_validator.c)
## Code Impact Analysis
### Files Requiring Updates for C-Relay Schema
1. **[`src/request_validator.c`](../src/request_validator.c)**
- Lines 1346-1591: Rule evaluation queries need field name changes
- Change `enabled` → `active`
- Change `rule_target` → `pattern_value`
- Add `pattern_type` to queries if using Option 1
2. **[`src/admin_commands.c`](../src/admin_commands.c)**
- Line 390: Stats query uses `enabled` field
- Any future rule management endpoints
3. **[`docs/AUTH_RULES_IMPLEMENTATION_PLAN.md`](../docs/AUTH_RULES_IMPLEMENTATION_PLAN.md)**
- Update schema documentation
- Update API endpoint specifications
## Recommendations
### For C-Relay Alignment
**Use Option 1 (Minimal Changes)** because:
1. Preserves Ginxsom's advanced features (operation-specific rules, priority)
2. Adds c-relay compatibility without breaking existing functionality
3. Minimal code changes required
4. No data loss
### For Admin API Completion
Implement the missing endpoints in priority order:
1. **POST /api/rules** - Create rules (highest priority)
2. **GET /api/rules** - List rules
3. **DELETE /api/rules/:id** - Delete rules
4. **PUT /api/rules/:id** - Update rules
5. **GET /api/rules/test** - Test rules
6. **POST /api/rules/clear-cache** - Clear cache
### Migration Script
```bash
#!/bin/bash
# migrate_auth_rules_to_crelay.sh
DB_PATH="db/52e366edfa4e9cc6a6d4653828e51ccf828a2f5a05227d7a768f33b5a198681a.db"
sqlite3 "$DB_PATH" <<EOF
-- Backup current table
CREATE TABLE auth_rules_backup AS SELECT * FROM auth_rules;
-- Add c-relay compatibility fields
ALTER TABLE auth_rules ADD COLUMN pattern_type TEXT;
ALTER TABLE auth_rules ADD COLUMN pattern_value TEXT;
-- Populate new fields
UPDATE auth_rules SET
pattern_type = CASE
WHEN rule_type LIKE '%pubkey%' THEN 'pubkey'
WHEN rule_type LIKE '%hash%' THEN 'hash'
WHEN rule_type LIKE '%mime%' THEN 'mime'
END,
pattern_value = rule_target;
-- Rename enabled to active for c-relay compatibility
-- Note: SQLite doesn't support RENAME COLUMN directly in older versions
-- So we'll keep both fields for now
ALTER TABLE auth_rules ADD COLUMN active INTEGER NOT NULL DEFAULT 1;
UPDATE auth_rules SET active = enabled;
-- Verify migration
SELECT COUNT(*) as total_rules FROM auth_rules;
SELECT COUNT(*) as rules_with_pattern FROM auth_rules WHERE pattern_type IS NOT NULL;
EOF
```
## Summary
**Current State:**
- ✅ Database schema exists and is functional
- ✅ Rule evaluation engine fully implemented
- ✅ Configuration system working
- ✅ Documentation complete
- ❌ Admin API endpoints for rule management missing
**To Align with C-Relay:**
- Add `pattern_type` and `pattern_value` fields
- Optionally rename `enabled` to `active`
- Keep Ginxsom's advanced features (operation, priority, description)
- Update queries to use new field names
**Next Steps:**
1. Decide on migration strategy (Option 1 recommended)
2. Run migration script
3. Update code to use new field names
4. Implement missing Admin API endpoints
5. Test rule evaluation with new schema
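
For step 5, a quick standalone sanity check of the new schema and query shape (a sketch against an in-memory database, not part of this commit's test suite):

```c
#include <stdio.h>
#include <sqlite3.h>

/* Minimal check of the c-relay compatible schema and the new-style lookup. */
int main(void) {
    sqlite3 *db = NULL;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

    const char *setup =
        "CREATE TABLE auth_rules ("
        "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
        "  rule_type TEXT NOT NULL,"
        "  pattern_type TEXT NOT NULL,"
        "  pattern_value TEXT NOT NULL,"
        "  active INTEGER NOT NULL DEFAULT 1,"
        "  created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),"
        "  updated_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')));"
        "INSERT INTO auth_rules (rule_type, pattern_type, pattern_value) "
        "  VALUES ('pubkey_blacklist', 'pubkey', 'abc123');";
    if (sqlite3_exec(db, setup, NULL, NULL, NULL) != SQLITE_OK) {
        fprintf(stderr, "setup failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    /* Lookup on pattern_type / pattern_value / active, as in the new queries */
    sqlite3_stmt *stmt = NULL;
    sqlite3_prepare_v2(db,
        "SELECT COUNT(*) FROM auth_rules "
        "WHERE pattern_type = 'pubkey' AND pattern_value = 'abc123' AND active = 1;",
        -1, &stmt, NULL);
    if (stmt && sqlite3_step(stmt) == SQLITE_ROW)
        printf("matching rules: %d\n", sqlite3_column_int(stmt, 0));

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}
```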

View File

@@ -8,6 +8,7 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <time.h>
// Forward declare app_log
@@ -142,6 +143,19 @@ cJSON* admin_commands_process(cJSON* command_array, const char* request_event_id
else if (strcmp(command, "sql_query") == 0) {
return admin_cmd_sql_query(command_array);
}
else if (strcmp(command, "query_view") == 0) {
return admin_cmd_query_view(command_array);
}
// Auth rules management commands (c-relay compatible)
else if (strcmp(command, "blacklist") == 0 || strcmp(command, "whitelist") == 0) {
return admin_cmd_auth_add_rule(command_array);
}
else if (strcmp(command, "delete_auth_rule") == 0) {
return admin_cmd_auth_delete_rule(command_array);
}
else if (strcmp(command, "auth_query") == 0) {
return admin_cmd_auth_query(command_array);
}
else {
char error_msg[256];
snprintf(error_msg, sizeof(error_msg), "Unknown command: %s", command);
@@ -167,16 +181,23 @@ cJSON* admin_cmd_config_query(cJSON* args) {
return response;
}
// Check if specific keys were requested (args[1] should be array of keys or null for all)
// Check if specific keys were requested (args[1] should be array of keys, null, or "all" for all)
cJSON* keys_array = NULL;
if (cJSON_GetArraySize(args) >= 2) {
keys_array = cJSON_GetArrayItem(args, 1);
// Accept array, null, or string "all" for querying all configs
if (!cJSON_IsArray(keys_array) && !cJSON_IsNull(keys_array)) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Keys parameter must be array or null");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
sqlite3_close(db);
return response;
// Check if it's the string "all"
if (cJSON_IsString(keys_array) && strcmp(keys_array->valuestring, "all") == 0) {
// Treat "all" as null (query all configs)
keys_array = NULL;
} else {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Keys parameter must be array, null, or \"all\"");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
sqlite3_close(db);
return response;
}
}
}
@@ -259,18 +280,18 @@ cJSON* admin_cmd_config_update(cJSON* args) {
cJSON* response = cJSON_CreateObject();
cJSON_AddStringToObject(response, "query_type", "config_update");
// Expected format: ["config_update", {"key1": "value1", "key2": "value2"}]
// Expected format: ["config_update", [{key: "x", value: "y", data_type: "z", category: "w"}]]
if (cJSON_GetArraySize(args) < 2) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Missing config updates object");
cJSON_AddStringToObject(response, "error", "Missing config updates array");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
return response;
}
cJSON* updates = cJSON_GetArrayItem(args, 1);
if (!cJSON_IsObject(updates)) {
if (!cJSON_IsArray(updates)) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Updates must be an object");
cJSON_AddStringToObject(response, "error", "Updates must be an array of config objects");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
return response;
}
@@ -297,50 +318,66 @@ cJSON* admin_cmd_config_update(cJSON* args) {
return response;
}
// Process each update
cJSON* updated_keys = cJSON_CreateArray();
cJSON* failed_keys = cJSON_CreateArray();
// Process each update - expecting array of config objects
cJSON* data_array = cJSON_CreateArray();
int success_count = 0;
int fail_count = 0;
cJSON* item = NULL;
cJSON_ArrayForEach(item, updates) {
const char* key = item->string;
const char* value = cJSON_GetStringValue(item);
if (!value) {
cJSON_AddItemToArray(failed_keys, cJSON_CreateString(key));
cJSON* config_obj = NULL;
cJSON_ArrayForEach(config_obj, updates) {
if (!cJSON_IsObject(config_obj)) {
fail_count++;
continue;
}
cJSON* key_item = cJSON_GetObjectItem(config_obj, "key");
cJSON* value_item = cJSON_GetObjectItem(config_obj, "value");
if (!cJSON_IsString(key_item) || !cJSON_IsString(value_item)) {
fail_count++;
continue;
}
const char* key = key_item->valuestring;
const char* value = value_item->valuestring;
sqlite3_reset(stmt);
sqlite3_bind_text(stmt, 1, value, -1, SQLITE_TRANSIENT);
sqlite3_bind_text(stmt, 2, key, -1, SQLITE_TRANSIENT);
rc = sqlite3_step(stmt);
// Create result object for this config update
cJSON* result_obj = cJSON_CreateObject();
cJSON_AddStringToObject(result_obj, "key", key);
if (rc == SQLITE_DONE && sqlite3_changes(db) > 0) {
cJSON_AddItemToArray(updated_keys, cJSON_CreateString(key));
cJSON_AddStringToObject(result_obj, "status", "success");
cJSON_AddStringToObject(result_obj, "value", value);
// Add optional fields if present
cJSON* data_type_item = cJSON_GetObjectItem(config_obj, "data_type");
if (cJSON_IsString(data_type_item)) {
cJSON_AddStringToObject(result_obj, "data_type", data_type_item->valuestring);
}
success_count++;
app_log(LOG_INFO, "Updated config key: %s", key);
app_log(LOG_INFO, "Updated config key: %s = %s", key, value);
} else {
cJSON_AddItemToArray(failed_keys, cJSON_CreateString(key));
cJSON_AddStringToObject(result_obj, "status", "error");
cJSON_AddStringToObject(result_obj, "error", "Failed to update");
fail_count++;
}
cJSON_AddItemToArray(data_array, result_obj);
}
sqlite3_finalize(stmt);
sqlite3_close(db);
cJSON_AddStringToObject(response, "status", "success");
cJSON_AddNumberToObject(response, "updated_count", success_count);
cJSON_AddNumberToObject(response, "failed_count", fail_count);
cJSON_AddItemToObject(response, "updated_keys", updated_keys);
if (fail_count > 0) {
cJSON_AddItemToObject(response, "failed_keys", failed_keys);
} else {
cJSON_Delete(failed_keys);
}
cJSON_AddStringToObject(response, "status", success_count > 0 ? "success" : "error");
cJSON_AddNumberToObject(response, "updates_applied", success_count);
cJSON_AddItemToObject(response, "data", data_array);
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
return response;
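
For reference, a minimal sketch of how a client could assemble the new `config_update` payload format handled above; the key, value, and `data_type` shown are placeholders, and the include path may differ in this repo:

```c
#include <stdio.h>
#include <stdlib.h>
#include <cjson/cJSON.h>

/* Build ["config_update", [{"key": ..., "value": ..., "data_type": ...}]]
 * as plaintext; NIP-44 encryption into the admin event is not shown. */
int main(void) {
    cJSON *cmd = cJSON_CreateArray();
    cJSON_AddItemToArray(cmd, cJSON_CreateString("config_update"));

    cJSON *updates = cJSON_CreateArray();
    cJSON *entry = cJSON_CreateObject();
    cJSON_AddStringToObject(entry, "key", "max_blobs_per_user");   /* placeholder key */
    cJSON_AddStringToObject(entry, "value", "500");                /* values sent as strings */
    cJSON_AddStringToObject(entry, "data_type", "integer");        /* optional field; assumed value */
    cJSON_AddItemToArray(updates, entry);
    cJSON_AddItemToArray(cmd, updates);

    char *json = cJSON_PrintUnformatted(cmd);
    printf("%s\n", json);
    free(json);
    cJSON_Delete(cmd);
    return 0;
}
```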
@@ -387,7 +424,7 @@ cJSON* admin_cmd_stats_query(cJSON* args) {
sqlite3_finalize(stmt);
// Get auth rules count
sql = "SELECT COUNT(*) FROM auth_rules WHERE enabled = 1";
sql = "SELECT COUNT(*) FROM auth_rules WHERE active = 1";
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK && sqlite3_step(stmt) == SQLITE_ROW) {
cJSON_AddNumberToObject(stats, "active_auth_rules", sqlite3_column_int(stmt, 0));
@@ -637,7 +674,7 @@ cJSON* admin_cmd_sql_query(cJSON* args) {
cJSON* response = cJSON_CreateObject();
cJSON_AddStringToObject(response, "query_type", "sql_query");
// Expected format: ["sql_query", "SELECT ..."]
// Expected format: ["sql_query", "SQL STATEMENT"]
if (cJSON_GetArraySize(args) < 2) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Missing SQL query");
@@ -654,20 +691,26 @@ cJSON* admin_cmd_sql_query(cJSON* args) {
}
const char* sql = query_item->valuestring;
const char* trimmed_sql = sql;
while (*trimmed_sql && isspace((unsigned char)*trimmed_sql)) {
trimmed_sql++;
}
// Security: Only allow SELECT queries
const char* sql_upper = sql;
while (*sql_upper == ' ' || *sql_upper == '\t' || *sql_upper == '\n') sql_upper++;
if (strncasecmp(sql_upper, "SELECT", 6) != 0) {
int is_select = strncasecmp(trimmed_sql, "SELECT", 6) == 0;
int is_delete = strncasecmp(trimmed_sql, "DELETE", 6) == 0;
int is_update = strncasecmp(trimmed_sql, "UPDATE", 6) == 0;
int is_insert = strncasecmp(trimmed_sql, "INSERT", 6) == 0;
if (!is_select && !is_delete && !is_update && !is_insert) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Only SELECT queries are allowed");
cJSON_AddStringToObject(response, "error", "Only SELECT, INSERT, UPDATE, or DELETE queries are allowed");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
return response;
}
// Open database (read-only for SELECT; read-write for INSERT/UPDATE/DELETE)
int open_flags = is_select ? SQLITE_OPEN_READONLY : (SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE);
sqlite3* db;
int rc = sqlite3_open_v2(g_admin_state.db_path, &db, SQLITE_OPEN_READONLY, NULL);
int rc = sqlite3_open_v2(g_admin_state.db_path, &db, open_flags, NULL);
if (rc != SQLITE_OK) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Failed to open database");
@@ -675,7 +718,70 @@ cJSON* admin_cmd_sql_query(cJSON* args) {
return response;
}
// Prepare and execute query
if (is_select) {
sqlite3_stmt* stmt;
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc != SQLITE_OK) {
cJSON_AddStringToObject(response, "status", "error");
char error_msg[256];
snprintf(error_msg, sizeof(error_msg), "SQL error: %s", sqlite3_errmsg(db));
cJSON_AddStringToObject(response, "error", error_msg);
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
sqlite3_close(db);
return response;
}
int col_count = sqlite3_column_count(stmt);
cJSON* columns = cJSON_CreateArray();
for (int i = 0; i < col_count; i++) {
cJSON_AddItemToArray(columns, cJSON_CreateString(sqlite3_column_name(stmt, i)));
}
cJSON* rows = cJSON_CreateArray();
int row_count = 0;
const int MAX_ROWS = 1000;
while (row_count < MAX_ROWS && (rc = sqlite3_step(stmt)) == SQLITE_ROW) {
cJSON* row = cJSON_CreateArray();
for (int i = 0; i < col_count; i++) {
int col_type = sqlite3_column_type(stmt, i);
switch (col_type) {
case SQLITE_INTEGER:
cJSON_AddItemToArray(row, cJSON_CreateNumber(sqlite3_column_int64(stmt, i)));
break;
case SQLITE_FLOAT:
cJSON_AddItemToArray(row, cJSON_CreateNumber(sqlite3_column_double(stmt, i)));
break;
case SQLITE_TEXT:
cJSON_AddItemToArray(row, cJSON_CreateString((const char*)sqlite3_column_text(stmt, i)));
break;
case SQLITE_NULL:
cJSON_AddItemToArray(row, cJSON_CreateNull());
break;
default:
cJSON_AddItemToArray(row, cJSON_CreateString(""));
}
}
cJSON_AddItemToArray(rows, row);
row_count++;
}
sqlite3_finalize(stmt);
sqlite3_close(db);
cJSON_AddStringToObject(response, "status", "success");
cJSON_AddItemToObject(response, "columns", columns);
cJSON_AddItemToObject(response, "rows", rows);
cJSON_AddNumberToObject(response, "row_count", row_count);
if (row_count >= MAX_ROWS) {
cJSON_AddBoolToObject(response, "truncated", 1);
}
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
app_log(LOG_INFO, "SQL query executed: %d rows returned", row_count);
return response;
}
// Handle DELETE/UPDATE/INSERT
sqlite3_stmt* stmt;
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc != SQLITE_OK) {
@@ -688,19 +794,113 @@ cJSON* admin_cmd_sql_query(cJSON* args) {
return response;
}
// Get column names
rc = sqlite3_step(stmt);
if (rc != SQLITE_DONE) {
cJSON_AddStringToObject(response, "status", "error");
char error_msg[256];
snprintf(error_msg, sizeof(error_msg), "SQL execution error: %s", sqlite3_errmsg(db));
cJSON_AddStringToObject(response, "error", error_msg);
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
sqlite3_finalize(stmt);
sqlite3_close(db);
return response;
}
int affected_rows = sqlite3_changes(db);
sqlite3_finalize(stmt);
sqlite3_close(db);
cJSON_AddStringToObject(response, "status", "success");
cJSON_AddNumberToObject(response, "affected_rows", affected_rows);
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
app_log(LOG_INFO, "SQL modification executed: %d rows affected", affected_rows);
return response;
}
cJSON* admin_cmd_query_view(cJSON* args) {
cJSON* response = cJSON_CreateObject();
cJSON_AddStringToObject(response, "query_type", "query_view");
// Expected format: ["query_view", "view_name"]
if (cJSON_GetArraySize(args) < 2) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Missing view name");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
return response;
}
cJSON* view_name_item = cJSON_GetArrayItem(args, 1);
if (!cJSON_IsString(view_name_item)) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "View name must be a string");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
return response;
}
const char* view_name = view_name_item->valuestring;
// Open database
sqlite3* db;
int rc = sqlite3_open_v2(g_admin_state.db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc != SQLITE_OK) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Failed to open database");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
return response;
}
// Build SQL query based on view name
char sql[512];
if (strcmp(view_name, "blob_overview") == 0) {
// Query blob_overview view
snprintf(sql, sizeof(sql), "SELECT * FROM blob_overview");
} else if (strcmp(view_name, "storage_stats") == 0) {
// Query storage_stats view
snprintf(sql, sizeof(sql), "SELECT * FROM storage_stats");
} else if (strcmp(view_name, "blob_type_distribution") == 0) {
// Query blob_type_distribution view
snprintf(sql, sizeof(sql), "SELECT * FROM blob_type_distribution");
} else if (strcmp(view_name, "blob_time_stats") == 0) {
// Query blob_time_stats view
snprintf(sql, sizeof(sql), "SELECT * FROM blob_time_stats");
} else if (strcmp(view_name, "top_uploaders") == 0) {
// Query top_uploaders view
snprintf(sql, sizeof(sql), "SELECT * FROM top_uploaders");
} else {
cJSON_AddStringToObject(response, "status", "error");
char error_msg[256];
snprintf(error_msg, sizeof(error_msg), "Unknown view: %s", view_name);
cJSON_AddStringToObject(response, "error", error_msg);
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
sqlite3_close(db);
return response;
}
sqlite3_stmt* stmt;
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc != SQLITE_OK) {
cJSON_AddStringToObject(response, "status", "error");
char error_msg[256];
snprintf(error_msg, sizeof(error_msg), "Failed to prepare query: %s", sqlite3_errmsg(db));
cJSON_AddStringToObject(response, "error", error_msg);
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
sqlite3_close(db);
return response;
}
// Execute query and build results
int col_count = sqlite3_column_count(stmt);
cJSON* columns = cJSON_CreateArray();
for (int i = 0; i < col_count; i++) {
cJSON_AddItemToArray(columns, cJSON_CreateString(sqlite3_column_name(stmt, i)));
}
// Execute and collect rows (limit to 1000 rows for safety)
cJSON* rows = cJSON_CreateArray();
int row_count = 0;
const int MAX_ROWS = 1000;
while (row_count < MAX_ROWS && (rc = sqlite3_step(stmt)) == SQLITE_ROW) {
while ((rc = sqlite3_step(stmt)) == SQLITE_ROW) {
cJSON* row = cJSON_CreateArray();
for (int i = 0; i < col_count; i++) {
int col_type = sqlite3_column_type(stmt, i);
@@ -729,15 +929,313 @@ cJSON* admin_cmd_sql_query(cJSON* args) {
sqlite3_close(db);
cJSON_AddStringToObject(response, "status", "success");
cJSON_AddStringToObject(response, "view_name", view_name);
cJSON_AddItemToObject(response, "columns", columns);
cJSON_AddItemToObject(response, "rows", rows);
cJSON_AddNumberToObject(response, "row_count", row_count);
if (row_count >= MAX_ROWS) {
cJSON_AddBoolToObject(response, "truncated", 1);
}
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
app_log(LOG_INFO, "SQL query executed: %d rows returned", row_count);
app_log(LOG_INFO, "View query executed: %s (%d rows)", view_name, row_count);
return response;
}
// ============================================================================
// AUTH RULES MANAGEMENT COMMANDS (c-relay compatible)
// ============================================================================
// Add blacklist or whitelist rule
// Format: ["blacklist", "pubkey", "abc123..."] or ["whitelist", "pubkey", "def456..."]
cJSON* admin_cmd_auth_add_rule(cJSON* args) {
cJSON* response = cJSON_CreateObject();
// Get command type (blacklist or whitelist)
cJSON* cmd_type = cJSON_GetArrayItem(args, 0);
if (!cJSON_IsString(cmd_type)) {
cJSON_AddStringToObject(response, "query_type", "auth_add_rule");
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Invalid command type");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
return response;
}
const char* command = cmd_type->valuestring;
const char* rule_type_prefix = command; // "blacklist" or "whitelist"
// Expected format: ["blacklist/whitelist", "pattern_type", "pattern_value"]
if (cJSON_GetArraySize(args) < 3) {
cJSON_AddStringToObject(response, "query_type", "auth_add_rule");
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Missing parameters. Format: [\"blacklist/whitelist\", \"pattern_type\", \"pattern_value\"]");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
return response;
}
cJSON* pattern_type_item = cJSON_GetArrayItem(args, 1);
cJSON* pattern_value_item = cJSON_GetArrayItem(args, 2);
if (!cJSON_IsString(pattern_type_item) || !cJSON_IsString(pattern_value_item)) {
cJSON_AddStringToObject(response, "query_type", "auth_add_rule");
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Pattern type and value must be strings");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
return response;
}
const char* pattern_type = pattern_type_item->valuestring;
const char* pattern_value = pattern_value_item->valuestring;
char rule_type[64];
snprintf(rule_type, sizeof(rule_type), "%s_%s", rule_type_prefix, pattern_type);
// Validate pattern_type
if (strcmp(pattern_type, "pubkey") != 0 && strcmp(pattern_type, "hash") != 0 && strcmp(pattern_type, "mime") != 0) {
cJSON_AddStringToObject(response, "query_type", "auth_add_rule");
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Invalid pattern_type. Must be 'pubkey', 'hash', or 'mime'");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
return response;
}
// Open database
sqlite3* db;
int rc = sqlite3_open_v2(g_admin_state.db_path, &db, SQLITE_OPEN_READWRITE, NULL);
if (rc != SQLITE_OK) {
cJSON_AddStringToObject(response, "query_type", "auth_add_rule");
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Failed to open database");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
return response;
}
// Insert rule
const char* sql = "INSERT INTO auth_rules (rule_type, pattern_type, pattern_value) VALUES (?, ?, ?)";
sqlite3_stmt* stmt;
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc != SQLITE_OK) {
cJSON_AddStringToObject(response, "query_type", "auth_add_rule");
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Failed to prepare insert statement");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
sqlite3_close(db);
return response;
}
sqlite3_bind_text(stmt, 1, rule_type, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, pattern_type, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 3, pattern_value, -1, SQLITE_STATIC);
rc = sqlite3_step(stmt);
int rule_id = 0;
if (rc == SQLITE_DONE) {
rule_id = sqlite3_last_insert_rowid(db);
cJSON_AddStringToObject(response, "query_type", "auth_add_rule");
cJSON_AddStringToObject(response, "status", "success");
cJSON_AddNumberToObject(response, "rule_id", rule_id);
cJSON_AddStringToObject(response, "rule_type", rule_type);
cJSON_AddStringToObject(response, "pattern_type", pattern_type);
cJSON_AddStringToObject(response, "pattern_value", pattern_value);
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
app_log(LOG_INFO, "Added %s rule: %s=%s (ID: %d)", rule_type, pattern_type, pattern_value, rule_id);
} else {
cJSON_AddStringToObject(response, "query_type", "auth_add_rule");
cJSON_AddStringToObject(response, "status", "error");
char error_msg[256];
snprintf(error_msg, sizeof(error_msg), "Failed to insert rule: %s", sqlite3_errmsg(db));
cJSON_AddStringToObject(response, "error", error_msg);
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
}
sqlite3_finalize(stmt);
sqlite3_close(db);
return response;
}
// Delete auth rule
// Format: ["delete_auth_rule", "blacklist", "pubkey", "abc123..."]
cJSON* admin_cmd_auth_delete_rule(cJSON* args) {
cJSON* response = cJSON_CreateObject();
cJSON_AddStringToObject(response, "query_type", "delete_auth_rule");
// Expected format: ["delete_auth_rule", "rule_type", "pattern_type", "pattern_value"]
if (cJSON_GetArraySize(args) < 4) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Missing parameters. Format: [\"delete_auth_rule\", \"blacklist/whitelist\", \"pattern_type\", \"pattern_value\"]");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
return response;
}
cJSON* rule_type_item = cJSON_GetArrayItem(args, 1);
cJSON* pattern_type_item = cJSON_GetArrayItem(args, 2);
cJSON* pattern_value_item = cJSON_GetArrayItem(args, 3);
if (!cJSON_IsString(rule_type_item) || !cJSON_IsString(pattern_type_item) || !cJSON_IsString(pattern_value_item)) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "All parameters must be strings");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
return response;
}
const char* rule_type_str = cJSON_GetStringValue(rule_type_item);
const char* pattern_type = cJSON_GetStringValue(pattern_type_item);
const char* pattern_value = cJSON_GetStringValue(pattern_value_item);
char full_rule_type[64];
snprintf(full_rule_type, sizeof(full_rule_type), "%s_%s", rule_type_str, pattern_type);
// Open database
sqlite3* db;
int rc = sqlite3_open_v2(g_admin_state.db_path, &db, SQLITE_OPEN_READWRITE, NULL);
if (rc != SQLITE_OK) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Failed to open database");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
return response;
}
// Delete rule
const char* sql = "DELETE FROM auth_rules WHERE rule_type = ? AND pattern_type = ? AND pattern_value = ?";
sqlite3_stmt* stmt;
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc != SQLITE_OK) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Failed to prepare delete statement");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
sqlite3_close(db);
return response;
}
sqlite3_bind_text(stmt, 1, full_rule_type, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, pattern_type, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 3, pattern_value, -1, SQLITE_STATIC);
rc = sqlite3_step(stmt);
int changes = sqlite3_changes(db);
if (rc == SQLITE_DONE) {
cJSON_AddStringToObject(response, "status", "success");
cJSON_AddNumberToObject(response, "deleted_count", changes);
cJSON_AddStringToObject(response, "rule_type", full_rule_type);
cJSON_AddStringToObject(response, "pattern_type", pattern_type);
cJSON_AddStringToObject(response, "pattern_value", pattern_value);
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
app_log(LOG_INFO, "Deleted %d %s rule(s): %s=%s", changes, full_rule_type, pattern_type, pattern_value);
} else {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Failed to delete rule");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
}
sqlite3_finalize(stmt);
sqlite3_close(db);
return response;
}
// Query auth rules
// Format: ["auth_query", "all"] or ["auth_query", "whitelist"] or ["auth_query", "pattern", "abc123..."]
cJSON* admin_cmd_auth_query(cJSON* args) {
cJSON* response = cJSON_CreateObject();
cJSON_AddStringToObject(response, "query_type", "auth_query");
// Get query type
const char* query_type = "all";
const char* filter_value = NULL;
if (cJSON_GetArraySize(args) >= 2) {
cJSON* query_type_item = cJSON_GetArrayItem(args, 1);
if (cJSON_IsString(query_type_item)) {
query_type = query_type_item->valuestring;
}
}
if (cJSON_GetArraySize(args) >= 3) {
cJSON* filter_value_item = cJSON_GetArrayItem(args, 2);
if (cJSON_IsString(filter_value_item)) {
filter_value = filter_value_item->valuestring;
}
}
// Open database
sqlite3* db;
int rc = sqlite3_open_v2(g_admin_state.db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc != SQLITE_OK) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Failed to open database");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
return response;
}
// Build SQL query based on query type
char sql[512];
sqlite3_stmt* stmt;
if (strcmp(query_type, "all") == 0) {
snprintf(sql, sizeof(sql), "SELECT id, rule_type, pattern_type, pattern_value, active, created_at, updated_at FROM auth_rules ORDER BY id");
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
}
else if (strcmp(query_type, "blacklist") == 0 || strcmp(query_type, "whitelist") == 0) {
snprintf(sql, sizeof(sql), "SELECT id, rule_type, pattern_type, pattern_value, active, created_at, updated_at FROM auth_rules WHERE rule_type LIKE ? || '_%%' ORDER BY id");
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, query_type, -1, SQLITE_STATIC);
}
}
else if (strcmp(query_type, "pattern") == 0 && filter_value) {
snprintf(sql, sizeof(sql), "SELECT id, rule_type, pattern_type, pattern_value, active, created_at, updated_at FROM auth_rules WHERE pattern_value = ? ORDER BY id");
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, filter_value, -1, SQLITE_STATIC);
}
}
else {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Invalid query type. Use 'all', 'blacklist', 'whitelist', or 'pattern'");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
sqlite3_close(db);
return response;
}
if (rc != SQLITE_OK) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "error", "Failed to prepare query");
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
sqlite3_close(db);
return response;
}
// Execute query and build results
cJSON* rules = cJSON_CreateArray();
int count = 0;
while (sqlite3_step(stmt) == SQLITE_ROW) {
cJSON* rule = cJSON_CreateObject();
cJSON_AddNumberToObject(rule, "id", sqlite3_column_int(stmt, 0));
cJSON_AddStringToObject(rule, "rule_type", (const char*)sqlite3_column_text(stmt, 1));
cJSON_AddStringToObject(rule, "pattern_type", (const char*)sqlite3_column_text(stmt, 2));
cJSON_AddStringToObject(rule, "pattern_value", (const char*)sqlite3_column_text(stmt, 3));
cJSON_AddNumberToObject(rule, "active", sqlite3_column_int(stmt, 4));
cJSON_AddNumberToObject(rule, "created_at", sqlite3_column_int64(stmt, 5));
cJSON_AddNumberToObject(rule, "updated_at", sqlite3_column_int64(stmt, 6));
cJSON_AddItemToArray(rules, rule);
count++;
}
sqlite3_finalize(stmt);
sqlite3_close(db);
cJSON_AddStringToObject(response, "status", "success");
cJSON_AddNumberToObject(response, "count", count);
cJSON_AddStringToObject(response, "filter", query_type);
cJSON_AddItemToObject(response, "rules", rules);
cJSON_AddNumberToObject(response, "timestamp", (double)time(NULL));
app_log(LOG_INFO, "Auth query executed: %d rules returned (filter: %s)", count, query_type);
return response;
}

View File

@@ -35,6 +35,12 @@ cJSON* admin_cmd_system_status(cJSON* args);
cJSON* admin_cmd_blob_list(cJSON* args);
cJSON* admin_cmd_storage_stats(cJSON* args);
cJSON* admin_cmd_sql_query(cJSON* args);
cJSON* admin_cmd_query_view(cJSON* args);
// Auth rules management handlers (c-relay compatible)
cJSON* admin_cmd_auth_add_rule(cJSON* args);
cJSON* admin_cmd_auth_delete_rule(cJSON* args);
cJSON* admin_cmd_auth_query(cJSON* args);
// NIP-44 encryption/decryption helpers
int admin_encrypt_response(

View File

@@ -6,6 +6,7 @@
#include <unistd.h>
#include <sys/types.h>
#include "ginxsom.h"
#include "admin_commands.h"
// Forward declarations for nostr_core_lib functions
int nostr_hex_to_bytes(const char* hex, unsigned char* bytes, size_t bytes_len);
@@ -28,10 +29,8 @@ extern char g_db_path[];
// Forward declarations
static int get_server_privkey(unsigned char* privkey_bytes);
static int get_server_pubkey(char* pubkey_hex, size_t size);
static int handle_config_query_command(cJSON* response_data);
static int handle_query_view_command(cJSON* command_array, cJSON* response_data);
static int send_admin_response_event(const char* admin_pubkey, const char* request_id,
cJSON* response_data);
cJSON* response_data);
static cJSON* parse_authorization_header(void);
static int process_admin_event(cJSON* event);
@@ -304,20 +303,35 @@ static int process_admin_event(cJSON* event) {
cJSON_AddStringToObject(response_data, "query_type", cmd);
cJSON_AddNumberToObject(response_data, "timestamp", (double)time(NULL));
// Handle command
// Handle command - use admin_commands system for processing
cJSON* command_response = admin_commands_process(command_array, request_id);
int result = -1;
if (strcmp(cmd, "config_query") == 0) {
app_log(LOG_DEBUG, "ADMIN_EVENT: Handling config_query command");
result = handle_config_query_command(response_data);
app_log(LOG_DEBUG, "ADMIN_EVENT: config_query result: %d", result);
} else if (strcmp(cmd, "query_view") == 0) {
app_log(LOG_DEBUG, "ADMIN_EVENT: Handling query_view command");
result = handle_query_view_command(command_array, response_data);
app_log(LOG_DEBUG, "ADMIN_EVENT: query_view result: %d", result);
if (command_response) {
// Check if command was successful
cJSON* status = cJSON_GetObjectItem(command_response, "status");
if (status && cJSON_IsString(status)) {
const char* status_str = cJSON_GetStringValue(status);
if (strcmp(status_str, "success") == 0) {
result = 0;
}
}
// Copy response data from command_response to response_data
cJSON* item = NULL;
cJSON_ArrayForEach(item, command_response) {
if (item->string) {
cJSON* copy = cJSON_Duplicate(item, 1);
cJSON_AddItemToObject(response_data, item->string, copy);
}
}
cJSON_Delete(command_response);
app_log(LOG_DEBUG, "ADMIN_EVENT: Command processed with result: %d", result);
} else {
app_log(LOG_WARN, "ADMIN_EVENT: Unknown command: %s", cmd);
app_log(LOG_ERROR, "ADMIN_EVENT: Command processing returned NULL");
cJSON_AddStringToObject(response_data, "status", "error");
cJSON_AddStringToObject(response_data, "error", "Unknown command");
cJSON_AddStringToObject(response_data, "error", "Command processing failed");
result = -1;
}
@@ -397,166 +411,6 @@ static int get_server_pubkey(char* pubkey_hex, size_t size) {
return result;
}
/**
* Handle config_query command - returns all config values
*/
static int handle_config_query_command(cJSON* response_data) {
sqlite3* db;
int rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc != SQLITE_OK) {
cJSON_AddStringToObject(response_data, "status", "error");
cJSON_AddStringToObject(response_data, "error", "Database error");
return -1;
}
cJSON_AddStringToObject(response_data, "status", "success");
cJSON* data = cJSON_CreateObject();
// Query all config settings
sqlite3_stmt* stmt;
const char* sql = "SELECT key, value FROM config ORDER BY key";
if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
while (sqlite3_step(stmt) == SQLITE_ROW) {
const char* key = (const char*)sqlite3_column_text(stmt, 0);
const char* value = (const char*)sqlite3_column_text(stmt, 1);
if (key && value) {
cJSON_AddStringToObject(data, key, value);
}
}
sqlite3_finalize(stmt);
}
cJSON_AddItemToObject(response_data, "data", data);
sqlite3_close(db);
return 0;
}
/**
* Handle query_view command - returns data from a specified database view
* Command format: ["query_view", "view_name"]
*/
static int handle_query_view_command(cJSON* command_array, cJSON* response_data) {
app_log(LOG_DEBUG, "ADMIN_EVENT: handle_query_view_command called");
// Get view name from command array
cJSON* view_name_obj = cJSON_GetArrayItem(command_array, 1);
if (!view_name_obj || !cJSON_IsString(view_name_obj)) {
app_log(LOG_ERROR, "ADMIN_EVENT: View name missing or not a string");
cJSON_AddStringToObject(response_data, "status", "error");
cJSON_AddStringToObject(response_data, "error", "View name required");
return -1;
}
const char* view_name = cJSON_GetStringValue(view_name_obj);
app_log(LOG_DEBUG, "ADMIN_EVENT: Querying view: %s", view_name);
// Validate view name (whitelist approach for security)
const char* allowed_views[] = {
"blob_overview",
"blob_type_distribution",
"blob_time_stats",
"top_uploaders",
NULL
};
int view_allowed = 0;
for (int i = 0; allowed_views[i] != NULL; i++) {
if (strcmp(view_name, allowed_views[i]) == 0) {
view_allowed = 1;
break;
}
}
if (!view_allowed) {
cJSON_AddStringToObject(response_data, "status", "error");
cJSON_AddStringToObject(response_data, "error", "Invalid view name");
app_log(LOG_WARN, "ADMIN_EVENT: Attempted to query invalid view: %s", view_name);
return -1;
}
app_log(LOG_DEBUG, "ADMIN_EVENT: View '%s' is allowed, opening database: %s", view_name, g_db_path);
// Open database
sqlite3* db;
int rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc != SQLITE_OK) {
app_log(LOG_ERROR, "ADMIN_EVENT: Failed to open database: %s (error: %s)", g_db_path, sqlite3_errmsg(db));
cJSON_AddStringToObject(response_data, "status", "error");
cJSON_AddStringToObject(response_data, "error", "Database error");
return -1;
}
// Build SQL query
char sql[256];
snprintf(sql, sizeof(sql), "SELECT * FROM %s", view_name);
app_log(LOG_DEBUG, "ADMIN_EVENT: Executing SQL: %s", sql);
sqlite3_stmt* stmt;
if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK) {
app_log(LOG_ERROR, "ADMIN_EVENT: Failed to prepare query: %s (error: %s)", sql, sqlite3_errmsg(db));
sqlite3_close(db);
cJSON_AddStringToObject(response_data, "status", "error");
cJSON_AddStringToObject(response_data, "error", "Failed to prepare query");
return -1;
}
// Get column count and names
int col_count = sqlite3_column_count(stmt);
// Create results array
cJSON* results = cJSON_CreateArray();
// Fetch all rows
while (sqlite3_step(stmt) == SQLITE_ROW) {
cJSON* row = cJSON_CreateObject();
for (int i = 0; i < col_count; i++) {
const char* col_name = sqlite3_column_name(stmt, i);
int col_type = sqlite3_column_type(stmt, i);
switch (col_type) {
case SQLITE_INTEGER:
cJSON_AddNumberToObject(row, col_name, (double)sqlite3_column_int64(stmt, i));
break;
case SQLITE_FLOAT:
cJSON_AddNumberToObject(row, col_name, sqlite3_column_double(stmt, i));
break;
case SQLITE_TEXT:
cJSON_AddStringToObject(row, col_name, (const char*)sqlite3_column_text(stmt, i));
break;
case SQLITE_NULL:
cJSON_AddNullToObject(row, col_name);
break;
default:
// For BLOB or unknown types, skip
break;
}
}
cJSON_AddItemToArray(results, row);
}
sqlite3_finalize(stmt);
sqlite3_close(db);
// Build response
cJSON_AddStringToObject(response_data, "status", "success");
cJSON_AddStringToObject(response_data, "view_name", view_name);
cJSON_AddItemToObject(response_data, "data", results);
// Debug: Log the complete response data
char* debug_response = cJSON_Print(response_data);
if (debug_response) {
app_log(LOG_DEBUG, "ADMIN_EVENT: Query view '%s' returned %d rows. Full response: %s",
view_name, cJSON_GetArraySize(results), debug_response);
free(debug_response);
}
return 0;
}
/**
* Send Kind 23459 admin response event

File diff suppressed because it is too large

View File

@@ -10,8 +10,8 @@
// Version information (auto-updated by build system)
#define VERSION_MAJOR 0
#define VERSION_MINOR 1
#define VERSION_PATCH 19
#define VERSION "v0.1.19"
#define VERSION_PATCH 24
#define VERSION "v0.1.24"
#include <stddef.h>
#include <stdint.h>

View File

@@ -194,25 +194,16 @@ int initialize_database(const char *db_path) {
return -1;
}
// Create auth_rules table
// Create auth_rules table (c-relay compatible schema)
const char *create_auth_rules =
"CREATE TABLE IF NOT EXISTS auth_rules ("
" id INTEGER PRIMARY KEY AUTOINCREMENT,"
" rule_type TEXT NOT NULL,"
" rule_target TEXT NOT NULL,"
" operation TEXT NOT NULL DEFAULT '*',"
" enabled INTEGER NOT NULL DEFAULT 1,"
" priority INTEGER NOT NULL DEFAULT 100,"
" description TEXT,"
" created_by TEXT,"
" pattern_type TEXT NOT NULL,"
" pattern_value TEXT NOT NULL,"
" active INTEGER NOT NULL DEFAULT 1,"
" created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),"
" updated_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),"
" CHECK (rule_type IN ('pubkey_blacklist', 'pubkey_whitelist',"
" 'hash_blacklist', 'mime_blacklist', 'mime_whitelist')),"
" CHECK (operation IN ('upload', 'delete', 'list', '*')),"
" CHECK (enabled IN (0, 1)),"
" CHECK (priority >= 0),"
" UNIQUE(rule_type, rule_target, operation)"
" updated_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now'))"
");";
rc = sqlite3_exec(db, create_auth_rules, NULL, NULL, &err_msg);
@@ -229,11 +220,9 @@ int initialize_database(const char *db_path) {
"CREATE INDEX IF NOT EXISTS idx_blobs_uploader_pubkey ON blobs(uploader_pubkey);"
"CREATE INDEX IF NOT EXISTS idx_blobs_type ON blobs(type);"
"CREATE INDEX IF NOT EXISTS idx_config_updated_at ON config(updated_at);"
"CREATE INDEX IF NOT EXISTS idx_auth_rules_type_target ON auth_rules(rule_type, rule_target);"
"CREATE INDEX IF NOT EXISTS idx_auth_rules_operation ON auth_rules(operation);"
"CREATE INDEX IF NOT EXISTS idx_auth_rules_enabled ON auth_rules(enabled);"
"CREATE INDEX IF NOT EXISTS idx_auth_rules_priority ON auth_rules(priority);"
"CREATE INDEX IF NOT EXISTS idx_auth_rules_type_operation ON auth_rules(rule_type, operation, enabled);";
"CREATE INDEX IF NOT EXISTS idx_auth_rules_type ON auth_rules(rule_type);"
"CREATE INDEX IF NOT EXISTS idx_auth_rules_pattern ON auth_rules(pattern_type, pattern_value);"
"CREATE INDEX IF NOT EXISTS idx_auth_rules_active ON auth_rules(active);";
rc = sqlite3_exec(db, create_indexes, NULL, NULL, &err_msg);
if (rc != SQLITE_OK) {

View File

@@ -23,6 +23,8 @@
#include <strings.h>
#include <time.h>
#define MAX_MIME_TYPE_LEN 128 // Define here for direct use
// Additional error codes for ginxsom-specific functionality
#define NOSTR_ERROR_CRYPTO_INIT -100
#define NOSTR_ERROR_AUTH_REQUIRED -101
@@ -671,8 +673,8 @@ int nostr_validate_unified_request(const nostr_unified_request_t *request,
"VALIDATOR_DEBUG: STEP 10 PASSED - Blossom authentication succeeded\n");
strcpy(result->reason, "Blossom authentication passed");
} else if (event_kind == 33335) {
// 10. Admin/Configuration Event Validation (Kind 33335)
} else if (event_kind == 33335 || event_kind == 23459 || event_kind == 23458) {
// 10. Admin/Configuration Event Validation (Kind 33335, 23459, 23458)
// Verify admin authorization, check required tags, validate expiration
validator_debug_log("VALIDATOR_DEBUG: STEP 10 - Processing Admin/Configuration "
"authentication (kind 33335)\n");
@@ -775,6 +777,16 @@ int nostr_validate_unified_request(const nostr_unified_request_t *request,
cJSON_Delete(event);
// Skip rule evaluation for admin events
if (event_kind == 33335 || event_kind == 23459 || event_kind == 23458) {
char admin_skip_msg[256];
snprintf(admin_skip_msg, sizeof(admin_skip_msg),
"VALIDATOR_DEBUG: Admin event (kind %d) - skipping rule evaluation\n", event_kind);
validator_debug_log(admin_skip_msg);
strcpy(result->reason, "Admin event validated - rules bypassed");
return NOSTR_SUCCESS;
}
// STEP 12 PASSED: Protocol validation complete - continue to database rule
// evaluation
validator_debug_log("VALIDATOR_DEBUG: STEP 12 PASSED - Protocol validation "
@@ -1321,6 +1333,13 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
sqlite3 *db = NULL;
sqlite3_stmt *stmt = NULL;
int rc;
int pubkey_whitelisted = 0;
int pubkey_whitelist_exists = 0;
int mime_whitelisted = 0;
int mime_whitelist_exists = 0;
int mime_whitelist_count = 0;
int pubkey_whitelist_count = 0;
char rules_msg[256];
if (!pubkey) {
validator_debug_log(
@@ -1328,7 +1347,12 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
return NOSTR_ERROR_INVALID_INPUT;
}
char rules_msg[256];
if (operation && (strcmp(operation, "admin_event") == 0 ||
strcmp(operation, "admin") == 0)) {
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - Admin management request, skipping auth rules\n");
return NOSTR_SUCCESS;
}
sprintf(rules_msg,
"VALIDATOR_DEBUG: RULES ENGINE - Checking rules for pubkey=%.32s..., "
"operation=%s, mime_type=%s\n",
@@ -1344,18 +1368,14 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
}
// Step 1: Check pubkey blacklist (highest priority)
// Match both exact operation and wildcard '*'
const char *blacklist_sql =
"SELECT rule_type, description FROM auth_rules WHERE rule_type = "
"'pubkey_blacklist' AND rule_target = ? AND (operation = ? OR operation = '*') AND enabled = "
"1 ORDER BY priority LIMIT 1";
"SELECT rule_type FROM auth_rules WHERE rule_type LIKE 'blacklist_pubkey' AND pattern_type = 'pubkey' AND pattern_value = ? AND active = 1 LIMIT 1";
rc = sqlite3_prepare_v2(db, blacklist_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, pubkey, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, operation ? operation : "", -1, SQLITE_STATIC);
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char *description = (const char *)sqlite3_column_text(stmt, 1);
const char *description = "Pubkey blacklisted";
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 1 FAILED - "
"Pubkey blacklisted\n");
char blacklist_msg[256];
@@ -1380,18 +1400,14 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
// Step 2: Check hash blacklist
if (resource_hash) {
// Match both exact operation and wildcard '*'
const char *hash_blacklist_sql =
"SELECT rule_type, description FROM auth_rules WHERE rule_type = "
"'hash_blacklist' AND rule_target = ? AND (operation = ? OR operation = '*') AND enabled = "
"1 ORDER BY priority LIMIT 1";
"SELECT rule_type FROM auth_rules WHERE rule_type LIKE 'blacklist_hash' AND pattern_type = 'hash' AND pattern_value = ? AND active = 1 LIMIT 1";
rc = sqlite3_prepare_v2(db, hash_blacklist_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, resource_hash, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, operation ? operation : "", -1, SQLITE_STATIC);
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char *description = (const char *)sqlite3_column_text(stmt, 1);
const char *description = "Hash blacklisted";
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 2 FAILED - "
"Hash blacklisted\n");
char hash_blacklist_msg[256];
@@ -1423,17 +1439,14 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
if (mime_type) {
// Match both exact MIME type and wildcard patterns (e.g., 'image/*')
const char *mime_blacklist_sql =
"SELECT rule_type, description FROM auth_rules WHERE rule_type = "
"'mime_blacklist' AND (rule_target = ? OR rule_target LIKE '%/*' AND ? LIKE REPLACE(rule_target, '*', '%')) AND (operation = ? OR operation = '*') AND enabled = "
"1 ORDER BY priority LIMIT 1";
"SELECT rule_type FROM auth_rules WHERE rule_type LIKE 'blacklist_mime' AND pattern_type = 'mime' AND (pattern_value = ? OR pattern_value LIKE '%/*' AND ? LIKE REPLACE(pattern_value, '*', '%')) AND active = 1 LIMIT 1";
rc = sqlite3_prepare_v2(db, mime_blacklist_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, mime_type, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, mime_type, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 3, operation ? operation : "", -1, SQLITE_STATIC);
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char *description = (const char *)sqlite3_column_text(stmt, 1);
const char *description = "MIME type blacklisted";
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 3 FAILED - "
"MIME type blacklisted\n");
char mime_blacklist_msg[256];
@@ -1462,133 +1475,151 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
}
// Step 4: Check pubkey whitelist
// Match both exact operation and wildcard '*'
const char *whitelist_sql =
"SELECT rule_type, description FROM auth_rules WHERE rule_type = "
"'pubkey_whitelist' AND rule_target = ? AND (operation = ? OR operation = '*') AND enabled = "
"1 ORDER BY priority LIMIT 1";
"SELECT rule_type FROM auth_rules WHERE rule_type LIKE 'whitelist_pubkey' AND pattern_type = 'pubkey' AND pattern_value = ? AND active = 1 LIMIT 1";
rc = sqlite3_prepare_v2(db, whitelist_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, pubkey, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, operation ? operation : "", -1, SQLITE_STATIC);
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char *description = (const char *)sqlite3_column_text(stmt, 1);
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 3 PASSED - "
const char *description = "Pubkey whitelisted";
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 4 PASSED - "
"Pubkey whitelisted\n");
char whitelist_msg[256];
sprintf(whitelist_msg,
"VALIDATOR_DEBUG: RULES ENGINE - Whitelist rule matched: %s\n",
description ? description : "Unknown");
snprintf(whitelist_msg,
sizeof(whitelist_msg),
"VALIDATOR_DEBUG: RULES ENGINE - Whitelist rule matched: %s\n",
description ? description : "Unknown");
validator_debug_log(whitelist_msg);
sqlite3_finalize(stmt);
sqlite3_close(db);
return NOSTR_SUCCESS; // Allow whitelisted pubkey
pubkey_whitelisted = 1;
} else {
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 4 - Pubkey not whitelisted\n");
}
sqlite3_finalize(stmt);
} else {
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 4 FAILED - Pubkey whitelist query failed\n");
}
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 3 FAILED - Pubkey "
"not whitelisted\n");
// Step 5: Check MIME type whitelist (only if not already denied)
// Step 5: Check MIME type whitelist
if (mime_type) {
// Match both exact MIME type and wildcard patterns (e.g., 'image/*')
char mime_pattern_wildcard[MAX_MIME_TYPE_LEN + 2];
const char *mime_whitelist_sql =
"SELECT rule_type, description FROM auth_rules WHERE rule_type = "
"'mime_whitelist' AND (rule_target = ? OR rule_target LIKE '%/*' AND ? LIKE REPLACE(rule_target, '*', '%')) AND (operation = ? OR operation = '*') AND enabled = "
"1 ORDER BY priority LIMIT 1";
"SELECT rule_type FROM auth_rules WHERE rule_type LIKE 'whitelist_mime' AND pattern_type = 'mime' AND (pattern_value = ? OR pattern_value LIKE ? ) AND active = 1 LIMIT 1";
rc = sqlite3_prepare_v2(db, mime_whitelist_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, mime_type, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, mime_type, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 3, operation ? operation : "", -1, SQLITE_STATIC);
const char *slash_pos = strchr(mime_type, '/');
if (slash_pos != NULL) {
size_t prefix_len = slash_pos - mime_type;
if (prefix_len < MAX_MIME_TYPE_LEN) {
snprintf(mime_pattern_wildcard, sizeof(mime_pattern_wildcard), "%.*s/%%", (int)prefix_len, mime_type);
} else {
snprintf(mime_pattern_wildcard, sizeof(mime_pattern_wildcard), "%%/%%");
}
} else {
snprintf(mime_pattern_wildcard, sizeof(mime_pattern_wildcard), "%s/%%", mime_type);
}
sqlite3_bind_text(stmt, 2, mime_pattern_wildcard, -1, SQLITE_TRANSIENT);
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char *description = (const char *)sqlite3_column_text(stmt, 1);
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 5 PASSED - "
"MIME type whitelisted\n");
const char *description = "MIME type whitelisted";
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 5 PASSED - MIME type whitelisted\n");
char mime_whitelist_msg[256];
sprintf(mime_whitelist_msg,
"VALIDATOR_DEBUG: RULES ENGINE - MIME whitelist rule matched: %s\n",
description ? description : "Unknown");
snprintf(mime_whitelist_msg,
sizeof(mime_whitelist_msg),
"VALIDATOR_DEBUG: RULES ENGINE - MIME whitelist rule matched: %s (pattern=%s)\n",
description ? description : "Unknown",
mime_pattern_wildcard);
validator_debug_log(mime_whitelist_msg);
sqlite3_finalize(stmt);
sqlite3_close(db);
return NOSTR_SUCCESS; // Allow whitelisted MIME type
mime_whitelisted = 1;
} else {
char mime_not_msg[256];
snprintf(mime_not_msg,
sizeof(mime_not_msg),
"VALIDATOR_DEBUG: RULES ENGINE - STEP 5 - MIME type not whitelisted (pattern=%s)\n",
mime_pattern_wildcard);
validator_debug_log(mime_not_msg);
}
sqlite3_finalize(stmt);
} else {
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 5 FAILED - Failed to prepare MIME whitelist query\n");
}
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 5 FAILED - MIME "
"type not whitelisted\n");
} else {
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 5 SKIPPED - No "
"MIME type provided\n");
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 5 SKIPPED - No MIME type provided\n");
}
// Step 6: Check if any MIME whitelist rules exist - if yes, deny by default
// Match both exact operation and wildcard '*'
// Step 6: Count MIME whitelist rules
const char *mime_whitelist_exists_sql =
"SELECT COUNT(*) FROM auth_rules WHERE rule_type = 'mime_whitelist' "
"AND (operation = ? OR operation = '*') AND enabled = 1 LIMIT 1";
"SELECT COUNT(*) FROM auth_rules WHERE rule_type LIKE 'whitelist_mime' "
"AND pattern_type = 'mime' AND active = 1";
rc = sqlite3_prepare_v2(db, mime_whitelist_exists_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, operation ? operation : "", -1, SQLITE_STATIC);
if (sqlite3_step(stmt) == SQLITE_ROW) {
int mime_whitelist_count = sqlite3_column_int(stmt, 0);
if (mime_whitelist_count > 0) {
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 6 FAILED - "
"MIME whitelist exists but type not in it\n");
// Set specific violation details for status code mapping
strcpy(g_last_rule_violation.violation_type, "mime_whitelist_violation");
strcpy(g_last_rule_violation.reason,
"MIME type not whitelisted for this operation");
sqlite3_finalize(stmt);
sqlite3_close(db);
return NOSTR_ERROR_AUTH_REQUIRED;
}
mime_whitelist_count = sqlite3_column_int(stmt, 0);
char mime_cnt_msg[256];
snprintf(mime_cnt_msg, sizeof(mime_cnt_msg),
"VALIDATOR_DEBUG: RULES ENGINE - MIME whitelist count: %d\n",
mime_whitelist_count);
validator_debug_log(mime_cnt_msg);
}
sqlite3_finalize(stmt);
} else {
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 6 FAILED - Failed to prepare MIME whitelist count query\n");
}
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 6 PASSED - No "
"MIME whitelist restrictions apply\n");
// Step 7: Check if any whitelist rules exist - if yes, deny by default
// Match both exact operation and wildcard '*'
if (mime_whitelist_count > 0 && !mime_whitelisted) {
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - MIME whitelist exists but MIME type not allowed\n");
strcpy(g_last_rule_violation.violation_type, "mime_whitelist_violation");
strcpy(g_last_rule_violation.reason, "MIME type not whitelisted for this operation");
sqlite3_close(db);
return NOSTR_ERROR_AUTH_REQUIRED;
}
// Step 7: Count pubkey whitelist rules
const char *whitelist_exists_sql =
"SELECT COUNT(*) FROM auth_rules WHERE rule_type = 'pubkey_whitelist' "
"AND (operation = ? OR operation = '*') AND enabled = 1 LIMIT 1";
"SELECT COUNT(*) FROM auth_rules WHERE (rule_type LIKE 'whitelist_pubkey' OR rule_type LIKE 'pubkey_whitelist') "
"AND pattern_type = 'pubkey' AND active = 1";
rc = sqlite3_prepare_v2(db, whitelist_exists_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, operation ? operation : "", -1, SQLITE_STATIC);
if (sqlite3_step(stmt) == SQLITE_ROW) {
int whitelist_count = sqlite3_column_int(stmt, 0);
if (whitelist_count > 0) {
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 4 FAILED - "
"Whitelist exists but pubkey not in it\n");
// Set specific violation details for status code mapping
strcpy(g_last_rule_violation.violation_type, "whitelist_violation");
strcpy(g_last_rule_violation.reason,
"Public key not whitelisted for this operation");
sqlite3_finalize(stmt);
sqlite3_close(db);
return NOSTR_ERROR_AUTH_REQUIRED;
}
pubkey_whitelist_count = sqlite3_column_int(stmt, 0);
char pubkey_cnt_msg[256];
snprintf(pubkey_cnt_msg, sizeof(pubkey_cnt_msg),
"VALIDATOR_DEBUG: RULES ENGINE - Pubkey whitelist count: %d\n",
pubkey_whitelist_count);
validator_debug_log(pubkey_cnt_msg);
}
sqlite3_finalize(stmt);
} else {
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 7 FAILED - Failed to prepare pubkey whitelist count query\n");
}
if (pubkey_whitelist_count > 0) {
char pubkey_whitelist_msg[256];
snprintf(pubkey_whitelist_msg, sizeof(pubkey_whitelist_msg),
"VALIDATOR_DEBUG: RULES ENGINE - Pubkey whitelist exists (%d entries)\n",
pubkey_whitelist_count);
validator_debug_log(pubkey_whitelist_msg);
}
if (pubkey_whitelist_count > 0 && !pubkey_whitelisted) {
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - Pubkey whitelist exists but pubkey not allowed\n");
strcpy(g_last_rule_violation.violation_type, "whitelist_violation");
strcpy(g_last_rule_violation.reason, "Public key not whitelisted for this operation");
sqlite3_close(db);
return NOSTR_ERROR_AUTH_REQUIRED;
}
if ((mime_whitelist_count > 0 && !mime_whitelisted) ||
(pubkey_whitelist_count > 0 && !pubkey_whitelisted)) {
// Defensive fallback; both denial cases above have already returned by this point
sqlite3_close(db);
return NOSTR_ERROR_AUTH_REQUIRED;
}
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 4 PASSED - No "
"whitelist restrictions apply\n");
sqlite3_close(db);
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 7 PASSED - All "
"rule checks completed, default ALLOW\n");
return NOSTR_SUCCESS; // Default allow if no restrictive rules matched
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - Completed whitelist checks\n");
return NOSTR_SUCCESS;
}
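For reference, the decision order implemented above can be reproduced directly against the auth_rules table. A minimal sqlite3 sketch, assuming the column names used by the queries in this function (rule_type, pattern_type, pattern_value, active); DB_PATH and PUBKEY below are placeholders rather than values from a live deployment:

# Point DB_PATH at the server's sqlite database before running.
DB_PATH="db/ginxsom.db"
PUBKEY="2a38db7fc1ffdabb43c79b5ad525f7d97102d4d235efc257dfd1514571f8159f"

# Step 1: any active blacklist row for this pubkey denies the request outright.
sqlite3 "$DB_PATH" "SELECT COUNT(*) FROM auth_rules WHERE rule_type = 'blacklist_pubkey' AND pattern_type = 'pubkey' AND pattern_value = '$PUBKEY' AND active = 1;"

# Steps 4 and 7: if any active pubkey whitelist rows exist, the pubkey must
# appear among them; with no whitelist rows at all, the default is allow.
sqlite3 "$DB_PATH" "SELECT COUNT(*) FROM auth_rules WHERE rule_type IN ('whitelist_pubkey', 'pubkey_whitelist') AND pattern_type = 'pubkey' AND active = 1;"
sqlite3 "$DB_PATH" "SELECT COUNT(*) FROM auth_rules WHERE rule_type IN ('whitelist_pubkey', 'pubkey_whitelist') AND pattern_type = 'pubkey' AND pattern_value = '$PUBKEY' AND active = 1;"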
/**

View File

@@ -11,8 +11,13 @@ SERVER_URL="https://localhost:9443"
UPLOAD_ENDPOINT="${SERVER_URL}/upload"
TEST_FILE="test_blob_$(date +%s).txt"
CLEANUP_FILES=()
NOSTR_PRIVKEY="22cc83aa57928a2800234c939240c9a6f0f44a33ea3838a860ed38930b195afd"
NOSTR_PUBKEY="8ff74724ed641b3c28e5a86d7c5cbc49c37638ace8c6c38935860e7a5eedde0e"
NOSTR_PRIVKEY="39079f9fbdead31b5ec1724479e62c892a6866699c7873613c19832caff447bd"
NOSTR_PUBKEY="2a38db7fc1ffdabb43c79b5ad525f7d97102d4d235efc257dfd1514571f8159f"
# NOSTR_PRIVKEY="22cc83aa57928a2800234c939240c9a6f0f44a33ea3838a860ed38930b195afd"
# NOSTR_PUBKEY="8ff74724ed641b3c28e5a86d7c5cbc49c37638ace8c6c38935860e7a5eedde0e"
# Colors for output
RED='\033[0;31m'

View File

@@ -1,19 +1,28 @@
#!/bin/bash
# white_black_list_test.sh - Whitelist/Blacklist Rules Test Suite
# Tests the auth_rules table functionality for pubkey and MIME type filtering
# Tests the auth_rules table functionality using Kind 23458 admin commands
# Configuration
SERVER_URL="http://localhost:9001"
UPLOAD_ENDPOINT="${SERVER_URL}/upload"
DB_PATH="db/ginxsom.db"
ADMIN_API_ENDPOINT="${SERVER_URL}/api/admin"
DB_PATH="db/52e366edfa4e9cc6a6d4653828e51ccf828a2f5a05227d7a768f33b5a198681a.db"
TEST_DIR="tests/auth_test_tmp"
TEST_KEYS_FILE=".test_keys"
# Test results tracking
TESTS_PASSED=0
TESTS_FAILED=0
TOTAL_TESTS=0
# Load admin keys from .test_keys
if [[ ! -f "$TEST_KEYS_FILE" ]]; then
echo "$TEST_KEYS_FILE not found"
exit 1
fi
source "$TEST_KEYS_FILE"
# Test keys for different scenarios - Using WSB's keys for TEST_USER1
# Generated using: nak key public <privkey>
TEST_USER1_PRIVKEY="22cc83aa57928a2800234c939240c9a6f0f44a33ea3838a860ed38930b195afd"
@@ -42,6 +51,37 @@ record_test_result() {
fi
}
# Helper function to send admin command via Kind 23458
send_admin_command() {
local command_json="$1"
# Encrypt command with NIP-44
local encrypted_command=$(nak encrypt --sec "$ADMIN_PRIVKEY" -p "$SERVER_PUBKEY" "$command_json")
if [[ -z "$encrypted_command" ]]; then
echo "❌ Failed to encrypt command"
return 1
fi
# Create Kind 23458 event
local event=$(nak event -k 23458 \
-c "$encrypted_command" \
--tag p="$SERVER_PUBKEY" \
--sec "$ADMIN_PRIVKEY")
if [[ -z "$event" ]]; then
echo "❌ Failed to create admin event"
return 1
fi
# Send to admin API endpoint
local response=$(curl -s -X POST "$ADMIN_API_ENDPOINT" \
-H "Content-Type: application/json" \
-d "$event")
echo "$response"
}
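# A couple of standalone invocations of the helper above, using the same
# command shapes the sections below send. This is only an illustrative sketch:
# the jq filter assumes the admin endpoint answers with JSON.
# send_admin_command '["config_update", {"auth_rules_enabled": "true"}]'
# send_admin_command '["sql_query", "SELECT rule_type, pattern_value, active FROM auth_rules"]' | jq '.'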
# Check prerequisites
for cmd in nak curl jq sqlite3; do
if ! command -v $cmd &> /dev/null; then
@@ -130,20 +170,24 @@ test_upload() {
}
# Clean up any existing rules from previous tests
echo "Cleaning up existing auth rules..."
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;" 2>/dev/null
echo "Cleaning up existing auth rules via admin command..."
CLEANUP_CMD='["sql_query", "DELETE FROM auth_rules"]'
send_admin_command "$CLEANUP_CMD" > /dev/null 2>&1
# Enable authentication rules
echo "Enabling authentication rules..."
sqlite3 "$DB_PATH" "UPDATE config SET value = 'true' WHERE key = 'auth_rules_enabled';"
ENABLE_CMD='["config_update", {"auth_rules_enabled": "true"}]'
send_admin_command "$ENABLE_CMD" > /dev/null 2>&1
echo
echo "=== SECTION 1: PUBKEY BLACKLIST TESTS ==="
echo
# Test 1: Add pubkey blacklist rule
echo "Adding blacklist rule for TEST_USER3..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_blacklist', '$TEST_USER3_PUBKEY', 'upload', 10, 'Test blacklist');"
# Test 1: Add pubkey blacklist rule via admin command
echo "Adding blacklist rule for TEST_USER3 via admin API..."
BLACKLIST_CMD='["blacklist", "pubkey", "'$TEST_USER3_PUBKEY'"]'
BLACKLIST_RESPONSE=$(send_admin_command "$BLACKLIST_CMD")
echo "Response: $BLACKLIST_RESPONSE" | jq -c '.' 2>/dev/null || echo "$BLACKLIST_RESPONSE"
# Test 1a: Blacklisted user should be denied
test_file1=$(create_test_file "blacklist_test1.txt" "Content from blacklisted user")
@@ -157,13 +201,16 @@ echo
echo "=== SECTION 2: PUBKEY WHITELIST TESTS ==="
echo
# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"
# Clean rules via admin command
echo "Cleaning rules via admin API..."
CLEANUP_CMD='["sql_query", "DELETE FROM auth_rules"]'
send_admin_command "$CLEANUP_CMD" > /dev/null 2>&1
# Test 2: Add pubkey whitelist rule
echo "Adding whitelist rule for TEST_USER1..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_whitelist', '$TEST_USER1_PUBKEY', 'upload', 300, 'Test whitelist');"
# Test 2: Add pubkey whitelist rule via admin command
echo "Adding whitelist rule for TEST_USER1 via admin API..."
WHITELIST_CMD='["whitelist", "pubkey", "'$TEST_USER1_PUBKEY'"]'
WHITELIST_RESPONSE=$(send_admin_command "$WHITELIST_CMD")
echo "Response: $WHITELIST_RESPONSE" | jq -c '.' 2>/dev/null || echo "$WHITELIST_RESPONSE"
# Test 2a: Whitelisted user should succeed
test_file3=$(create_test_file "whitelist_test1.txt" "Content from whitelisted user")
@@ -177,15 +224,17 @@ echo
echo "=== SECTION 3: HASH BLACKLIST TESTS ==="
echo
# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
# Clean rules via admin command
CLEANUP_CMD='["sql_query", "DELETE FROM auth_rules"]'
send_admin_command "$CLEANUP_CMD" > /dev/null 2>&1
# Test 3: Create a file and blacklist its hash
# Test 3: Create a file and blacklist its hash via admin command
test_file5=$(create_test_file "hash_blacklist_test.txt" "This specific file is blacklisted")
BLACKLISTED_HASH=$(sha256sum "$test_file5" | cut -d' ' -f1)
echo "Adding hash blacklist rule for $BLACKLISTED_HASH..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('hash_blacklist', '$BLACKLISTED_HASH', 'upload', 100, 'Test hash blacklist');"
echo "Adding hash blacklist rule for $BLACKLISTED_HASH via admin API..."
HASH_BLACKLIST_CMD='["blacklist", "hash", "'$BLACKLISTED_HASH'"]'
send_admin_command "$HASH_BLACKLIST_CMD" > /dev/null 2>&1
# Test 3a: Blacklisted hash should be denied
test_upload "Test 3a: Blacklisted Hash Upload" "$TEST_USER1_PRIVKEY" "$test_file5" "403"
@@ -198,13 +247,14 @@ echo
echo "=== SECTION 4: MIME TYPE BLACKLIST TESTS ==="
echo
# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"
# Clean rules via admin command
CLEANUP_CMD='["sql_query", "DELETE FROM auth_rules"]'
send_admin_command "$CLEANUP_CMD" > /dev/null 2>&1
# Test 4: Blacklist executable MIME types
echo "Adding MIME type blacklist rules..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('mime_blacklist', 'application/x-executable', 'upload', 200, 'Block executables');"
# Test 4: Blacklist executable MIME types via admin command
echo "Adding MIME type blacklist rules via admin API..."
MIME_BLACKLIST_CMD='["blacklist", "mime", "application/x-executable"]'
send_admin_command "$MIME_BLACKLIST_CMD" > /dev/null 2>&1
# Note: This test would require the server to detect MIME types from file content
# For now, we'll test with text/plain which should be allowed
@@ -215,14 +265,16 @@ echo
echo "=== SECTION 5: MIME TYPE WHITELIST TESTS ==="
echo
# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"
# Clean rules via admin command
CLEANUP_CMD='["sql_query", "DELETE FROM auth_rules"]'
send_admin_command "$CLEANUP_CMD" > /dev/null 2>&1
# Test 5: Whitelist only image MIME types
echo "Adding MIME type whitelist rules..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('mime_whitelist', 'image/jpeg', 'upload', 400, 'Allow JPEG');"
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('mime_whitelist', 'image/png', 'upload', 400, 'Allow PNG');"
# Test 5: Whitelist only image MIME types via admin command
echo "Adding MIME type whitelist rules via admin API..."
MIME_WL1_CMD='["whitelist", "mime", "image/jpeg"]'
MIME_WL2_CMD='["whitelist", "mime", "image/png"]'
send_admin_command "$MIME_WL1_CMD" > /dev/null 2>&1
send_admin_command "$MIME_WL2_CMD" > /dev/null 2>&1
# Note: MIME type detection would need to be implemented in the server
# For now, text/plain should be denied if whitelist exists
@@ -233,14 +285,16 @@ echo
echo "=== SECTION 6: PRIORITY ORDERING TESTS ==="
echo
# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"
# Clean rules via admin command
CLEANUP_CMD='["sql_query", "DELETE FROM auth_rules"]'
send_admin_command "$CLEANUP_CMD" > /dev/null 2>&1
# Test 6: Blacklist should override whitelist (rule precedence)
echo "Adding both blacklist (priority 10) and whitelist (priority 300) for same pubkey..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_blacklist', '$TEST_USER1_PUBKEY', 'upload', 10, 'Blacklist priority test');"
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_whitelist', '$TEST_USER1_PUBKEY', 'upload', 300, 'Whitelist priority test');"
echo "Adding both blacklist and whitelist for same pubkey via admin API..."
BL_CMD='["blacklist", "pubkey", "'$TEST_USER1_PUBKEY'"]'
WL_CMD='["whitelist", "pubkey", "'$TEST_USER1_PUBKEY'"]'
send_admin_command "$BL_CMD" > /dev/null 2>&1
send_admin_command "$WL_CMD" > /dev/null 2>&1
# Test 6a: Blacklist should win (blacklist rules are checked before whitelist rules)
test_file9=$(create_test_file "priority_test.txt" "Testing priority ordering")
@@ -250,13 +304,14 @@ echo
echo "=== SECTION 7: OPERATION-SPECIFIC RULES ==="
echo
# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"
# Clean rules via admin command
CLEANUP_CMD='["sql_query", "DELETE FROM auth_rules"]'
send_admin_command "$CLEANUP_CMD" > /dev/null 2>&1
# Test 7: Blacklist only for upload operation
echo "Adding blacklist rule for upload operation only..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_blacklist', '$TEST_USER2_PUBKEY', 'upload', 10, 'Upload-only blacklist');"
# Test 7: Blacklist for user via admin command
echo "Adding blacklist rule for TEST_USER2 via admin API..."
BL_USER2_CMD='["blacklist", "pubkey", "'$TEST_USER2_PUBKEY'"]'
send_admin_command "$BL_USER2_CMD" > /dev/null 2>&1
# Test 7a: Upload should be denied
test_file10=$(create_test_file "operation_test.txt" "Testing operation-specific rules")
@@ -266,13 +321,14 @@ echo
echo "=== SECTION 8: WILDCARD OPERATION TESTS ==="
echo
# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"
# Clean rules via admin command
CLEANUP_CMD='["sql_query", "DELETE FROM auth_rules"]'
send_admin_command "$CLEANUP_CMD" > /dev/null 2>&1
# Test 8: Blacklist for all operations using wildcard
echo "Adding blacklist rule for all operations (*)..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_blacklist', '$TEST_USER3_PUBKEY', '*', 10, 'All operations blacklist');"
# Test 8: Blacklist for user via admin command
echo "Adding blacklist rule for TEST_USER3 via admin API..."
BL_USER3_CMD='["blacklist", "pubkey", "'$TEST_USER3_PUBKEY'"]'
send_admin_command "$BL_USER3_CMD" > /dev/null 2>&1
# Test 8a: Upload should be denied
test_file11=$(create_test_file "wildcard_test.txt" "Testing wildcard operation")
@@ -282,13 +338,13 @@ echo
echo "=== SECTION 9: ENABLED/DISABLED RULES ==="
echo
# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"
# Clean rules via admin command
CLEANUP_CMD='["sql_query", "DELETE FROM auth_rules"]'
send_admin_command "$CLEANUP_CMD" > /dev/null 2>&1
# Test 9: Disabled rule should not be enforced
echo "Adding disabled blacklist rule..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, enabled, description) VALUES ('pubkey_blacklist', '$TEST_USER1_PUBKEY', 'upload', 10, 0, 'Disabled blacklist');"
echo "Adding disabled blacklist rule via SQL (admin API doesn't support active=0 on create)..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, pattern_type, pattern_value, active) VALUES ('blacklist_pubkey', 'pubkey', '$TEST_USER1_PUBKEY', 0);"
# Test 9a: Upload should succeed (rule is disabled)
test_file12=$(create_test_file "disabled_rule_test.txt" "Testing disabled rule")
@@ -296,7 +352,7 @@ test_upload "Test 9a: Disabled Rule Not Enforced" "$TEST_USER1_PRIVKEY" "$test_f
# Test 9b: Enable the rule
echo "Enabling the blacklist rule..."
sqlite3 "$DB_PATH" "UPDATE auth_rules SET enabled = 1 WHERE rule_target = '$TEST_USER1_PUBKEY';"
sqlite3 "$DB_PATH" "UPDATE auth_rules SET active = 1 WHERE pattern_value = '$TEST_USER1_PUBKEY';"
# Test 9c: Upload should now be denied
test_file13=$(create_test_file "enabled_rule_test.txt" "Testing enabled rule")
@@ -307,9 +363,10 @@ echo
echo "=== SECTION 11: CLEANUP AND RESET ==="
echo
# Clean up all test rules
echo "Cleaning up test rules..."
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
# Clean up all test rules via admin command
echo "Cleaning up test rules via admin API..."
CLEANUP_CMD='["sql_query", "DELETE FROM auth_rules"]'
send_admin_command "$CLEANUP_CMD" > /dev/null 2>&1
# Verify cleanup
RULE_COUNT=$(sqlite3 "$DB_PATH" "SELECT COUNT(*) FROM auth_rules;" 2>/dev/null)
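# If the database file is not directly reachable, the same verification could be
# sketched through the admin API instead; this assumes the sql_query response
# exposes the row data in its JSON body, which is not guaranteed here.
# REMAINING=$(send_admin_command '["sql_query", "SELECT COUNT(*) FROM auth_rules"]')
# echo "Remaining auth_rules rows (admin API view): $REMAINING"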