11 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Your Name | deec021933 | v0.1.11 - Last push before changing logging system | 2025-12-07 12:50:12 -04:00 |
| Your Name | db7621a293 | v0.1.10 - In the middle of working on getting admin api working | 2025-11-21 11:54:17 -04:00 |
| Your Name | e693fe3caa | v0.1.9 - program generates its own private keys. | 2025-11-20 07:53:58 -04:00 |
| Your Name | c1b615de32 | v0.1.8 - Removed cache functionality for now | 2025-11-13 10:59:14 -04:00 |
| Your Name | 455aab1eac | v0.1.7 - Fixing black and white lists | 2025-11-13 10:21:26 -04:00 |
| Your Name | 533c7f29f2 | v0.1.6 - Just catching up | 2025-11-11 17:02:14 -04:00 |
| Your Name | 35f8385508 | v0.1.5 - Make versioning system | 2025-11-11 07:16:33 -04:00 |
| Your Name | fe2495f897 | v0.1.4 - Make response at root JSON | 2025-11-11 07:08:27 -04:00 |
| Your Name | 30e4408b28 | v0.1.3 - Implement https | 2025-11-09 19:57:45 -04:00 |
| Your Name | e43dd5c64f | v0.1.2 - . | 2025-10-18 17:38:56 -04:00 |
| Your Name | bb18ffcdce | v0.1.1 - Cleaning things up. | 2025-10-16 15:24:41 -04:00 |
91 changed files with 10003 additions and 2410 deletions

1
.gitignore vendored

@@ -2,4 +2,5 @@ blossom/
 logs/
 nostr_core_lib/
 blobs/
+c-relay/


@@ -1,4 +1,4 @@
-ADMIN_PRIVKEY='31d3fd4bb38f4f6b60fb66e0a2e5063703bb3394579ce820d5aaf3773b96633f'
-ADMIN_PUBKEY='bd109762a8185716ec0fe0f887e911c30d40e36cf7b6bb99f6eef3301e9f6f99'
+ADMIN_PRIVKEY='22cc83aa57928a2800234c939240c9a6f0f44a33ea3838a860ed38930b195afd'
+ADMIN_PUBKEY='8ff74724ed641b3c28e5a86d7c5cbc49c37638ace8c6c38935860e7a5eedde0e'
 SERVER_PRIVKEY='c4e0d2ed7d36277d6698650f68a6e9199f91f3abb476a67f07303e81309c48f1'
 SERVER_PUBKEY='52e366edfa4e9cc6a6d4653828e51ccf828a2f5a05227d7a768f33b5a198681a'

109
42.md

@@ -1,109 +0,0 @@
NIP-42
======
Authentication of clients to relays
-----------------------------------
`draft` `optional`
This NIP defines a way for clients to authenticate to relays by signing an ephemeral event.
## Motivation
A relay may want to require clients to authenticate to access restricted resources. For example,
- A relay may request payment or other forms of whitelisting to publish events -- this can naïvely be achieved by limiting publication to events signed by the whitelisted key, but with this NIP they may choose to accept any events as long as they are published from an authenticated user;
- A relay may limit access to `kind: 4` DMs to only the parties involved in the chat exchange, and for that it may require authentication before clients can query for that kind.
- A relay may limit subscriptions of any kind to paying users or users whitelisted through any other means, and require authentication.
## Definitions
### New client-relay protocol messages
This NIP defines a new message, `AUTH`, which relays CAN send when they support authentication and clients can send to relays when they want to authenticate. When sent by relays the message has the following form:
```
["AUTH", <challenge-string>]
```
And, when sent by clients, the following form:
```
["AUTH", <signed-event-json>]
```
Clients MAY provide signed events from multiple pubkeys in a sequence of `AUTH` messages. Relays MUST treat all pubkeys as authenticated accordingly.
`AUTH` messages sent by clients MUST be answered with an `OK` message, like any `EVENT` message.
### Canonical authentication event
The signed event is an ephemeral event not meant to be published or queried; it must be of `kind: 22242` and it should have at least two tags, one for the relay URL and one for the challenge string as received from the relay. Relays MUST exclude `kind: 22242` events from being broadcasted to any client. `created_at` should be the current time. Example:
```jsonc
{
"kind": 22242,
"tags": [
["relay", "wss://relay.example.com/"],
["challenge", "challengestringhere"]
],
// other fields...
}
```
### `OK` and `CLOSED` machine-readable prefixes
This NIP defines two new prefixes that can be used in `OK` (in response to event writes by clients) and `CLOSED` (in response to rejected subscriptions by clients):
- `"auth-required: "` - for when a client has not performed `AUTH` and the relay requires that to fulfill the query or write the event.
- `"restricted: "` - for when a client has already performed `AUTH` but the key used to perform it is still not allowed by the relay or is exceeding its authorization.
## Protocol flow
At any moment the relay may send an `AUTH` message to the client containing a challenge. The challenge is valid for the duration of the connection or until another challenge is sent by the relay. The client MAY decide to send its `AUTH` event at any point and the authenticated session is valid afterwards for the duration of the connection.
### `auth-required` in response to a `REQ` message
Given that a relay is likely to require clients to perform authentication only for certain jobs, like answering a `REQ` or accepting an `EVENT` write, these are some expected common flows:
```
relay: ["AUTH", "<challenge>"]
client: ["REQ", "sub_1", {"kinds": [4]}]
relay: ["CLOSED", "sub_1", "auth-required: we can't serve DMs to unauthenticated users"]
client: ["AUTH", {"id": "abcdef...", ...}]
client: ["AUTH", {"id": "abcde2...", ...}]
relay: ["OK", "abcdef...", true, ""]
relay: ["OK", "abcde2...", true, ""]
client: ["REQ", "sub_1", {"kinds": [4]}]
relay: ["EVENT", "sub_1", {...}]
relay: ["EVENT", "sub_1", {...}]
relay: ["EVENT", "sub_1", {...}]
relay: ["EVENT", "sub_1", {...}]
...
```
In this case, the `AUTH` message from the relay could be sent right as the client connects or it can be sent immediately before the `CLOSED` is sent. The only requirement is that _the client must have a stored challenge associated with that relay_ so it can act upon that in response to the `auth-required` `CLOSED` message.
### `auth-required` in response to an `EVENT` message
The same flow is valid for when a client wants to write an `EVENT` to the relay, except now the relay sends back an `OK` message instead of a `CLOSED` message:
```
relay: ["AUTH", "<challenge>"]
client: ["EVENT", {"id": "012345...", ...}]
relay: ["OK", "012345...", false, "auth-required: we only accept events from registered users"]
client: ["AUTH", {"id": "abcdef...", ...}]
relay: ["OK", "abcdef...", true, ""]
client: ["EVENT", {"id": "012345...", ...}]
relay: ["OK", "012345...", true, ""]
```
## Signed Event Verification
To verify `AUTH` messages, relays must ensure:
- that the `kind` is `22242`;
- that the event `created_at` is close (e.g. within ~10 minutes) of the current time;
- that the `"challenge"` tag matches the challenge sent before;
- that the `"relay"` tag matches the relay URL:
- URL normalization techniques can be applied. For most cases just checking if the domain name is correct should be enough.
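A minimal sketch of these four checks in C, assuming cJSON for parsing; signature verification and URL normalization are assumed to happen elsewhere:
```c
#include <string.h>
#include <time.h>
#include "cJSON.h"
/* Return the value of the first tag named `name`, or NULL if absent. */
static const char *tag_value(const cJSON *event, const char *name) {
    const cJSON *tags = cJSON_GetObjectItemCaseSensitive(event, "tags");
    const cJSON *tag = NULL;
    cJSON_ArrayForEach(tag, tags) {
        const cJSON *k = cJSON_GetArrayItem(tag, 0);
        const cJSON *v = cJSON_GetArrayItem(tag, 1);
        if (cJSON_IsString(k) && cJSON_IsString(v) &&
            strcmp(k->valuestring, name) == 0)
            return v->valuestring;
    }
    return NULL;
}
/* Apply the NIP-42 checks to an already-signature-verified event. */
static int nip42_event_ok(const cJSON *event, const char *expected_challenge,
                          const char *relay_url) {
    const cJSON *kind = cJSON_GetObjectItemCaseSensitive(event, "kind");
    if (!cJSON_IsNumber(kind) || kind->valueint != 22242) return 0;
    const cJSON *ts = cJSON_GetObjectItemCaseSensitive(event, "created_at");
    if (!cJSON_IsNumber(ts)) return 0;
    double skew = difftime(time(NULL), (time_t)ts->valuedouble);
    if (skew > 600 || skew < -600) return 0;               /* ~10-minute window */
    const char *challenge = tag_value(event, "challenge");
    if (!challenge || strcmp(challenge, expected_challenge) != 0) return 0;
    const char *relay = tag_value(event, "relay");
    if (!relay || strcmp(relay, relay_url) != 0) return 0;  /* or compare domains */
    return 1;
}
```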


@@ -1,14 +1,14 @@
# Ginxsom Blossom Server Makefile
CC = gcc
CFLAGS = -Wall -Wextra -std=c99 -O2 -Inostr_core_lib/nostr_core -Inostr_core_lib/cjson
LIBS = -lfcgi -lsqlite3 nostr_core_lib/libnostr_core_x64.a -lz -ldl -lpthread -lm -L/usr/local/lib -lsecp256k1 -lssl -lcrypto -lcurl
CFLAGS = -Wall -Wextra -std=gnu99 -O2 -Inostr_core_lib/nostr_core -Inostr_core_lib/cjson $(shell pkg-config --cflags libwebsockets)
LIBS = -lfcgi -lsqlite3 nostr_core_lib/libnostr_core_x64.a -lz -ldl -lpthread -lm -L/usr/local/lib -lsecp256k1 -lssl -lcrypto -lcurl $(shell pkg-config --libs libwebsockets)
SRCDIR = src
BUILDDIR = build
TARGET = $(BUILDDIR)/ginxsom-fcgi
# Source files
SOURCES = $(SRCDIR)/main.c $(SRCDIR)/admin_api.c $(SRCDIR)/bud04.c $(SRCDIR)/bud06.c $(SRCDIR)/bud08.c $(SRCDIR)/bud09.c $(SRCDIR)/request_validator.c
SOURCES = $(SRCDIR)/main.c $(SRCDIR)/admin_api.c $(SRCDIR)/admin_auth.c $(SRCDIR)/admin_event.c $(SRCDIR)/admin_websocket.c $(SRCDIR)/admin_handlers.c $(SRCDIR)/bud04.c $(SRCDIR)/bud06.c $(SRCDIR)/bud08.c $(SRCDIR)/bud09.c $(SRCDIR)/request_validator.c
OBJECTS = $(SOURCES:$(SRCDIR)/%.c=$(BUILDDIR)/%.o)
# Default target


612
STATIC_MUSL_GUIDE.md Normal file

@@ -0,0 +1,612 @@
# Static MUSL Build Guide for C Programs
## Overview
This guide explains how to build truly portable static binaries using Alpine Linux and MUSL libc. These binaries have **zero runtime dependencies** and work on any Linux distribution without modification.
This guide is specifically tailored for C programs that use:
- **nostr_core_lib** - Nostr protocol implementation
- **nostr_login_lite** - Nostr authentication library
- Common dependencies: libwebsockets, OpenSSL, SQLite, curl, secp256k1
## Why MUSL Static Binaries?
### Advantages Over glibc
| Feature | MUSL Static | glibc Static | glibc Dynamic |
|---------|-------------|--------------|---------------|
| **Portability** | ✓ Any Linux | ⚠ glibc only | ✗ Requires matching libs |
| **Binary Size** | ~7-10 MB | ~12-15 MB | ~2-3 MB |
| **Dependencies** | None | NSS libs | Many system libs |
| **Deployment** | Single file | Single file + NSS | Binary + libraries |
| **Compatibility** | Universal | glibc version issues | Library version hell |
### Key Benefits
1. **True Portability**: Works on Alpine, Ubuntu, Debian, CentOS, Arch, etc.
2. **No Library Hell**: No `GLIBC_2.XX not found` errors
3. **Simple Deployment**: Just copy one file
4. **Reproducible Builds**: Same Docker image = same binary
5. **Security**: No dependency on system libraries with vulnerabilities
## Quick Start
### Prerequisites
- Docker installed and running
- Your C project with source code
- Internet connection for downloading dependencies
### Basic Build Process
```bash
# 1. Copy the Dockerfile template (see below)
cp /path/to/c-relay/Dockerfile.alpine-musl ./Dockerfile.static
# 2. Customize for your project (see Customization section)
vim Dockerfile.static
# 3. Build the static binary
docker build --platform linux/amd64 -f Dockerfile.static -t my-app-builder .
# 4. Extract the binary
docker create --name temp-container my-app-builder
docker cp temp-container:/build/my_app_static ./my_app_static
docker rm temp-container
# 5. Verify it's static
ldd ./my_app_static # Should show "not a dynamic executable"
```
## Dockerfile Template
Here's a complete Dockerfile template you can customize for your project:
```dockerfile
# Alpine-based MUSL static binary builder
# Produces truly portable binaries with zero runtime dependencies
FROM alpine:3.19 AS builder
# Install build dependencies
RUN apk add --no-cache \
build-base \
musl-dev \
git \
cmake \
pkgconfig \
autoconf \
automake \
libtool \
openssl-dev \
openssl-libs-static \
zlib-dev \
zlib-static \
curl-dev \
curl-static \
sqlite-dev \
sqlite-static \
linux-headers \
wget \
bash
WORKDIR /build
# Build libsecp256k1 static (required for Nostr)
RUN cd /tmp && \
git clone https://github.com/bitcoin-core/secp256k1.git && \
cd secp256k1 && \
./autogen.sh && \
./configure --enable-static --disable-shared --prefix=/usr \
CFLAGS="-fPIC" && \
make -j$(nproc) && \
make install && \
rm -rf /tmp/secp256k1
# Build libwebsockets static (if needed for WebSocket support)
RUN cd /tmp && \
git clone --depth 1 --branch v4.3.3 https://github.com/warmcat/libwebsockets.git && \
cd libwebsockets && \
mkdir build && cd build && \
cmake .. \
-DLWS_WITH_STATIC=ON \
-DLWS_WITH_SHARED=OFF \
-DLWS_WITH_SSL=ON \
-DLWS_WITHOUT_TESTAPPS=ON \
-DLWS_WITHOUT_TEST_SERVER=ON \
-DLWS_WITHOUT_TEST_CLIENT=ON \
-DLWS_WITHOUT_TEST_PING=ON \
-DLWS_WITH_HTTP2=OFF \
-DLWS_WITH_LIBUV=OFF \
-DLWS_WITH_LIBEVENT=OFF \
-DLWS_IPV6=ON \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=/usr \
-DCMAKE_C_FLAGS="-fPIC" && \
make -j$(nproc) && \
make install && \
rm -rf /tmp/libwebsockets
# Copy git configuration for submodules
COPY .gitmodules /build/.gitmodules
COPY .git /build/.git
# Initialize submodules
RUN git submodule update --init --recursive
# Copy and build nostr_core_lib
COPY nostr_core_lib /build/nostr_core_lib/
RUN cd nostr_core_lib && \
chmod +x build.sh && \
sed -i 's/CFLAGS="-Wall -Wextra -std=c99 -fPIC -O2"/CFLAGS="-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 -Wall -Wextra -std=c99 -fPIC -O2"/' build.sh && \
rm -f *.o *.a 2>/dev/null || true && \
./build.sh --nips=1,6,13,17,19,44,59
# Copy and build nostr_login_lite (if used)
# COPY nostr_login_lite /build/nostr_login_lite/
# RUN cd nostr_login_lite && make static
# Copy your application source
COPY src/ /build/src/
COPY Makefile /build/Makefile
# Build your application with full static linking
RUN gcc -static -O2 -Wall -Wextra -std=c99 \
-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 \
-I. -Inostr_core_lib -Inostr_core_lib/nostr_core \
-Inostr_core_lib/cjson -Inostr_core_lib/nostr_websocket \
src/*.c \
-o /build/my_app_static \
nostr_core_lib/libnostr_core_x64.a \
-lwebsockets -lssl -lcrypto -lsqlite3 -lsecp256k1 \
-lcurl -lz -lpthread -lm -ldl && \
strip /build/my_app_static
# Verify it's truly static
RUN echo "=== Binary Information ===" && \
file /build/my_app_static && \
ls -lh /build/my_app_static && \
echo "=== Checking for dynamic dependencies ===" && \
(ldd /build/my_app_static 2>&1 || echo "Binary is static")
# Output stage - just the binary
FROM scratch AS output
COPY --from=builder /build/my_app_static /my_app_static
```
## Customization Guide
### 1. Adjust Dependencies
**Add dependencies** by modifying the `apk add` section:
```dockerfile
RUN apk add --no-cache \
build-base \
musl-dev \
# Add your dependencies here:
libpng-dev \
libpng-static \
libjpeg-turbo-dev \
libjpeg-turbo-static
```
**Remove unused dependencies** to speed up builds:
- Remove `libwebsockets` section if you don't need WebSocket support
- Remove `sqlite` if you don't use databases
- Remove `curl` if you don't make HTTP requests
### 2. Configure nostr_core_lib NIPs
Specify which NIPs your application needs:
```bash
./build.sh --nips=1,6,19 # Minimal: Basic protocol, keys, bech32
./build.sh --nips=1,6,13,17,19,44,59 # Full: All common NIPs
./build.sh --nips=all # Everything available
```
**Common NIP combinations:**
- **Basic client**: `1,6,19` (events, keys, bech32)
- **With encryption**: `1,6,19,44` (add modern encryption)
- **With DMs**: `1,6,17,19,44,59` (add private messages)
- **Relay/server**: `1,6,13,17,19,42,44,59` (add PoW, auth)
### 3. Modify Compilation Flags
**For your application:**
```dockerfile
RUN gcc -static -O2 -Wall -Wextra -std=c99 \
-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 \ # REQUIRED for MUSL
-I. -Inostr_core_lib \ # Include paths
src/*.c \ # Your source files
-o /build/my_app_static \ # Output binary
nostr_core_lib/libnostr_core_x64.a \ # Nostr library
-lwebsockets -lssl -lcrypto \ # Link libraries
-lsqlite3 -lsecp256k1 -lcurl \
-lz -lpthread -lm -ldl
```
**Debug build** (with symbols, no optimization):
```dockerfile
RUN gcc -static -g -O0 -DDEBUG \
-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 \
# ... rest of flags
```
### 4. Multi-Architecture Support
Build for different architectures:
```bash
# x86_64 (Intel/AMD)
docker build --platform linux/amd64 -f Dockerfile.static -t my-app-x86 .
# ARM64 (Apple Silicon, Raspberry Pi 4+)
docker build --platform linux/arm64 -f Dockerfile.static -t my-app-arm64 .
```
## Build Script Template
Create a `build_static.sh` script for convenience:
```bash
#!/bin/bash
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
BUILD_DIR="$SCRIPT_DIR/build"
DOCKERFILE="$SCRIPT_DIR/Dockerfile.static"
# Detect architecture
ARCH=$(uname -m)
case "$ARCH" in
x86_64)
PLATFORM="linux/amd64"
OUTPUT_NAME="my_app_static_x86_64"
;;
aarch64|arm64)
PLATFORM="linux/arm64"
OUTPUT_NAME="my_app_static_arm64"
;;
*)
echo "Unknown architecture: $ARCH"
exit 1
;;
esac
echo "Building for platform: $PLATFORM"
mkdir -p "$BUILD_DIR"
# Build Docker image
docker build \
--platform "$PLATFORM" \
-f "$DOCKERFILE" \
-t my-app-builder:latest \
--progress=plain \
.
# Extract binary
CONTAINER_ID=$(docker create my-app-builder:latest)
docker cp "$CONTAINER_ID:/build/my_app_static" "$BUILD_DIR/$OUTPUT_NAME"
docker rm "$CONTAINER_ID"
chmod +x "$BUILD_DIR/$OUTPUT_NAME"
echo "✓ Build complete: $BUILD_DIR/$OUTPUT_NAME"
echo "✓ Size: $(du -h "$BUILD_DIR/$OUTPUT_NAME" | cut -f1)"
# Verify
if ldd "$BUILD_DIR/$OUTPUT_NAME" 2>&1 | grep -q "not a dynamic executable"; then
echo "✓ Binary is fully static"
else
echo "⚠ Warning: Binary may have dynamic dependencies"
fi
```
Make it executable:
```bash
chmod +x build_static.sh
./build_static.sh
```
## Common Issues and Solutions
### Issue 1: Fortification Errors
**Error:**
```
undefined reference to '__snprintf_chk'
undefined reference to '__fprintf_chk'
```
**Cause**: Most distro toolchains enable `_FORTIFY_SOURCE` when optimizing (e.g. at `-O2`), which references glibc-specific `*_chk` functions that MUSL does not provide.
**Solution**: Add these flags to **all** compilation commands:
```bash
-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0
```
This must be applied to:
1. nostr_core_lib build.sh
2. Your application compilation
3. Any other libraries you build
### Issue 2: Missing Symbols from nostr_core_lib
**Error:**
```
undefined reference to 'nostr_create_event'
undefined reference to 'nostr_sign_event'
```
**Cause**: Required NIPs not included in nostr_core_lib build.
**Solution**: Add missing NIPs:
```bash
./build.sh --nips=1,6,19 # Add the NIPs you need
```
### Issue 3: Docker Permission Denied
**Error:**
```
permission denied while trying to connect to the Docker daemon socket
```
**Solution**:
```bash
sudo usermod -aG docker $USER
newgrp docker # Or logout and login
```
### Issue 4: Binary Won't Run on Target System
**Checks**:
```bash
# 1. Verify it's static
ldd my_app_static # Should show "not a dynamic executable"
# 2. Check architecture
file my_app_static # Should match target system
# 3. Test on different distributions
docker run --rm -v $(pwd):/app alpine:latest /app/my_app_static --version
docker run --rm -v $(pwd):/app ubuntu:latest /app/my_app_static --version
```
## Project Structure Example
Organize your project for easy static builds:
```
my-nostr-app/
├── src/
│ ├── main.c
│ ├── handlers.c
│ └── utils.c
├── nostr_core_lib/ # Git submodule
├── nostr_login_lite/ # Git submodule (if used)
├── Dockerfile.static # Static build Dockerfile
├── build_static.sh # Build script
├── Makefile # Regular build
└── README.md
```
### Makefile Integration
Add static build targets to your Makefile:
```makefile
# Regular dynamic build
all: my_app
my_app: src/*.c
gcc -O2 src/*.c -o my_app \
nostr_core_lib/libnostr_core_x64.a \
-lssl -lcrypto -lsecp256k1 -lz -lpthread -lm
# Static MUSL build via Docker
static:
./build_static.sh
# Clean
clean:
rm -f my_app build/my_app_static_*
.PHONY: all static clean
```
## Deployment
### Single Binary Deployment
```bash
# Copy to server
scp build/my_app_static_x86_64 user@server:/opt/my-app/
# Run (no dependencies needed!)
ssh user@server
/opt/my-app/my_app_static_x86_64
```
### SystemD Service
```ini
[Unit]
Description=My Nostr Application
After=network.target
[Service]
Type=simple
User=myapp
WorkingDirectory=/opt/my-app
ExecStart=/opt/my-app/my_app_static_x86_64
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
```
### Docker Container (Minimal)
```dockerfile
FROM scratch
COPY my_app_static_x86_64 /app
ENTRYPOINT ["/app"]
```
Build and run:
```bash
docker build -t my-app:latest .
docker run --rm my-app:latest --help
```
## Reusing c-relay Files
You can directly copy these files from c-relay:
### 1. Dockerfile.alpine-musl
```bash
cp /path/to/c-relay/Dockerfile.alpine-musl ./Dockerfile.static
```
Then customize:
- Change binary name (line 125)
- Adjust source files (line 122-124)
- Modify include paths (line 120-121)
### 2. build_static.sh
```bash
cp /path/to/c-relay/build_static.sh ./
```
Then customize:
- Change `OUTPUT_NAME` variable (lines 66, 70)
- Update Docker image name (line 98)
- Modify verification commands (lines 180-184)
### 3. .dockerignore (Optional)
```bash
cp /path/to/c-relay/.dockerignore ./
```
Helps speed up Docker builds by excluding unnecessary files.
## Best Practices
1. **Version Control**: Commit your Dockerfile and build script
2. **Tag Builds**: Include git commit hash in binary version
3. **Test Thoroughly**: Verify on multiple distributions
4. **Document Dependencies**: List required NIPs and libraries
5. **Automate**: Use CI/CD to build on every commit
6. **Archive Binaries**: Keep old versions for rollback
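For item 2 above, one common pattern is to inject the hash at compile time, e.g. `-DGIT_COMMIT="\"$(git rev-parse --short HEAD)\""`; this is a sketch of the idea, not something the Makefile above already does:
```c
/* version.c - sketch: the GIT_COMMIT macro is supplied by the build;
 * "unknown" is the fallback when building outside a git checkout. */
#include <stdio.h>

#ifndef GIT_COMMIT
#define GIT_COMMIT "unknown"
#endif

void print_version(void) {
    printf("my_app v0.1.0 (commit %s)\n", GIT_COMMIT);
}
```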
## Performance Comparison
| Metric | MUSL Static | glibc Dynamic |
|--------|-------------|---------------|
| Binary Size | 7-10 MB | 2-3 MB + libs |
| Startup Time | ~50ms | ~40ms |
| Memory Usage | Similar | Similar |
| Portability | ✓ Universal | ✗ System-dependent |
| Deployment | Single file | Binary + libraries |
## References
- [MUSL libc](https://musl.libc.org/)
- [Alpine Linux](https://alpinelinux.org/)
- [nostr_core_lib](https://github.com/chebizarro/nostr_core_lib)
- [Static Linking Best Practices](https://www.musl-libc.org/faq.html)
- [c-relay Implementation](./docs/musl_static_build.md)
## Example: Minimal Nostr Client
Here's a complete example of building a minimal Nostr client:
```c
// minimal_client.c
#include "nostr_core/nostr_core.h"
#include <stdio.h>
#include <stdlib.h> /* free() for the cJSON_Print buffer */
int main() {
// Generate keypair
char nsec[64], npub[64];
nostr_generate_keypair(nsec, npub);
printf("Generated keypair:\n");
printf("Private key (nsec): %s\n", nsec);
printf("Public key (npub): %s\n", npub);
// Create event
cJSON *event = nostr_create_event(1, "Hello, Nostr!", NULL);
nostr_sign_event(event, nsec);
char *json = cJSON_Print(event);
printf("\nSigned event:\n%s\n", json);
free(json);
cJSON_Delete(event);
return 0;
}
```
**Dockerfile.static:**
```dockerfile
FROM alpine:3.19 AS builder
RUN apk add --no-cache build-base musl-dev git autoconf automake libtool \
openssl-dev openssl-libs-static zlib-dev zlib-static
WORKDIR /build
# Build secp256k1
RUN cd /tmp && git clone https://github.com/bitcoin-core/secp256k1.git && \
cd secp256k1 && ./autogen.sh && \
./configure --enable-static --disable-shared --prefix=/usr CFLAGS="-fPIC" && \
make -j$(nproc) && make install
# Copy and build nostr_core_lib
COPY nostr_core_lib /build/nostr_core_lib/
RUN cd nostr_core_lib && \
sed -i 's/CFLAGS="-Wall/CFLAGS="-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 -Wall/' build.sh && \
./build.sh --nips=1,6,19
# Build application
COPY minimal_client.c /build/
RUN gcc -static -O2 -Wall -std=c99 \
-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 \
-Inostr_core_lib -Inostr_core_lib/nostr_core -Inostr_core_lib/cjson \
minimal_client.c -o /build/minimal_client_static \
nostr_core_lib/libnostr_core_x64.a \
-lssl -lcrypto -lsecp256k1 -lz -lpthread -lm -ldl && \
strip /build/minimal_client_static
FROM scratch
COPY --from=builder /build/minimal_client_static /minimal_client_static
```
**Build and run:**
```bash
docker build -f Dockerfile.static -t minimal-client .
docker create --name temp minimal-client
docker cp temp:/minimal_client_static ./
docker rm temp
./minimal_client_static
```
## Conclusion
Static MUSL binaries provide the best portability for C applications. While they're slightly larger than dynamic binaries, the benefits of zero dependencies and universal compatibility make them ideal for:
- Server deployments across different Linux distributions
- Embedded systems and IoT devices
- Docker containers (FROM scratch)
- Distribution to users without dependency management
- Long-term archival and reproducibility
Follow this guide to create portable, self-contained binaries for your Nostr applications!


File diff suppressed because it is too large


@@ -38,14 +38,48 @@ INSERT OR IGNORE INTO config (key, value, description) VALUES
 ('auth_rules_enabled', 'false', 'Whether authentication rules are enabled for uploads'),
 ('server_name', 'ginxsom', 'Server name for responses'),
 ('admin_pubkey', '', 'Admin public key for API access'),
-('admin_enabled', 'false', 'Whether admin API is enabled'),
+('admin_enabled', 'true', 'Whether admin API is enabled'),
 ('nip42_require_auth', 'false', 'Enable NIP-42 challenge/response authentication'),
 ('nip42_challenge_timeout', '600', 'NIP-42 challenge timeout in seconds'),
 ('nip42_time_tolerance', '300', 'NIP-42 timestamp tolerance in seconds');
+-- Authentication rules table for whitelist/blacklist functionality
+CREATE TABLE IF NOT EXISTS auth_rules (
+id INTEGER PRIMARY KEY AUTOINCREMENT,
+rule_type TEXT NOT NULL, -- 'pubkey_blacklist', 'pubkey_whitelist',
+-- 'hash_blacklist', 'mime_blacklist', 'mime_whitelist'
+rule_target TEXT NOT NULL, -- The pubkey, hash, or MIME type to match
+operation TEXT NOT NULL DEFAULT '*', -- 'upload', 'delete', 'list', or '*' for all
+enabled INTEGER NOT NULL DEFAULT 1, -- 1 = enabled, 0 = disabled
+priority INTEGER NOT NULL DEFAULT 100,-- Lower number = higher priority
+description TEXT, -- Human-readable description
+created_by TEXT, -- Admin pubkey who created the rule
+created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
+updated_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
+-- Constraints
+CHECK (rule_type IN ('pubkey_blacklist', 'pubkey_whitelist',
+'hash_blacklist', 'mime_blacklist', 'mime_whitelist')),
+CHECK (operation IN ('upload', 'delete', 'list', '*')),
+CHECK (enabled IN (0, 1)),
+CHECK (priority >= 0),
+-- Unique constraint: one rule per type/target/operation combination
+UNIQUE(rule_type, rule_target, operation)
+);
+-- Indexes for performance optimization
+CREATE INDEX IF NOT EXISTS idx_auth_rules_type_target ON auth_rules(rule_type, rule_target);
+CREATE INDEX IF NOT EXISTS idx_auth_rules_operation ON auth_rules(operation);
+CREATE INDEX IF NOT EXISTS idx_auth_rules_enabled ON auth_rules(enabled);
+CREATE INDEX IF NOT EXISTS idx_auth_rules_priority ON auth_rules(priority);
+CREATE INDEX IF NOT EXISTS idx_auth_rules_type_operation ON auth_rules(rule_type, operation, enabled);
 -- View for storage statistics
 CREATE VIEW IF NOT EXISTS storage_stats AS
 SELECT
 COUNT(*) as total_blobs,
 SUM(size) as total_bytes,
 AVG(size) as avg_blob_size,

Binary file not shown.

BIN
build/admin_auth.o Normal file

Binary file not shown.

BIN
build/admin_event.o Normal file

Binary file not shown.

BIN
build/admin_handlers.o Normal file

Binary file not shown.

BIN
build/admin_websocket.o Normal file

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.


@@ -2,7 +2,7 @@
 # Comprehensive Blossom Protocol Implementation
 # Main context - specify error log here to override system default
-error_log logs/nginx/error.log debug;
+error_log logs/nginx/error.log info;
 pid logs/nginx/nginx.pid;
 events {
@@ -219,9 +219,28 @@ http {
 fastcgi_param HTTP_AUTHORIZATION $http_authorization;
 }
+# WebSocket Admin endpoint (/admin) - Nostr Kind 23456/23457 events
+location /admin {
+proxy_pass http://127.0.0.1:9442;
+proxy_http_version 1.1;
+proxy_set_header Upgrade $http_upgrade;
+proxy_set_header Connection "upgrade";
+proxy_set_header Host $host;
+proxy_set_header X-Real-IP $remote_addr;
+proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+proxy_set_header X-Forwarded-Proto $scheme;
+# WebSocket timeouts
+proxy_read_timeout 3600s;
+proxy_send_timeout 3600s;
+# Disable buffering for WebSocket
+proxy_buffering off;
+}
 # Admin API endpoints (/api/*)
 location /api/ {
-if ($request_method !~ ^(GET|PUT)$) {
+if ($request_method !~ ^(GET|PUT|POST)$) {
 return 405;
 }
 fastcgi_pass fastcgi_backend;
@@ -351,14 +370,33 @@ http {
 autoindex_format json;
 }
-# Root redirect
+# Root endpoint - Server info from FastCGI
 location = / {
-return 200 "Ginxsom Blossom Server\nEndpoints: GET /<sha256>, PUT /upload, GET /list/<pubkey>\nHealth: GET /health\n";
-add_header Content-Type text/plain;
 add_header Access-Control-Allow-Origin * always;
 add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH" always;
 add_header Access-Control-Allow-Headers "Authorization, Content-Type, Content-Length, Accept, Origin, User-Agent, DNT, Cache-Control, X-Mx-ReqToken, Keep-Alive, X-Requested-With, If-Modified-Since, *" always;
 add_header Access-Control-Max-Age 86400 always;
 if ($request_method !~ ^(GET)$) {
 return 405;
 }
+fastcgi_pass fastcgi_backend;
+fastcgi_param QUERY_STRING $query_string;
+fastcgi_param REQUEST_METHOD $request_method;
+fastcgi_param CONTENT_TYPE $content_type;
+fastcgi_param CONTENT_LENGTH $content_length;
+fastcgi_param SCRIPT_NAME $fastcgi_script_name;
+fastcgi_param REQUEST_URI $request_uri;
+fastcgi_param DOCUMENT_URI $document_uri;
+fastcgi_param DOCUMENT_ROOT $document_root;
+fastcgi_param SERVER_PROTOCOL $server_protocol;
+fastcgi_param REQUEST_SCHEME $scheme;
+fastcgi_param HTTPS $https if_not_empty;
+fastcgi_param GATEWAY_INTERFACE CGI/1.1;
+fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
+fastcgi_param REMOTE_ADDR $remote_addr;
+fastcgi_param REMOTE_PORT $remote_port;
+fastcgi_param SERVER_ADDR $server_addr;
+fastcgi_param SERVER_PORT $server_port;
+fastcgi_param SERVER_NAME $server_name;
+fastcgi_param REDIRECT_STATUS 200;
+fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
+fastcgi_param HTTP_AUTHORIZATION $http_authorization;
 }
 }
@@ -551,9 +589,28 @@ http {
 fastcgi_param HTTP_AUTHORIZATION $http_authorization;
 }
+# WebSocket Admin endpoint (/admin) - Nostr Kind 23456/23457 events
+location /admin {
+proxy_pass http://127.0.0.1:9442;
+proxy_http_version 1.1;
+proxy_set_header Upgrade $http_upgrade;
+proxy_set_header Connection "upgrade";
+proxy_set_header Host $host;
+proxy_set_header X-Real-IP $remote_addr;
+proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+proxy_set_header X-Forwarded-Proto $scheme;
+# WebSocket timeouts
+proxy_read_timeout 3600s;
+proxy_send_timeout 3600s;
+# Disable buffering for WebSocket
+proxy_buffering off;
+}
 # Admin API endpoints (/api/*)
 location /api/ {
-if ($request_method !~ ^(GET|PUT)$) {
+if ($request_method !~ ^(GET|PUT|POST)$) {
 return 405;
 }
 fastcgi_pass fastcgi_backend;
@@ -683,14 +740,33 @@ http {
 autoindex_format json;
 }
-# Root redirect
+# Root endpoint - Server info from FastCGI
 location = / {
-return 200 "Ginxsom Blossom Server (HTTPS)\nEndpoints: GET /<sha256>, PUT /upload, GET /list/<pubkey>\nHealth: GET /health\n";
-add_header Content-Type text/plain;
 add_header Access-Control-Allow-Origin * always;
 add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH" always;
 add_header Access-Control-Allow-Headers "Authorization, Content-Type, Content-Length, Accept, Origin, User-Agent, DNT, Cache-Control, X-Mx-ReqToken, Keep-Alive, X-Requested-With, If-Modified-Since, *" always;
 add_header Access-Control-Max-Age 86400 always;
 if ($request_method !~ ^(GET)$) {
 return 405;
 }
+fastcgi_pass fastcgi_backend;
+fastcgi_param QUERY_STRING $query_string;
+fastcgi_param REQUEST_METHOD $request_method;
+fastcgi_param CONTENT_TYPE $content_type;
+fastcgi_param CONTENT_LENGTH $content_length;
+fastcgi_param SCRIPT_NAME $fastcgi_script_name;
+fastcgi_param REQUEST_URI $request_uri;
+fastcgi_param DOCUMENT_URI $document_uri;
+fastcgi_param DOCUMENT_ROOT $document_root;
+fastcgi_param SERVER_PROTOCOL $server_protocol;
+fastcgi_param REQUEST_SCHEME $scheme;
+fastcgi_param HTTPS $https if_not_empty;
+fastcgi_param GATEWAY_INTERFACE CGI/1.1;
+fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
+fastcgi_param REMOTE_ADDR $remote_addr;
+fastcgi_param REMOTE_PORT $remote_port;
+fastcgi_param SERVER_ADDR $server_addr;
+fastcgi_param SERVER_PORT $server_port;
+fastcgi_param SERVER_NAME $server_name;
+fastcgi_param REDIRECT_STATUS 200;
+fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
+fastcgi_param HTTP_AUTHORIZATION $http_authorization;
 }
 }
 }

Binary file not shown.

File diff suppressed because it is too large

306
deploy_lt.sh Executable file

@@ -0,0 +1,306 @@
#!/bin/bash
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
print_status() { echo -e "${BLUE}[INFO]${NC} $1"; }
print_success() { echo -e "${GREEN}[SUCCESS]${NC} $1"; }
print_warning() { echo -e "${YELLOW}[WARNING]${NC} $1"; }
print_error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Parse command line arguments
FRESH_INSTALL=false
if [[ "$1" == "--fresh" ]]; then
FRESH_INSTALL=true
fi
# Configuration
REMOTE_HOST="laantungir.net"
REMOTE_USER="ubuntu"
REMOTE_DIR="/home/ubuntu/ginxsom"
REMOTE_DB_PATH="/home/ubuntu/ginxsom/db/ginxsom.db"
REMOTE_NGINX_CONFIG="/etc/nginx/conf.d/default.conf"
REMOTE_BINARY_PATH="/home/ubuntu/ginxsom/ginxsom.fcgi"
REMOTE_SOCKET="/tmp/ginxsom-fcgi.sock"
REMOTE_DATA_DIR="/var/www/html/blossom"
print_status "Starting deployment to $REMOTE_HOST..."
# Step 1: Build and prepare local binary
print_status "Building ginxsom binary..."
make clean && make
if [[ ! -f "build/ginxsom-fcgi" ]]; then
print_error "Build failed - binary not found"
exit 1
fi
print_success "Binary built successfully"
# Step 2: Setup remote environment first (before copying files)
print_status "Setting up remote environment..."
ssh $REMOTE_USER@$REMOTE_HOST << 'EOF'
set -e
# Create data directory if it doesn't exist (using existing /var/www/html/blossom)
sudo mkdir -p /var/www/html/blossom
sudo chown www-data:www-data /var/www/html/blossom
sudo chmod 755 /var/www/html/blossom
# Ensure socket directory exists
sudo mkdir -p /tmp
sudo chmod 755 /tmp
# Install required dependencies
echo "Installing required dependencies..."
sudo apt-get update
sudo apt-get install -y spawn-fcgi libfcgi-dev
# Stop any existing ginxsom processes
echo "Stopping existing ginxsom processes..."
sudo pkill -f ginxsom-fcgi || true
sudo rm -f /tmp/ginxsom-fcgi.sock || true
echo "Remote environment setup complete"
EOF
print_success "Remote environment configured"
# Step 3: Copy files to remote server
print_status "Copying files to remote server..."
# Copy entire project directory (excluding unnecessary files)
print_status "Copying entire ginxsom project..."
rsync -avz --exclude='.git' --exclude='build' --exclude='logs' --exclude='Trash' --exclude='blobs' --exclude='db' --no-g --no-o --no-perms --omit-dir-times . $REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR/
# Build on remote server to ensure compatibility
print_status "Building ginxsom on remote server..."
ssh $REMOTE_USER@$REMOTE_HOST "cd $REMOTE_DIR && make clean && make" || {
print_error "Build failed on remote server"
print_status "Checking what packages are actually installed..."
ssh $REMOTE_USER@$REMOTE_HOST "dpkg -l | grep -E '(sqlite|fcgi)'"
exit 1
}
# Copy binary to application directory
print_status "Copying ginxsom binary to application directory..."
ssh $REMOTE_USER@$REMOTE_HOST << EOF
# Stop any running process first
sudo pkill -f ginxsom-fcgi || true
sleep 1
# Remove old binary if it exists
rm -f $REMOTE_BINARY_PATH
# Copy new binary
cp $REMOTE_DIR/build/ginxsom-fcgi $REMOTE_BINARY_PATH
chmod +x $REMOTE_BINARY_PATH
chown ubuntu:ubuntu $REMOTE_BINARY_PATH
echo "Binary copied successfully"
EOF
# NOTE: Do NOT update nginx configuration automatically
# The deployment script should only update ginxsom binaries and do nothing else with the system
# Nginx configuration should be managed manually by the system administrator
print_status "Skipping nginx configuration update (manual control required)"
print_success "Files copied to remote server"
# Step 3: Setup remote environment
print_status "Setting up remote environment..."
ssh $REMOTE_USER@$REMOTE_HOST << 'EOF'
set -e
# Create data directory if it doesn't exist (using existing /var/www/html/blossom)
sudo mkdir -p /var/www/html/blossom
sudo chown www-data:www-data /var/www/html/blossom
sudo chmod 755 /var/www/html/blossom
# Ensure socket directory exists
sudo mkdir -p /tmp
sudo chmod 755 /tmp
# Install required dependencies
echo "Installing required dependencies..."
sudo apt-get update 2>/dev/null || true # Continue even if apt update has issues
sudo apt-get install -y spawn-fcgi libfcgi-dev libsqlite3-dev sqlite3 libcurl4-openssl-dev
# Verify installations
echo "Verifying installations..."
if ! dpkg -l libsqlite3-dev >/dev/null 2>&1; then
echo "libsqlite3-dev not found, trying alternative..."
sudo apt-get install -y libsqlite3-dev || {
echo "Failed to install libsqlite3-dev"
exit 1
}
fi
if ! dpkg -l libfcgi-dev >/dev/null 2>&1; then
echo "libfcgi-dev not found"
exit 1
fi
# Check if sqlite3.h exists
if [ ! -f /usr/include/sqlite3.h ]; then
echo "sqlite3.h not found in /usr/include/"
find /usr -name "sqlite3.h" 2>/dev/null || echo "sqlite3.h not found anywhere"
exit 1
fi
# Stop any existing ginxsom processes
echo "Stopping existing ginxsom processes..."
sudo pkill -f ginxsom-fcgi || true
sudo rm -f /tmp/ginxsom-fcgi.sock || true
echo "Remote environment setup complete"
EOF
print_success "Remote environment configured"
# Step 4: Setup database directory and migrate database
print_status "Setting up database directory..."
ssh $REMOTE_USER@$REMOTE_HOST << EOF
# Create db directory if it doesn't exist
mkdir -p $REMOTE_DIR/db
if [ "$FRESH_INSTALL" = "true" ]; then
echo "Fresh install: removing existing database and blobs..."
# Remove existing database
sudo rm -f $REMOTE_DB_PATH
sudo rm -f /var/www/html/blossom/ginxsom.db
# Remove existing blobs
sudo rm -rf $REMOTE_DATA_DIR/*
echo "Existing data removed"
else
# Backup current database if it exists in old location
if [ -f /var/www/html/blossom/ginxsom.db ]; then
echo "Backing up existing database..."
cp /var/www/html/blossom/ginxsom.db /var/www/html/blossom/ginxsom.db.backup.\$(date +%Y%m%d_%H%M%S)
# Migrate database to new location if not already there
if [ ! -f $REMOTE_DB_PATH ]; then
echo "Migrating database to new location..."
cp /var/www/html/blossom/ginxsom.db $REMOTE_DB_PATH
else
echo "Database already exists at new location"
fi
elif [ ! -f $REMOTE_DB_PATH ]; then
echo "No existing database found - will be created on first run"
else
echo "Database already exists at $REMOTE_DB_PATH"
fi
fi
# Set proper permissions - www-data needs write access to db directory for SQLite journal files
sudo chown -R www-data:www-data $REMOTE_DIR/db
sudo chmod 755 $REMOTE_DIR/db
sudo chmod 644 $REMOTE_DB_PATH 2>/dev/null || true
# Allow www-data to access the application directory for spawn-fcgi chdir
chmod 755 $REMOTE_DIR
echo "Database directory setup complete"
EOF
print_success "Database directory configured"
# Step 5: Start ginxsom FastCGI process
print_status "Starting ginxsom FastCGI process..."
ssh $REMOTE_USER@$REMOTE_HOST << EOF
# Clean up any existing socket
sudo rm -f $REMOTE_SOCKET
# Start FastCGI process with explicit paths
echo "Starting ginxsom FastCGI with configuration:"
echo " Working directory: $REMOTE_DIR"
echo " Binary: $REMOTE_BINARY_PATH"
echo " Database: $REMOTE_DB_PATH"
echo " Storage: $REMOTE_DATA_DIR"
sudo spawn-fcgi -M 666 -u www-data -g www-data -s $REMOTE_SOCKET -U www-data -G www-data -d $REMOTE_DIR -- $REMOTE_BINARY_PATH --db-path "$REMOTE_DB_PATH" --storage-dir "$REMOTE_DATA_DIR"
# Give it a moment to start
sleep 2
# Verify process is running
if pgrep -f "ginxsom-fcgi" > /dev/null; then
echo "FastCGI process started successfully"
echo "PID: \$(pgrep -f ginxsom-fcgi)"
else
echo "Process not found by pgrep, but socket exists - this may be normal for FastCGI"
echo "Checking socket..."
ls -la $REMOTE_SOCKET
echo "Checking if binary exists and is executable..."
ls -la $REMOTE_BINARY_PATH
echo "Testing if we can connect to the socket..."
# Try to test the FastCGI connection
if command -v cgi-fcgi >/dev/null 2>&1; then
echo "Testing FastCGI connection..."
SCRIPT_NAME=/health SCRIPT_FILENAME=$REMOTE_BINARY_PATH REQUEST_METHOD=GET cgi-fcgi -bind -connect $REMOTE_SOCKET 2>/dev/null | head -5 || echo "Connection test failed"
else
echo "cgi-fcgi not available for testing"
fi
# Don't exit - the socket existing means spawn-fcgi worked
fi
EOF
if [ $? -eq 0 ]; then
print_success "FastCGI process started"
else
print_error "Failed to start FastCGI process"
exit 1
fi
# Step 6: Test nginx configuration and reload
print_status "Testing and reloading nginx..."
ssh $REMOTE_USER@$REMOTE_HOST << 'EOF'
# Test nginx configuration
if sudo nginx -t; then
echo "Nginx configuration test passed"
sudo nginx -s reload
echo "Nginx reloaded successfully"
else
echo "Nginx configuration test failed"
exit 1
fi
EOF
print_success "Nginx reloaded"
# Step 7: Test deployment
print_status "Testing deployment..."
# Test health endpoint
echo "Testing health endpoint..."
if curl -k -s --max-time 10 "https://blossom.laantungir.net/health" | grep -q "OK"; then
print_success "Health check passed"
else
print_warning "Health check failed - checking response..."
curl -k -v --max-time 10 "https://blossom.laantungir.net/health" 2>&1 | head -10
fi
# Test basic endpoints
echo "Testing root endpoint..."
if curl -k -s --max-time 10 "https://blossom.laantungir.net/" | grep -q "Ginxsom"; then
print_success "Root endpoint responding"
else
print_warning "Root endpoint not responding as expected - checking response..."
curl -k -v --max-time 10 "https://blossom.laantungir.net/" 2>&1 | head -10
fi
print_success "Deployment to $REMOTE_HOST completed!"
print_status "Ginxsom should now be available at: https://blossom.laantungir.net"
print_status "Test endpoints:"
echo " Health: curl -k https://blossom.laantungir.net/health"
echo " Root: curl -k https://blossom.laantungir.net/"
echo " List: curl -k https://blossom.laantungir.net/list"
if [ "$FRESH_INSTALL" = "true" ]; then
print_warning "Fresh install completed - database and blobs have been reset"
fi


@@ -0,0 +1,496 @@
# Authentication Rules Implementation Plan
## Executive Summary
This document outlines the implementation plan for adding whitelist/blacklist functionality to the Ginxsom Blossom server. The authentication rules system is **already coded** in [`src/request_validator.c`](src/request_validator.c) but lacks the database schema to function. This plan focuses on completing the implementation by adding the missing database tables and Admin API endpoints.
## Current State Analysis
### ✅ Already Implemented
- **Nostr event validation** - Full cryptographic verification (NIP-42 and Blossom)
- **Rule evaluation engine** - Complete priority-based logic in [`check_database_auth_rules()`](src/request_validator.c:1309-1471)
- **Configuration system** - `auth_rules_enabled` flag in config table
- **Admin API framework** - Authentication and endpoint structure in place
- **Documentation** - Comprehensive flow diagrams in [`docs/AUTH_API.md`](docs/AUTH_API.md)
### ❌ Missing Components
- **Database schema** - `auth_rules` table doesn't exist
- **Cache table** - `auth_rules_cache` for performance optimization
- **Admin API endpoints** - CRUD operations for managing rules
- **Migration script** - Database schema updates
- **Test suite** - Validation of rule enforcement
## Database Schema Design
### 1. auth_rules Table
```sql
-- Authentication rules for whitelist/blacklist functionality
CREATE TABLE IF NOT EXISTS auth_rules (
id INTEGER PRIMARY KEY AUTOINCREMENT,
rule_type TEXT NOT NULL, -- 'pubkey_blacklist', 'pubkey_whitelist',
-- 'hash_blacklist', 'mime_blacklist', 'mime_whitelist'
rule_target TEXT NOT NULL, -- The pubkey, hash, or MIME type to match
operation TEXT NOT NULL DEFAULT '*', -- 'upload', 'delete', 'list', or '*' for all
enabled INTEGER NOT NULL DEFAULT 1, -- 1 = enabled, 0 = disabled
priority INTEGER NOT NULL DEFAULT 100,-- Lower number = higher priority
description TEXT, -- Human-readable description
created_by TEXT, -- Admin pubkey who created the rule
created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
updated_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
-- Constraints
CHECK (rule_type IN ('pubkey_blacklist', 'pubkey_whitelist',
'hash_blacklist', 'mime_blacklist', 'mime_whitelist')),
CHECK (operation IN ('upload', 'delete', 'list', '*')),
CHECK (enabled IN (0, 1)),
CHECK (priority >= 0),
-- Unique constraint: one rule per type/target/operation combination
UNIQUE(rule_type, rule_target, operation)
);
-- Indexes for performance
CREATE INDEX IF NOT EXISTS idx_auth_rules_type_target ON auth_rules(rule_type, rule_target);
CREATE INDEX IF NOT EXISTS idx_auth_rules_operation ON auth_rules(operation);
CREATE INDEX IF NOT EXISTS idx_auth_rules_enabled ON auth_rules(enabled);
CREATE INDEX IF NOT EXISTS idx_auth_rules_priority ON auth_rules(priority);
```
### 2. auth_rules_cache Table
```sql
-- Cache for authentication decisions (5-minute TTL)
CREATE TABLE IF NOT EXISTS auth_rules_cache (
cache_key TEXT PRIMARY KEY NOT NULL, -- SHA-256 hash of request parameters
decision INTEGER NOT NULL, -- 1 = allow, 0 = deny
reason TEXT, -- Reason for decision
pubkey TEXT, -- Public key from request
operation TEXT, -- Operation type
resource_hash TEXT, -- Resource hash (if applicable)
created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
expires_at INTEGER NOT NULL, -- Expiration timestamp
CHECK (decision IN (0, 1))
);
-- Index for cache expiration cleanup
CREATE INDEX IF NOT EXISTS idx_auth_cache_expires ON auth_rules_cache(expires_at);
```
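A sketch of how `cache_key` might be derived, assuming OpenSSL's SHA-256 (which the project already links); the exact parameter serialization is an assumption:
```c
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

/* Build a deterministic cache key from the request parameters.
 * `out` must hold at least 65 bytes (64 hex chars + NUL). */
static void auth_cache_key(const char *pubkey, const char *operation,
                           const char *resource_hash, char out[65]) {
    unsigned char digest[SHA256_DIGEST_LENGTH];
    char buf[512];
    snprintf(buf, sizeof buf, "%s|%s|%s",
             pubkey ? pubkey : "", operation ? operation : "",
             resource_hash ? resource_hash : "");
    SHA256((const unsigned char *)buf, strlen(buf), digest);
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        snprintf(out + i * 2, 3, "%02x", digest[i]);
}
```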
### 3. Rule Type Definitions
| Rule Type | Purpose | Target Format | Priority Range |
|-----------|---------|---------------|----------------|
| `pubkey_blacklist` | Block specific users | 64-char hex pubkey | 1-99 (highest) |
| `hash_blacklist` | Block specific files | 64-char hex SHA-256 | 100-199 |
| `mime_blacklist` | Block file types | MIME type string | 200-299 |
| `pubkey_whitelist` | Allow specific users | 64-char hex pubkey | 300-399 |
| `mime_whitelist` | Allow file types | MIME type string | 400-499 |
### 4. Operation Types
- `upload` - File upload operations
- `delete` - File deletion operations
- `list` - File listing operations
- `*` - All operations (wildcard)
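Taken together, the priority ranges mean a single scan ordered by `priority` visits blacklists before whitelists. The following is only a sketch of that evaluation order, not the actual code in [`check_database_auth_rules()`](src/request_validator.c):
```c
#include <sqlite3.h>
#include <string.h>

/* Sketch: scan enabled rules for an operation in priority order.
 * Blacklist hits short-circuit to deny; if any whitelist rule exists
 * and nothing matched it, fall through to default-deny. */
static int evaluate_auth_rules(sqlite3 *db, const char *pubkey,
                               const char *operation, const char *mime) {
    const char *sql =
        "SELECT rule_type, rule_target FROM auth_rules "
        "WHERE enabled = 1 AND (operation = ?1 OR operation = '*') "
        "ORDER BY priority ASC";
    sqlite3_stmt *stmt;
    int allowed = 1, saw_whitelist = 0, whitelisted = 0;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK) return 0;
    sqlite3_bind_text(stmt, 1, operation, -1, SQLITE_STATIC);
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        const char *type   = (const char *)sqlite3_column_text(stmt, 0);
        const char *target = (const char *)sqlite3_column_text(stmt, 1);
        if ((strcmp(type, "pubkey_blacklist") == 0 && strcmp(target, pubkey) == 0) ||
            (strcmp(type, "mime_blacklist") == 0 && mime && strcmp(target, mime) == 0)) {
            allowed = 0;                    /* blacklists win immediately */
            break;
        }
        if (strstr(type, "whitelist") != NULL) {
            saw_whitelist = 1;              /* whitelist mode is engaged */
            if ((strcmp(type, "pubkey_whitelist") == 0 && strcmp(target, pubkey) == 0) ||
                (strcmp(type, "mime_whitelist") == 0 && mime && strcmp(target, mime) == 0))
                whitelisted = 1;
        }
    }
    sqlite3_finalize(stmt);
    if (allowed && saw_whitelist && !whitelisted) allowed = 0; /* default-deny */
    return allowed;
}
```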
## Admin API Endpoints
### GET /api/rules
**Purpose**: List all authentication rules with filtering
**Authentication**: Required (admin pubkey)
**Query Parameters**:
- `rule_type` (optional): Filter by rule type
- `operation` (optional): Filter by operation
- `enabled` (optional): Filter by enabled status (true/false)
- `limit` (default: 100): Number of rules to return
- `offset` (default: 0): Pagination offset
**Response**:
```json
{
"status": "success",
"data": {
"rules": [
{
"id": 1,
"rule_type": "pubkey_blacklist",
"rule_target": "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798",
"operation": "upload",
"enabled": true,
"priority": 10,
"description": "Blocked spammer account",
"created_by": "admin_pubkey_here",
"created_at": 1704067200,
"updated_at": 1704067200
}
],
"total": 1,
"limit": 100,
"offset": 0
}
}
```
### POST /api/rules
**Purpose**: Create a new authentication rule
**Authentication**: Required (admin pubkey)
**Request Body**:
```json
{
"rule_type": "pubkey_blacklist",
"rule_target": "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798",
"operation": "upload",
"priority": 10,
"description": "Blocked spammer account"
}
```
**Response**:
```json
{
"status": "success",
"message": "Rule created successfully",
"data": {
"id": 1,
"rule_type": "pubkey_blacklist",
"rule_target": "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798",
"operation": "upload",
"enabled": true,
"priority": 10,
"description": "Blocked spammer account",
"created_at": 1704067200
}
}
```
### PUT /api/rules/:id
**Purpose**: Update an existing rule
**Authentication**: Required (admin pubkey)
**Request Body**:
```json
{
"enabled": false,
"priority": 20,
"description": "Updated description"
}
```
**Response**:
```json
{
"status": "success",
"message": "Rule updated successfully",
"data": {
"id": 1,
"updated_fields": ["enabled", "priority", "description"]
}
}
```
### DELETE /api/rules/:id
**Purpose**: Delete an authentication rule
**Authentication**: Required (admin pubkey)
**Response**:
```json
{
"status": "success",
"message": "Rule deleted successfully",
"data": {
"id": 1
}
}
```
### POST /api/rules/clear-cache
**Purpose**: Clear the authentication rules cache
**Authentication**: Required (admin pubkey)
**Response**:
```json
{
"status": "success",
"message": "Authentication cache cleared",
"data": {
"entries_cleared": 42
}
}
```
### GET /api/rules/test
**Purpose**: Test if a specific request would be allowed
**Authentication**: Required (admin pubkey)
**Query Parameters**:
- `pubkey` (required): Public key to test
- `operation` (required): Operation type (upload/delete/list)
- `hash` (optional): Resource hash
- `mime` (optional): MIME type
**Response**:
```json
{
"status": "success",
"data": {
"allowed": false,
"reason": "Public key blacklisted",
"matched_rule": {
"id": 1,
"rule_type": "pubkey_blacklist",
"description": "Blocked spammer account"
}
}
}
```
## Implementation Phases
### Phase 1: Database Schema (Priority: HIGH)
**Estimated Time**: 2-4 hours
**Tasks**:
1. Create migration script `db/migrations/001_add_auth_rules.sql`
2. Add `auth_rules` table with indexes
3. Add `auth_rules_cache` table with indexes
4. Create migration runner script
5. Test migration on clean database
6. Test migration on existing database
**Deliverables**:
- Migration SQL script
- Migration runner bash script
- Migration documentation
**Validation**:
- Verify tables created successfully
- Verify indexes exist
- Verify constraints work correctly
- Test with sample data
### Phase 2: Admin API Endpoints (Priority: HIGH)
**Estimated Time**: 6-8 hours
**Tasks**:
1. Implement `GET /api/rules` endpoint
2. Implement `POST /api/rules` endpoint
3. Implement `PUT /api/rules/:id` endpoint
4. Implement `DELETE /api/rules/:id` endpoint
5. Implement `POST /api/rules/clear-cache` endpoint
6. Implement `GET /api/rules/test` endpoint
7. Add input validation for all endpoints
8. Add error handling and logging
**Deliverables**:
- C implementation in `src/admin_api.c`
- Header declarations in `src/ginxsom.h`
- API documentation updates
**Validation**:
- Test each endpoint with valid data
- Test error cases (invalid input, missing auth, etc.)
- Verify database operations work correctly
- Check response formats match specification
### Phase 3: Integration & Testing (Priority: HIGH)
**Estimated Time**: 4-6 hours
**Tasks**:
1. Create comprehensive test suite
2. Test rule creation and enforcement
3. Test cache functionality
4. Test priority ordering
5. Test whitelist default-deny behavior
6. Test performance with many rules
7. Document test scenarios
**Deliverables**:
- Test script `tests/auth_rules_test.sh`
- Performance benchmarks
- Test documentation
**Validation**:
- All test cases pass
- Performance meets requirements (<3ms per request)
- Cache hit rate >80% under load
- No memory leaks detected
### Phase 4: Documentation & Examples (Priority: MEDIUM)
**Estimated Time**: 2-3 hours
**Tasks**:
1. Update [`docs/AUTH_API.md`](docs/AUTH_API.md) with rule management
2. Create usage examples
3. Document common patterns (blocking users, allowing file types)
4. Create migration guide for existing deployments
5. Add troubleshooting section
**Deliverables**:
- Updated documentation
- Example scripts
- Migration guide
- Troubleshooting guide
## Code Changes Required
### 1. src/request_validator.c
**Status**: ✅ Already implemented - NO CHANGES NEEDED
The rule evaluation logic is complete in [`check_database_auth_rules()`](src/request_validator.c:1309-1471). Once the database tables exist, this code will work immediately.
### 2. src/admin_api.c
**Status**: ❌ Needs new endpoints
Add new functions:
```c
// Rule management endpoints
int handle_get_rules(FCGX_Request *request);
int handle_create_rule(FCGX_Request *request);
int handle_update_rule(FCGX_Request *request);
int handle_delete_rule(FCGX_Request *request);
int handle_clear_cache(FCGX_Request *request);
int handle_test_rule(FCGX_Request *request);
```
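As an illustration of the shape one of these handlers might take (a sketch only; the `g_db` handle and the response framing are assumptions, not existing code):
```c
#include <fcgiapp.h>
#include <sqlite3.h>

extern sqlite3 *g_db; /* assumed global database handle */

/* POST /api/rules/clear-cache - drop all cached auth decisions. */
int handle_clear_cache(FCGX_Request *request) {
    int cleared = 0;
    sqlite3_stmt *stmt;
    if (sqlite3_prepare_v2(g_db, "SELECT COUNT(*) FROM auth_rules_cache",
                           -1, &stmt, NULL) == SQLITE_OK) {
        if (sqlite3_step(stmt) == SQLITE_ROW)
            cleared = sqlite3_column_int(stmt, 0);
        sqlite3_finalize(stmt);
    }
    sqlite3_exec(g_db, "DELETE FROM auth_rules_cache", NULL, NULL, NULL);
    FCGX_FPrintF(request->out,
        "Status: 200 OK\r\nContent-Type: application/json\r\n\r\n"
        "{\"status\":\"success\",\"message\":\"Authentication cache cleared\","
        "\"data\":{\"entries_cleared\":%d}}", cleared);
    return 0;
}
```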
### 3. src/ginxsom.h
**Status**: ❌ Needs new declarations
Add function prototypes for new admin endpoints.
### 4. db/schema.sql
**Status**: ❌ Needs new tables
Add `auth_rules` and `auth_rules_cache` table definitions.
## Migration Strategy
### For New Installations
1. Run updated `db/init.sh` which includes new tables
2. No additional steps needed
### For Existing Installations
1. Create backup: `cp db/ginxsom.db db/ginxsom.db.backup`
2. Run migration: `sqlite3 db/ginxsom.db < db/migrations/001_add_auth_rules.sql`
3. Verify migration: `sqlite3 db/ginxsom.db ".schema auth_rules"`
4. Restart server to load new schema
### Rollback Procedure
1. Stop server
2. Restore backup: `cp db/ginxsom.db.backup db/ginxsom.db`
3. Restart server
## Performance Considerations
### Cache Strategy
- **5-minute TTL** balances freshness with performance
- **SHA-256 cache keys** prevent collision attacks
- **Automatic cleanup** of expired entries every 5 minutes
- **Cache hit target**: >80% under normal load
### Database Optimization
- **Indexes on all query columns** for fast lookups
- **Prepared statements** prevent SQL injection
- **Single connection** with proper cleanup
- **Query optimization** for rule evaluation order
### Expected Performance
- **Cache hit**: ~100μs (SQLite SELECT)
- **Cache miss**: ~2.4ms (full validation + rule checks)
- **Rule creation**: ~50ms (INSERT + cache invalidation)
- **Rule update**: ~30ms (UPDATE + cache invalidation)
## Security Considerations
### Input Validation
- Validate all rule_type values against enum
- Validate pubkey format (64 hex chars)
- Validate hash format (64 hex chars)
- Validate MIME type format
- Sanitize description text
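The pubkey and hash format checks reduce to the same helper; a minimal sketch:
```c
#include <ctype.h>
#include <string.h>

/* Accept exactly 64 hex characters (pubkeys and SHA-256 hashes). */
static int is_valid_hex64(const char *s) {
    if (!s || strlen(s) != 64) return 0;
    for (size_t i = 0; i < 64; i++)
        if (!isxdigit((unsigned char)s[i])) return 0;
    return 1;
}
```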
### Authorization
- All rule management requires admin pubkey
- Verify Nostr event signatures
- Check event expiration
- Log all rule changes with admin pubkey
### Attack Mitigation
- **Rule flooding**: Limit total rules per type
- **Cache poisoning**: Cryptographic cache keys
- **Priority manipulation**: Validate priority ranges
- **Whitelist bypass**: Default-deny when whitelist exists
## Testing Strategy
### Unit Tests
- Rule creation with valid data
- Rule creation with invalid data
- Rule update operations
- Rule deletion
- Cache operations
- Priority ordering
### Integration Tests
- End-to-end request flow
- Multiple rules interaction
- Cache hit/miss scenarios
- Whitelist default-deny behavior
- Performance under load
### Security Tests
- Invalid admin pubkey rejection
- Expired event rejection
- SQL injection attempts
- Cache poisoning attempts
- Priority bypass attempts
## Success Criteria
### Functional Requirements
- ✅ Rules can be created via Admin API
- ✅ Rules can be updated via Admin API
- ✅ Rules can be deleted via Admin API
- ✅ Rules are enforced during request validation
- ✅ Cache improves performance significantly
- ✅ Priority ordering works correctly
- ✅ Whitelist default-deny works correctly
### Performance Requirements
- ✅ Cache hit latency <200μs
- Full validation latency <3ms
- Cache hit rate >80% under load
- ✅ No memory leaks
- ✅ Database queries optimized
### Security Requirements
- ✅ Admin authentication required
- ✅ Input validation prevents injection
- ✅ Audit logging of all changes
- ✅ Cache keys prevent poisoning
- ✅ Whitelist bypass prevented
## Timeline Estimate
| Phase | Duration | Dependencies |
|-------|----------|--------------|
| Phase 1: Database Schema | 2-4 hours | None |
| Phase 2: Admin API | 6-8 hours | Phase 1 |
| Phase 3: Testing | 4-6 hours | Phase 2 |
| Phase 4: Documentation | 2-3 hours | Phase 3 |
| **Total** | **14-21 hours** | Sequential |
## Next Steps
1. **Review this plan** with stakeholders
2. **Create Phase 1 migration script** in `db/migrations/`
3. **Test migration** on development database
4. **Implement Phase 2 endpoints** in `src/admin_api.c`
5. **Create test suite** in `tests/auth_rules_test.sh`
6. **Update documentation** in `docs/`
7. **Deploy to production** with migration guide
## Conclusion
The authentication rules system is **90% complete** - the core logic exists and is well-tested. This implementation plan focuses on the final 10%: adding database tables and Admin API endpoints. The work is straightforward, well-scoped, and can be completed in 2-3 days of focused development.
The system will provide powerful whitelist/blacklist functionality while maintaining the performance and security characteristics already present in the codebase.


@@ -0,0 +1,300 @@
# Database Naming Design (c-relay Pattern)
## Overview
Following c-relay's architecture, ginxsom will use pubkey-based database naming to ensure database-key consistency and prevent mismatched configurations.
## Database Naming Convention
Database files are named after the blossom server's public key:
```
db/<blossom_pubkey>.db
```
Example:
```
db/52e366edfa4e9cc6a6d4653828e51ccf828a2f5a05227d7a768f33b5a198681a.db
```
## Startup Scenarios
### Scenario 1: No Arguments (Fresh Start)
```bash
./ginxsom-fcgi
```
**Behavior:**
1. Generate new server keypair
2. Create database file: `db/<new_pubkey>.db`
3. Store keys in the new database
4. Start server
**Result:** New instance with fresh keys and database
---
### Scenario 2: Database File Specified
```bash
./ginxsom-fcgi --db-path db/52e366ed...198681a.db
```
**Behavior:**
1. Open specified database
2. Load blossom_seckey from database
3. Verify pubkey matches database filename
4. Load admin_pubkey if present
5. Start server
**Validation:**
- Database MUST exist
- Database MUST contain blossom_seckey
- Derived pubkey MUST match filename
**Error Cases:**
- Database doesn't exist → Error: "Database file not found"
- Database missing blossom_seckey → Error: "Invalid database: missing server keys"
- Pubkey mismatch → Error: "Database pubkey mismatch: expected X, got Y"
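A sketch of how the filename check might look in C; `derived_pubkey_hex` stands in for whatever nostr_core_lib derives from the stored seckey:
```c
#include <libgen.h>
#include <stdio.h>
#include <string.h>

/* Verify that the pubkey derived from the stored seckey matches the
 * database filename (db/<pubkey>.db). Returns 0 on match, -1 otherwise. */
static int validate_db_filename(const char *db_path,
                                const char *derived_pubkey_hex) {
    char path_copy[1024];
    char expected[80];  /* 64 hex chars + ".db" + NUL */

    /* basename() may modify its argument, so work on a copy. */
    snprintf(path_copy, sizeof(path_copy), "%s", db_path);
    const char *base = basename(path_copy);   /* "<pubkey>.db" */

    snprintf(expected, sizeof(expected), "%s.db", derived_pubkey_hex);
    if (strcmp(base, expected) != 0) {
        fprintf(stderr,
                "ERROR: Database pubkey mismatch\n"
                "  Expected: %s\n  Got:      %s\n", expected, base);
        return -1;
    }
    return 0;
}
```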
---
### Scenario 3: Keys Specified (New Instance with Specific Keys)
```bash
./ginxsom-fcgi --server-privkey c4e0d2ed...309c48f1 --admin-pubkey 8ff74724...5eedde0e
```
**Behavior:**
1. Validate provided server private key
2. Derive server public key
3. Create database file: `db/<derived_pubkey>.db`
4. Store both keys in new database
5. Start server
**Validation:**
- server-privkey MUST be valid 64-char hex
- Derived database file MUST NOT already exist (prevents overwriting)
**Error Cases:**
- Invalid privkey format → Error: "Invalid server private key format"
- Database already exists → Error: "Database already exists for this pubkey"
---
### Scenario 4: Test Mode
```bash
./ginxsom-fcgi --test-keys
```
**Behavior:**
1. Load keys from `.test_keys` file
2. Derive server public key from SERVER_PRIVKEY
3. Create/overwrite database: `db/<test_pubkey>.db`
4. Store test keys in database
5. Start server
**Special Handling:**
- Test mode ALWAYS overwrites existing database (for clean testing)
- Database name derived from test SERVER_PRIVKEY
---
### Scenario 5: Database + Keys Specified (Validation Mode)
```bash
./ginxsom-fcgi --db-path db/52e366ed...198681a.db --server-privkey c4e0d2ed...309c48f1
```
**Behavior:**
1. Open specified database
2. Load blossom_seckey from database
3. Compare with provided --server-privkey
4. If match: continue normally
5. If mismatch: ERROR and exit
**Purpose:** Validation/verification that correct keys are being used
**Error Cases:**
- Key mismatch → Error: "Server private key doesn't match database"
---
## Command Line Options
### Updated Options
```
--db-path PATH Database file path (must match pubkey if keys exist)
--storage-dir DIR Storage directory for files (default: blobs)
--admin-pubkey KEY Admin public key (only used when creating new database)
--server-privkey KEY Server private key (creates new DB or validates existing)
--test-keys Use test keys from .test_keys file
--generate-keys Generate new keypair and create database (deprecated - default behavior)
--help, -h Show this help message
```
### Deprecated Options
- `--generate-keys` - No longer needed, since key generation is the default behavior when no arguments are provided; retained for now as a deprecated alias (see the options list above)
---
## Database Directory Structure
```
db/
├── 52e366edfa4e9cc6a6d4653828e51ccf828a2f5a05227d7a768f33b5a198681a.db # Test instance
├── a1b2c3d4e5f6...xyz.db # Production instance 1
├── f9e8d7c6b5a4...abc.db # Production instance 2
└── schema.sql # Schema template
```
Each database is completely independent and tied to its keypair.
---
## Implementation Logic Flow
```
START
├─ Parse command line arguments
├─ Initialize crypto system
├─ Determine mode:
│ │
│ ├─ Test mode (--test-keys)?
│ │ ├─ Load keys from .test_keys
│ │ ├─ Derive pubkey
│ │ ├─ Set db_path = db/<pubkey>.db
│ │ └─ Create/overwrite database
│ │
│ ├─ Keys provided (--server-privkey)?
│ │ ├─ Validate privkey format
│ │ ├─ Derive pubkey
│ │ ├─ Set db_path = db/<pubkey>.db
│ │ │
│ │ ├─ Database specified (--db-path)?
│ │ │ ├─ YES: Validate keys match database
│ │ │ └─ NO: Create new database
│ │ │
│ │ └─ Store keys in database
│ │
│ ├─ Database specified (--db-path)?
│ │ ├─ Open database
│ │ ├─ Load blossom_seckey
│ │ ├─ Derive pubkey
│ │ ├─ Validate pubkey matches filename
│ │ └─ Load admin_pubkey
│ │
│ └─ No arguments (fresh start)?
│ ├─ Generate new keypair
│ ├─ Set db_path = db/<new_pubkey>.db
│ └─ Create new database with keys
├─ Initialize database schema (if new)
├─ Load/validate all keys
└─ Start FastCGI server
```
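The same decision tree, condensed into a C sketch; every helper below is a hypothetical stand-in for logic described above, not an existing function:
```c
/* Hypothetical helpers standing in for the startup logic sketched above. */
int start_from_test_keys(void);                 /* recreates the test database */
int create_db_from_privkey(const char *privkey_hex);
int open_db_and_validate_key(const char *db_path, const char *privkey_hex);
int open_db_and_load_keys(const char *db_path); /* filename must match pubkey  */
int fresh_start_with_new_keypair(void);         /* scenario 1                  */

typedef struct {
    int         test_keys;       /* --test-keys            */
    const char *server_privkey;  /* --server-privkey (hex) */
    const char *db_path;         /* --db-path              */
} startup_opts;

/* Mirrors the decision tree: test mode wins, then explicit keys,
 * then an explicit database, then a fresh start. */
static int select_startup_mode(const startup_opts *o) {
    if (o->test_keys)
        return start_from_test_keys();
    if (o->server_privkey) {
        if (o->db_path)
            return open_db_and_validate_key(o->db_path, o->server_privkey);
        return create_db_from_privkey(o->server_privkey);
    }
    if (o->db_path)
        return open_db_and_load_keys(o->db_path);
    return fresh_start_with_new_keypair();
}
```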
---
## Migration Path
### For Existing Installations
1. **Backup current database:**
```bash
cp db/ginxsom.db db/ginxsom.db.backup
```
2. **Extract current pubkey:**
```bash
PUBKEY=$(sqlite3 db/ginxsom.db "SELECT value FROM config WHERE key='blossom_pubkey'")
```
3. **Rename database:**
```bash
mv db/ginxsom.db db/${PUBKEY}.db
```
4. **Update restart-all.sh:**
- Remove hardcoded `db/ginxsom.db` references
- Let application determine database name from keys
---
## Benefits
1. **Database-Key Consistency:** Impossible to use wrong database with wrong keys
2. **Multiple Instances:** Can run multiple independent instances with different keys
3. **Clear Identity:** Database filename immediately identifies the server
4. **Test Isolation:** Test databases are clearly separate from production
5. **No Accidental Overwrites:** Each keypair has its own database
6. **Follows c-relay Pattern:** Proven architecture from production relay software
---
## Error Messages
### Clear, Actionable Errors
```
ERROR: Database file not found: db/52e366ed...198681a.db
→ Specify a different database or let the application create a new one
ERROR: Invalid database: missing server keys
→ Database is corrupted or not a valid ginxsom database
ERROR: Database pubkey mismatch
Expected: 52e366edfa4e9cc6a6d4653828e51ccf828a2f5a05227d7a768f33b5a198681a
Got: a1b2c3d4e5f6789...
→ Database filename doesn't match the keys stored inside
ERROR: Server private key doesn't match database
→ The --server-privkey you provided doesn't match the database keys
ERROR: Database already exists for this pubkey: db/52e366ed...198681a.db
→ Use --db-path to open existing database or use different keys
```
---
## Testing Strategy
### Test Cases
1. **Fresh start (no args)** → Creates new database with generated keys
2. **Specify database** → Opens and validates existing database
3. **Specify keys** → Creates new database with those keys
4. **Test mode** → Uses test keys and creates test database
5. **Database + matching keys** → Validates and continues
6. **Database + mismatched keys** → Errors appropriately
7. **Invalid database path** → Clear error message
8. **Corrupted database** → Detects and reports
### Test Script
```bash
#!/bin/bash
# Test database naming system
# Test 1: Fresh start
./ginxsom-fcgi --generate-keys
# Should create db/<new_pubkey>.db
# Test 2: Test mode
./ginxsom-fcgi --test-keys
# Should create db/52e366ed...198681a.db
# Test 3: Specify keys
./ginxsom-fcgi --server-privkey abc123...
# Should create db/<derived_pubkey>.db
# Test 4: Open existing
./ginxsom-fcgi --db-path db/52e366ed...198681a.db
# Should open and validate
# Test 5: Mismatch error
./ginxsom-fcgi --db-path db/52e366ed...198681a.db --server-privkey wrong_key
# Should error with clear message
```

View File

@@ -0,0 +1,994 @@
# Ginxsom Management System Design
## Executive Summary
This document outlines the design for a secure management interface for ginxsom (Blossom media storage server) based on c-relay's proven admin system architecture. The design uses Kind 23456/23457 events with NIP-44 encryption over WebSocket for real-time admin operations.
## 1. System Architecture
### 1.1 High-Level Overview
```mermaid
graph TB
Admin[Admin Client] -->|WebSocket| WS[WebSocket Handler]
WS -->|Kind 23456| Auth[Admin Authorization]
Auth -->|Decrypt NIP-44| Decrypt[Command Decryption]
Decrypt -->|Parse JSON Array| Router[Command Router]
Router -->|Route by Command Type| Handlers[Unified Handlers]
Handlers -->|Execute| DB[(Database)]
Handlers -->|Execute| FS[File System]
Handlers -->|Generate Response| Encrypt[NIP-44 Encryption]
Encrypt -->|Kind 23457| WS
WS -->|WebSocket| Admin
style Admin fill:#e1f5ff
style Auth fill:#fff3cd
style Handlers fill:#d4edda
style DB fill:#f8d7da
```
### 1.2 Component Architecture
```mermaid
graph LR
subgraph "Admin Interface"
CLI[CLI Tool]
Web[Web Dashboard]
end
subgraph "ginxsom FastCGI Process"
WS[WebSocket Endpoint]
Auth[Authorization Layer]
Router[Command Router]
subgraph "Unified Handlers"
BlobH[Blob Handler]
StorageH[Storage Handler]
ConfigH[Config Handler]
StatsH[Stats Handler]
SystemH[System Handler]
end
DB[(SQLite Database)]
Storage[Blob Storage]
end
CLI -->|WebSocket| WS
Web -->|WebSocket| WS
WS --> Auth
Auth --> Router
Router --> BlobH
Router --> StorageH
Router --> ConfigH
Router --> StatsH
Router --> SystemH
BlobH --> DB
BlobH --> Storage
StorageH --> Storage
ConfigH --> DB
StatsH --> DB
SystemH --> DB
style Auth fill:#fff3cd
style Router fill:#d4edda
```
### 1.3 Data Flow for Admin Commands
```mermaid
sequenceDiagram
participant Admin
participant WebSocket
participant Auth
participant Handler
participant Database
Admin->>WebSocket: Kind 23456 Event (NIP-44 encrypted)
WebSocket->>Auth: Verify admin signature
Auth->>Auth: Check pubkey matches admin_pubkey
Auth->>Auth: Verify event signature
Auth->>WebSocket: Authorization OK
WebSocket->>Handler: Decrypt & parse command array
Handler->>Handler: Validate command structure
Handler->>Database: Execute operation
Database-->>Handler: Result
Handler->>Handler: Build response JSON
Handler->>WebSocket: Encrypt response (NIP-44)
WebSocket->>Admin: Kind 23457 Event (encrypted response)
```
### 1.4 Integration with Existing Ginxsom
```mermaid
graph TB
subgraph "Existing Ginxsom"
Main[main.c]
BUD04[bud04.c - Mirror]
BUD06[bud06.c - Requirements]
BUD08[bud08.c - NIP-94]
BUD09[bud09.c - Report]
AdminAPI[admin_api.c - Basic Admin]
Validator[request_validator.c]
end
subgraph "New Management System"
AdminWS[admin_websocket.c]
AdminAuth[admin_auth.c]
AdminHandlers[admin_handlers.c]
AdminConfig[admin_config.c]
end
Main -->|Initialize| AdminWS
AdminWS -->|Use| AdminAuth
AdminWS -->|Route to| AdminHandlers
AdminHandlers -->|Query| BUD04
AdminHandlers -->|Query| BUD06
AdminHandlers -->|Query| BUD08
AdminHandlers -->|Query| BUD09
AdminHandlers -->|Update| AdminConfig
AdminAuth -->|Use| Validator
style AdminWS fill:#d4edda
style AdminAuth fill:#fff3cd
style AdminHandlers fill:#e1f5ff
```
## 2. Database Schema
### 2.1 Core Tables
Following c-relay's minimal approach, key management needs only one new table plus an entry in the existing `config` table:
#### relay_seckey Table
```sql
-- Stores relay's private key (used for signing Kind 23457 responses)
CREATE TABLE relay_seckey (
private_key_hex TEXT NOT NULL CHECK (length(private_key_hex) = 64),
created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now'))
);
```
**Note**: This table stores the relay's private key as plain hex (no encryption). The key is used to:
- Sign Kind 23457 response events
- Encrypt responses using NIP-44 (shared secret with admin pubkey)
#### config Table (Extended)
```sql
-- Existing config table, add admin_pubkey entry
INSERT INTO config (key, value, data_type, description, category, requires_restart)
VALUES (
'admin_pubkey',
'<64-char-hex-pubkey>',
'string',
'Public key of authorized admin (hex format)',
'security',
0
);
```
**Note**: Admin public key is stored in the config table, not a separate table. Admin private key is NEVER stored anywhere.
### 2.2 Schema Comparison with c-relay
| c-relay | ginxsom | Purpose |
|---------|---------|---------|
| `relay_seckey` (private_key_hex, created_at) | `relay_seckey` (private_key_hex, created_at) | Relay private key storage |
| `config` table entry for admin_pubkey | `config` table entry for admin_pubkey | Admin authorization |
| No audit log | No audit log | Keep it simple |
| No processed events tracking | No processed events tracking | Stateless processing |
### 2.3 Key Storage Strategy
**Relay Private Key**:
- Stored in `relay_seckey` table as plain 64-character hex
- Generated on first startup or provided via `--relay-privkey` CLI option
- Used for signing Kind 23457 responses and NIP-44 encryption
- Never exposed via API
**Admin Public Key**:
- Stored in `config` table as plain 64-character hex
- Generated on first startup or provided via `--admin-pubkey` CLI option
- Used to verify Kind 23456 command signatures
- Can be queried via admin API
**Admin Private Key**:
- NEVER stored anywhere in the system
- Kept only by the admin in their client/tool
- Used to sign Kind 23456 commands and decrypt Kind 23457 responses
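A minimal sketch of loading the relay private key from the `relay_seckey` table, with error handling trimmed to the essentials:
```c
#include <sqlite3.h>
#include <string.h>

/* Load the 64-char hex key from relay_seckey into out (out_size >= 65).
 * Returns 0 on success, -1 if the key is missing or malformed. */
int load_relay_private_key(sqlite3 *db, char *out, size_t out_size) {
    const char *sql =
        "SELECT private_key_hex FROM relay_seckey "
        "ORDER BY created_at DESC LIMIT 1;";
    sqlite3_stmt *stmt = NULL;
    int rc = -1;

    if (out_size < 65 ||
        sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
        return -1;

    if (sqlite3_step(stmt) == SQLITE_ROW) {
        const unsigned char *hex = sqlite3_column_text(stmt, 0);
        if (hex && strlen((const char *)hex) == 64) {
            memcpy(out, hex, 65);  /* 64 hex chars + trailing NUL */
            rc = 0;
        }
    }
    sqlite3_finalize(stmt);
    return rc;
}
```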
## 3. API Design
### 3.1 Command Structure
Following c-relay's pattern, all commands use JSON array format:
```json
["command_name", {"param1": "value1", "param2": "value2"}]
```
### 3.2 Event Structure
#### Kind 23456 - Admin Command Event
```json
{
"kind": 23456,
"pubkey": "<admin-pubkey-hex>",
"created_at": 1234567890,
"tags": [
["p", "<relay-pubkey-hex>"]
],
"content": "<nip44-encrypted-command-array>",
"sig": "<signature>"
}
```
**Content (decrypted)**:
```json
["blob_list", {"limit": 100, "offset": 0}]
```
#### Kind 23457 - Admin Response Event
```json
{
"kind": 23457,
"pubkey": "<relay-pubkey-hex>",
"created_at": 1234567890,
"tags": [
["p", "<admin-pubkey-hex>"],
["e", "<original-command-event-id>"]
],
"content": "<nip44-encrypted-response>",
"sig": "<signature>"
}
```
**Content (decrypted)**:
```json
{
"success": true,
"data": {
"blobs": [
{"sha256": "abc123...", "size": 1024, "type": "image/png"},
{"sha256": "def456...", "size": 2048, "type": "video/mp4"}
],
"total": 2
}
}
```
### 3.3 Command Categories
#### Blob Operations
- `blob_list` - List blobs with pagination
- `blob_info` - Get detailed blob information
- `blob_delete` - Delete blob(s)
- `blob_mirror` - Mirror blob from another server
#### Storage Management
- `storage_stats` - Get storage usage statistics
- `storage_quota` - Get/set storage quotas
- `storage_cleanup` - Clean up orphaned files
#### Configuration
- `config_get` - Get configuration value(s)
- `config_set` - Set configuration value(s)
- `config_list` - List all configuration
- `auth_rules_list` - List authentication rules
- `auth_rules_add` - Add authentication rule
- `auth_rules_remove` - Remove authentication rule
#### Statistics
- `stats_uploads` - Upload statistics
- `stats_bandwidth` - Bandwidth usage
- `stats_storage` - Storage usage over time
- `stats_users` - User activity statistics
#### System
- `system_info` - Get system information
- `system_restart` - Restart server (graceful)
- `system_backup` - Trigger database backup
- `system_restore` - Restore from backup
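One way the command router could dispatch these names to the unified handlers is to route by prefix. The handler signatures below are an assumption, not the final interface; the handler names are the ones this plan assigns to `admin_handlers.c`:
```c
#include <stddef.h>
#include <string.h>

typedef int (*admin_handler_fn)(const char *command, const char *params_json,
                                char *response, size_t response_size);

/* Unified handlers named in this plan; bodies live in admin_handlers.c. */
int handle_blob_command(const char *, const char *, char *, size_t);
int handle_storage_command(const char *, const char *, char *, size_t);
int handle_config_command(const char *, const char *, char *, size_t);
int handle_stats_command(const char *, const char *, char *, size_t);
int handle_system_command(const char *, const char *, char *, size_t);

static const struct {
    const char       *prefix;
    admin_handler_fn  handler;
} routes[] = {
    { "blob_",    handle_blob_command    },
    { "storage_", handle_storage_command },
    { "config_",  handle_config_command  },
    { "auth_",    handle_config_command  },  /* auth_rules_* are config ops */
    { "stats_",   handle_stats_command   },
    { "system_",  handle_system_command  },
};

/* Returns the handler's result, or -1 so the caller emits INVALID_COMMAND. */
static int route_admin_command(const char *command, const char *params_json,
                               char *response, size_t response_size) {
    for (size_t i = 0; i < sizeof(routes) / sizeof(routes[0]); i++) {
        if (strncmp(command, routes[i].prefix, strlen(routes[i].prefix)) == 0)
            return routes[i].handler(command, params_json,
                                     response, response_size);
    }
    return -1;
}
```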
### 3.4 Command Examples
#### Example 1: List Blobs
```json
// Command (Kind 23456 content, decrypted)
["blob_list", {
"limit": 50,
"offset": 0,
"type": "image/*",
"sort": "created_at",
"order": "desc"
}]
// Response (Kind 23457 content, decrypted)
{
"success": true,
"data": {
"blobs": [
{
"sha256": "abc123...",
"size": 102400,
"type": "image/png",
"created": 1234567890,
"url": "https://blossom.example.com/abc123.png"
}
],
"total": 150,
"limit": 50,
"offset": 0
}
}
```
#### Example 2: Delete Blob
```json
// Command
["blob_delete", {
"sha256": "abc123...",
"confirm": true
}]
// Response
{
"success": true,
"data": {
"deleted": true,
"sha256": "abc123...",
"freed_bytes": 102400
}
}
```
#### Example 3: Get Storage Stats
```json
// Command
["storage_stats", {}]
// Response
{
"success": true,
"data": {
"total_blobs": 1500,
"total_bytes": 5368709120,
"total_bytes_human": "5.0 GB",
"disk_usage": {
"used": 5368709120,
"available": 94631291904,
"total": 100000000000,
"percent": 5.4
},
"by_type": {
"image/png": {"count": 500, "bytes": 2147483648},
"image/jpeg": {"count": 300, "bytes": 1610612736},
"video/mp4": {"count": 200, "bytes": 1610612736}
}
}
}
```
#### Example 4: Set Configuration
```json
// Command
["config_set", {
"max_upload_size": 10485760,
"allowed_mime_types": ["image/*", "video/mp4"]
}]
// Response
{
"success": true,
"data": {
"updated": ["max_upload_size", "allowed_mime_types"],
"requires_restart": false
}
}
```
### 3.5 Error Handling
All errors follow consistent format:
```json
{
"success": false,
"error": {
"code": "BLOB_NOT_FOUND",
"message": "Blob with hash abc123... not found",
"details": {
"sha256": "abc123..."
}
}
}
```
**Error Codes**:
- `UNAUTHORIZED` - Invalid admin signature
- `INVALID_COMMAND` - Unknown command or malformed structure
- `INVALID_PARAMS` - Missing or invalid parameters
- `BLOB_NOT_FOUND` - Requested blob doesn't exist
- `STORAGE_FULL` - Storage quota exceeded
- `DATABASE_ERROR` - Database operation failed
- `SYSTEM_ERROR` - Internal server error
## 4. File Structure
### 4.1 New Files to Create
```
src/
├── admin_websocket.c # WebSocket endpoint for admin commands
├── admin_websocket.h # WebSocket handler declarations
├── admin_auth.c # Admin authorization (adapted from c-relay)
├── admin_auth.h # Authorization function declarations
├── admin_handlers.c # Unified command handlers
├── admin_handlers.h # Handler function declarations
├── admin_config.c # Configuration management
├── admin_config.h # Config function declarations
├── admin_keys.c # Key generation and storage
└── admin_keys.h # Key management declarations
include/
└── admin_system.h # Public admin system interface
```
### 4.2 Files to Adapt from c-relay
| c-relay File | Purpose | Adaptation for ginxsom |
|--------------|---------|------------------------|
| `dm_admin.c` | Admin event processing | → `admin_websocket.c` (WebSocket instead of DM) |
| `api.c` (lines 768-838) | NIP-44 encryption/response | → `admin_handlers.c` (response generation) |
| `config.c` (lines 500-583) | Key storage/retrieval | → `admin_keys.c` (relay key management) |
| `main.c` (lines 1389-1556) | CLI argument parsing | → `main.c` (add admin CLI options) |
### 4.3 Integration with Existing Files
**src/main.c**:
- Add CLI options: `--admin-pubkey`, `--relay-privkey`
- Initialize admin WebSocket endpoint
- Generate keys on first startup
**src/admin_api.c** (existing):
- Keep existing basic admin API
- Add WebSocket admin endpoint
- Route Kind 23456 events to new handlers
**db/schema.sql**:
- Add `relay_seckey` table
- Add `admin_pubkey` to config table
## 5. Implementation Plan
### 5.1 Phase 1: Foundation (Week 1)
**Goal**: Set up key management and database schema
**Tasks**:
1. Create `relay_seckey` table in schema
2. Add `admin_pubkey` to config table
3. Implement `admin_keys.c`:
- `generate_relay_keypair()`
- `generate_admin_keypair()`
- `store_relay_private_key()`
- `load_relay_private_key()`
- `get_admin_pubkey()`
4. Update `main.c`:
- Add CLI options (`--admin-pubkey`, `--relay-privkey`)
- Generate keys on first startup
- Print keys once (like c-relay)
5. Test key generation and storage
**Deliverables**:
- Working key generation
- Keys stored in database
- CLI options functional
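A possible shape for `admin_keys.h`, covering the functions listed in the tasks above; the exact signatures are a sketch, not a committed interface:
```c
#ifndef ADMIN_KEYS_H
#define ADMIN_KEYS_H

#include <stddef.h>
#include <sqlite3.h>

/* Generate a fresh keypair; each hex buffer must hold 64 chars + NUL. */
int generate_relay_keypair(char *privkey_hex, char *pubkey_hex);
int generate_admin_keypair(char *privkey_hex, char *pubkey_hex);

/* Persist / retrieve the relay private key (relay_seckey table). */
int store_relay_private_key(sqlite3 *db, const char *privkey_hex);
int load_relay_private_key(sqlite3 *db, char *out, size_t out_size);

/* Read admin_pubkey from the config table. */
int get_admin_pubkey(sqlite3 *db, char *out, size_t out_size);

#endif /* ADMIN_KEYS_H */
```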
### 5.2 Phase 2: Authorization (Week 2)
**Goal**: Implement admin event authorization
**Tasks**:
1. Create `admin_auth.c` (adapted from c-relay's authorization):
- `verify_admin_event()` - Check Kind 23456 signature
- `check_admin_pubkey()` - Verify against stored admin_pubkey
- `verify_relay_target()` - Check 'p' tag matches relay pubkey
2. Add NIP-44 crypto functions (use existing nostr_core_lib):
- `decrypt_admin_command()` - Decrypt Kind 23456 content
- `encrypt_admin_response()` - Encrypt Kind 23457 content
3. Test authorization flow
4. Test encryption/decryption
**Deliverables**:
- Working authorization layer
- NIP-44 encryption functional
- Unit tests for auth
### 5.3 Phase 3: WebSocket Endpoint (Week 3)
**Goal**: Create WebSocket handler for admin commands
**Tasks**:
1. Create `admin_websocket.c`:
- WebSocket endpoint at `/admin` or similar
- Receive Kind 23456 events
- Route to authorization layer
- Parse command array from decrypted content
- Route to appropriate handler
- Build Kind 23457 response
- Send encrypted response
2. Integrate with existing FastCGI WebSocket handling
3. Add connection management
4. Test WebSocket communication
**Deliverables**:
- Working WebSocket endpoint
- Event routing functional
- Response generation working
### 5.4 Phase 4: Command Handlers (Week 4-5)
**Goal**: Implement unified command handlers
**Tasks**:
1. Create `admin_handlers.c` with unified handler pattern:
- `handle_blob_command()` - Blob operations
- `handle_storage_command()` - Storage management
- `handle_config_command()` - Configuration
- `handle_stats_command()` - Statistics
- `handle_system_command()` - System operations
2. Implement each command:
- Blob: list, info, delete, mirror
- Storage: stats, quota, cleanup
- Config: get, set, list, auth_rules
- Stats: uploads, bandwidth, storage, users
- System: info, restart, backup, restore
3. Add validation for each command
4. Test each command individually
**Deliverables**:
- All commands implemented
- Validation working
- Integration tests passing
### 5.5 Phase 5: Testing & Documentation (Week 6)
**Goal**: Comprehensive testing and documentation
**Tasks**:
1. Create test suite:
- Unit tests for each handler
- Integration tests for full flow
- Security tests for authorization
- Performance tests for WebSocket
2. Create admin CLI tool (simple Node.js/Python script):
- Generate Kind 23456 events
- Send via WebSocket
- Decrypt Kind 23457 responses
- Pretty-print results
3. Write documentation:
- Admin API reference
- CLI tool usage guide
- Security best practices
- Troubleshooting guide
4. Create example scripts
**Deliverables**:
- Complete test suite
- Working CLI tool
- Full documentation
- Example scripts
### 5.6 Phase 6: Web Dashboard (Optional, Week 7-8)
**Goal**: Create web-based admin interface
**Tasks**:
1. Design web UI (React/Vue/Svelte)
2. Implement WebSocket client
3. Create command forms
4. Add real-time updates
5. Deploy dashboard
**Deliverables**:
- Working web dashboard
- User documentation
- Deployment guide
## 6. Security Considerations
### 6.1 Key Security
**Relay Private Key**:
- Stored in database as plain hex (following c-relay pattern)
- Never exposed via API
- Used only for signing responses
- Backed up with database
**Admin Private Key**:
- NEVER stored on server
- Kept only by admin
- Used to sign commands
- Should be stored securely by admin (password manager, hardware key, etc.)
**Admin Public Key**:
- Stored in config table
- Used for authorization
- Can be rotated by updating config
### 6.2 Authorization Flow
1. Receive Kind 23456 event
2. Verify event signature (nostr_verify_event_signature)
3. Check pubkey matches admin_pubkey from config
4. Verify 'p' tag targets this relay
5. Decrypt content using NIP-44
6. Parse and validate command
7. Execute command
8. Encrypt response using NIP-44
9. Sign Kind 23457 response
10. Send response
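The same flow as a C skeleton. The `admin_auth.c` functions are the ones named in Phase 2; the event struct and the routing/response helpers are hypothetical stand-ins:
```c
#include <stddef.h>

typedef struct {
    const char *pubkey;   /* 64-char hex */
    const char *content;  /* NIP-44 ciphertext */
    /* created_at, tags, sig, ... */
} admin_event;

/* Declared in this plan (admin_auth.c); bodies live elsewhere. */
int verify_admin_event(const admin_event *ev);
int check_admin_pubkey(const char *pubkey_hex);
int verify_relay_target(const admin_event *ev);
int decrypt_admin_command(const admin_event *ev, char *out, size_t out_size);

/* Stand-ins for routing/response plumbing (hypothetical names). */
int route_admin_command_json(const char *cmd_json, char *resp, size_t resp_size);
int send_error(const admin_event *ev, const char *code);
int encrypt_sign_and_send_response(const admin_event *ev, const char *resp_json);

/* Process one Kind 23456 admin event end to end. */
static int process_admin_event(const admin_event *ev) {
    char command_json[8192];
    char response_json[8192];

    if (!verify_admin_event(ev))                   /* step 2: signature    */
        return send_error(ev, "UNAUTHORIZED");
    if (!check_admin_pubkey(ev->pubkey))           /* step 3: admin pubkey */
        return send_error(ev, "UNAUTHORIZED");
    if (!verify_relay_target(ev))                  /* step 4: 'p' tag      */
        return send_error(ev, "UNAUTHORIZED");

    if (decrypt_admin_command(ev, command_json,    /* step 5: NIP-44       */
                              sizeof(command_json)) != 0)
        return send_error(ev, "INVALID_COMMAND");

    if (route_admin_command_json(command_json,     /* steps 6-7            */
                                 response_json, sizeof(response_json)) != 0)
        return send_error(ev, "INVALID_COMMAND");

    /* steps 8-10: encrypt response, sign Kind 23457, send */
    return encrypt_sign_and_send_response(ev, response_json);
}
```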
### 6.3 Attack Mitigation
**Replay Attacks**:
- Check event timestamp (reject old events)
- Optional: Track processed event IDs (if needed)
**Unauthorized Access**:
- Strict pubkey verification
- Signature validation
- Relay targeting check
**Command Injection**:
- Validate all command parameters
- Use parameterized SQL queries
- Sanitize file paths
**DoS Protection**:
- Rate limit admin commands
- Timeout long-running operations
- Limit response sizes
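For the replay-attack mitigation, a small freshness guard on `created_at` is usually enough; the 300-second window below is an illustrative choice, not a value from this plan:
```c
#include <stdlib.h>
#include <time.h>

#define ADMIN_EVENT_MAX_AGE_SECONDS 300  /* assumed window; tune as needed */

/* Reject Kind 23456 events whose created_at is too old or too far in the
 * future (clock skew), which bounds the replay window. */
static int admin_event_is_fresh(long created_at) {
    long now = (long)time(NULL);
    return labs(now - created_at) <= ADMIN_EVENT_MAX_AGE_SECONDS;
}
```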
## 7. Command Line Interface
### 7.1 CLI Options (Following c-relay Pattern)
```bash
ginxsom [OPTIONS]
Options:
-h, --help Show help message
-v, --version Show version information
-p, --port PORT Override server port
--strict-port Fail if exact port unavailable
-a, --admin-pubkey KEY Override admin public key (hex or npub)
-r, --relay-privkey KEY Override relay private key (hex or nsec)
--debug-level=N Set debug level (0-5)
Examples:
ginxsom # Start server (auto-generate keys on first run)
ginxsom -p 8080 # Start on port 8080
ginxsom -a <npub> # Set admin pubkey
ginxsom -r <nsec> # Set relay privkey
ginxsom --debug-level=3 # Enable info-level debugging
```
### 7.2 First Startup Behavior
On first startup (no database exists):
1. Generate relay keypair
2. Generate admin keypair
3. Print keys ONCE to console:
```
=== Ginxsom First Startup ===
Relay Keys (for server):
Public Key (npub): npub1...
Private Key (nsec): nsec1...
Admin Keys (for you):
Public Key (npub): npub1...
Private Key (nsec): nsec1...
IMPORTANT: Save these keys securely!
The admin private key will NOT be shown again.
The relay private key is stored in the database.
Database created: <relay-pubkey>.db
```
4. Store relay private key in database
5. Store admin public key in config
6. Start server
### 7.3 Subsequent Startups
On subsequent startups:
1. Find existing database file
2. Load relay private key from database
3. Load admin public key from config
4. Apply CLI overrides if provided
5. Start server
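Step 1 ("find existing database file") could be a simple scan of `db/` for a `<64-hex>.db` entry. A sketch, assuming a single-instance deployment where the first match wins:
```c
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Locate db/<64-hex-chars>.db; copies the relative path into out and
 * returns 0, or returns -1 if no candidate database exists. */
static int find_existing_database(char *out, size_t out_size) {
    DIR *dir = opendir("db");
    if (!dir)
        return -1;

    struct dirent *entry;
    int found = -1;
    while ((entry = readdir(dir)) != NULL) {
        const char *name = entry->d_name;
        if (strlen(name) != 64 + 3 || strcmp(name + 64, ".db") != 0)
            continue;                       /* wrong length or suffix */
        int all_hex = 1;
        for (int i = 0; i < 64; i++) {
            if (!isxdigit((unsigned char)name[i])) {
                all_hex = 0;
                break;
            }
        }
        if (all_hex) {
            snprintf(out, out_size, "db/%s", name);
            found = 0;
            break;
        }
    }
    closedir(dir);
    return found;
}
```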
## 8. Comparison with c-relay
### 8.1 Similarities
| Feature | c-relay | ginxsom |
|---------|---------|---------|
| Event Types | Kind 23456/23457 | Kind 23456/23457 |
| Encryption | NIP-44 | NIP-44 |
| Command Format | JSON arrays | JSON arrays |
| Key Storage | relay_seckey table | relay_seckey table |
| Admin Auth | config table | config table |
| CLI Options | --admin-pubkey, --relay-privkey | --admin-pubkey, --relay-privkey |
| Response Format | Encrypted JSON | Encrypted JSON |
### 8.2 Differences
| Aspect | c-relay | ginxsom |
|--------|---------|---------|
| Transport | WebSocket (Nostr relay) | WebSocket (FastCGI) |
| Commands | Relay-specific (auth, config, stats) | Blossom-specific (blob, storage, mirror) |
| Database | SQLite (events) | SQLite (blobs + metadata) |
| File Storage | N/A | Blob storage on disk |
| Integration | Standalone relay | FastCGI + nginx |
### 8.3 Architectural Decisions
**Why follow c-relay's pattern?**
1. Proven in production
2. Simple and secure
3. No complex key management
4. Minimal database schema
5. Easy to understand and maintain
**What we're NOT doing (from initial design)**:
1. ❌ NIP-17 gift wrap (too complex)
2. ❌ Separate admin_keys table (use config)
3. ❌ Audit log table (keep it simple)
4. ❌ Processed events tracking (stateless)
5. ❌ Key encryption before storage (plain hex)
6. ❌ Migration strategy (new project)
## 9. Testing Strategy
### 9.1 Unit Tests
**admin_keys.c**:
- Key generation produces valid keys
- Keys can be stored and retrieved
- Invalid keys are rejected
**admin_auth.c**:
- Valid admin events pass authorization
- Invalid signatures are rejected
- Wrong pubkeys are rejected
- Expired events are rejected
**admin_handlers.c**:
- Each command handler works correctly
- Invalid parameters are rejected
- Error responses are properly formatted
### 9.2 Integration Tests
**Full Flow**:
1. Generate admin keypair
2. Create Kind 23456 command
3. Send via WebSocket
4. Verify authorization
5. Execute command
6. Receive Kind 23457 response
7. Decrypt and verify response
**Security Tests**:
- Unauthorized pubkey rejected
- Invalid signature rejected
- Replay attack prevented
- Command injection prevented
### 9.3 Performance Tests
- WebSocket connection handling
- Command processing latency
- Concurrent admin operations
- Large response handling
## 10. Future Enhancements
### 10.1 Short Term
1. **Command History**: Track admin commands for audit
2. **Multi-Admin Support**: Multiple authorized admin pubkeys
3. **Role-Based Access**: Different permission levels
4. **Batch Operations**: Execute multiple commands in one request
### 10.2 Long Term
1. **Web Dashboard**: Full-featured web UI
2. **Monitoring Integration**: Prometheus/Grafana metrics
3. **Backup Automation**: Scheduled backups
4. **Replication**: Multi-server blob replication
5. **Advanced Analytics**: Usage patterns, trends, predictions
## 11. References
### 11.1 Nostr NIPs
- **NIP-01**: Basic protocol flow
- **NIP-04**: Encrypted Direct Messages (deprecated; kept for reference)
- **NIP-19**: bech32-encoded entities (npub, nsec)
- **NIP-44**: Versioned Encryption (used for admin commands)
### 11.2 Blossom Specifications
- **BUD-01**: Blob Upload/Download
- **BUD-02**: Blob Descriptor
- **BUD-04**: Mirroring
- **BUD-06**: Upload Requirements
- **BUD-08**: NIP-94 Integration
- **BUD-09**: Blob Reporting
### 11.3 c-relay Source Files
- `c-relay/src/dm_admin.c` - Admin event processing
- `c-relay/src/api.c` - NIP-44 encryption
- `c-relay/src/config.c` - Key storage
- `c-relay/src/main.c` - CLI options
- `c-relay/src/sql_schema.h` - Database schema
## 12. Appendix
### 12.1 Example Admin CLI Tool (Python)
```python
#!/usr/bin/env python3
"""
Ginxsom Admin CLI Tool
Sends admin commands to ginxsom server via WebSocket

NOTE: nostr_sdk's Python API has shifted between releases; the calls below
follow one snapshot of the bindings and may need renaming on upgrade.
"""
import asyncio
import json

import websockets
from nostr_sdk import (
    Event, EventBuilder, Keys, Kind, Nip44Version, PublicKey, Tag,
    nip44_encrypt, nip44_decrypt,
)


class GinxsomAdmin:
    def __init__(self, server_url, admin_nsec, relay_npub):
        self.server_url = server_url
        self.admin_keys = Keys.parse(admin_nsec)         # nsec or hex seckey
        self.relay_pubkey = PublicKey.parse(relay_npub)  # npub or hex pubkey

    async def send_command(self, command, params):
        """Send admin command and wait for response"""
        # Build command array
        command_array = [command, params]
        # Encrypt with NIP-44 (shared secret: admin seckey + relay pubkey)
        encrypted = nip44_encrypt(
            self.admin_keys.secret_key(),
            self.relay_pubkey,
            json.dumps(command_array),
            Nip44Version.V2,
        )
        # Build and sign Kind 23456 event targeting the relay
        event = EventBuilder(
            Kind(23456),
            encrypted,
            [Tag.parse(["p", self.relay_pubkey.to_hex()])],
        ).to_event(self.admin_keys)
        # Send via WebSocket; as_json() already returns a JSON string,
        # so it must not be wrapped in another json.dumps()
        async with websockets.connect(self.server_url) as ws:
            await ws.send(event.as_json())
            # Wait for Kind 23457 response
            response = await ws.recv()
            response_event = Event.from_json(response)
            # Decrypt response
            decrypted = nip44_decrypt(
                self.admin_keys.secret_key(),
                self.relay_pubkey,
                response_event.content(),
            )
            return json.loads(decrypted)


# Usage
async def main():
    admin = GinxsomAdmin(
        "ws://localhost:8080/admin",
        "nsec1...",  # Admin private key
        "npub1...",  # Relay public key
    )
    # List blobs
    result = await admin.send_command("blob_list", {
        "limit": 10,
        "offset": 0,
    })
    print(json.dumps(result, indent=2))


if __name__ == "__main__":
    asyncio.run(main())
```
### 12.2 Database Schema SQL
```sql
-- Add to db/schema.sql
-- Relay Private Key Storage
CREATE TABLE relay_seckey (
private_key_hex TEXT NOT NULL CHECK (length(private_key_hex) = 64),
created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now'))
);
-- Admin Public Key (add to config table)
INSERT INTO config (key, value, data_type, description, category, requires_restart)
VALUES (
'admin_pubkey',
'', -- Set during first startup
'string',
'Public key of authorized admin (64-char hex)',
'security',
0
);
-- Relay Public Key (add to config table)
INSERT INTO config (key, value, data_type, description, category, requires_restart)
VALUES (
'relay_pubkey',
'', -- Set during first startup
'string',
'Public key of this relay (64-char hex)',
'server',
0
);
```
### 12.3 Makefile Updates
```makefile
# Add to Makefile
# Admin system objects
ADMIN_OBJS = build/admin_websocket.o \
build/admin_auth.o \
build/admin_handlers.o \
build/admin_config.o \
build/admin_keys.o
# Update main target
build/ginxsom-fcgi: $(OBJS) $(ADMIN_OBJS)
$(CC) $(CFLAGS) -o $@ $^ $(LDFLAGS)
# Admin system rules
build/admin_websocket.o: src/admin_websocket.c
$(CC) $(CFLAGS) -c $< -o $@
build/admin_auth.o: src/admin_auth.c
$(CC) $(CFLAGS) -c $< -o $@
build/admin_handlers.o: src/admin_handlers.c
$(CC) $(CFLAGS) -c $< -o $@
build/admin_config.o: src/admin_config.c
$(CC) $(CFLAGS) -c $< -o $@
build/admin_keys.o: src/admin_keys.c
$(CC) $(CFLAGS) -c $< -o $@
```
---
**Document Version**: 2.0
**Last Updated**: 2025-01-16
**Status**: Ready for Implementation

View File

@@ -0,0 +1,356 @@
# Production Directory Structure Migration Plan
## Overview
This document outlines the plan to migrate the ginxsom production deployment from the current configuration to a new, more organized directory structure.
## Current Configuration (As-Is)
```
Binary Location: /var/www/html/blossom/ginxsom.fcgi
Database Location: /var/www/html/blossom/ginxsom.db
Data Directory: /var/www/html/blossom/
Working Directory: /var/www/html/blossom/ (set via spawn-fcgi -d)
Socket: /tmp/ginxsom-fcgi.sock
```
**Issues with Current Setup:**
1. Binary and database mixed with data files in web-accessible directory
2. Database path is hardcoded to the relative path `db/ginxsom.db`, but the database actually sits at the root of the working directory
3. No separation between application files and user data
4. Security concern: application files in web root
## Target Configuration (To-Be)
```
Binary Location: /home/ubuntu/ginxsom/ginxsom.fcgi
Database Location: /home/ubuntu/ginxsom/db/ginxsom.db
Data Directory: /var/www/html/blossom/
Working Directory: /home/ubuntu/ginxsom/ (set via spawn-fcgi -d)
Socket: /tmp/ginxsom-fcgi.sock
```
**Benefits of New Setup:**
1. Application files separated from user data
2. Database in proper subdirectory structure
3. Application files outside web root (better security)
4. Clear separation of concerns
5. Easier backup and maintenance
## Directory Structure
### Application Directory: `/home/ubuntu/ginxsom/`
```
/home/ubuntu/ginxsom/
├── ginxsom.fcgi # FastCGI binary
├── db/
│ └── ginxsom.db # SQLite database
├── build/ # Build artifacts (from rsync)
├── src/ # Source code (from rsync)
├── include/ # Headers (from rsync)
├── config/ # Config files (from rsync)
└── scripts/ # Utility scripts (from rsync)
```
### Data Directory: `/var/www/html/blossom/`
```
/var/www/html/blossom/
├── <sha256>.jpg # User uploaded files
├── <sha256>.png
├── <sha256>.mp4
└── ...
```
## Command-Line Arguments
The ginxsom binary supports these arguments (from [`src/main.c`](src/main.c:1488-1509)):
```bash
--db-path PATH # Database file path (default: db/ginxsom.db)
--storage-dir DIR # Storage directory for files (default: .)
--help, -h # Show help message
```
## Migration Steps
### 1. Update deploy_lt.sh Configuration
Update the configuration variables in [`deploy_lt.sh`](deploy_lt.sh:16-23):
```bash
# Configuration
REMOTE_HOST="laantungir.net"
REMOTE_USER="ubuntu"
REMOTE_DIR="/home/ubuntu/ginxsom"
REMOTE_DB_PATH="/home/ubuntu/ginxsom/db/ginxsom.db"
REMOTE_NGINX_CONFIG="/etc/nginx/conf.d/default.conf"
REMOTE_BINARY_PATH="/home/ubuntu/ginxsom/ginxsom.fcgi"
REMOTE_SOCKET="/tmp/ginxsom-fcgi.sock"
REMOTE_DATA_DIR="/var/www/html/blossom"
```
### 2. Update Binary Deployment
Modify the binary copy section (lines 82-97) to use new path:
```bash
# Copy binary to application directory (not web directory)
print_status "Copying ginxsom binary to application directory..."
ssh $REMOTE_USER@$REMOTE_HOST << EOF
# Stop any running process first
sudo pkill -f ginxsom-fcgi || true
sleep 1
# Remove old binary if it exists
rm -f $REMOTE_BINARY_PATH
# Copy new binary
cp $REMOTE_DIR/build/ginxsom-fcgi $REMOTE_BINARY_PATH
chmod +x $REMOTE_BINARY_PATH
chown ubuntu:ubuntu $REMOTE_BINARY_PATH
echo "Binary copied successfully"
EOF
```
### 3. Create Database Directory Structure
Add database setup before starting FastCGI:
```bash
# Setup database directory
print_status "Setting up database directory..."
ssh $REMOTE_USER@$REMOTE_HOST << EOF
# Create db directory if it doesn't exist
mkdir -p $REMOTE_DIR/db
# Copy database if it exists in old location
if [ -f /var/www/html/blossom/ginxsom.db ]; then
echo "Migrating database from old location..."
cp /var/www/html/blossom/ginxsom.db $REMOTE_DB_PATH
elif [ ! -f $REMOTE_DB_PATH ]; then
echo "Initializing new database..."
# Database will be created by application on first run
fi
# Set proper permissions
chown -R ubuntu:ubuntu $REMOTE_DIR/db
chmod 755 $REMOTE_DIR/db
chmod 644 $REMOTE_DB_PATH 2>/dev/null || true
echo "Database directory setup complete"
EOF
```
### 4. Update spawn-fcgi Command
Modify the FastCGI startup (line 164) to include command-line arguments:
```bash
# Start FastCGI process with explicit paths
echo "Starting ginxsom FastCGI..."
sudo spawn-fcgi \
-M 666 \
-u www-data \
-g www-data \
-s $REMOTE_SOCKET \
-U www-data \
-G www-data \
-d $REMOTE_DIR \
-- $REMOTE_BINARY_PATH \
--db-path "$REMOTE_DB_PATH" \
--storage-dir "$REMOTE_DATA_DIR"
```
**Key Changes:**
- `-d $REMOTE_DIR`: Sets working directory to `/home/ubuntu/ginxsom/`
- `--db-path "$REMOTE_DB_PATH"`: Explicit database path
- `--storage-dir "$REMOTE_DATA_DIR"`: Explicit data directory
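For reference, a minimal sketch of how the binary might consume these two flags; the real parser lives in [`src/main.c`](src/main.c:1488-1509), so this only illustrates the pattern (defaults match the globals quoted below):
```c
#include <stdio.h>
#include <string.h>

#define MAX_PATH_LEN 1024

/* Defaults match src/main.c: relative db path, current working directory. */
char g_db_path[MAX_PATH_LEN]     = "db/ginxsom.db";
char g_storage_dir[MAX_PATH_LEN] = ".";

/* Returns 0 to continue startup, 1 if --help was printed. */
static int parse_args(int argc, char *argv[]) {
    for (int i = 1; i < argc; i++) {
        if (strcmp(argv[i], "--db-path") == 0 && i + 1 < argc) {
            snprintf(g_db_path, sizeof(g_db_path), "%s", argv[++i]);
        } else if (strcmp(argv[i], "--storage-dir") == 0 && i + 1 < argc) {
            snprintf(g_storage_dir, sizeof(g_storage_dir), "%s", argv[++i]);
        } else if (strcmp(argv[i], "--help") == 0 ||
                   strcmp(argv[i], "-h") == 0) {
            printf("Usage: ginxsom-fcgi [--db-path PATH] [--storage-dir DIR]\n");
            return 1;
        }
    }
    return 0;
}
```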
### 5. Verify Permissions
Ensure proper permissions for all directories:
```bash
# Application directory - owned by ubuntu
sudo chown -R ubuntu:ubuntu /home/ubuntu/ginxsom
sudo chmod 755 /home/ubuntu/ginxsom
sudo chmod +x /home/ubuntu/ginxsom/ginxsom.fcgi
# Database directory - readable by www-data
sudo chmod 755 /home/ubuntu/ginxsom/db
sudo chmod 644 /home/ubuntu/ginxsom/db/ginxsom.db
# Data directory - writable by www-data
sudo chown -R www-data:www-data /var/www/html/blossom
sudo chmod 755 /var/www/html/blossom
```
## Path Resolution Logic
### How Paths Work with spawn-fcgi -d Option
When spawn-fcgi starts the FastCGI process:
1. **Working Directory**: Set to `/home/ubuntu/ginxsom/` via `-d` option
2. **Relative Paths**: Resolved from working directory
3. **Absolute Paths**: Used as-is
### Default Behavior (Without Arguments)
From [`src/main.c`](src/main.c:30-31):
```c
char g_db_path[MAX_PATH_LEN] = "db/ginxsom.db"; // Relative to working dir
char g_storage_dir[MAX_PATH_LEN] = "."; // Current working dir
```
With working directory `/home/ubuntu/ginxsom/`:
- Database: `/home/ubuntu/ginxsom/db/ginxsom.db`
- Storage: `/home/ubuntu/ginxsom/` ✗ (wrong - we want `/var/www/html/blossom/`)
### With Command-Line Arguments
```bash
--db-path "/home/ubuntu/ginxsom/db/ginxsom.db"
--storage-dir "/var/www/html/blossom"
```
Result:
- Database: `/home/ubuntu/ginxsom/db/ginxsom.db`
- Storage: `/var/www/html/blossom/`
## Testing Plan
### 1. Pre-Migration Verification
```bash
# Check current setup
ssh ubuntu@laantungir.net "
echo 'Current binary location:'
ls -la /var/www/html/blossom/ginxsom.fcgi
echo 'Current database location:'
ls -la /var/www/html/blossom/ginxsom.db
echo 'Current process:'
ps aux | grep ginxsom-fcgi | grep -v grep
"
```
### 2. Post-Migration Verification
```bash
# Check new setup
ssh ubuntu@laantungir.net "
echo 'New binary location:'
ls -la /home/ubuntu/ginxsom/ginxsom.fcgi
echo 'New database location:'
ls -la /home/ubuntu/ginxsom/db/ginxsom.db
echo 'Data directory:'
ls -la /var/www/html/blossom/ | head -10
echo 'Process working directory:'
sudo ls -la /proc/\$(pgrep -f ginxsom.fcgi)/cwd
echo 'Process command line:'
ps aux | grep ginxsom-fcgi | grep -v grep
"
```
### 3. Functional Testing
```bash
# Test health endpoint
curl -k https://blossom.laantungir.net/health
# Test file upload
./tests/file_put_production.sh
# Test file retrieval
curl -k -I https://blossom.laantungir.net/<sha256>
# Test list endpoint
curl -k https://blossom.laantungir.net/list/<pubkey>
```
## Rollback Plan
If migration fails:
1. **Stop new process:**
```bash
sudo pkill -f ginxsom-fcgi
```
2. **Restore old binary location:**
```bash
sudo cp /home/ubuntu/ginxsom/build/ginxsom-fcgi /var/www/html/blossom/ginxsom.fcgi
sudo chown www-data:www-data /var/www/html/blossom/ginxsom.fcgi
```
3. **Restart with old configuration:**
```bash
sudo spawn-fcgi -M 666 -u www-data -g www-data \
-s /tmp/ginxsom-fcgi.sock \
-U www-data -G www-data \
-d /var/www/html/blossom \
/var/www/html/blossom/ginxsom.fcgi
```
## Additional Considerations
### 1. Database Backup
Before migration, backup the current database:
```bash
ssh ubuntu@laantungir.net "
cp /var/www/html/blossom/ginxsom.db /var/www/html/blossom/ginxsom.db.backup
"
```
### 2. NIP-94 Origin Configuration
After migration, update [`src/bud08.c`](src/bud08.c) to return production domain:
```c
void nip94_get_origin(char *origin, size_t origin_size) {
snprintf(origin, origin_size, "https://blossom.laantungir.net");
}
```
### 3. Monitoring
Monitor logs after migration:
```bash
# Application logs
ssh ubuntu@laantungir.net "sudo journalctl -u nginx -f"
# FastCGI process
ssh ubuntu@laantungir.net "ps aux | grep ginxsom-fcgi"
```
## Success Criteria
Migration is successful when:
1. ✓ Binary running from `/home/ubuntu/ginxsom/ginxsom.fcgi`
2. ✓ Database accessible at `/home/ubuntu/ginxsom/db/ginxsom.db`
3. ✓ Files stored in `/var/www/html/blossom/`
4. ✓ Health endpoint returns 200 OK
5. ✓ File upload works correctly
6. ✓ File retrieval works correctly
7. ✓ Database queries succeed
8. ✓ No permission errors in logs
## Timeline
1. **Preparation**: Update deploy_lt.sh script (15 minutes)
2. **Backup**: Backup current database (5 minutes)
3. **Migration**: Run updated deployment script (10 minutes)
4. **Testing**: Verify all endpoints (15 minutes)
5. **Monitoring**: Watch for issues (30 minutes)
**Total Estimated Time**: ~75 minutes
## References
- Current deployment script: [`deploy_lt.sh`](deploy_lt.sh)
- Main application: [`src/main.c`](src/main.c)
- Command-line parsing: [`src/main.c:1488-1509`](src/main.c:1488-1509)
- Global configuration: [`src/main.c:30-31`](src/main.c:30-31)
- Database operations: [`src/main.c:333-385`](src/main.c:333-385)

8
ginxsom.code-workspace Normal file
View File

@@ -0,0 +1,8 @@
{
"folders": [
{
"path": "."
}
],
"settings": {}
}

View File

@@ -33,6 +33,10 @@
#define DEFAULT_MAX_BLOBS_PER_USER 1000
#define DEFAULT_RATE_LIMIT 10
/* Global configuration variables */
extern char g_db_path[MAX_PATH_LEN];
extern char g_storage_dir[MAX_PATH_LEN];
/* Error codes */
typedef enum {
GINXSOM_OK = 0,

View File

@@ -131,21 +131,48 @@ increment_version() {
export NEW_VERSION
}
# Function to update version in header file
update_version_in_header() {
local version="$1"
print_status "Updating version in src/ginxsom.h to $version..."
# Extract version components (remove 'v' prefix)
local version_no_v=${version#v}
# Parse major.minor.patch using regex
if [[ $version_no_v =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
local major=${BASH_REMATCH[1]}
local minor=${BASH_REMATCH[2]}
local patch=${BASH_REMATCH[3]}
# Update the header file
sed -i "s/#define VERSION_MAJOR [0-9]\+/#define VERSION_MAJOR $major/" src/ginxsom.h
sed -i "s/#define VERSION_MINOR [0-9]\+/#define VERSION_MINOR $minor/" src/ginxsom.h
sed -i "s/#define VERSION_PATCH [0-9]\+/#define VERSION_PATCH $patch/" src/ginxsom.h
sed -i "s/#define VERSION \"v[0-9]\+\.[0-9]\+\.[0-9]\+\"/#define VERSION \"$version\"/" src/ginxsom.h
print_success "Updated version in header file"
else
print_error "Invalid version format: $version"
exit 1
fi
}
# Function to compile the Ginxsom project
compile_project() {
print_status "Compiling Ginxsom FastCGI server..."
# Clean previous build
if make clean > /dev/null 2>&1; then
print_success "Cleaned previous build"
else
print_warning "Clean failed or no Makefile found"
fi
# Compile the project
if make > /dev/null 2>&1; then
print_success "Ginxsom compiled successfully"
# Verify the binary was created
if [[ -f "build/ginxsom-fcgi" ]]; then
print_success "Binary created: build/ginxsom-fcgi"
@@ -390,9 +417,12 @@ main() {
git tag "$NEW_VERSION" > /dev/null 2>&1
fi
# Update version in header file
update_version_in_header "$NEW_VERSION"
# Compile project
compile_project
# Build release binary
build_release_binary
@@ -423,9 +453,12 @@ main() {
git tag "$NEW_VERSION" > /dev/null 2>&1
fi
# Update version in header file
update_version_in_header "$NEW_VERSION"
# Compile project
compile_project
# Commit and push (but skip tag creation since we already did it)
git_commit_and_push_no_tag

384
remote.nginx.config Normal file
View File

@@ -0,0 +1,384 @@
# FastCGI upstream configuration
upstream ginxsom_backend {
server unix:/tmp/ginxsom-fcgi.sock;
}
# Main domains
server {
if ($host = laantungir.net) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name laantungir.com www.laantungir.com laantungir.net www.laantungir.net laantungir.org www.laantungir.org;
root /var/www/html;
index index.html index.htm;
# CORS for Nostr NIP-05 verification
add_header Access-Control-Allow-Origin * always;
add_header Access-Control-Allow-Methods "GET, OPTIONS" always;
add_header Access-Control-Allow-Headers "DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range" always;
location / {
try_files $uri $uri/ =404;
}
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /var/www/html;
}
}
# Main domains HTTPS - using the main certificate
server {
listen 443 ssl;
server_name laantungir.com www.laantungir.com laantungir.net www.laantungir.net laantungir.org www.laantungir.org;
ssl_certificate /etc/letsencrypt/live/laantungir.net/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/laantungir.net/privkey.pem; # managed by Certbot
root /var/www/html;
index index.html index.htm;
# CORS for Nostr NIP-05 verification
add_header Access-Control-Allow-Origin * always;
add_header Access-Control-Allow-Methods "GET, OPTIONS" always;
add_header Access-Control-Allow-Headers "DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range" always;
location / {
try_files $uri $uri/ =404;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /var/www/html;
}
}
# Blossom subdomains HTTP - redirect to HTTPS (keep for ACME)
server {
listen 80;
server_name blossom.laantungir.net;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$server_name$request_uri;
}
}
# Blossom subdomains HTTPS - ginxsom FastCGI
server {
listen 443 ssl;
server_name blossom.laantungir.net;
ssl_certificate /etc/letsencrypt/live/git.laantungir.net/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/git.laantungir.net/privkey.pem;
# Security headers
add_header X-Content-Type-Options nosniff always;
add_header X-Frame-Options DENY always;
add_header X-XSS-Protection "1; mode=block" always;
# CORS for Blossom protocol
add_header Access-Control-Allow-Origin * always;
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH" always;
add_header Access-Control-Allow-Headers "Authorization, Content-Type, Content-Length, Accept, Origin, User-Agent, DNT, Cache-Control, X-Mx-ReqToken, Keep-Alive, X-Requested-With, If-Modified-Since, *" always;
add_header Access-Control-Max-Age 86400 always;
# Root directory for blob storage
root /var/www/html/blossom;
# Maximum upload size
client_max_body_size 100M;
# OPTIONS preflight handler
if ($request_method = OPTIONS) {
return 204;
}
# PUT /upload - File uploads
location = /upload {
if ($request_method !~ ^(PUT|HEAD)$) {
return 405;
}
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
}
# GET /list/<pubkey> - List user blobs
location ~ "^/list/([a-f0-9]{64})$" {
if ($request_method !~ ^(GET)$) {
return 405;
}
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
}
# PUT /mirror - Mirror content
location = /mirror {
if ($request_method !~ ^(PUT)$) {
return 405;
}
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
}
# PUT /report - Report content
location = /report {
if ($request_method !~ ^(PUT)$) {
return 405;
}
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
}
# GET /auth - NIP-42 challenges
location = /auth {
if ($request_method !~ ^(GET)$) {
return 405;
}
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
}
# Admin API
location /api/ {
if ($request_method !~ ^(GET|PUT)$) {
return 405;
}
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
}
# Blob serving - SHA256 patterns
location ~ "^/([a-f0-9]{64})(\.[a-zA-Z0-9]+)?$" {
# Handle DELETE via rewrite
if ($request_method = DELETE) {
rewrite ^/(.*)$ /fcgi-delete/$1 last;
}
# Route HEAD to FastCGI
if ($request_method = HEAD) {
rewrite ^/(.*)$ /fcgi-head/$1 last;
}
# GET requests - serve files directly
if ($request_method != GET) {
return 405;
}
try_files /$1.txt /$1.jpg /$1.jpeg /$1.png /$1.webp /$1.gif /$1.pdf /$1.mp4 /$1.mp3 /$1.md =404;
# Cache headers
add_header Cache-Control "public, max-age=31536000, immutable";
}
# Internal FastCGI handlers
location ~ "^/fcgi-delete/([a-f0-9]{64}).*$" {
internal;
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
fastcgi_param REQUEST_URI /$1;
}
location ~ "^/fcgi-head/([a-f0-9]{64}).*$" {
internal;
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
fastcgi_param REQUEST_URI /$1;
}
# Health check
location /health {
access_log off;
return 200 "OK\n";
add_header Content-Type text/plain;
add_header Access-Control-Allow-Origin * always;
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH" always;
add_header Access-Control-Allow-Headers "Authorization, Content-Type, Content-Length, Accept, Origin, User-Agent, DNT, Cache-Control, X-Mx-ReqToken, Keep-Alive, X-Requested-With, If-Modified-Since, *" always;
add_header Access-Control-Max-Age 86400 always;
}
# Default location - Server info from FastCGI
location / {
if ($request_method !~ ^(GET)$) {
return 405;
}
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
}
}
server {
listen 80;
server_name relay.laantungir.com relay.laantungir.net relay.laantungir.org;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
proxy_pass http://127.0.0.1:8888;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
proxy_connect_timeout 60s;
proxy_buffering off;
proxy_request_buffering off;
gzip off;
}
}
# Relay HTTPS - proxy to c-relay
server {
listen 443 ssl;
server_name relay.laantungir.com relay.laantungir.net relay.laantungir.org;
ssl_certificate /etc/letsencrypt/live/git.laantungir.net/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/git.laantungir.net/privkey.pem;
location / {
proxy_pass http://127.0.0.1:8888;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
proxy_connect_timeout 60s;
proxy_buffering off;
proxy_request_buffering off;
gzip off;
}
}
# Git subdomains HTTP - redirect to HTTPS
server {
listen 80;
server_name git.laantungir.com git.laantungir.net git.laantungir.org;
# Allow larger file uploads for Git releases
client_max_body_size 50M;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$server_name$request_uri;
}
}
# Auth subdomains HTTP - redirect to HTTPS
server {
listen 80;
server_name auth.laantungir.com auth.laantungir.net auth.laantungir.org;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$server_name$request_uri;
}
}
# Git subdomains HTTPS - proxy to gitea
server {
listen 443 ssl;
server_name git.laantungir.com git.laantungir.net git.laantungir.org;
# Allow larger file uploads for Git releases
client_max_body_size 50M;
ssl_certificate /etc/letsencrypt/live/git.laantungir.net/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/git.laantungir.net/privkey.pem;
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_buffering off;
proxy_request_buffering off;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
proxy_connect_timeout 60s;
gzip off;
# proxy_set_header Sec-WebSocket-Extensions ;
proxy_set_header Host $host;
proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
# Auth subdomains HTTPS - proxy to nostr-auth
server {
listen 443 ssl;
server_name auth.laantungir.com auth.laantungir.net auth.laantungir.org;
ssl_certificate /etc/letsencrypt/live/git.laantungir.net/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/git.laantungir.net/privkey.pem;
location / {
proxy_pass http://localhost:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_buffering off;
proxy_request_buffering off;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
proxy_connect_timeout 60s;
gzip off;
# proxy_set_header Sec-WebSocket-Extensions ;
proxy_set_header Host $host;
proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}

View File

@@ -1,11 +1,36 @@
#!/bin/bash
# Restart Ginxsom Development Environment
# Combines nginx and FastCGI restart operations for debugging
# WARNING: This script DELETES all databases in db/ for fresh testing
# Configuration
# Parse command line arguments
TEST_MODE=0
FOLLOW_LOGS=0
while [[ $# -gt 0 ]]; do
case $1 in
-t|--test-keys)
TEST_MODE=1
shift
;;
--follow)
FOLLOW_LOGS=1
shift
;;
*)
echo "Unknown option: $1"
echo "Usage: $0 [-t|--test-keys] [--follow]"
echo " -t, --test-keys Use test mode with keys from .test_keys"
echo " --follow Follow logs in real-time"
exit 1
;;
esac
done
# Check for --follow flag
if [[ "$1" == "--follow" ]]; then
if [[ $FOLLOW_LOGS -eq 1 ]]; then
echo "=== Following logs in real-time ==="
echo "Monitoring: nginx error, nginx access, app stderr, app stdout"
echo "Press Ctrl+C to stop following logs"
@@ -37,7 +62,12 @@ touch logs/app/stderr.log logs/app/stdout.log logs/nginx/error.log logs/nginx/ac
chmod 644 logs/app/stderr.log logs/app/stdout.log logs/nginx/error.log logs/nginx/access.log
chmod 755 logs/nginx logs/app
echo -e "${YELLOW}=== Ginxsom Development Environment Restart ===${NC}"
if [ $TEST_MODE -eq 1 ]; then
echo -e "${YELLOW}=== Ginxsom Development Environment Restart (TEST MODE) ===${NC}"
echo "Using test keys from .test_keys file"
else
echo -e "${YELLOW}=== Ginxsom Development Environment Restart ===${NC}"
fi
echo "Starting full restart sequence..."
# Function to check if a process is running
@@ -148,6 +178,46 @@ if [ $? -ne 0 ]; then
fi
echo -e "${GREEN}Clean rebuild complete${NC}"
# Step 3.5: Clean database directory for fresh testing
echo -e "\n${YELLOW}3.5. Cleaning database directory...${NC}"
echo "Removing all existing databases for fresh start..."
# Remove all .db files in db/ directory
if ls db/*.db 1> /dev/null 2>&1; then
echo "Found databases to remove:"
ls -lh db/*.db
rm -f db/*.db
echo -e "${GREEN}Database cleanup complete${NC}"
else
echo "No existing databases found"
fi
# Step 3.75: Handle keys based on mode
echo -e "\n${YELLOW}3.75. Configuring server keys...${NC}"
if [ $TEST_MODE -eq 1 ]; then
# Test mode: verify .test_keys file exists
if [ ! -f ".test_keys" ]; then
echo -e "${RED}ERROR: .test_keys file not found${NC}"
echo -e "${RED}Test mode requires .test_keys file in project root${NC}"
exit 1
fi
# Extract test server pubkey to determine database name
TEST_PUBKEY=$(grep "^SERVER_PUBKEY=" .test_keys | cut -d"'" -f2)
if [ -z "$TEST_PUBKEY" ]; then
echo -e "${RED}ERROR: Could not extract SERVER_PUBKEY from .test_keys${NC}"
exit 1
fi
echo -e "${GREEN}Test mode: Will use keys from .test_keys${NC}"
echo -e "${GREEN}Fresh test database will be created as: db/${TEST_PUBKEY}.db${NC}"
else
# Production mode: databases were cleaned, will generate new keypair
echo -e "${YELLOW}Production mode: Fresh start with new keypair${NC}"
echo -e "${YELLOW}New database will be created as db/<new_pubkey>.db${NC}"
fi
# Step 4: Start FastCGI
echo -e "\n${YELLOW}4. Starting FastCGI application...${NC}"
echo "Socket: $SOCKET_PATH"
@@ -166,24 +236,47 @@ fi
echo "Setting GINX_DEBUG environment for pubkey extraction diagnostics"
export GINX_DEBUG=1
# Start FastCGI application with proper logging (daemonized but with redirected streams)
echo "FastCGI starting at $(date)" >> logs/app/stderr.log
spawn-fcgi -s "$SOCKET_PATH" -M 666 -u "$USER" -g "$USER" -f "$FCGI_BINARY" -P "$PID_FILE" 1>>logs/app/stdout.log 2>>logs/app/stderr.log
# Build command line arguments based on mode
FCGI_ARGS="--storage-dir blobs"
if [ $TEST_MODE -eq 1 ]; then
FCGI_ARGS="$FCGI_ARGS --test-keys"
echo -e "${YELLOW}Starting FastCGI in TEST MODE with test keys${NC}"
else
# Production mode: databases were cleaned, will generate new keys
echo -e "${YELLOW}Starting FastCGI in production mode - will generate new keys and create database${NC}"
fi
if [ $? -eq 0 ] && [ -f "$PID_FILE" ]; then
PID=$(cat "$PID_FILE")
# Start FastCGI application with proper logging
echo "FastCGI starting at $(date)" >> logs/app/stderr.log
# Use nohup with spawn-fcgi -n to keep process running with redirected output
# The key is: nohup prevents HUP signal, -n prevents daemonization (keeps stderr connected)
nohup spawn-fcgi -n -s "$SOCKET_PATH" -M 666 -u "$USER" -g "$USER" -- "$FCGI_BINARY" $FCGI_ARGS >>logs/app/stdout.log 2>>logs/app/stderr.log </dev/null &
SPAWN_PID=$!
# Wait for spawn-fcgi to spawn the child
sleep 1
# Get the actual FastCGI process PID (child of spawn-fcgi)
FCGI_PID=$(pgrep -f "ginxsom-fcgi.*--storage-dir" | head -1)
if [ -z "$FCGI_PID" ]; then
echo -e "${RED}Warning: Could not find FastCGI process${NC}"
FCGI_PID=$SPAWN_PID
fi
# Save PID
echo $FCGI_PID > "$PID_FILE"
# Give it a moment to start
sleep 1
if check_process "$FCGI_PID"; then
echo -e "${GREEN}FastCGI application started successfully${NC}"
echo "PID: $PID"
# Verify it's actually running
if check_process "$PID"; then
echo -e "${GREEN}Process confirmed running${NC}"
else
echo -e "${RED}Warning: Process may have crashed immediately${NC}"
exit 1
fi
echo "PID: $FCGI_PID"
echo -e "${GREEN}Process confirmed running${NC}"
else
echo -e "${RED}Failed to start FastCGI application${NC}"
echo -e "${RED}Process may have crashed immediately${NC}"
exit 1
fi
@@ -250,6 +343,12 @@ else
fi
echo -e "\n${GREEN}=== Restart sequence complete ===${NC}"
echo -e "${YELLOW}Server should be available at: http://localhost:9001${NC}"
echo -e "${YELLOW}To stop all processes, run: nginx -p . -c $NGINX_CONFIG -s stop && kill \$(cat $PID_FILE 2>/dev/null)${NC}"
echo -e "${YELLOW}To monitor logs, check: logs/error.log, logs/access.log, and logs/fcgi-stderr.log${NC}"
echo -e "${YELLOW}To monitor logs, check: logs/nginx/error.log, logs/nginx/access.log, logs/app/stderr.log, logs/app/stdout.log${NC}"
echo -e "\n${YELLOW}Server is available at:${NC}"
echo -e " ${GREEN}HTTP:${NC} http://localhost:9001"
echo -e " ${GREEN}HTTPS:${NC} https://localhost:9443"
echo -e "\n${YELLOW}Admin WebSocket endpoint:${NC}"
echo -e " ${GREEN}WSS:${NC} wss://localhost:9443/admin (via nginx proxy)"
echo -e " ${GREEN}WS:${NC} ws://localhost:9001/admin (via nginx proxy)"
echo -e " ${GREEN}Direct:${NC} ws://localhost:9442 (direct connection)"

View File

@@ -11,8 +11,8 @@
#include <unistd.h>
#include "ginxsom.h"
// Database path (consistent with main.c)
#define DB_PATH "db/ginxsom.db"
// Use global database path from main.c
extern char g_db_path[];
// Function declarations (moved from admin_api.h)
void handle_admin_api_request(const char* method, const char* uri, const char* validated_pubkey, int is_authenticated);
@@ -44,7 +44,7 @@ static int admin_nip94_get_origin(char* out, size_t out_size) {
sqlite3_stmt* stmt;
int rc;
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc) {
// Default on DB error
strncpy(out, "http://localhost:9001", out_size - 1);
@@ -130,8 +130,12 @@ void handle_admin_api_request(const char* method, const char* uri, const char* v
}
// Authentication now handled by centralized validation system
// Health endpoint is exempt from authentication requirement
if (strcmp(path, "/health") != 0) {
// Health endpoint and POST /admin (Kind 23456 events) are exempt from authentication requirement
// Kind 23456 events authenticate themselves via signed event validation
int skip_auth = (strcmp(path, "/health") == 0) ||
(strcmp(method, "POST") == 0 && strcmp(path, "/admin") == 0);
if (!skip_auth) {
if (!is_authenticated || !validated_pubkey) {
send_json_error(401, "admin_auth_required", "Valid admin authentication required");
return;
@@ -157,6 +161,13 @@ void handle_admin_api_request(const char* method, const char* uri, const char* v
} else {
send_json_error(404, "not_found", "API endpoint not found");
}
} else if (strcmp(method, "POST") == 0) {
if (strcmp(path, "/admin") == 0) {
// Handle Kind 23456/23457 admin event commands
handle_admin_event_request();
} else {
send_json_error(404, "not_found", "API endpoint not found");
}
} else if (strcmp(method, "PUT") == 0) {
if (strcmp(path, "/config") == 0) {
handle_config_put_api();
@@ -201,7 +212,7 @@ int verify_admin_pubkey(const char* event_pubkey) {
sqlite3_stmt* stmt;
int rc, is_admin = 0;
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc) {
return 0;
}
@@ -228,7 +239,7 @@ int is_admin_enabled(void) {
sqlite3_stmt* stmt;
int rc, enabled = 0;
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc) {
return 0; // Default disabled if can't access DB
}
@@ -254,7 +265,7 @@ void handle_stats_api(void) {
sqlite3_stmt* stmt;
int rc;
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc) {
send_json_error(500, "database_error", "Failed to open database");
return;
@@ -349,7 +360,7 @@ void handle_config_get_api(void) {
sqlite3_stmt* stmt;
int rc;
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc) {
send_json_error(500, "database_error", "Failed to open database");
return;
@@ -423,7 +434,7 @@ void handle_config_put_api(void) {
sqlite3_stmt* stmt;
int rc;
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READWRITE, NULL);
rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READWRITE, NULL);
if (rc) {
free(json_body);
cJSON_Delete(config_data);
@@ -541,7 +552,7 @@ void handle_config_key_put_api(const char* key) {
sqlite3_stmt* stmt;
int rc;
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READWRITE, NULL);
rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READWRITE, NULL);
if (rc) {
free(json_body);
cJSON_Delete(request_data);
@@ -621,7 +632,7 @@ void handle_files_api(void) {
sqlite3_stmt* stmt;
int rc;
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc) {
send_json_error(500, "database_error", "Failed to open database");
return;
@@ -715,7 +726,7 @@ void handle_health_api(void) {
// Check database connection
sqlite3* db;
int rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
int rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc == SQLITE_OK) {
cJSON_AddStringToObject(data, "database", "connected");
sqlite3_close(db);

509
src/admin_auth.c Normal file
View File

@@ -0,0 +1,509 @@
/*
* Ginxsom Admin Authentication Module
* Handles Kind 23456/23457 admin events with NIP-44 encryption
* Based on c-relay's dm_admin.c implementation
*/
#include "ginxsom.h"
#include "../nostr_core_lib/nostr_core/nostr_common.h"
#include "../nostr_core_lib/nostr_core/nip001.h"
#include "../nostr_core_lib/nostr_core/nip044.h"
#include "../nostr_core_lib/nostr_core/utils.h"
#include <cjson/cJSON.h>
#include <sqlite3.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
// Forward declarations
int get_blossom_private_key(char *seckey_out, size_t max_len);
int validate_admin_pubkey(const char *pubkey);
// Global variables for admin auth
static char g_blossom_seckey[65] = ""; // Cached blossom server private key
static int g_keys_loaded = 0; // Whether keys have been loaded
// Load blossom server keys if not already loaded
static int ensure_keys_loaded(void) {
if (!g_keys_loaded) {
if (get_blossom_private_key(g_blossom_seckey, sizeof(g_blossom_seckey)) != 0) {
fprintf(stderr, "ERROR: Cannot load blossom private key for admin auth\n");
return -1;
}
g_keys_loaded = 1;
}
return 0;
}
// Validate that an event is a Kind 23456 admin command event
int is_admin_command_event(cJSON *event, const char *relay_pubkey) {
if (!event || !relay_pubkey) {
return 0;
}
// Check kind = 23456 (admin command)
cJSON *kind = cJSON_GetObjectItem(event, "kind");
if (!cJSON_IsNumber(kind) || kind->valueint != 23456) {
return 0;
}
// Check tags for 'p' tag with relay pubkey
cJSON *tags = cJSON_GetObjectItem(event, "tags");
if (!cJSON_IsArray(tags)) {
return 0;
}
int found_p_tag = 0;
cJSON *tag = NULL;
cJSON_ArrayForEach(tag, tags) {
if (cJSON_IsArray(tag) && cJSON_GetArraySize(tag) >= 2) {
cJSON *tag_name = cJSON_GetArrayItem(tag, 0);
cJSON *tag_value = cJSON_GetArrayItem(tag, 1);
if (cJSON_IsString(tag_name) && strcmp(tag_name->valuestring, "p") == 0 &&
cJSON_IsString(tag_value) && strcmp(tag_value->valuestring, relay_pubkey) == 0) {
found_p_tag = 1;
break;
}
}
}
return found_p_tag;
}
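
As a rough sketch, the minimal cJSON shape this check accepts is a `kind: 23456` event whose tags contain a `p` entry matching the relay pubkey; the placeholder pubkey below is hypothetical, and a real event would also carry `id`, `pubkey`, `content`, `created_at`, and `sig`:

```
// Sketch: build the minimal structure is_admin_command_event() accepts.
// "<relay-pubkey-hex>" is a hypothetical placeholder, not a real key.
cJSON *event = cJSON_CreateObject();
cJSON_AddNumberToObject(event, "kind", 23456);
cJSON *tags = cJSON_CreateArray();
cJSON *p_tag = cJSON_CreateArray();
cJSON_AddItemToArray(p_tag, cJSON_CreateString("p"));
cJSON_AddItemToArray(p_tag, cJSON_CreateString("<relay-pubkey-hex>"));
cJSON_AddItemToArray(tags, p_tag);
cJSON_AddItemToObject(event, "tags", tags);
// is_admin_command_event(event, "<relay-pubkey-hex>") now returns 1
cJSON_Delete(event);
```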
// Validate admin event signature and pubkey
int validate_admin_event(cJSON *event) {
if (!event) {
return 0;
}
// Get event fields
cJSON *pubkey = cJSON_GetObjectItem(event, "pubkey");
cJSON *sig = cJSON_GetObjectItem(event, "sig");
if (!cJSON_IsString(pubkey) || !cJSON_IsString(sig)) {
fprintf(stderr, "AUTH: Invalid event format - missing pubkey or sig\n");
return 0;
}
// Check if pubkey matches configured admin pubkey
if (!validate_admin_pubkey(pubkey->valuestring)) {
fprintf(stderr, "AUTH: Pubkey %s is not authorized admin\n", pubkey->valuestring);
return 0;
}
// TODO: Validate event signature using nostr_core_lib
// For now, assume signature is valid if pubkey matches
// In production, this should verify the signature cryptographically
return 1;
}
// Decrypt NIP-44 encrypted admin command
int decrypt_admin_command(cJSON *event, char **decrypted_command_out) {
if (!event || !decrypted_command_out) {
return -1;
}
// Ensure we have the relay private key
if (ensure_keys_loaded() != 0) {
return -1;
}
// Get admin pubkey from event
cJSON *admin_pubkey_json = cJSON_GetObjectItem(event, "pubkey");
if (!cJSON_IsString(admin_pubkey_json)) {
fprintf(stderr, "AUTH: Missing or invalid pubkey in event\n");
return -1;
}
// Get encrypted content
cJSON *content = cJSON_GetObjectItem(event, "content");
if (!cJSON_IsString(content)) {
fprintf(stderr, "AUTH: Missing or invalid content in event\n");
return -1;
}
// Convert hex keys to bytes
unsigned char blossom_private_key[32];
unsigned char admin_public_key[32];
if (nostr_hex_to_bytes(g_blossom_seckey, blossom_private_key, 32) != 0) {
fprintf(stderr, "AUTH: Failed to parse blossom private key\n");
return -1;
}
if (nostr_hex_to_bytes(admin_pubkey_json->valuestring, admin_public_key, 32) != 0) {
fprintf(stderr, "AUTH: Failed to parse admin public key\n");
return -1;
}
// Allocate buffer for decrypted content
char decrypted_buffer[8192];
// Decrypt using NIP-44
int result = nostr_nip44_decrypt(
blossom_private_key,
admin_public_key,
content->valuestring,
decrypted_buffer,
sizeof(decrypted_buffer)
);
if (result != NOSTR_SUCCESS) {
fprintf(stderr, "AUTH: NIP-44 decryption failed with error code %d\n", result);
return -1;
}
// Allocate and copy decrypted content
*decrypted_command_out = malloc(strlen(decrypted_buffer) + 1);
if (!*decrypted_command_out) {
fprintf(stderr, "AUTH: Failed to allocate memory for decrypted content\n");
return -1;
}
strcpy(*decrypted_command_out, decrypted_buffer);
return 0;
}
// Parse decrypted command array
int parse_admin_command(const char *decrypted_content, char ***command_array_out, int *command_count_out) {
if (!decrypted_content || !command_array_out || !command_count_out) {
return -1;
}
// Parse the decrypted content as JSON array
cJSON *content_json = cJSON_Parse(decrypted_content);
if (!content_json) {
fprintf(stderr, "AUTH: Failed to parse decrypted content as JSON\n");
return -1;
}
if (!cJSON_IsArray(content_json)) {
fprintf(stderr, "AUTH: Decrypted content is not a JSON array\n");
cJSON_Delete(content_json);
return -1;
}
int array_size = cJSON_GetArraySize(content_json);
if (array_size < 1) {
fprintf(stderr, "AUTH: Command array is empty\n");
cJSON_Delete(content_json);
return -1;
}
// Allocate command array
char **command_array = malloc(array_size * sizeof(char *));
if (!command_array) {
fprintf(stderr, "AUTH: Failed to allocate command array\n");
cJSON_Delete(content_json);
return -1;
}
// Parse each array element as string
for (int i = 0; i < array_size; i++) {
cJSON *item = cJSON_GetArrayItem(content_json, i);
if (!cJSON_IsString(item)) {
fprintf(stderr, "AUTH: Command array element %d is not a string\n", i);
// Clean up allocated strings
for (int j = 0; j < i; j++) {
free(command_array[j]);
}
free(command_array);
cJSON_Delete(content_json);
return -1;
}
command_array[i] = malloc(strlen(item->valuestring) + 1);
if (!command_array[i]) {
fprintf(stderr, "AUTH: Failed to allocate command string\n");
// Clean up allocated strings
for (int j = 0; j < i; j++) {
free(command_array[j]);
}
free(command_array);
cJSON_Delete(content_json);
return -1;
}
strcpy(command_array[i], item->valuestring);
}
cJSON_Delete(content_json);
*command_array_out = command_array;
*command_count_out = array_size;
return 0;
}
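
For reference, the decrypted payload this parser expects is a JSON array of strings. A minimal usage sketch (the key and value are hypothetical), paired with the free_command_array() helper defined below:

```
// Sketch: parse a hypothetical decrypted command, then release it.
char **argv = NULL;
int argc = 0;
const char *payload = "[\"config_set\", \"cdn_origin\", \"https://cdn.example.com\"]";
if (parse_admin_command(payload, &argv, &argc) == 0) {
    // argv[0] == "config_set", argv[1] == key, argv[2] == value
    free_command_array(argv, argc);
}
```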
// Process incoming admin command event (Kind 23456)
int process_admin_command(cJSON *event, char ***command_array_out, int *command_count_out, char **admin_pubkey_out) {
if (!event || !command_array_out || !command_count_out || !admin_pubkey_out) {
return -1;
}
// Get blossom server pubkey from config
sqlite3 *db;
sqlite3_stmt *stmt;
char blossom_pubkey[65] = "";
if (sqlite3_open_v2("db/ginxsom.db", &db, SQLITE_OPEN_READONLY, NULL) != SQLITE_OK) {
return -1;
}
const char *sql = "SELECT value FROM config WHERE key = 'blossom_pubkey'";
if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char *pubkey = (const char *)sqlite3_column_text(stmt, 0);
if (pubkey) {
strncpy(blossom_pubkey, pubkey, sizeof(blossom_pubkey) - 1);
}
}
sqlite3_finalize(stmt);
}
sqlite3_close(db);
if (strlen(blossom_pubkey) != 64) {
fprintf(stderr, "ERROR: Cannot determine blossom pubkey for admin auth\n");
return -1;
}
// Check if it's a valid admin command event for us
if (!is_admin_command_event(event, blossom_pubkey)) {
return -1;
}
// Validate admin authentication (signature and pubkey)
if (!validate_admin_event(event)) {
return -1;
}
// Get admin pubkey from event
cJSON *admin_pubkey_json = cJSON_GetObjectItem(event, "pubkey");
if (!cJSON_IsString(admin_pubkey_json)) {
return -1;
}
*admin_pubkey_out = malloc(strlen(admin_pubkey_json->valuestring) + 1);
if (!*admin_pubkey_out) {
fprintf(stderr, "AUTH: Failed to allocate admin pubkey string\n");
return -1;
}
strcpy(*admin_pubkey_out, admin_pubkey_json->valuestring);
// Decrypt the command
char *decrypted_content = NULL;
if (decrypt_admin_command(event, &decrypted_content) != 0) {
free(*admin_pubkey_out);
*admin_pubkey_out = NULL;
return -1;
}
// Parse the command array
if (parse_admin_command(decrypted_content, command_array_out, command_count_out) != 0) {
free(decrypted_content);
free(*admin_pubkey_out);
*admin_pubkey_out = NULL;
return -1;
}
free(decrypted_content);
return 0;
}
// Validate admin pubkey against configured admin
int validate_admin_pubkey(const char *pubkey) {
if (!pubkey || strlen(pubkey) != 64) {
return 0;
}
sqlite3 *db;
sqlite3_stmt *stmt;
int result = 0;
if (sqlite3_open_v2("db/ginxsom.db", &db, SQLITE_OPEN_READONLY, NULL) != SQLITE_OK) {
return 0;
}
const char *sql = "SELECT value FROM config WHERE key = 'admin_pubkey'";
if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char *admin_pubkey = (const char *)sqlite3_column_text(stmt, 0);
if (admin_pubkey && strcmp(admin_pubkey, pubkey) == 0) {
result = 1;
}
}
sqlite3_finalize(stmt);
}
sqlite3_close(db);
return result;
}
// Create encrypted response for admin (Kind 23457)
int create_admin_response(const char *response_json, const char *admin_pubkey, const char *original_event_id __attribute__((unused)), cJSON **response_event_out) {
if (!response_json || !admin_pubkey || !response_event_out) {
return -1;
}
// Ensure we have the relay private key
if (ensure_keys_loaded() != 0) {
return -1;
}
// Get blossom server pubkey from config
sqlite3 *db;
sqlite3_stmt *stmt;
char blossom_pubkey[65] = "";
if (sqlite3_open_v2("db/ginxsom.db", &db, SQLITE_OPEN_READONLY, NULL) != SQLITE_OK) {
return -1;
}
const char *sql = "SELECT value FROM config WHERE key = 'blossom_pubkey'";
if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char *pubkey = (const char *)sqlite3_column_text(stmt, 0);
if (pubkey) {
strncpy(blossom_pubkey, pubkey, sizeof(blossom_pubkey) - 1);
}
}
sqlite3_finalize(stmt);
}
sqlite3_close(db);
if (strlen(blossom_pubkey) != 64) {
fprintf(stderr, "ERROR: Cannot determine blossom pubkey for response\n");
return -1;
}
// Convert hex keys to bytes
unsigned char blossom_private_key[32];
unsigned char admin_public_key[32];
if (nostr_hex_to_bytes(g_blossom_seckey, blossom_private_key, 32) != 0) {
fprintf(stderr, "AUTH: Failed to parse blossom private key\n");
return -1;
}
if (nostr_hex_to_bytes(admin_pubkey, admin_public_key, 32) != 0) {
fprintf(stderr, "AUTH: Failed to parse admin public key\n");
return -1;
}
// Encrypt response using NIP-44
char encrypted_content[8192];
int result = nostr_nip44_encrypt(
blossom_private_key,
admin_public_key,
response_json,
encrypted_content,
sizeof(encrypted_content)
);
if (result != NOSTR_SUCCESS) {
fprintf(stderr, "AUTH: NIP-44 encryption failed with error code %d\n", result);
return -1;
}
// Create Kind 23457 response event
cJSON *response_event = cJSON_CreateObject();
if (!response_event) {
fprintf(stderr, "AUTH: Failed to create response event JSON\n");
return -1;
}
// Set event fields
cJSON_AddNumberToObject(response_event, "kind", 23457);
cJSON_AddStringToObject(response_event, "pubkey", blossom_pubkey);
cJSON_AddNumberToObject(response_event, "created_at", (double)time(NULL));
cJSON_AddStringToObject(response_event, "content", encrypted_content);
// Add tags array with 'p' tag for admin
cJSON *tags = cJSON_CreateArray();
cJSON *p_tag = cJSON_CreateArray();
cJSON_AddItemToArray(p_tag, cJSON_CreateString("p"));
cJSON_AddItemToArray(p_tag, cJSON_CreateString(admin_pubkey));
cJSON_AddItemToArray(tags, p_tag);
cJSON_AddItemToObject(response_event, "tags", tags);
// Sign the event with the blossom private key (already decoded above)
// Sign the event using nostr_core_lib
cJSON* signed_event = nostr_create_and_sign_event(
23457, // Kind 23457 (admin response)
encrypted_content, // content
cJSON_GetObjectItem(response_event, "tags"), // tags
blossom_private_key, // private key (decoded earlier in this function)
(time_t)cJSON_GetNumberValue(cJSON_GetObjectItem(response_event, "created_at")) // timestamp
);
if (!signed_event) {
fprintf(stderr, "AUTH: Failed to sign admin response event\n");
cJSON_Delete(response_event);
return -1;
}
// Extract id and signature from signed event
cJSON* signed_id = cJSON_GetObjectItem(signed_event, "id");
cJSON* signed_sig = cJSON_GetObjectItem(signed_event, "sig");
if (signed_id && signed_sig) {
cJSON_AddStringToObject(response_event, "id", cJSON_GetStringValue(signed_id));
cJSON_AddStringToObject(response_event, "sig", cJSON_GetStringValue(signed_sig));
} else {
fprintf(stderr, "AUTH: Signed event missing id or sig\n");
cJSON_Delete(response_event);
cJSON_Delete(signed_event);
return -1;
}
// Clean up the signed duplicate now that id and sig are copied
cJSON_Delete(signed_event);
*response_event_out = response_event;
return 0;
}
// Free command array allocated by parse_admin_command
void free_command_array(char **command_array, int command_count) {
if (command_array) {
for (int i = 0; i < command_count; i++) {
if (command_array[i]) {
free(command_array[i]);
}
}
free(command_array);
}
}

471
src/admin_event.c Normal file
View File

@@ -0,0 +1,471 @@
// Admin event handler for Kind 23456/23457 admin commands
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "ginxsom.h"
// Forward declarations for nostr_core_lib functions
int nostr_hex_to_bytes(const char* hex, unsigned char* bytes, size_t bytes_len);
int nostr_nip44_decrypt(const unsigned char* recipient_private_key,
const unsigned char* sender_public_key,
const char* encrypted_data,
char* output,
size_t output_size);
int nostr_nip44_encrypt(const unsigned char* sender_private_key,
const unsigned char* recipient_public_key,
const char* plaintext,
char* output,
size_t output_size);
cJSON* nostr_create_and_sign_event(int kind, const char* content, cJSON* tags,
const unsigned char* private_key, time_t created_at);
// Use global database path from main.c
extern char g_db_path[];
// Forward declarations
static int get_server_privkey(unsigned char* privkey_bytes);
static int get_server_pubkey(char* pubkey_hex, size_t size);
static int handle_config_query_command(cJSON* response_data);
static int send_admin_response_event(const char* admin_pubkey, const char* request_id,
cJSON* response_data);
/**
* Handle Kind 23456 admin command event
* Expects POST to /api/admin with JSON body containing the event
*/
void handle_admin_event_request(void) {
// Read request body
const char* content_length_str = getenv("CONTENT_LENGTH");
if (!content_length_str) {
printf("Status: 411 Length Required\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Content-Length header required\"}\n");
return;
}
long content_length = atol(content_length_str);
if (content_length <= 0 || content_length > 65536) {
printf("Status: 400 Bad Request\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Invalid content length\"}\n");
return;
}
char* json_body = malloc(content_length + 1);
if (!json_body) {
printf("Status: 500 Internal Server Error\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Memory allocation failed\"}\n");
return;
}
size_t bytes_read = fread(json_body, 1, content_length, stdin);
if (bytes_read != (size_t)content_length) {
free(json_body);
printf("Status: 400 Bad Request\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Failed to read complete request body\"}\n");
return;
}
json_body[content_length] = '\0';
// Parse event JSON
cJSON* event = cJSON_Parse(json_body);
free(json_body);
if (!event) {
printf("Status: 400 Bad Request\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Invalid JSON\"}\n");
return;
}
// Verify it's Kind 23456
cJSON* kind_obj = cJSON_GetObjectItem(event, "kind");
if (!kind_obj || !cJSON_IsNumber(kind_obj) ||
(int)cJSON_GetNumberValue(kind_obj) != 23456) {
cJSON_Delete(event);
printf("Status: 400 Bad Request\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Event must be Kind 23456\"}\n");
return;
}
// Get event ID for response correlation
cJSON* id_obj = cJSON_GetObjectItem(event, "id");
if (!id_obj || !cJSON_IsString(id_obj)) {
cJSON_Delete(event);
printf("Status: 400 Bad Request\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Event missing id\"}\n");
return;
}
const char* request_id = cJSON_GetStringValue(id_obj);
// Get admin pubkey from event
cJSON* pubkey_obj = cJSON_GetObjectItem(event, "pubkey");
if (!pubkey_obj || !cJSON_IsString(pubkey_obj)) {
cJSON_Delete(event);
printf("Status: 400 Bad Request\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Event missing pubkey\"}\n");
return;
}
const char* admin_pubkey = cJSON_GetStringValue(pubkey_obj);
// Verify admin pubkey
sqlite3* db;
int rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc != SQLITE_OK) {
cJSON_Delete(event);
printf("Status: 500 Internal Server Error\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Database error\"}\n");
return;
}
sqlite3_stmt* stmt;
const char* sql = "SELECT value FROM config WHERE key = 'admin_pubkey'";
int is_admin = 0;
if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char* db_admin_pubkey = (const char*)sqlite3_column_text(stmt, 0);
if (db_admin_pubkey && strcmp(admin_pubkey, db_admin_pubkey) == 0) {
is_admin = 1;
}
}
sqlite3_finalize(stmt);
}
sqlite3_close(db);
if (!is_admin) {
cJSON_Delete(event);
printf("Status: 403 Forbidden\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Not authorized as admin\"}\n");
return;
}
// Get encrypted content
cJSON* content_obj = cJSON_GetObjectItem(event, "content");
if (!content_obj || !cJSON_IsString(content_obj)) {
cJSON_Delete(event);
printf("Status: 400 Bad Request\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Event missing content\"}\n");
return;
}
const char* encrypted_content = cJSON_GetStringValue(content_obj);
// Get server private key for decryption
unsigned char server_privkey[32];
if (get_server_privkey(server_privkey) != 0) {
cJSON_Delete(event);
printf("Status: 500 Internal Server Error\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Failed to get server private key\"}\n");
return;
}
// Convert admin pubkey to bytes
unsigned char admin_pubkey_bytes[32];
if (nostr_hex_to_bytes(admin_pubkey, admin_pubkey_bytes, 32) != 0) {
cJSON_Delete(event);
printf("Status: 400 Bad Request\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Invalid admin pubkey format\"}\n");
return;
}
// Decrypt content using NIP-44 (or use plaintext for testing)
char decrypted_content[8192];
const char* content_to_parse = encrypted_content;
// Check if content is already plaintext JSON (starts with '[')
if (encrypted_content[0] != '[') {
// Content is encrypted, decrypt it
int decrypt_result = nostr_nip44_decrypt(
server_privkey,
admin_pubkey_bytes,
encrypted_content,
decrypted_content,
sizeof(decrypted_content)
);
if (decrypt_result != 0) {
cJSON_Delete(event);
printf("Status: 400 Bad Request\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Failed to decrypt content\"}\n");
return;
}
content_to_parse = decrypted_content;
}
// Parse command array (either decrypted or plaintext)
cJSON* command_array = cJSON_Parse(content_to_parse);
if (!command_array || !cJSON_IsArray(command_array)) {
cJSON_Delete(event);
printf("Status: 400 Bad Request\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Decrypted content is not a valid command array\"}\n");
return;
}
// Get command type
cJSON* command_type = cJSON_GetArrayItem(command_array, 0);
if (!command_type || !cJSON_IsString(command_type)) {
cJSON_Delete(command_array);
cJSON_Delete(event);
printf("Status: 400 Bad Request\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Invalid command format\"}\n");
return;
}
const char* cmd = cJSON_GetStringValue(command_type);
// Create response data object
cJSON* response_data = cJSON_CreateObject();
cJSON_AddStringToObject(response_data, "query_type", cmd);
cJSON_AddNumberToObject(response_data, "timestamp", (double)time(NULL));
// Handle command
int result = -1;
if (strcmp(cmd, "config_query") == 0) {
result = handle_config_query_command(response_data);
} else {
cJSON_AddStringToObject(response_data, "status", "error");
cJSON_AddStringToObject(response_data, "error", "Unknown command");
}
cJSON_Delete(command_array);
cJSON_Delete(event);
if (result == 0) {
// Send Kind 23457 response
send_admin_response_event(admin_pubkey, request_id, response_data);
} else {
cJSON_Delete(response_data);
printf("Status: 500 Internal Server Error\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Command processing failed\"}\n");
}
}
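
Since the leading-'[' check above lets plaintext command arrays through for testing, the endpoint can be exercised without NIP-44. A minimal libcurl sketch, assuming the /api/admin route and the development URL printed by the restart script; the event JSON itself is a placeholder whose pubkey must match the configured admin_pubkey:

```
// Sketch: POST a pre-built Kind 23456 event to the dev server (assumes libcurl).
#include <curl/curl.h>

int post_admin_event(const char *event_json) {
    CURL *curl = curl_easy_init();
    if (!curl) return -1;
    struct curl_slist *hdrs = curl_slist_append(NULL, "Content-Type: application/json");
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:9001/api/admin");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, event_json);
    CURLcode rc = curl_easy_perform(curl);
    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : -1;
}
```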
/**
* Get server private key from database (stored in blossom_seckey table)
*/
static int get_server_privkey(unsigned char* privkey_bytes) {
sqlite3* db;
int rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc != SQLITE_OK) {
return -1;
}
sqlite3_stmt* stmt;
const char* sql = "SELECT seckey FROM blossom_seckey LIMIT 1";
int result = -1;
if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char* privkey_hex = (const char*)sqlite3_column_text(stmt, 0);
if (privkey_hex && nostr_hex_to_bytes(privkey_hex, privkey_bytes, 32) == 0) {
result = 0;
}
}
sqlite3_finalize(stmt);
}
sqlite3_close(db);
return result;
}
/**
* Get server public key from database (stored in config table as blossom_pubkey)
*/
static int get_server_pubkey(char* pubkey_hex, size_t size) {
sqlite3* db;
int rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc != SQLITE_OK) {
return -1;
}
sqlite3_stmt* stmt;
const char* sql = "SELECT value FROM config WHERE key = 'blossom_pubkey'";
int result = -1;
if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char* pubkey = (const char*)sqlite3_column_text(stmt, 0);
if (pubkey) {
strncpy(pubkey_hex, pubkey, size - 1);
pubkey_hex[size - 1] = '\0';
result = 0;
}
}
sqlite3_finalize(stmt);
}
sqlite3_close(db);
return result;
}
/**
* Handle config_query command - returns all config values
*/
static int handle_config_query_command(cJSON* response_data) {
sqlite3* db;
int rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc != SQLITE_OK) {
cJSON_AddStringToObject(response_data, "status", "error");
cJSON_AddStringToObject(response_data, "error", "Database error");
return -1;
}
cJSON_AddStringToObject(response_data, "status", "success");
cJSON* data = cJSON_CreateObject();
// Query all config settings
sqlite3_stmt* stmt;
const char* sql = "SELECT key, value FROM config ORDER BY key";
if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
while (sqlite3_step(stmt) == SQLITE_ROW) {
const char* key = (const char*)sqlite3_column_text(stmt, 0);
const char* value = (const char*)sqlite3_column_text(stmt, 1);
if (key && value) {
cJSON_AddStringToObject(data, key, value);
}
}
sqlite3_finalize(stmt);
}
cJSON_AddItemToObject(response_data, "data", data);
sqlite3_close(db);
return 0;
}
/**
* Send Kind 23457 admin response event
*/
static int send_admin_response_event(const char* admin_pubkey, const char* request_id,
cJSON* response_data) {
// Get server keys
unsigned char server_privkey[32];
char server_pubkey[65];
if (get_server_privkey(server_privkey) != 0 ||
get_server_pubkey(server_pubkey, sizeof(server_pubkey)) != 0) {
cJSON_Delete(response_data);
printf("Status: 500 Internal Server Error\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Failed to get server keys\"}\n");
return -1;
}
// Convert response data to JSON string
char* response_json = cJSON_PrintUnformatted(response_data);
cJSON_Delete(response_data);
if (!response_json) {
printf("Status: 500 Internal Server Error\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Failed to serialize response\"}\n");
return -1;
}
// Convert admin pubkey to bytes for encryption
unsigned char admin_pubkey_bytes[32];
if (nostr_hex_to_bytes(admin_pubkey, admin_pubkey_bytes, 32) != 0) {
free(response_json);
printf("Status: 500 Internal Server Error\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Invalid admin pubkey\"}\n");
return -1;
}
// Encrypt response using NIP-44
char encrypted_response[131072];
int encrypt_result = nostr_nip44_encrypt(
server_privkey,
admin_pubkey_bytes,
response_json,
encrypted_response,
sizeof(encrypted_response)
);
free(response_json);
if (encrypt_result != 0) {
printf("Status: 500 Internal Server Error\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Failed to encrypt response\"}\n");
return -1;
}
// Create Kind 23457 response event
cJSON* response_event = cJSON_CreateObject();
cJSON_AddStringToObject(response_event, "pubkey", server_pubkey);
cJSON_AddNumberToObject(response_event, "created_at", (double)time(NULL));
cJSON_AddNumberToObject(response_event, "kind", 23457);
cJSON_AddStringToObject(response_event, "content", encrypted_response);
// Add tags
cJSON* tags = cJSON_CreateArray();
// p tag for admin
cJSON* p_tag = cJSON_CreateArray();
cJSON_AddItemToArray(p_tag, cJSON_CreateString("p"));
cJSON_AddItemToArray(p_tag, cJSON_CreateString(admin_pubkey));
cJSON_AddItemToArray(tags, p_tag);
// e tag for request correlation
cJSON* e_tag = cJSON_CreateArray();
cJSON_AddItemToArray(e_tag, cJSON_CreateString("e"));
cJSON_AddItemToArray(e_tag, cJSON_CreateString(request_id));
cJSON_AddItemToArray(tags, e_tag);
cJSON_AddItemToObject(response_event, "tags", tags);
// Sign the event
cJSON* signed_event = nostr_create_and_sign_event(
23457,
encrypted_response,
tags,
server_privkey,
time(NULL)
);
cJSON_Delete(response_event);
if (!signed_event) {
printf("Status: 500 Internal Server Error\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Failed to sign response event\"}\n");
return -1;
}
// Return the signed event as HTTP response
char* event_json = cJSON_PrintUnformatted(signed_event);
cJSON_Delete(signed_event);
if (!event_json) {
printf("Status: 500 Internal Server Error\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\"error\":\"Failed to serialize event\"}\n");
return -1;
}
printf("Status: 200 OK\r\n");
printf("Content-Type: application/json\r\n");
printf("Cache-Control: no-cache\r\n");
printf("\r\n");
printf("%s\n", event_json);
free(event_json);
return 0;
}

216
src/admin_handlers.c Normal file
View File

@@ -0,0 +1,216 @@
/*
* Ginxsom Admin Command Handlers
* Implements execution of admin commands received via Kind 23456 events
*/
#include "ginxsom.h"
#include <cjson/cJSON.h>
#include <sqlite3.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/statvfs.h>
#include <dirent.h>
// Forward declarations
static cJSON* handle_blob_list(char **args, int arg_count);
static cJSON* handle_blob_info(char **args, int arg_count);
static cJSON* handle_blob_delete(char **args, int arg_count);
static cJSON* handle_storage_stats(char **args, int arg_count);
static cJSON* handle_config_get(char **args, int arg_count);
static cJSON* handle_config_set(char **args, int arg_count);
static cJSON* handle_help(char **args, int arg_count);
// Command dispatch table
typedef struct {
const char *command;
cJSON* (*handler)(char **args, int arg_count);
const char *description;
} admin_command_t;
static admin_command_t command_table[] = {
{"blob_list", handle_blob_list, "List all blobs"},
{"blob_info", handle_blob_info, "Get blob information"},
{"blob_delete", handle_blob_delete, "Delete a blob"},
{"storage_stats", handle_storage_stats, "Get storage statistics"},
{"config_get", handle_config_get, "Get configuration value"},
{"config_set", handle_config_set, "Set configuration value"},
{"help", handle_help, "Show available commands"},
{NULL, NULL, NULL}
};
// Execute admin command and return JSON response
int execute_admin_command(char **command_array, int command_count, char **response_json_out) {
if (!command_array || command_count < 1 || !response_json_out) {
return -1;
}
const char *command = command_array[0];
// Find command handler
admin_command_t *cmd = NULL;
for (int i = 0; command_table[i].command != NULL; i++) {
if (strcmp(command_table[i].command, command) == 0) {
cmd = &command_table[i];
break;
}
}
cJSON *response;
if (cmd) {
// Execute command handler
response = cmd->handler(command_array + 1, command_count - 1);
} else {
// Unknown command
response = cJSON_CreateObject();
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "message", "Unknown command");
cJSON_AddStringToObject(response, "command", command);
}
// Convert response to JSON string
char *json_str = cJSON_PrintUnformatted(response);
cJSON_Delete(response);
if (!json_str) {
return -1;
}
*response_json_out = json_str;
return 0;
}
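
A minimal dispatch sketch, assuming the command array was already produced by parse_admin_command() (or built by hand, as here):

```
// Sketch: run the built-in help command through the dispatch table.
char *cmd[] = { "help" };
char *response_json = NULL;
if (execute_admin_command(cmd, 1, &response_json) == 0) {
    printf("%s\n", response_json);  // {"status":"success","command":"help",...}
    free(response_json);
}
```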
// Command handlers
static cJSON* handle_blob_list(char **args __attribute__((unused)), int arg_count __attribute__((unused))) {
cJSON *response = cJSON_CreateObject();
cJSON_AddStringToObject(response, "status", "success");
cJSON_AddStringToObject(response, "command", "blob_list");
// TODO: Implement actual blob listing from database
cJSON *blobs = cJSON_CreateArray();
cJSON_AddItemToObject(response, "blobs", blobs);
cJSON_AddNumberToObject(response, "count", 0);
return response;
}
static cJSON* handle_blob_info(char **args, int arg_count) {
cJSON *response = cJSON_CreateObject();
if (arg_count < 1) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "message", "Missing blob hash argument");
return response;
}
cJSON_AddStringToObject(response, "status", "success");
cJSON_AddStringToObject(response, "command", "blob_info");
cJSON_AddStringToObject(response, "hash", args[0]);
// TODO: Implement actual blob info retrieval from database
cJSON_AddStringToObject(response, "message", "Not yet implemented");
return response;
}
static cJSON* handle_blob_delete(char **args, int arg_count) {
cJSON *response = cJSON_CreateObject();
if (arg_count < 1) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "message", "Missing blob hash argument");
return response;
}
cJSON_AddStringToObject(response, "status", "success");
cJSON_AddStringToObject(response, "command", "blob_delete");
cJSON_AddStringToObject(response, "hash", args[0]);
// TODO: Implement actual blob deletion
cJSON_AddStringToObject(response, "message", "Not yet implemented");
return response;
}
static cJSON* handle_storage_stats(char **args __attribute__((unused)), int arg_count __attribute__((unused))) {
cJSON *response = cJSON_CreateObject();
cJSON_AddStringToObject(response, "status", "success");
cJSON_AddStringToObject(response, "command", "storage_stats");
// Get filesystem stats
struct statvfs stat;
if (statvfs(".", &stat) == 0) {
unsigned long long total = stat.f_blocks * stat.f_frsize;
unsigned long long available = stat.f_bavail * stat.f_frsize;
unsigned long long used = total - available;
cJSON_AddNumberToObject(response, "total_bytes", (double)total);
cJSON_AddNumberToObject(response, "used_bytes", (double)used);
cJSON_AddNumberToObject(response, "available_bytes", (double)available);
}
// TODO: Add blob count and total blob size from database
cJSON_AddNumberToObject(response, "blob_count", 0);
cJSON_AddNumberToObject(response, "blob_total_bytes", 0);
return response;
}
static cJSON* handle_config_get(char **args, int arg_count) {
cJSON *response = cJSON_CreateObject();
if (arg_count < 1) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "message", "Missing config key argument");
return response;
}
cJSON_AddStringToObject(response, "status", "success");
cJSON_AddStringToObject(response, "command", "config_get");
cJSON_AddStringToObject(response, "key", args[0]);
// TODO: Implement actual config retrieval from database
cJSON_AddStringToObject(response, "value", "");
cJSON_AddStringToObject(response, "message", "Not yet implemented");
return response;
}
static cJSON* handle_config_set(char **args, int arg_count) {
cJSON *response = cJSON_CreateObject();
if (arg_count < 2) {
cJSON_AddStringToObject(response, "status", "error");
cJSON_AddStringToObject(response, "message", "Missing config key or value argument");
return response;
}
cJSON_AddStringToObject(response, "status", "success");
cJSON_AddStringToObject(response, "command", "config_set");
cJSON_AddStringToObject(response, "key", args[0]);
cJSON_AddStringToObject(response, "value", args[1]);
// TODO: Implement actual config update in database
cJSON_AddStringToObject(response, "message", "Not yet implemented");
return response;
}
static cJSON* handle_help(char **args __attribute__((unused)), int arg_count __attribute__((unused))) {
cJSON *response = cJSON_CreateObject();
cJSON_AddStringToObject(response, "status", "success");
cJSON_AddStringToObject(response, "command", "help");
cJSON *commands = cJSON_CreateArray();
for (int i = 0; command_table[i].command != NULL; i++) {
cJSON *cmd = cJSON_CreateObject();
cJSON_AddStringToObject(cmd, "command", command_table[i].command);
cJSON_AddStringToObject(cmd, "description", command_table[i].description);
cJSON_AddItemToArray(commands, cmd);
}
cJSON_AddItemToObject(response, "commands", commands);
return response;
}

541
src/admin_websocket.c Normal file
View File

@@ -0,0 +1,541 @@
/*
* Ginxsom Admin WebSocket Server
* Handles WebSocket connections for Kind 23456/23457 admin commands
* Based on c-relay's WebSocket implementation using libwebsockets
*/
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <cjson/cJSON.h>
#include <sqlite3.h>
#include <libwebsockets.h>
#include "ginxsom.h"
// Forward declarations from admin_event.c
extern char g_db_path[];
extern int nostr_hex_to_bytes(const char* hex, unsigned char* bytes, size_t bytes_len);
extern int nostr_nip44_decrypt(const unsigned char* recipient_private_key,
const unsigned char* sender_public_key,
const char* encrypted_data,
char* output,
size_t output_size);
extern int nostr_nip44_encrypt(const unsigned char* sender_private_key,
const unsigned char* recipient_public_key,
const char* plaintext,
char* output,
size_t output_size);
extern cJSON* nostr_create_and_sign_event(int kind, const char* content, cJSON* tags,
const unsigned char* private_key, time_t created_at);
// Per-session data for each WebSocket connection
struct per_session_data {
char admin_pubkey[65];
int authenticated;
unsigned char pending_response[LWS_PRE + 131072];
size_t pending_response_len;
};
// Global WebSocket context
static struct lws_context *ws_context = NULL;
static volatile int force_exit = 0;
// Function prototypes
static int get_server_privkey(unsigned char* privkey_bytes);
static int get_server_pubkey(char* pubkey_hex, size_t size);
static int handle_config_query_command(cJSON* response_data);
static int process_admin_event(struct lws *wsi, struct per_session_data *pss, const char *json_str);
/**
* WebSocket protocol callback
*/
static int callback_admin_protocol(struct lws *wsi, enum lws_callback_reasons reason,
void *user, void *in, size_t len) {
struct per_session_data *pss = (struct per_session_data *)user;
switch (reason) {
case LWS_CALLBACK_ESTABLISHED:
fprintf(stderr, "[WebSocket] New connection established\n");
fflush(stderr);
memset(pss, 0, sizeof(*pss));
pss->authenticated = 0;
break;
case LWS_CALLBACK_RECEIVE:
fprintf(stderr, "[WebSocket] Received %zu bytes\n", len);
fflush(stderr);
// Null-terminate the received data
char *json_str = malloc(len + 1);
if (!json_str) {
fprintf(stderr, "[WebSocket] Memory allocation failed\n");
fflush(stderr);
return -1;
}
memcpy(json_str, in, len);
json_str[len] = '\0';
// Process the admin event
int result = process_admin_event(wsi, pss, json_str);
free(json_str);
if (result == 0 && pss->pending_response_len > 0) {
// Request callback to send response
lws_callback_on_writable(wsi);
}
break;
case LWS_CALLBACK_SERVER_WRITEABLE:
if (pss->pending_response_len > 0) {
fprintf(stderr, "[WebSocket] Sending %zu bytes\n", pss->pending_response_len - LWS_PRE);
fflush(stderr);
int written = lws_write(wsi,
&pss->pending_response[LWS_PRE],
pss->pending_response_len - LWS_PRE,
LWS_WRITE_TEXT);
if (written < 0) {
fprintf(stderr, "[WebSocket] Write failed\n");
fflush(stderr);
return -1;
}
pss->pending_response_len = 0;
}
break;
case LWS_CALLBACK_CLOSED:
fprintf(stderr, "[WebSocket] Connection closed\n");
fflush(stderr);
break;
default:
break;
}
return 0;
}
/**
* WebSocket protocols
*/
static struct lws_protocols protocols[] = {
{
"nostr-admin",
callback_admin_protocol,
sizeof(struct per_session_data),
131072, // rx buffer size
0, NULL, 0
},
{ NULL, NULL, 0, 0, 0, NULL, 0 } // terminator
};
/**
* Process Kind 23456 admin event received via WebSocket
*/
static int process_admin_event(struct lws *wsi __attribute__((unused)), struct per_session_data *pss, const char *json_str) {
// Parse event JSON
cJSON *event = cJSON_Parse(json_str);
if (!event) {
fprintf(stderr, "[WebSocket] Invalid JSON\n");
fflush(stderr);
return -1;
}
// Verify it's Kind 23456
cJSON *kind_obj = cJSON_GetObjectItem(event, "kind");
if (!kind_obj || !cJSON_IsNumber(kind_obj) ||
(int)cJSON_GetNumberValue(kind_obj) != 23456) {
fprintf(stderr, "[WebSocket] Not a Kind 23456 event\n");
fflush(stderr);
cJSON_Delete(event);
return -1;
}
// Get event ID for response correlation
cJSON *id_obj = cJSON_GetObjectItem(event, "id");
if (!id_obj || !cJSON_IsString(id_obj)) {
fprintf(stderr, "[WebSocket] Event missing id\n");
fflush(stderr);
cJSON_Delete(event);
return -1;
}
const char *request_id = cJSON_GetStringValue(id_obj);
// Get admin pubkey from event
cJSON *pubkey_obj = cJSON_GetObjectItem(event, "pubkey");
if (!pubkey_obj || !cJSON_IsString(pubkey_obj)) {
fprintf(stderr, "[WebSocket] Event missing pubkey\n");
fflush(stderr);
cJSON_Delete(event);
return -1;
}
const char *admin_pubkey = cJSON_GetStringValue(pubkey_obj);
// Verify admin pubkey
if (!verify_admin_pubkey(admin_pubkey)) {
fprintf(stderr, "[WebSocket] Not authorized as admin: %s\n", admin_pubkey);
fflush(stderr);
cJSON_Delete(event);
return -1;
}
// Store admin pubkey in session
strncpy(pss->admin_pubkey, admin_pubkey, sizeof(pss->admin_pubkey) - 1);
pss->authenticated = 1;
// Get encrypted content
cJSON *content_obj = cJSON_GetObjectItem(event, "content");
if (!content_obj || !cJSON_IsString(content_obj)) {
fprintf(stderr, "[WebSocket] Event missing content\n");
fflush(stderr);
cJSON_Delete(event);
return -1;
}
const char *encrypted_content = cJSON_GetStringValue(content_obj);
// Get server private key for decryption
unsigned char server_privkey[32];
if (get_server_privkey(server_privkey) != 0) {
fprintf(stderr, "[WebSocket] Failed to get server private key\n");
fflush(stderr);
cJSON_Delete(event);
return -1;
}
// Convert admin pubkey to bytes
unsigned char admin_pubkey_bytes[32];
if (nostr_hex_to_bytes(admin_pubkey, admin_pubkey_bytes, 32) != 0) {
fprintf(stderr, "[WebSocket] Invalid admin pubkey format\n");
fflush(stderr);
cJSON_Delete(event);
return -1;
}
// Decrypt content using NIP-44
char decrypted_content[8192];
const char *content_to_parse = encrypted_content;
// Check if content is already plaintext JSON (starts with '[')
if (encrypted_content[0] != '[') {
int decrypt_result = nostr_nip44_decrypt(
server_privkey,
admin_pubkey_bytes,
encrypted_content,
decrypted_content,
sizeof(decrypted_content)
);
if (decrypt_result != 0) {
fprintf(stderr, "[WebSocket] Failed to decrypt content\n");
fflush(stderr);
cJSON_Delete(event);
return -1;
}
content_to_parse = decrypted_content;
}
// Parse command array
cJSON *command_array = cJSON_Parse(content_to_parse);
if (!command_array || !cJSON_IsArray(command_array)) {
fprintf(stderr, "[WebSocket] Decrypted content is not a valid command array\n");
fflush(stderr);
cJSON_Delete(event);
return -1;
}
// Get command type
cJSON *command_type = cJSON_GetArrayItem(command_array, 0);
if (!command_type || !cJSON_IsString(command_type)) {
fprintf(stderr, "[WebSocket] Invalid command format\n");
fflush(stderr);
cJSON_Delete(command_array);
cJSON_Delete(event);
return -1;
}
const char *cmd = cJSON_GetStringValue(command_type);
fprintf(stderr, "[WebSocket] Processing command: %s\n", cmd);
fflush(stderr);
// Create response data object
cJSON *response_data = cJSON_CreateObject();
cJSON_AddStringToObject(response_data, "query_type", cmd);
cJSON_AddNumberToObject(response_data, "timestamp", (double)time(NULL));
// Handle command
int result = -1;
if (strcmp(cmd, "config_query") == 0) {
result = handle_config_query_command(response_data);
} else {
cJSON_AddStringToObject(response_data, "status", "error");
cJSON_AddStringToObject(response_data, "error", "Unknown command");
}
cJSON_Delete(command_array);
cJSON_Delete(event);
if (result == 0) {
// Get server keys
char server_pubkey[65];
if (get_server_pubkey(server_pubkey, sizeof(server_pubkey)) != 0) {
fprintf(stderr, "[WebSocket] Failed to get server pubkey\n");
fflush(stderr);
cJSON_Delete(response_data);
return -1;
}
// Convert response data to JSON string
char *response_json = cJSON_PrintUnformatted(response_data);
cJSON_Delete(response_data);
if (!response_json) {
fprintf(stderr, "[WebSocket] Failed to serialize response\n");
fflush(stderr);
return -1;
}
// Encrypt response using NIP-44
char encrypted_response[131072];
int encrypt_result = nostr_nip44_encrypt(
server_privkey,
admin_pubkey_bytes,
response_json,
encrypted_response,
sizeof(encrypted_response)
);
free(response_json);
if (encrypt_result != 0) {
fprintf(stderr, "[WebSocket] Failed to encrypt response\n");
fflush(stderr);
return -1;
}
// Create Kind 23457 response event
cJSON *tags = cJSON_CreateArray();
// p tag for admin
cJSON *p_tag = cJSON_CreateArray();
cJSON_AddItemToArray(p_tag, cJSON_CreateString("p"));
cJSON_AddItemToArray(p_tag, cJSON_CreateString(admin_pubkey));
cJSON_AddItemToArray(tags, p_tag);
// e tag for request correlation
cJSON *e_tag = cJSON_CreateArray();
cJSON_AddItemToArray(e_tag, cJSON_CreateString("e"));
cJSON_AddItemToArray(e_tag, cJSON_CreateString(request_id));
cJSON_AddItemToArray(tags, e_tag);
// Sign the event
cJSON *signed_event = nostr_create_and_sign_event(
23457,
encrypted_response,
tags,
server_privkey,
time(NULL)
);
if (!signed_event) {
fprintf(stderr, "[WebSocket] Failed to sign response event\n");
fflush(stderr);
return -1;
}
// Serialize event to JSON
char *event_json = cJSON_PrintUnformatted(signed_event);
cJSON_Delete(signed_event);
if (!event_json) {
fprintf(stderr, "[WebSocket] Failed to serialize event\n");
fflush(stderr);
return -1;
}
// Store response in session for sending
size_t json_len = strlen(event_json);
if (json_len + LWS_PRE < sizeof(pss->pending_response)) {
memcpy(&pss->pending_response[LWS_PRE], event_json, json_len);
pss->pending_response_len = LWS_PRE + json_len;
fprintf(stderr, "[WebSocket] Response prepared (%zu bytes)\n", json_len);
fflush(stderr);
} else {
fprintf(stderr, "[WebSocket] Response too large\n");
fflush(stderr);
}
free(event_json);
return 0;
} else {
cJSON_Delete(response_data);
return -1;
}
}
/**
* Get server private key from database
*/
static int get_server_privkey(unsigned char* privkey_bytes) {
sqlite3 *db;
int rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc != SQLITE_OK) {
return -1;
}
sqlite3_stmt *stmt;
const char *sql = "SELECT seckey FROM blossom_seckey LIMIT 1";
int result = -1;
if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char *privkey_hex = (const char*)sqlite3_column_text(stmt, 0);
if (privkey_hex && nostr_hex_to_bytes(privkey_hex, privkey_bytes, 32) == 0) {
result = 0;
}
}
sqlite3_finalize(stmt);
}
sqlite3_close(db);
return result;
}
/**
* Get server public key from database
*/
static int get_server_pubkey(char* pubkey_hex, size_t size) {
sqlite3 *db;
int rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc != SQLITE_OK) {
return -1;
}
sqlite3_stmt *stmt;
const char *sql = "SELECT value FROM config WHERE key = 'blossom_pubkey'";
int result = -1;
if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char *pubkey = (const char*)sqlite3_column_text(stmt, 0);
if (pubkey) {
strncpy(pubkey_hex, pubkey, size - 1);
pubkey_hex[size - 1] = '\0';
result = 0;
}
}
sqlite3_finalize(stmt);
}
sqlite3_close(db);
return result;
}
/**
* Handle config_query command
*/
static int handle_config_query_command(cJSON* response_data) {
sqlite3 *db;
int rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc != SQLITE_OK) {
cJSON_AddStringToObject(response_data, "status", "error");
cJSON_AddStringToObject(response_data, "error", "Database error");
return -1;
}
cJSON_AddStringToObject(response_data, "status", "success");
cJSON *data = cJSON_CreateObject();
// Query all config settings
sqlite3_stmt *stmt;
const char *sql = "SELECT key, value FROM config ORDER BY key";
if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
while (sqlite3_step(stmt) == SQLITE_ROW) {
const char *key = (const char*)sqlite3_column_text(stmt, 0);
const char *value = (const char*)sqlite3_column_text(stmt, 1);
if (key && value) {
cJSON_AddStringToObject(data, key, value);
}
}
sqlite3_finalize(stmt);
}
cJSON_AddItemToObject(response_data, "data", data);
sqlite3_close(db);
return 0;
}
/**
* WebSocket server thread
*/
void* admin_websocket_thread(void* arg) {
int port = *(int*)arg;
struct lws_context_creation_info info;
memset(&info, 0, sizeof(info));
info.port = port;
info.iface = "127.0.0.1"; // Force IPv4 binding for localhost compatibility
info.protocols = protocols;
info.gid = -1;
info.uid = -1;
info.options = LWS_SERVER_OPTION_VALIDATE_UTF8 | LWS_SERVER_OPTION_DISABLE_IPV6;
fprintf(stderr, "[WebSocket] Starting admin WebSocket server on 127.0.0.1:%d (IPv4 only)\n", port);
fflush(stderr);
ws_context = lws_create_context(&info);
if (!ws_context) {
fprintf(stderr, "[WebSocket] Failed to create context\n");
fflush(stderr);
return NULL;
}
fprintf(stderr, "[WebSocket] Server started successfully\n");
fflush(stderr);
// Service loop
while (!force_exit) {
lws_service(ws_context, 50);
}
lws_context_destroy(ws_context);
fprintf(stderr, "[WebSocket] Server stopped\n");
fflush(stderr);
return NULL;
}
/**
* Start admin WebSocket server
*/
int start_admin_websocket_server(int port) {
static int server_port;
server_port = port;
pthread_t thread;
int result = pthread_create(&thread, NULL, admin_websocket_thread, &server_port);
if (result != 0) {
fprintf(stderr, "[WebSocket] Failed to create thread: %d\n", result);
fflush(stderr);
return -1;
}
pthread_detach(thread);
fprintf(stderr, "[WebSocket] Thread started\n");
fflush(stderr);
return 0;
}
/**
* Stop admin WebSocket server
*/
void stop_admin_websocket_server(void) {
force_exit = 1;
}
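
A sketch of how the main process might wire this in, using the direct port the restart script advertises (9442); error handling kept minimal:

```
// Sketch: start the admin WebSocket listener at startup, stop it on shutdown.
// Port 9442 matches the "Direct" endpoint printed by the restart script.
if (start_admin_websocket_server(9442) != 0) {
    fprintf(stderr, "Failed to start admin WebSocket server\n");
}
/* ... FastCGI request loop ... */
stop_admin_websocket_server();
```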

View File

@@ -426,9 +426,17 @@ void handle_mirror_request(void) {
// Determine file extension from Content-Type using centralized mapping
const char* extension = mime_to_extension(content_type_final);
// Save file to blobs directory
char filepath[512];
snprintf(filepath, sizeof(filepath), "blobs/%s%s", sha256_hex, extension);
// Save file to storage directory using global g_storage_dir variable
char filepath[4096];
int filepath_len = snprintf(filepath, sizeof(filepath), "%s/%s%s", g_storage_dir, sha256_hex, extension);
if (filepath_len >= (int)sizeof(filepath)) {
free_mirror_download(download);
send_error_response(500, "file_error",
"File path too long",
"Internal server error during file path construction");
log_request("PUT", "/mirror", uploader_pubkey ? "authenticated" : "anonymous", 500);
return;
}
FILE* outfile = fopen(filepath, "wb");
if (!outfile) {

View File

@@ -10,8 +10,8 @@
#include <stdint.h>
#include "ginxsom.h"
// Database path
#define DB_PATH "db/ginxsom.db"
// Use global database path from main.c
extern char g_db_path[];
// Check if NIP-94 metadata emission is enabled
int nip94_is_enabled(void) {
@@ -19,12 +19,12 @@ int nip94_is_enabled(void) {
sqlite3_stmt* stmt;
int rc, enabled = 1; // Default enabled
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc) {
return 1; // Default enabled on DB error
}
const char* sql = "SELECT value FROM server_config WHERE key = 'nip94_enabled'";
const char* sql = "SELECT value FROM config WHERE key = 'nip94_enabled'";
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
rc = sqlite3_step(stmt);
@@ -44,40 +44,53 @@ int nip94_get_origin(char* out, size_t out_size) {
if (!out || out_size == 0) {
return 0;
}
// Check database config first for custom origin
sqlite3* db;
sqlite3_stmt* stmt;
int rc;
rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc == SQLITE_OK) {
const char* sql = "SELECT value FROM config WHERE key = 'cdn_origin'";
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
rc = sqlite3_step(stmt);
if (rc == SQLITE_ROW) {
const char* value = (const char*)sqlite3_column_text(stmt, 0);
if (value) {
strncpy(out, value, out_size - 1);
out[out_size - 1] = '\0';
sqlite3_finalize(stmt);
sqlite3_close(db);
return 1;
}
}
sqlite3_finalize(stmt);
}
sqlite3_close(db);
}
// Check if request came over HTTPS (nginx sets HTTPS=on for SSL requests)
const char* https_env = getenv("HTTPS");
const char* server_name = getenv("SERVER_NAME");
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
if (rc) {
// Default on DB error
strncpy(out, "http://localhost:9001", out_size - 1);
out[out_size - 1] = '\0';
// Use production domain if SERVER_NAME is set and not localhost
if (server_name && strcmp(server_name, "localhost") != 0) {
if (https_env && strcmp(https_env, "on") == 0) {
snprintf(out, out_size, "https://%s", server_name);
} else {
snprintf(out, out_size, "http://%s", server_name);
}
return 1;
}
const char* sql = "SELECT value FROM server_config WHERE key = 'cdn_origin'";
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
rc = sqlite3_step(stmt);
if (rc == SQLITE_ROW) {
const char* value = (const char*)sqlite3_column_text(stmt, 0);
if (value) {
strncpy(out, value, out_size - 1);
out[out_size - 1] = '\0';
sqlite3_finalize(stmt);
sqlite3_close(db);
return 1;
}
}
sqlite3_finalize(stmt);
// Fallback to localhost for development
if (https_env && strcmp(https_env, "on") == 0) {
strncpy(out, "https://localhost:9443", out_size - 1);
} else {
strncpy(out, "http://localhost:9001", out_size - 1);
}
sqlite3_close(db);
// Default fallback
strncpy(out, "http://localhost:9001", out_size - 1);
out[out_size - 1] = '\0';
return 1;
}
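
Reading the interleaved before/after lines of this hunk, the origin resolution the new code appears to implement is a three-step precedence: a cdn_origin value in the config table wins, then SERVER_NAME from the FastCGI environment (scheme picked by whether nginx set HTTPS=on), then a localhost fallback. A condensed sketch of that order, with config_get() as a hypothetical lookup helper stubbed out so the sketch compiles:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* hypothetical lookup against the config table; the real version queries SQLite */
static int config_get(const char *key, char *out, size_t out_size) {
    (void)key; (void)out; (void)out_size;
    return 0;  /* stub: 1 would mean "found, copied into out" */
}

static int resolve_origin(char *out, size_t out_size) {
    if (config_get("cdn_origin", out, out_size))
        return 1;                                    /* 1) explicit DB override */
    const char *name  = getenv("SERVER_NAME");
    const char *https = getenv("HTTPS");
    int tls = https && strcmp(https, "on") == 0;
    if (name && strcmp(name, "localhost") != 0) {    /* 2) derive from environment */
        snprintf(out, out_size, "%s://%s", tls ? "https" : "http", name);
        return 1;
    }
    snprintf(out, out_size, "%s", tls ? "https://localhost:9443"
                                      : "http://localhost:9001"); /* 3) dev fallback */
    return 1;
}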

View File

@@ -11,8 +11,8 @@
#include <time.h>
#include "ginxsom.h"
// Database path (should match main.c)
#define DB_PATH "db/ginxsom.db"
// Use global database path from main.c
extern char g_db_path[];
// Forward declarations for helper functions
void send_error_response(int status_code, const char* error_type, const char* message, const char* details);
@@ -154,7 +154,7 @@ int store_blob_report(const char* event_json, const char* reporter_pubkey) {
sqlite3_stmt* stmt;
int rc;
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READWRITE, NULL);
rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READWRITE, NULL);
if (rc) {
return 0;
}

View File

@@ -7,6 +7,11 @@
#ifndef GINXSOM_H
#define GINXSOM_H
// Version information (auto-updated by build system)
#define VERSION_MAJOR 0
#define VERSION_MINOR 1
#define VERSION_PATCH 11
#define VERSION "v0.1.11"
#include <stddef.h>
#include <stdint.h>
@@ -30,6 +35,10 @@ extern sqlite3* db;
int init_database(void);
void close_database(void);
// Global configuration variables (defined in main.c)
extern char g_db_path[4096];
extern char g_storage_dir[4096];
// SHA-256 extraction and validation
const char* extract_sha256_from_uri(const char* uri);
@@ -253,6 +262,9 @@ int validate_sha256_format(const char* sha256);
// Admin API request handler
void handle_admin_api_request(const char* method, const char* uri, const char* validated_pubkey, int is_authenticated);
// Admin event handler (Kind 23456/23457)
void handle_admin_event_request(void);
// Individual endpoint handlers
void handle_stats_api(void);
void handle_config_get_api(void);
@@ -271,6 +283,10 @@ void send_json_response(int status, const char* json_content);
void send_json_error(int status, const char* error, const char* message);
int parse_query_params(const char* query_string, char params[][256], int max_params);
// Admin WebSocket server functions
int start_admin_websocket_server(int port);
void stop_admin_websocket_server(void);
#ifdef __cplusplus
}
#endif

1419
src/main.c

File diff suppressed because it is too large

View File

@@ -32,8 +32,8 @@
// NOSTR_ERROR_NIP42_CHALLENGE_EXPIRED are already defined in
// nostr_core_lib/nostr_core/nostr_common.h
// Database path (consistent with main.c)
#define DB_PATH "db/ginxsom.db"
// Use global database path from main.c
extern char g_db_path[];
// NIP-42 challenge management constants
#define MAX_CHALLENGES 1000
@@ -115,7 +115,7 @@ static int validate_nip42_event(cJSON *event, const char *relay_url,
const char *challenge_id);
static int validate_admin_event(cJSON *event, const char *method, const char *endpoint);
static int check_database_auth_rules(const char *pubkey, const char *operation,
const char *resource_hash);
const char *resource_hash, const char *mime_type);
void nostr_request_validator_clear_violation(void);
// NIP-42 challenge management functions
@@ -283,6 +283,16 @@ int nostr_validate_unified_request(const nostr_unified_request_t *request,
// PHASE 2: NOSTR EVENT VALIDATION (CPU Intensive ~2ms)
/////////////////////////////////////////////////////////////////////
// Check if authentication is disabled first (regardless of header presence)
if (!g_auth_cache.auth_required) {
validator_debug_log("VALIDATOR_DEBUG: STEP 4 PASSED - Authentication "
"disabled, allowing request\n");
result->valid = 1;
result->error_code = NOSTR_SUCCESS;
strcpy(result->reason, "Authentication disabled");
return NOSTR_SUCCESS;
}
// Check if this is a BUD-09 report request - allow anonymous reporting
if (request->operation && strcmp(request->operation, "report") == 0) {
// BUD-09 allows anonymous reporting - pass through to bud09.c for validation
@@ -810,8 +820,17 @@ int nostr_validate_unified_request(const nostr_unified_request_t *request,
"checking database rules\n");
// Check database rules for authorization
// For Blossom uploads, use hash from event 'x' tag instead of URI
const char *hash_for_rules = request->resource_hash;
if (event_kind == 24242 && strlen(expected_hash_from_event) == 64) {
hash_for_rules = expected_hash_from_event;
char hash_msg[256];
sprintf(hash_msg, "VALIDATOR_DEBUG: Using hash from Blossom event for rules: %.16s...\n", hash_for_rules);
validator_debug_log(hash_msg);
}
int rules_result = check_database_auth_rules(
extracted_pubkey, request->operation, request->resource_hash);
extracted_pubkey, request->operation, hash_for_rules, request->mime_type);
if (rules_result != NOSTR_SUCCESS) {
validator_debug_log(
"VALIDATOR_DEBUG: STEP 14 FAILED - Database rules denied request\n");
@@ -1045,7 +1064,7 @@ static int reload_auth_config(void) {
memset(&g_auth_cache, 0, sizeof(g_auth_cache));
// Open database
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc != SQLITE_OK) {
validator_debug_log("VALIDATOR: Could not open database\n");
// Use defaults
@@ -1307,7 +1326,7 @@ static int validate_blossom_event(cJSON *event, const char *expected_hash,
* Implements the 6-step rule evaluation engine from AUTH_API.md
*/
static int check_database_auth_rules(const char *pubkey, const char *operation,
const char *resource_hash) {
const char *resource_hash, const char *mime_type) {
sqlite3 *db = NULL;
sqlite3_stmt *stmt = NULL;
int rc;
@@ -1321,12 +1340,12 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
char rules_msg[256];
snprintf(rules_msg, sizeof(rules_msg),
"VALIDATOR_DEBUG: RULES ENGINE - Checking rules for pubkey=%.32s..., "
"operation=%s\n",
pubkey, operation ? operation : "NULL");
"operation=%s, mime_type=%s\n",
pubkey, operation ? operation : "NULL", mime_type ? mime_type : "NULL");
validator_debug_log(rules_msg);
// Open database
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
rc = sqlite3_open_v2(g_db_path, &db, SQLITE_OPEN_READONLY, NULL);
if (rc != SQLITE_OK) {
validator_debug_log(
"VALIDATOR_DEBUG: RULES ENGINE - Failed to open database\n");
@@ -1334,9 +1353,10 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
}
// Step 1: Check pubkey blacklist (highest priority)
// Match both exact operation and wildcard '*'
const char *blacklist_sql =
"SELECT rule_type, description FROM auth_rules WHERE rule_type = "
"'pubkey_blacklist' AND rule_target = ? AND operation = ? AND enabled = "
"'pubkey_blacklist' AND rule_target = ? AND (operation = ? OR operation = '*') AND enabled = "
"1 ORDER BY priority LIMIT 1";
rc = sqlite3_prepare_v2(db, blacklist_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
@@ -1369,9 +1389,10 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
// Step 2: Check hash blacklist
if (resource_hash) {
// Match both exact operation and wildcard '*'
const char *hash_blacklist_sql =
"SELECT rule_type, description FROM auth_rules WHERE rule_type = "
"'hash_blacklist' AND rule_target = ? AND operation = ? AND enabled = "
"'hash_blacklist' AND rule_target = ? AND (operation = ? OR operation = '*') AND enabled = "
"1 ORDER BY priority LIMIT 1";
rc = sqlite3_prepare_v2(db, hash_blacklist_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
@@ -1407,10 +1428,53 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
"resource hash provided\n");
}
// Step 3: Check pubkey whitelist
// Step 3: Check MIME type blacklist
if (mime_type) {
// Match both exact MIME type and wildcard patterns (e.g., 'image/*')
const char *mime_blacklist_sql =
"SELECT rule_type, description FROM auth_rules WHERE rule_type = "
"'mime_blacklist' AND (rule_target = ? OR rule_target LIKE '%/*' AND ? LIKE REPLACE(rule_target, '*', '%')) AND (operation = ? OR operation = '*') AND enabled = "
"1 ORDER BY priority LIMIT 1";
rc = sqlite3_prepare_v2(db, mime_blacklist_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, mime_type, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, mime_type, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 3, operation ? operation : "", -1, SQLITE_STATIC);
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char *description = (const char *)sqlite3_column_text(stmt, 1);
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 3 FAILED - "
"MIME type blacklisted\n");
char mime_blacklist_msg[256];
snprintf(
mime_blacklist_msg, sizeof(mime_blacklist_msg),
"VALIDATOR_DEBUG: RULES ENGINE - MIME blacklist rule matched: %s\n",
description ? description : "Unknown");
validator_debug_log(mime_blacklist_msg);
// Set specific violation details for status code mapping
strcpy(g_last_rule_violation.violation_type, "mime_blacklist");
snprintf(g_last_rule_violation.reason, sizeof(g_last_rule_violation.reason), "%s: MIME type blacklisted",
description ? description : "TEST_MIME_BLACKLIST");
sqlite3_finalize(stmt);
sqlite3_close(db);
return NOSTR_ERROR_AUTH_REQUIRED;
}
sqlite3_finalize(stmt);
}
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 3 PASSED - MIME "
"type not blacklisted\n");
} else {
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 3 SKIPPED - No "
"MIME type provided\n");
}
// Step 4: Check pubkey whitelist
// Match both exact operation and wildcard '*'
const char *whitelist_sql =
"SELECT rule_type, description FROM auth_rules WHERE rule_type = "
"'pubkey_whitelist' AND rule_target = ? AND operation = ? AND enabled = "
"'pubkey_whitelist' AND rule_target = ? AND (operation = ? OR operation = '*') AND enabled = "
"1 ORDER BY priority LIMIT 1";
rc = sqlite3_prepare_v2(db, whitelist_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
@@ -1435,10 +1499,76 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 3 FAILED - Pubkey "
"not whitelisted\n");
// Step 4: Check if any whitelist rules exist - if yes, deny by default
// Step 5: Check MIME type whitelist (only if not already denied)
if (mime_type) {
// Match both exact MIME type and wildcard patterns (e.g., 'image/*')
const char *mime_whitelist_sql =
"SELECT rule_type, description FROM auth_rules WHERE rule_type = "
"'mime_whitelist' AND (rule_target = ? OR rule_target LIKE '%/*' AND ? LIKE REPLACE(rule_target, '*', '%')) AND (operation = ? OR operation = '*') AND enabled = "
"1 ORDER BY priority LIMIT 1";
rc = sqlite3_prepare_v2(db, mime_whitelist_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, mime_type, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, mime_type, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 3, operation ? operation : "", -1, SQLITE_STATIC);
if (sqlite3_step(stmt) == SQLITE_ROW) {
const char *description = (const char *)sqlite3_column_text(stmt, 1);
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 5 PASSED - "
"MIME type whitelisted\n");
char mime_whitelist_msg[256];
snprintf(mime_whitelist_msg, sizeof(mime_whitelist_msg),
"VALIDATOR_DEBUG: RULES ENGINE - MIME whitelist rule matched: %s\n",
description ? description : "Unknown");
validator_debug_log(mime_whitelist_msg);
sqlite3_finalize(stmt);
sqlite3_close(db);
return NOSTR_SUCCESS; // Allow whitelisted MIME type
}
sqlite3_finalize(stmt);
}
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 5 FAILED - MIME "
"type not whitelisted\n");
} else {
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 5 SKIPPED - No "
"MIME type provided\n");
}
// Step 6: Check if any MIME whitelist rules exist - if yes, deny by default
// Match both exact operation and wildcard '*'
const char *mime_whitelist_exists_sql =
"SELECT COUNT(*) FROM auth_rules WHERE rule_type = 'mime_whitelist' "
"AND (operation = ? OR operation = '*') AND enabled = 1 LIMIT 1";
rc = sqlite3_prepare_v2(db, mime_whitelist_exists_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, operation ? operation : "", -1, SQLITE_STATIC);
if (sqlite3_step(stmt) == SQLITE_ROW) {
int mime_whitelist_count = sqlite3_column_int(stmt, 0);
if (mime_whitelist_count > 0) {
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 6 FAILED - "
"MIME whitelist exists but type not in it\n");
// Set specific violation details for status code mapping
strcpy(g_last_rule_violation.violation_type, "mime_whitelist_violation");
strcpy(g_last_rule_violation.reason,
"MIME type not whitelisted for this operation");
sqlite3_finalize(stmt);
sqlite3_close(db);
return NOSTR_ERROR_AUTH_REQUIRED;
}
}
sqlite3_finalize(stmt);
}
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 6 PASSED - No "
"MIME whitelist restrictions apply\n");
// Step 7: Check if any whitelist rules exist - if yes, deny by default
// Match both exact operation and wildcard '*'
const char *whitelist_exists_sql =
"SELECT COUNT(*) FROM auth_rules WHERE rule_type = 'pubkey_whitelist' "
"AND operation = ? AND enabled = 1 LIMIT 1";
"AND (operation = ? OR operation = '*') AND enabled = 1 LIMIT 1";
rc = sqlite3_prepare_v2(db, whitelist_exists_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, operation ? operation : "", -1, SQLITE_STATIC);
@@ -1465,7 +1595,7 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
"whitelist restrictions apply\n");
sqlite3_close(db);
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 5 PASSED - All "
validator_debug_log("VALIDATOR_DEBUG: RULES ENGINE - STEP 7 PASSED - All "
"rule checks completed, default ALLOW\n");
return NOSTR_SUCCESS; // Default allow if no restrictive rules matched
}
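
After the renumbering, the engine evaluates seven steps in a fixed order: deny rules first (pubkey, hash, and MIME blacklists), then allow rules (pubkey whitelist, then MIME whitelist), then the two "a whitelist exists but nothing matched" default denials, and finally default allow. Wildcard MIME targets such as image/* are matched in SQL through rule_target LIKE '%/*' combined with ? LIKE REPLACE(rule_target, '*', '%'). The decision order, condensed into a self-contained sketch (the real function queries SQLite and returns NOSTR_* codes; booleans stand in here):

typedef enum { RULE_ALLOW, RULE_DENY } rule_verdict_t;

static rule_verdict_t evaluate_rules(int pubkey_blacklisted,      /* step 1 */
                                     int hash_blacklisted,        /* step 2 */
                                     int mime_blacklisted,        /* step 3 */
                                     int pubkey_whitelisted,      /* step 4 */
                                     int mime_whitelisted,        /* step 5 */
                                     int mime_whitelist_exists,   /* step 6 */
                                     int pubkey_whitelist_exists) /* step 7 */
{
    if (pubkey_blacklisted || hash_blacklisted || mime_blacklisted)
        return RULE_DENY;            /* deny rules always win */
    if (pubkey_whitelisted)
        return RULE_ALLOW;           /* note: this also bypasses the MIME whitelist */
    if (mime_whitelisted)
        return RULE_ALLOW;
    if (mime_whitelist_exists)
        return RULE_DENY;            /* a MIME whitelist exists, type not on it */
    if (pubkey_whitelist_exists)
        return RULE_DENY;            /* a pubkey whitelist exists, key not on it */
    return RULE_ALLOW;               /* step 7 passed: default allow */
}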

199
src/test_keygen.c Normal file
View File

@@ -0,0 +1,199 @@
/*
* Test program for key generation
* Standalone version that doesn't require FastCGI
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sqlite3.h>
#include "../nostr_core_lib/nostr_core/nostr_common.h"
#include "../nostr_core_lib/nostr_core/utils.h"
// Forward declarations
int generate_random_private_key_bytes(unsigned char *key_bytes, size_t len);
int generate_server_keypair(const char *db_path);
int store_blossom_private_key(const char *db_path, const char *seckey);
// Generate random private key bytes using /dev/urandom
int generate_random_private_key_bytes(unsigned char *key_bytes, size_t len) {
FILE *fp = fopen("/dev/urandom", "rb");
if (!fp) {
fprintf(stderr, "ERROR: Cannot open /dev/urandom for key generation\n");
return -1;
}
size_t bytes_read = fread(key_bytes, 1, len, fp);
fclose(fp);
if (bytes_read != len) {
fprintf(stderr, "ERROR: Failed to read %zu bytes from /dev/urandom\n", len);
return -1;
}
return 0;
}
// Store blossom private key in dedicated table
int store_blossom_private_key(const char *db_path, const char *seckey) {
sqlite3 *db;
sqlite3_stmt *stmt;
int rc;
// Validate key format
if (!seckey || strlen(seckey) != 64) {
fprintf(stderr, "ERROR: Invalid blossom private key format\n");
return -1;
}
// Create blossom_seckey table if it doesn't exist
rc = sqlite3_open_v2(db_path, &db, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, NULL);
if (rc) {
fprintf(stderr, "ERROR: Can't open database: %s\n", sqlite3_errmsg(db));
return -1;
}
// Create table
const char *create_sql = "CREATE TABLE IF NOT EXISTS blossom_seckey (id INTEGER PRIMARY KEY CHECK (id = 1), seckey TEXT NOT NULL, created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')), CHECK (length(seckey) = 64))";
rc = sqlite3_exec(db, create_sql, NULL, NULL, NULL);
if (rc != SQLITE_OK) {
fprintf(stderr, "ERROR: Failed to create blossom_seckey table: %s\n", sqlite3_errmsg(db));
sqlite3_close(db);
return -1;
}
// Store key
const char *sql = "INSERT OR REPLACE INTO blossom_seckey (id, seckey) VALUES (1, ?)";
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc != SQLITE_OK) {
fprintf(stderr, "ERROR: SQL prepare failed: %s\n", sqlite3_errmsg(db));
sqlite3_close(db);
return -1;
}
sqlite3_bind_text(stmt, 1, seckey, -1, SQLITE_STATIC);
rc = sqlite3_step(stmt);
sqlite3_finalize(stmt);
sqlite3_close(db);
if (rc != SQLITE_DONE) {
fprintf(stderr, "ERROR: Failed to store blossom private key\n");
return -1;
}
return 0;
}
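
store_blossom_private_key() pins the table to a single row with the id = 1 CHECK constraint, so INSERT OR REPLACE acts as an upsert of the one key. For symmetry, a sketch of reading it back at startup, helper name illustrative:

#include <string.h>
#include <sqlite3.h>

/* Sketch: load the stored key into seckey_out (>= 65 bytes). Returns 0 on success. */
int load_blossom_private_key(const char *db_path, char *seckey_out, size_t out_size) {
    sqlite3 *db;
    sqlite3_stmt *stmt;
    int ok = -1;
    if (sqlite3_open_v2(db_path, &db, SQLITE_OPEN_READONLY, NULL))
        return -1;
    if (sqlite3_prepare_v2(db, "SELECT seckey FROM blossom_seckey WHERE id = 1",
                           -1, &stmt, NULL) != SQLITE_OK) {
        sqlite3_close(db);
        return -1;
    }
    if (sqlite3_step(stmt) == SQLITE_ROW) {
        const unsigned char *v = sqlite3_column_text(stmt, 0);
        if (v && strlen((const char *)v) == 64 && out_size > 64) {
            strcpy(seckey_out, (const char *)v);
            ok = 0;
        }
    }
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return ok;
}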
// Generate server keypair and store in database
int generate_server_keypair(const char *db_path) {
printf("Generating server keypair...\n");
unsigned char seckey_bytes[32];
char seckey_hex[65];
char pubkey_hex[65];
// Generate random private key
printf("Generating random private key...\n");
if (generate_random_private_key_bytes(seckey_bytes, 32) != 0) {
fprintf(stderr, "Failed to generate random bytes\n");
return -1;
}
// Validate the private key
if (nostr_ec_private_key_verify(seckey_bytes) != NOSTR_SUCCESS) {
fprintf(stderr, "ERROR: Generated invalid private key\n");
return -1;
}
// Convert to hex
nostr_bytes_to_hex(seckey_bytes, 32, seckey_hex);
// Derive public key
unsigned char pubkey_bytes[32];
if (nostr_ec_public_key_from_private_key(seckey_bytes, pubkey_bytes) != NOSTR_SUCCESS) {
fprintf(stderr, "ERROR: Failed to derive public key\n");
return -1;
}
// Convert public key to hex
nostr_bytes_to_hex(pubkey_bytes, 32, pubkey_hex);
// Store private key securely
if (store_blossom_private_key(db_path, seckey_hex) != 0) {
fprintf(stderr, "ERROR: Failed to store blossom private key\n");
return -1;
}
// Store public key in config
sqlite3 *db;
sqlite3_stmt *stmt;
int rc;
rc = sqlite3_open_v2(db_path, &db, SQLITE_OPEN_READWRITE, NULL);
if (rc) {
fprintf(stderr, "ERROR: Can't open database for config: %s\n", sqlite3_errmsg(db));
return -1;
}
const char *sql = "INSERT OR REPLACE INTO config (key, value, description) VALUES (?, ?, ?)";
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc != SQLITE_OK) {
fprintf(stderr, "ERROR: SQL prepare failed: %s\n", sqlite3_errmsg(db));
sqlite3_close(db);
return -1;
}
sqlite3_bind_text(stmt, 1, "blossom_pubkey", -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, pubkey_hex, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 3, "Blossom server's public key for Nostr communication", -1, SQLITE_STATIC);
rc = sqlite3_step(stmt);
sqlite3_finalize(stmt);
sqlite3_close(db);
if (rc != SQLITE_DONE) {
fprintf(stderr, "ERROR: Failed to store blossom public key in config\n");
return -1;
}
// Display keys for admin setup
printf("========================================\n");
printf("SERVER KEYPAIR GENERATED SUCCESSFULLY\n");
printf("========================================\n");
printf("Blossom Public Key: %s\n", pubkey_hex);
printf("Blossom Private Key: %s\n", seckey_hex);
printf("========================================\n");
printf("IMPORTANT: Save the private key securely!\n");
printf("This key is used for decrypting admin messages.\n");
printf("========================================\n");
return 0;
}
int main(int argc, char *argv[]) {
const char *db_path = "db/ginxsom.db";
if (argc > 1) {
db_path = argv[1];
}
printf("Test Key Generation\n");
printf("===================\n");
printf("Database: %s\n\n", db_path);
// Initialize nostr crypto
printf("Initializing nostr crypto system...\n");
if (nostr_crypto_init() != NOSTR_SUCCESS) {
fprintf(stderr, "FATAL: Failed to initialize nostr crypto\n");
return 1;
}
printf("Crypto system initialized\n\n");
// Generate keypair
if (generate_server_keypair(db_path) != 0) {
fprintf(stderr, "FATAL: Key generation failed\n");
return 1;
}
printf("\nKey generation test completed successfully!\n");
return 0;
}

50
src/test_main.c Normal file
View File

@@ -0,0 +1,50 @@
/*
* Minimal test version of main.c to debug startup issues
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "ginxsom.h"
// Copy just the essential parts for testing
char g_db_path[4096] = "db/ginxsom.db";
char g_storage_dir[4096] = ".";
char g_admin_pubkey[65] = "";
char g_relay_seckey[65] = "";
int g_generate_keys = 0;
int main(int argc, char *argv[]) {
printf("DEBUG: main() started\n");
fflush(stdout);
// Parse minimal args
for (int i = 1; i < argc; i++) {
printf("DEBUG: arg %d: %s\n", i, argv[i]);
fflush(stdout);
if (strcmp(argv[i], "--generate-keys") == 0) {
g_generate_keys = 1;
printf("DEBUG: generate-keys flag set\n");
fflush(stdout);
} else if (strcmp(argv[i], "--help") == 0) {
printf("Usage: test_main [options]\n");
printf(" --generate-keys Generate keys\n");
printf(" --help Show help\n");
return 0;
}
}
printf("DEBUG: g_generate_keys = %d\n", g_generate_keys);
fflush(stdout);
if (g_generate_keys) {
printf("DEBUG: Would generate keys here\n");
fflush(stdout);
return 0;
}
printf("DEBUG: Normal startup would continue here\n");
fflush(stdout);
return 0;
}

25
test_key_generation.sh Executable file
View File

@@ -0,0 +1,25 @@
#!/bin/bash
# Test key generation for ginxsom
echo "=== Testing Key Generation ==="
echo
# Run the binary with --generate-keys flag
echo "Running: ./build/ginxsom-fcgi --generate-keys --db-path db/ginxsom.db"
echo
./build/ginxsom-fcgi --generate-keys --db-path db/ginxsom.db 2>&1
echo
echo "=== Checking if keys were stored ==="
echo
# Check if blossom_seckey table was created
echo "Checking blossom_seckey table:"
sqlite3 db/ginxsom.db "SELECT COUNT(*) as key_count FROM blossom_seckey" 2>&1
echo
echo "Checking blossom_pubkey in config:"
sqlite3 db/ginxsom.db "SELECT value FROM config WHERE key='blossom_pubkey'" 2>&1
echo
echo "=== Test Complete ==="

54
test_mode_verification.sh Executable file
View File

@@ -0,0 +1,54 @@
#!/bin/bash
echo "=== Test Mode Verification ==="
echo ""
# Expected test keys from .test_keys
EXPECTED_ADMIN_PUBKEY="8ff74724ed641b3c28e5a86d7c5cbc49c37638ace8c6c38935860e7a5eedde0e"
EXPECTED_SERVER_PUBKEY="52e366edfa4e9cc6a6d4653828e51ccf828a2f5a05227d7a768f33b5a198681a"
echo "1. Checking database keys (should be OLD keys, not test keys)..."
DB_ADMIN_PUBKEY=$(sqlite3 db/ginxsom.db "SELECT value FROM config WHERE key = 'admin_pubkey'")
DB_BLOSSOM_PUBKEY=$(sqlite3 db/ginxsom.db "SELECT value FROM config WHERE key = 'blossom_pubkey'")
DB_BLOSSOM_SECKEY=$(sqlite3 db/ginxsom.db "SELECT seckey FROM blossom_seckey WHERE id = 1")
echo " Database admin_pubkey: '$DB_ADMIN_PUBKEY'"
echo " Database blossom_pubkey: '$DB_BLOSSOM_PUBKEY'"
echo " Database blossom_seckey: '$DB_BLOSSOM_SECKEY'"
echo ""
# Verify database was NOT modified with test keys
if [ "$DB_ADMIN_PUBKEY" = "$EXPECTED_ADMIN_PUBKEY" ]; then
echo " ❌ FAIL: Database admin_pubkey matches test key (should NOT be modified)"
exit 1
else
echo " ✓ PASS: Database admin_pubkey is different from test key (not modified)"
fi
if [ "$DB_BLOSSOM_PUBKEY" = "$EXPECTED_SERVER_PUBKEY" ]; then
echo " ❌ FAIL: Database blossom_pubkey matches test key (should NOT be modified)"
exit 1
else
echo " ✓ PASS: Database blossom_pubkey is different from test key (not modified)"
fi
echo ""
echo "2. Checking server is running..."
if curl -s http://localhost:9001/ > /dev/null; then
echo " ✓ PASS: Server is responding"
else
echo " ❌ FAIL: Server is not responding"
exit 1
fi
echo ""
echo "3. Verifying test keys from .test_keys file..."
echo " Expected admin pubkey: $EXPECTED_ADMIN_PUBKEY"
echo " Expected server pubkey: $EXPECTED_SERVER_PUBKEY"
echo ""
echo "=== All Tests Passed ==="
echo "Test mode is working correctly:"
echo " - Test keys are loaded in memory"
echo " - Database was NOT modified"
echo " - Server is running with test keys"

206
tests/admin_event_test.sh Executable file
View File

@@ -0,0 +1,206 @@
#!/bin/bash
# Ginxsom Admin Event Test Script
# Tests Kind 23456/23457 admin command system with NIP-44 encryption
#
# Prerequisites:
# - nak: https://github.com/fiatjaf/nak
# - curl
# - jq (for JSON parsing)
# - Server running with test keys from .test_keys
set -e
# Configuration
GINXSOM_URL="http://localhost:9001"
TEST_KEYS_FILE=".test_keys"
# Load test keys
if [[ ! -f "$TEST_KEYS_FILE" ]]; then
echo "ERROR: $TEST_KEYS_FILE not found"
echo "Run the server with --test-keys to generate test keys"
exit 1
fi
source "$TEST_KEYS_FILE"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Helper functions
log_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
check_dependencies() {
log_info "Checking dependencies..."
for cmd in nak curl jq; do
if ! command -v $cmd &> /dev/null; then
log_error "$cmd is not installed"
case $cmd in
nak)
echo "Install from: https://github.com/fiatjaf/nak"
;;
jq)
echo "Install jq for JSON processing"
;;
curl)
echo "curl should be available in most systems"
;;
esac
exit 1
fi
done
log_success "All dependencies found"
}
# Create NIP-44 encrypted admin command event (Kind 23456)
create_admin_command_event() {
local command="$1"
local expiration=$(($(date +%s) + 3600)) # 1 hour from now
log_info "Creating Kind 23456 admin command event..."
log_info "Command: $command"
# For now, we'll create the event structure manually since nak may not support NIP-44 encryption yet
# The content should be a NIP-44 encrypted JSON array: ["config_query"]
# We'll use plaintext for initial testing and add encryption later
local content="[\"$command\"]"
# Create event with nak
# Kind 23456 = admin command
# Tags: p = server pubkey, expiration
local event=$(nak event -k 23456 \
-c "$content" \
--tag p="$SERVER_PUBKEY" \
--tag expiration="$expiration" \
--sec "$ADMIN_PRIVKEY")
echo "$event"
}
# Send admin command and parse response
send_admin_command() {
local command="$1"
log_info "=== Testing Admin Command: $command ==="
# Create Kind 23456 event
local event=$(create_admin_command_event "$command")
if [[ -z "$event" ]]; then
log_error "Failed to create admin event"
return 1
fi
log_info "Event created successfully"
echo "$event" | jq . || echo "$event"
# Send to server
log_info "Sending to POST $GINXSOM_URL/api/admin"
local response=$(curl -s -w "\n%{http_code}" \
-X POST \
-H "Content-Type: application/json" \
-d "$event" \
"$GINXSOM_URL/api/admin")
local http_code=$(echo "$response" | tail -n1)
local body=$(echo "$response" | head -n-1)
echo ""
if [[ "$http_code" =~ ^2 ]]; then
log_success "HTTP $http_code - Response received"
echo "$body" | jq . 2>/dev/null || echo "$body"
# Try to parse as Kind 23457 event
local kind=$(echo "$body" | jq -r '.kind // empty' 2>/dev/null)
if [[ "$kind" == "23457" ]]; then
log_success "Received Kind 23457 response event"
local response_content=$(echo "$body" | jq -r '.content // empty' 2>/dev/null)
log_info "Response content (encrypted): $response_content"
# TODO: Decrypt NIP-44 content to see actual response
fi
else
log_error "HTTP $http_code - Request failed"
echo "$body" | jq . 2>/dev/null || echo "$body"
return 1
fi
echo ""
}
test_config_query() {
log_info "=== Testing config_query Command ==="
send_admin_command "config_query"
}
test_server_health() {
log_info "=== Testing Server Health ==="
local response=$(curl -s -w "\n%{http_code}" "$GINXSOM_URL/api/health")
local http_code=$(echo "$response" | tail -n1)
local body=$(echo "$response" | head -n-1)
if [[ "$http_code" =~ ^2 ]]; then
log_success "Server is healthy (HTTP $http_code)"
echo "$body" | jq .
else
log_error "Server health check failed (HTTP $http_code)"
echo "$body"
return 1
fi
echo ""
}
main() {
echo "=== Ginxsom Admin Event Test Suite ==="
echo "Testing Kind 23456/23457 admin command system"
echo ""
log_info "Test Configuration:"
log_info " Admin Pubkey: $ADMIN_PUBKEY"
log_info " Server Pubkey: $SERVER_PUBKEY"
log_info " Server URL: $GINXSOM_URL"
echo ""
check_dependencies
echo ""
# Test server health first
test_server_health
# Test admin commands
test_config_query
echo ""
log_success "Admin event testing complete!"
echo ""
log_warning "NOTE: NIP-44 encryption not yet implemented in test script"
log_warning "Events are sent with plaintext command arrays for initial testing"
log_warning "Production implementation will use full NIP-44 encryption"
}
# Allow sourcing for individual function testing
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
main "$@"
fi
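
On the receiving side, the matching check reduces to confirming the reply is a kind 23457 event with string content, before any signature verification or NIP-44 decryption. A minimal cJSON sketch of that gate (decryption intentionally out of scope, as the script notes; adjust the include path to the project's cJSON):

#include <stdio.h>
#include <cjson/cJSON.h>

/* Sketch: returns 1 if `body` parses as a Kind 23457 event with string content. */
static int is_admin_response(const char *body) {
    cJSON *event = cJSON_Parse(body);
    if (!event) return 0;
    const cJSON *kind    = cJSON_GetObjectItemCaseSensitive(event, "kind");
    const cJSON *content = cJSON_GetObjectItemCaseSensitive(event, "content");
    int ok = cJSON_IsNumber(kind) && kind->valueint == 23457 &&
             cJSON_IsString(content);
    if (ok)
        printf("content (possibly NIP-44 encrypted): %s\n", content->valuestring);
    cJSON_Delete(event);
    return ok;
}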

View File

@@ -14,9 +14,9 @@ TESTS_PASSED=0
TESTS_FAILED=0
TOTAL_TESTS=0
# Test keys for different scenarios
TEST_USER1_PRIVKEY="5c0c523f52a5b6fad39ed2403092df8cebc36318b39383bca6c00808626fab3a"
TEST_USER1_PUBKEY="87d3561f19b74adbe8bf840682992466068830a9d8c36b4a0c99d36f826cb6cb"
# Test keys for different scenarios - Using WSB's keys for TEST_USER1
TEST_USER1_PRIVKEY="22cc83aa57928a2800234c939240c9a6f0f44a33ea3838a860ed38930b195afd"
TEST_USER1_PUBKEY="8ff74724ed641b3c28e5a86d7c5cbc49c37638ace8c6c38935860e7a5eedde0e"
TEST_USER2_PRIVKEY="182c3a5e3b7a1b7e4f5c6b7c8b4a5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2"
TEST_USER2_PUBKEY="c95195e5e7de1ad8c4d3c0ac4e8b5c0c4e0c4d3c1e5c8d4c2e7e9f4a5b6c7d8e"

View File

@@ -0,0 +1 @@
Content from blacklisted user

View File

@@ -0,0 +1 @@
Content from allowed user

View File

@@ -0,0 +1 @@
First request - cache miss

View File

@@ -0,0 +1 @@
Second request - cache hit

View File

@@ -0,0 +1 @@
Testing after cleanup

View File

@@ -0,0 +1 @@
Testing disabled rule

View File

@@ -0,0 +1 @@
Testing enabled rule

View File

@@ -0,0 +1 @@
This specific file is blacklisted

View File

@@ -0,0 +1 @@
This file is allowed

View File

@@ -0,0 +1 @@
Plain text file

View File

@@ -0,0 +1 @@
Text file with whitelist active

View File

@@ -1 +1 @@
e3ba927d32ca105a8a4cafa2e013b97945a165c38e9ce573446a2332dc312fdb
299c28eeb15df327c30c9afd952d4e35c3777443d2094b2caab2fc94599ce607

View File

@@ -0,0 +1 @@
Testing operation-specific rules

View File

@@ -0,0 +1 @@
Testing priority ordering

View File

@@ -0,0 +1 @@
test content

View File

@@ -0,0 +1 @@
Content from whitelisted user

View File

@@ -0,0 +1 @@
Content from non-whitelisted user

View File

@@ -0,0 +1 @@
Testing wildcard operation

View File

@@ -5,11 +5,14 @@
set -e # Exit on any error
# Configuration
SERVER_URL="http://localhost:9001"
# Configuration - Using WSB's keys
# SERVER_URL="http://localhost:9001"
SERVER_URL="https://localhost:9443"
UPLOAD_ENDPOINT="${SERVER_URL}/upload"
TEST_FILE="test_blob_$(date +%s).txt"
CLEANUP_FILES=()
NOSTR_PRIVKEY="22cc83aa57928a2800234c939240c9a6f0f44a33ea3838a860ed38930b195afd"
NOSTR_PUBKEY="8ff74724ed641b3c28e5a86d7c5cbc49c37638ace8c6c38935860e7a5eedde0e"
# Colors for output
RED='\033[0;31m'
@@ -87,7 +90,7 @@ check_prerequisites() {
check_server() {
log_info "Checking if server is running..."
if curl -s -f "${SERVER_URL}/health" > /dev/null 2>&1; then
if curl -k -s -f "${SERVER_URL}/health" > /dev/null 2>&1; then
log_success "Server is running at ${SERVER_URL}"
else
log_error "Server is not responding at ${SERVER_URL}"
@@ -127,22 +130,23 @@ calculate_hash() {
# Generate nostr event
generate_nostr_event() {
log_info "Generating kind 24242 nostr event with nak..."
log_info "Generating kind 24242 nostr event with nak using Alice's private key..."
# Calculate expiration time (1 hour from now)
EXPIRATION=$(date -d '+1 hour' +%s)
# Generate the event using nak
# Generate the event using nak with Alice's private key
EVENT_JSON=$(nak event -k 24242 -c "" \
--sec "$NOSTR_PRIVKEY" \
-t "t=upload" \
-t "x=${HASH}" \
-t "expiration=${EXPIRATION}")
if [[ -z "$EVENT_JSON" ]]; then
log_error "Failed to generate nostr event"
exit 1
fi
log_success "Generated nostr event"
echo "Event JSON: $EVENT_JSON"
}
@@ -168,7 +172,7 @@ perform_upload() {
CLEANUP_FILES+=("${RESPONSE_FILE}")
# Perform the upload with verbose output
HTTP_STATUS=$(curl -s -w "%{http_code}" \
HTTP_STATUS=$(curl -k -s -w "%{http_code}" \
-X PUT \
-H "Authorization: ${AUTH_HEADER}" \
-H "Content-Type: text/plain" \
@@ -217,7 +221,7 @@ test_retrieval() {
RETRIEVAL_URL="${SERVER_URL}/${HASH}"
if curl -s -f "${RETRIEVAL_URL}" > /dev/null 2>&1; then
if curl -k -s -f "${RETRIEVAL_URL}" > /dev/null 2>&1; then
log_success "File can be retrieved at: ${RETRIEVAL_URL}"
else
log_warning "File not yet available for retrieval (expected if upload processing not implemented)"

266
tests/file_put_production.sh Executable file
View File

@@ -0,0 +1,266 @@
#!/bin/bash
# file_put_production.sh - Test script for production Ginxsom Blossom server
# Tests upload functionality on blossom.laantungir.net
set -e # Exit on any error
# Configuration
SERVER_URL="https://blossom.laantungir.net"
UPLOAD_ENDPOINT="${SERVER_URL}/upload"
TEST_FILE="test_blob_$(date +%s).txt"
CLEANUP_FILES=()
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Cleanup function
cleanup() {
echo -e "${YELLOW}Cleaning up temporary files...${NC}"
for file in "${CLEANUP_FILES[@]}"; do
if [[ -f "$file" ]]; then
rm -f "$file"
echo "Removed: $file"
fi
done
}
# Set up cleanup on exit
trap cleanup EXIT
# Helper functions
log_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
# Check prerequisites
check_prerequisites() {
log_info "Checking prerequisites..."
# Check if nak is installed
if ! command -v nak &> /dev/null; then
log_error "nak command not found. Please install nak first."
log_info "Install with: go install github.com/fiatjaf/nak@latest"
exit 1
fi
log_success "nak is installed"
# Check if curl is available
if ! command -v curl &> /dev/null; then
log_error "curl command not found. Please install curl."
exit 1
fi
log_success "curl is available"
# Check if sha256sum is available
if ! command -v sha256sum &> /dev/null; then
log_error "sha256sum command not found."
exit 1
fi
log_success "sha256sum is available"
# Check if base64 is available
if ! command -v base64 &> /dev/null; then
log_error "base64 command not found."
exit 1
fi
log_success "base64 is available"
}
# Check if server is running
check_server() {
log_info "Checking if server is running..."
if curl -s -f "${SERVER_URL}/health" > /dev/null 2>&1; then
log_success "Server is running at ${SERVER_URL}"
else
log_error "Server is not responding at ${SERVER_URL}"
exit 1
fi
}
# Create test file
create_test_file() {
log_info "Creating test file: ${TEST_FILE}"
# Create test content with timestamp and random data
cat > "${TEST_FILE}" << EOF
Test blob content for Ginxsom Blossom server (PRODUCTION)
Timestamp: $(date -Iseconds)
Random data: $(openssl rand -hex 32)
Test message: Hello from production test!
This file is used to test the upload functionality
of the Ginxsom Blossom server on blossom.laantungir.net
EOF
CLEANUP_FILES+=("${TEST_FILE}")
log_success "Created test file with $(wc -c < "${TEST_FILE}") bytes"
}
# Calculate file hash
calculate_hash() {
log_info "Calculating SHA-256 hash..."
HASH=$(sha256sum "${TEST_FILE}" | cut -d' ' -f1)
log_success "Data to hash: ${TEST_FILE}"
log_success "File hash: ${HASH}"
}
# Generate nostr event
generate_nostr_event() {
log_info "Generating kind 24242 nostr event with nak using WSB's private key..."
# Calculate expiration time (1 hour from now)
EXPIRATION=$(date -d '+1 hour' +%s)
# Generate the event using nak with WSB's private key
EVENT_JSON=$(nak event -k 24242 -c "" \
--sec "22cc83aa57928a2800234c939240c9a6f0f44a33ea3838a860ed38930b195afd" \
-t "t=upload" \
-t "x=${HASH}" \
-t "expiration=${EXPIRATION}")
if [[ -z "$EVENT_JSON" ]]; then
log_error "Failed to generate nostr event"
exit 1
fi
log_success "Generated nostr event"
echo "Event JSON: $EVENT_JSON"
}
# Create authorization header
create_auth_header() {
log_info "Creating authorization header..."
# Base64 encode the event (without newlines)
AUTH_B64=$(echo -n "$EVENT_JSON" | base64 -w 0)
AUTH_HEADER="Nostr ${AUTH_B64}"
log_success "Created authorization header"
echo "Auth header length: ${#AUTH_HEADER} characters"
}
# Perform upload
perform_upload() {
log_info "Performing upload to ${UPLOAD_ENDPOINT}..."
# Create temporary file for response
RESPONSE_FILE=$(mktemp)
CLEANUP_FILES+=("${RESPONSE_FILE}")
# Perform the upload with verbose output
HTTP_STATUS=$(curl -s -w "%{http_code}" \
-X PUT \
-H "Authorization: ${AUTH_HEADER}" \
-H "Content-Type: text/plain" \
-H "Content-Disposition: attachment; filename=\"${TEST_FILE}\"" \
--data-binary "@${TEST_FILE}" \
"${UPLOAD_ENDPOINT}" \
-o "${RESPONSE_FILE}")
echo "HTTP Status: ${HTTP_STATUS}"
echo "Response body:"
cat "${RESPONSE_FILE}"
echo
# Check response
case "${HTTP_STATUS}" in
200)
log_success "Upload successful!"
;;
201)
log_success "Upload successful (created)!"
;;
400)
log_error "Bad request - check the event format"
;;
401)
log_error "Unauthorized - authentication failed"
;;
405)
log_error "Method not allowed - check nginx configuration"
;;
413)
log_error "Payload too large"
;;
501)
log_warning "Upload endpoint not yet implemented (expected for now)"
;;
*)
log_error "Upload failed with HTTP status: ${HTTP_STATUS}"
;;
esac
}
# Test file retrieval
test_retrieval() {
log_info "Testing file retrieval..."
RETRIEVAL_URL="${SERVER_URL}/${HASH}"
if curl -s -f "${RETRIEVAL_URL}" > /dev/null 2>&1; then
log_success "File can be retrieved at: ${RETRIEVAL_URL}"
# Download and verify
DOWNLOADED_FILE=$(mktemp)
CLEANUP_FILES+=("${DOWNLOADED_FILE}")
curl -s "${RETRIEVAL_URL}" -o "${DOWNLOADED_FILE}"
DOWNLOADED_HASH=$(sha256sum "${DOWNLOADED_FILE}" | cut -d' ' -f1)
if [[ "${DOWNLOADED_HASH}" == "${HASH}" ]]; then
log_success "Downloaded file hash matches! Verification successful."
else
log_error "Hash mismatch! Expected: ${HASH}, Got: ${DOWNLOADED_HASH}"
fi
else
log_warning "File not yet available for retrieval"
fi
}
# Main execution
main() {
echo "=== Ginxsom Blossom Production Upload Test ==="
echo "Server: ${SERVER_URL}"
echo "Timestamp: $(date -Iseconds)"
echo
check_prerequisites
check_server
create_test_file
calculate_hash
generate_nostr_event
create_auth_header
perform_upload
test_retrieval
echo
log_info "Test completed!"
echo "Summary:"
echo " Test file: ${TEST_FILE}"
echo " File hash: ${HASH}"
echo " Server: ${SERVER_URL}"
echo " Upload endpoint: ${UPLOAD_ENDPOINT}"
echo " Retrieval URL: ${SERVER_URL}/${HASH}"
}
# Run main function
main "$@"

View File

@@ -4,8 +4,8 @@
# This script tests the blob listing functionality
BASE_URL="http://localhost:9001"
NOSTR_PRIVKEY="0000000000000000000000000000000000000000000000000000000000000001"
NOSTR_PUBKEY="79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798"
NOSTR_PRIVKEY="22cc83aa57928a2800234c939240c9a6f0f44a33ea3838a860ed38930b195afd"
NOSTR_PUBKEY="8ff74724ed641b3c28e5a86d7c5cbc49c37638ace8c6c38935860e7a5eedde0e"
# Colors for output
RED='\033[0;31m'
@@ -19,128 +19,117 @@ echo
# Function to generate a Nostr event for list authorization
generate_list_auth() {
local content="$1"
local created_at=$(date +%s)
local expiration=$((created_at + 3600)) # 1 hour from now
# Note: This is a placeholder - in real implementation, you'd use nostr tools
# to generate properly signed events. For now, we'll create the structure.
cat << EOF
{
"id": "placeholder_id",
"pubkey": "$NOSTR_PUBKEY",
"kind": 24242,
"content": "$content",
"created_at": $created_at,
"tags": [
["t", "list"],
["expiration", "$expiration"]
],
"sig": "placeholder_signature"
}
EOF
# Use nak to generate properly signed events with Alice's private key
nak event -k 24242 -c "$content" \
--sec "$NOSTR_PRIVKEY" \
-t "t=list" \
-t "expiration=$(( $(date +%s) + 3600 ))"
}
# Test 1: List blobs without authorization (should work if optional auth)
echo -e "${YELLOW}Test 1: GET /list/<pubkey> without authorization${NC}"
RESPONSE=$(curl -s -w "\nHTTP_STATUS:%{http_code}" "$BASE_URL/list/$NOSTR_PUBKEY")
HTTP_STATUS=$(echo "$RESPONSE" | grep "HTTP_STATUS" | cut -d: -f2)
BODY=$(echo "$RESPONSE" | sed '/HTTP_STATUS/d')
echo "Using pubkey: $NOSTR_PUBKEY"
echo "HTTP Status: $HTTP_STATUS"
echo "Response: $BODY"
echo
# Test 2: List blobs with authorization
echo -e "${YELLOW}Test 2: GET /list/<pubkey> with authorization${NC}"
LIST_AUTH=$(generate_list_auth "List Blobs")
AUTH_B64=$(echo "$LIST_AUTH" | base64 -w 0)
RESPONSE=$(curl -s -w "\nHTTP_STATUS:%{http_code}" \
-H "Authorization: Nostr $AUTH_B64" \
"$BASE_URL/list/$NOSTR_PUBKEY")
HTTP_STATUS=$(echo "$RESPONSE" | grep "HTTP_STATUS" | cut -d: -f2)
BODY=$(echo "$RESPONSE" | sed '/HTTP_STATUS/d')
echo "HTTP Status: $HTTP_STATUS"
echo "Response: $BODY"
echo
# # Test 2: List blobs with authorization
# echo -e "${YELLOW}Test 2: GET /list/<pubkey> with authorization${NC}"
# LIST_AUTH=$(generate_list_auth "List Blobs")
# AUTH_B64=$(echo "$LIST_AUTH" | base64 -w 0)
# RESPONSE=$(curl -s -w "\nHTTP_STATUS:%{http_code}" \
# -H "Authorization: Nostr $AUTH_B64" \
# "$BASE_URL/list/$NOSTR_PUBKEY")
# HTTP_STATUS=$(echo "$RESPONSE" | grep "HTTP_STATUS" | cut -d: -f2)
# BODY=$(echo "$RESPONSE" | sed '/HTTP_STATUS/d')
# Test 3: List blobs with since parameter
echo -e "${YELLOW}Test 3: GET /list/<pubkey> with since parameter${NC}"
SINCE_TIMESTAMP=$(($(date +%s) - 86400)) # 24 hours ago
RESPONSE=$(curl -s -w "\nHTTP_STATUS:%{http_code}" \
"$BASE_URL/list/$NOSTR_PUBKEY?since=$SINCE_TIMESTAMP")
HTTP_STATUS=$(echo "$RESPONSE" | grep "HTTP_STATUS" | cut -d: -f2)
BODY=$(echo "$RESPONSE" | sed '/HTTP_STATUS/d')
# echo "HTTP Status: $HTTP_STATUS"
# echo "Response: $BODY"
# echo
echo "HTTP Status: $HTTP_STATUS"
echo "Response: $BODY"
echo
# # Test 3: List blobs with since parameter
# echo -e "${YELLOW}Test 3: GET /list/<pubkey> with since parameter${NC}"
# SINCE_TIMESTAMP=$(($(date +%s) - 86400)) # 24 hours ago
# RESPONSE=$(curl -s -w "\nHTTP_STATUS:%{http_code}" \
# "$BASE_URL/list/$NOSTR_PUBKEY?since=$SINCE_TIMESTAMP")
# HTTP_STATUS=$(echo "$RESPONSE" | grep "HTTP_STATUS" | cut -d: -f2)
# BODY=$(echo "$RESPONSE" | sed '/HTTP_STATUS/d')
# Test 4: List blobs with until parameter
echo -e "${YELLOW}Test 4: GET /list/<pubkey> with until parameter${NC}"
UNTIL_TIMESTAMP=$(date +%s) # now
RESPONSE=$(curl -s -w "\nHTTP_STATUS:%{http_code}" \
"$BASE_URL/list/$NOSTR_PUBKEY?until=$UNTIL_TIMESTAMP")
HTTP_STATUS=$(echo "$RESPONSE" | grep "HTTP_STATUS" | cut -d: -f2)
BODY=$(echo "$RESPONSE" | sed '/HTTP_STATUS/d')
# echo "HTTP Status: $HTTP_STATUS"
# echo "Response: $BODY"
# echo
echo "HTTP Status: $HTTP_STATUS"
echo "Response: $BODY"
echo
# # Test 4: List blobs with until parameter
# echo -e "${YELLOW}Test 4: GET /list/<pubkey> with until parameter${NC}"
# UNTIL_TIMESTAMP=$(date +%s) # now
# RESPONSE=$(curl -s -w "\nHTTP_STATUS:%{http_code}" \
# "$BASE_URL/list/$NOSTR_PUBKEY?until=$UNTIL_TIMESTAMP")
# HTTP_STATUS=$(echo "$RESPONSE" | grep "HTTP_STATUS" | cut -d: -f2)
# BODY=$(echo "$RESPONSE" | sed '/HTTP_STATUS/d')
# Test 5: List blobs with both since and until parameters
echo -e "${YELLOW}Test 5: GET /list/<pubkey> with since and until parameters${NC}"
SINCE_TIMESTAMP=$(($(date +%s) - 86400)) # 24 hours ago
UNTIL_TIMESTAMP=$(date +%s) # now
RESPONSE=$(curl -s -w "\nHTTP_STATUS:%{http_code}" \
"$BASE_URL/list/$NOSTR_PUBKEY?since=$SINCE_TIMESTAMP&until=$UNTIL_TIMESTAMP")
HTTP_STATUS=$(echo "$RESPONSE" | grep "HTTP_STATUS" | cut -d: -f2)
BODY=$(echo "$RESPONSE" | sed '/HTTP_STATUS/d')
# echo "HTTP Status: $HTTP_STATUS"
# echo "Response: $BODY"
# echo
echo "HTTP Status: $HTTP_STATUS"
echo "Response: $BODY"
echo
# # Test 5: List blobs with both since and until parameters
# echo -e "${YELLOW}Test 5: GET /list/<pubkey> with since and until parameters${NC}"
# SINCE_TIMESTAMP=$(($(date +%s) - 86400)) # 24 hours ago
# UNTIL_TIMESTAMP=$(date +%s) # now
# RESPONSE=$(curl -s -w "\nHTTP_STATUS:%{http_code}" \
# "$BASE_URL/list/$NOSTR_PUBKEY?since=$SINCE_TIMESTAMP&until=$UNTIL_TIMESTAMP")
# HTTP_STATUS=$(echo "$RESPONSE" | grep "HTTP_STATUS" | cut -d: -f2)
# BODY=$(echo "$RESPONSE" | sed '/HTTP_STATUS/d')
# Test 6: List blobs for non-existent pubkey
echo -e "${YELLOW}Test 6: GET /list/<nonexistent_pubkey>${NC}"
FAKE_PUBKEY="1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef"
RESPONSE=$(curl -s -w "\nHTTP_STATUS:%{http_code}" "$BASE_URL/list/$FAKE_PUBKEY")
HTTP_STATUS=$(echo "$RESPONSE" | grep "HTTP_STATUS" | cut -d: -f2)
BODY=$(echo "$RESPONSE" | sed '/HTTP_STATUS/d')
# echo "HTTP Status: $HTTP_STATUS"
# echo "Response: $BODY"
# echo
if [ "$HTTP_STATUS" = "200" ]; then
echo -e "${GREEN}✓ Correctly returned 200 with empty array${NC}"
else
echo "HTTP Status: $HTTP_STATUS"
fi
echo "Response: $BODY"
echo
# # Test 6: List blobs for non-existent pubkey
# echo -e "${YELLOW}Test 6: GET /list/<nonexistent_pubkey>${NC}"
# FAKE_PUBKEY="1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef"
# RESPONSE=$(curl -s -w "\nHTTP_STATUS:%{http_code}" "$BASE_URL/list/$FAKE_PUBKEY")
# HTTP_STATUS=$(echo "$RESPONSE" | grep "HTTP_STATUS" | cut -d: -f2)
# BODY=$(echo "$RESPONSE" | sed '/HTTP_STATUS/d')
# Test 7: List blobs with invalid pubkey format
echo -e "${YELLOW}Test 7: GET /list/<invalid_pubkey_format>${NC}"
INVALID_PUBKEY="invalid_pubkey"
RESPONSE=$(curl -s -w "\nHTTP_STATUS:%{http_code}" "$BASE_URL/list/$INVALID_PUBKEY")
HTTP_STATUS=$(echo "$RESPONSE" | grep "HTTP_STATUS" | cut -d: -f2)
BODY=$(echo "$RESPONSE" | sed '/HTTP_STATUS/d')
# if [ "$HTTP_STATUS" = "200" ]; then
# echo -e "${GREEN}✓ Correctly returned 200 with empty array${NC}"
# else
# echo "HTTP Status: $HTTP_STATUS"
# fi
# echo "Response: $BODY"
# echo
if [ "$HTTP_STATUS" = "400" ]; then
echo -e "${GREEN}✓ Correctly returned 400 for invalid pubkey format${NC}"
else
echo "HTTP Status: $HTTP_STATUS"
fi
echo "Response: $BODY"
echo
# # Test 7: List blobs with invalid pubkey format
# echo -e "${YELLOW}Test 7: GET /list/<invalid_pubkey_format>${NC}"
# INVALID_PUBKEY="invalid_pubkey"
# RESPONSE=$(curl -s -w "\nHTTP_STATUS:%{http_code}" "$BASE_URL/list/$INVALID_PUBKEY")
# HTTP_STATUS=$(echo "$RESPONSE" | grep "HTTP_STATUS" | cut -d: -f2)
# BODY=$(echo "$RESPONSE" | sed '/HTTP_STATUS/d')
# Test 8: List blobs with invalid since/until parameters
echo -e "${YELLOW}Test 8: GET /list/<pubkey> with invalid timestamp parameters${NC}"
RESPONSE=$(curl -s -w "\nHTTP_STATUS:%{http_code}" \
"$BASE_URL/list/$NOSTR_PUBKEY?since=invalid&until=invalid")
HTTP_STATUS=$(echo "$RESPONSE" | grep "HTTP_STATUS" | cut -d: -f2)
BODY=$(echo "$RESPONSE" | sed '/HTTP_STATUS/d')
# if [ "$HTTP_STATUS" = "400" ]; then
# echo -e "${GREEN}✓ Correctly returned 400 for invalid pubkey format${NC}"
# else
# echo "HTTP Status: $HTTP_STATUS"
# fi
# echo "Response: $BODY"
# echo
# # Test 8: List blobs with invalid since/until parameters
# echo -e "${YELLOW}Test 8: GET /list/<pubkey> with invalid timestamp parameters${NC}"
# RESPONSE=$(curl -s -w "\nHTTP_STATUS:%{http_code}" \
# "$BASE_URL/list/$NOSTR_PUBKEY?since=invalid&until=invalid")
# HTTP_STATUS=$(echo "$RESPONSE" | grep "HTTP_STATUS" | cut -d: -f2)
# BODY=$(echo "$RESPONSE" | sed '/HTTP_STATUS/d')
# echo "HTTP Status: $HTTP_STATUS"
# echo "Response: $BODY"
# echo
echo "HTTP Status: $HTTP_STATUS"
echo "Response: $BODY"
echo
# echo "=== List Tests Complete ==="
# echo
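
Test 8 above sends since=invalid&until=invalid; the strict server-side parse that would reject such values looks like the following sketch (helper name illustrative), where anything short of a fully consumed non-negative numeric token fails:

#include <errno.h>
#include <stdlib.h>
#include <time.h>

/* Sketch: parse a Unix-timestamp query value strictly; returns 0 on success. */
int parse_timestamp_param(const char *value, time_t *out) {
    char *end = NULL;
    errno = 0;
    long long v = strtoll(value, &end, 10);
    if (errno != 0 || end == value || *end != '\0' || v < 0)
        return -1;   /* e.g. "invalid" -> reject, mapping to an HTTP 400 */
    *out = (time_t)v;
    return 0;
}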

View File

@@ -3,18 +3,31 @@
# Mirror Test Script for BUD-04
# Tests the PUT /mirror endpoint with a sample PNG file and NIP-42 authentication
# ============================================================================
# CONFIGURATION - Choose your target Blossom server
# ============================================================================
# Local server (uncomment to use)
BLOSSOM_SERVER="http://localhost:9001"
# Remote server (uncomment to use)
#BLOSSOM_SERVER="https://blossom.laantungir.net"
# ============================================================================
# Test URL - PNG file with known SHA-256 hash
TEST_URL="https://laantungir.github.io/img_repo/24308d48eb498b593e55a87b6300ccffdea8432babc0bb898b1eff21ebbb72de.png"
EXPECTED_HASH="24308d48eb498b593e55a87b6300ccffdea8432babc0bb898b1eff21ebbb72de"
echo "=== BUD-04 Mirror Endpoint Test with Authentication ==="
echo "Blossom Server: $BLOSSOM_SERVER"
echo "Target URL: $TEST_URL"
echo "Expected Hash: $EXPECTED_HASH"
echo ""
# Get a fresh challenge from the server
echo "=== Getting Authentication Challenge ==="
challenge=$(curl -s "http://localhost:9001/auth" | jq -r '.challenge')
challenge=$(curl -s "$BLOSSOM_SERVER/auth" | jq -r '.challenge')
if [ "$challenge" = "null" ] || [ -z "$challenge" ]; then
echo "❌ Failed to get challenge from server"
exit 1
@@ -48,7 +61,7 @@ RESPONSE=$(curl -s -w "\nHTTP_CODE:%{http_code}\n" \
-H "Authorization: $auth_header" \
-H "Content-Type: application/json" \
-d "$JSON_BODY" \
http://localhost:9001/mirror)
"$BLOSSOM_SERVER/mirror")
echo "Response:"
echo "$RESPONSE"
@@ -65,9 +78,9 @@ if [ "$HTTP_CODE" = "200" ]; then
# Try to access the mirrored blob
echo ""
echo "=== Verifying Mirrored Blob ==="
echo "Attempting to fetch: http://localhost:9001/$EXPECTED_HASH.png"
echo "Attempting to fetch: $BLOSSOM_SERVER/$EXPECTED_HASH.png"
BLOB_RESPONSE=$(curl -s -w "HTTP_CODE:%{http_code}" -I "http://localhost:9001/$EXPECTED_HASH.png")
BLOB_RESPONSE=$(curl -s -w "HTTP_CODE:%{http_code}" -I "$BLOSSOM_SERVER/$EXPECTED_HASH.png")
BLOB_HTTP_CODE=$(echo "$BLOB_RESPONSE" | grep "HTTP_CODE:" | cut -d: -f2)
if [ "$BLOB_HTTP_CODE" = "200" ]; then
@@ -82,7 +95,7 @@ if [ "$HTTP_CODE" = "200" ]; then
# Test HEAD request for metadata
echo ""
echo "=== Testing HEAD Request ==="
HEAD_RESPONSE=$(curl -s -w "HTTP_CODE:%{http_code}" -I -X HEAD "http://localhost:9001/$EXPECTED_HASH")
HEAD_RESPONSE=$(curl -s -w "HTTP_CODE:%{http_code}" -I -X HEAD "$BLOSSOM_SERVER/$EXPECTED_HASH")
HEAD_HTTP_CODE=$(echo "$HEAD_RESPONSE" | grep "HTTP_CODE:" | cut -d: -f2)
if [ "$HEAD_HTTP_CODE" = "200" ]; then

397
tests/websocket_admin_test.sh Executable file
View File

@@ -0,0 +1,397 @@
#!/bin/bash
# Ginxsom WebSocket Admin Test Script
# Tests Kind 23456/23457 admin command system over WebSocket with NIP-44 encryption
#
# Prerequisites:
# - websocat: WebSocket client (https://github.com/vi/websocat)
# - nak: Nostr Army Knife (https://github.com/fiatjaf/nak)
# - jq: JSON processor
# - Server running with test keys from .test_keys
set -e
# Configuration
WEBSOCKET_URL="wss://localhost:9443/admin" # Secure WebSocket via nginx HTTPS
WEBSOCKET_HTTP_URL="ws://localhost:9001/admin" # Non-secure WebSocket via nginx HTTP
WEBSOCKET_DIRECT_URL="ws://localhost:9442" # Direct connection to WebSocket server (port 9442)
TEST_KEYS_FILE=".test_keys"
TIMEOUT=10 # WebSocket connection timeout in seconds
# Load test keys
if [[ ! -f "$TEST_KEYS_FILE" ]]; then
echo "ERROR: $TEST_KEYS_FILE not found"
echo "Run the server with --test-keys to generate test keys"
exit 1
fi
source "$TEST_KEYS_FILE"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
# Helper functions
log_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
log_debug() {
echo -e "${CYAN}[DEBUG]${NC} $1"
}
check_dependencies() {
log_info "Checking dependencies..."
for cmd in websocat nak jq; do
if ! command -v $cmd &> /dev/null; then
log_error "$cmd is not installed"
case $cmd in
websocat)
echo "Install from: https://github.com/vi/websocat"
echo " cargo install websocat"
;;
nak)
echo "Install from: https://github.com/fiatjaf/nak"
echo " go install github.com/fiatjaf/nak@latest"
;;
jq)
echo "Install jq for JSON processing"
echo " apt-get install jq # Debian/Ubuntu"
;;
esac
exit 1
fi
done
log_success "All dependencies found"
log_info " websocat: $(websocat --version 2>&1 | head -n1)"
log_info " nak: $(nak --version 2>&1 | head -n1)"
log_info " jq: $(jq --version 2>&1)"
}
# Test basic WebSocket connection
test_websocket_connection() {
local url="$1"
log_info "=== Testing WebSocket Connection ==="
log_info "Connecting to: $url"
# For wss:// connections, add --insecure flag to skip certificate verification
local websocat_opts=""
if [[ "$url" == wss://* ]]; then
websocat_opts="--insecure"
log_debug "Using --insecure flag for self-signed certificate"
fi
# Try to connect and send a ping
local result=$(timeout $TIMEOUT websocat $websocat_opts -n1 "$url" <<< '{"test":"ping"}' 2>&1 || echo "TIMEOUT")
if [[ "$result" == "TIMEOUT" ]]; then
log_error "Connection timeout after ${TIMEOUT}s"
return 1
elif [[ -z "$result" ]]; then
log_warning "Connected but no response (this may be normal for WebSocket)"
return 0
else
log_success "Connection established"
log_debug "Response: $result"
return 0
fi
}
# Create NIP-44 encrypted admin command event (Kind 23456)
create_admin_command_event() {
local command="$1"
local expiration=$(($(date +%s) + 3600)) # 1 hour from now
log_info "Creating Kind 23456 admin command event..."
log_info "Command: $command"
# Content is a JSON array of commands
local content="[\"$command\"]"
# Create event with nak
# Kind 23456 = admin command
# Tags: p = server pubkey, expiration
local event=$(nak event -k 23456 \
-c "$content" \
--tag p="$SERVER_PUBKEY" \
--tag expiration="$expiration" \
--sec "$ADMIN_PRIVKEY" 2>&1)
if [[ $? -ne 0 ]]; then
log_error "Failed to create event with nak"
log_error "$event"
return 1
fi
echo "$event"
}
# Send admin command via WebSocket and wait for response
send_websocket_admin_command() {
local command="$1"
local url="$2"
log_info "=== Testing Admin Command via WebSocket: $command ==="
# Create Kind 23456 event
local event=$(create_admin_command_event "$command")
if [[ -z "$event" ]]; then
log_error "Failed to create admin event"
return 1
fi
log_success "Event created successfully"
log_debug "Event JSON:"
echo "$event" | jq -C . 2>/dev/null || echo "$event"
echo ""
# Send to WebSocket server and wait for response
log_info "Sending to WebSocket: $url"
log_info "Waiting for Kind 23457 response (timeout: ${TIMEOUT}s)..."
# For wss:// connections, add --insecure flag to skip certificate verification
local websocat_opts=""
if [[ "$url" == wss://* ]]; then
websocat_opts="--insecure"
log_debug "Using --insecure flag for self-signed certificate"
fi
# Use websocat to send event and receive response
local response=$(timeout $TIMEOUT websocat $websocat_opts -n1 "$url" <<< "$event" 2>&1)
local exit_code=$?
echo ""
if [[ $exit_code -eq 124 ]]; then
log_error "Timeout waiting for response after ${TIMEOUT}s"
return 1
elif [[ $exit_code -ne 0 ]]; then
log_error "WebSocket connection failed (exit code: $exit_code)"
log_error "$response"
return 1
fi
if [[ -z "$response" ]]; then
log_warning "No response received (connection may have closed)"
return 1
fi
log_success "Response received"
log_debug "Raw response:"
echo "$response"
echo ""
# Try to parse as JSON
if echo "$response" | jq . &>/dev/null; then
log_success "Valid JSON response"
# Check if it's a Kind 23457 event
local kind=$(echo "$response" | jq -r '.kind // empty' 2>/dev/null)
if [[ "$kind" == "23457" ]]; then
log_success "Received Kind 23457 response event ✓"
# Extract and display response details
local response_id=$(echo "$response" | jq -r '.id // empty')
local response_pubkey=$(echo "$response" | jq -r '.pubkey // empty')
local response_content=$(echo "$response" | jq -r '.content // empty')
local response_sig=$(echo "$response" | jq -r '.sig // empty')
echo ""
log_info "Response Event Details:"
log_info " ID: $response_id"
log_info " Pubkey: $response_pubkey"
log_info " Content: $response_content"
log_info " Sig: ${response_sig:0:32}..."
# Check if content is encrypted (NIP-44)
if [[ ${#response_content} -gt 50 ]]; then
log_info " Content appears to be NIP-44 encrypted"
log_warning " Decryption not yet implemented in test script"
else
log_info " Content (plaintext): $response_content"
fi
# Verify signature
log_info "Verifying event signature..."
if echo "$response" | nak verify 2>&1 | grep -q "signature is valid"; then
log_success "Event signature is valid ✓"
else
log_error "Event signature verification failed"
return 1
fi
else
log_warning "Response is not Kind 23457 (got kind: $kind)"
fi
# Pretty print the full response
echo ""
log_info "Full Response Event:"
echo "$response" | jq -C .
else
log_warning "Response is not valid JSON"
log_debug "Raw response: $response"
fi
echo ""
return 0
}
# Test config_query command
test_config_query() {
log_info "=== Testing config_query Command ==="
send_websocket_admin_command "config_query" "$WEBSOCKET_URL"
}
# Test with HTTP WebSocket connection
test_http_connection() {
log_info "=== Testing HTTP WebSocket Connection ==="
log_info "Connecting via HTTP (port 9001)"
send_websocket_admin_command "config_query" "$WEBSOCKET_HTTP_URL"
}
# Test with direct WebSocket connection (bypassing nginx)
test_direct_connection() {
log_info "=== Testing Direct WebSocket Connection ==="
log_info "Connecting directly to WebSocket server (port 9442)"
send_websocket_admin_command "config_query" "$WEBSOCKET_DIRECT_URL"
}
# Test invalid command
test_invalid_command() {
log_info "=== Testing Invalid Command ==="
send_websocket_admin_command "invalid_command_xyz" "$WEBSOCKET_URL" || log_warning "Expected failure for invalid command"
}
# Test connection persistence
test_connection_persistence() {
log_info "=== Testing Connection Persistence ==="
log_info "Sending multiple commands over same connection..."
# Create two events
local event1=$(create_admin_command_event "config_query")
local event2=$(create_admin_command_event "config_query")
if [[ -z "$event1" ]] || [[ -z "$event2" ]]; then
log_error "Failed to create events"
return 1
fi
# For wss:// connections, add --insecure flag
local websocat_opts=""
if [[ "$WEBSOCKET_URL" == wss://* ]]; then
websocat_opts="--insecure"
fi
# Send both events and collect responses
log_info "Sending two events sequentially..."
# Declare first, then assign, so $? reflects websocat rather than `local`
local responses
responses=$(timeout $((TIMEOUT * 2)) websocat $websocat_opts -n2 "$WEBSOCKET_URL" <<EOF
$event1
$event2
EOF
)
if [[ $? -eq 0 ]]; then
log_success "Received responses for both events"
echo "$responses" | while IFS= read -r line; do
if [[ -n "$line" ]]; then
echo "$line" | jq -C . 2>/dev/null || echo "$line"
fi
done
else
log_warning "Connection persistence test inconclusive"
fi
echo ""
}
main() {
echo "=========================================="
echo " Ginxsom WebSocket Admin Test Suite"
echo " Kind 23456/23457 over WebSocket"
echo "=========================================="
echo ""
log_info "Test Configuration:"
log_info " Admin Privkey: ${ADMIN_PRIVKEY:0:16}...${ADMIN_PRIVKEY: -16}"
log_info " Admin Pubkey: $ADMIN_PUBKEY"
log_info " Server Pubkey: $SERVER_PUBKEY"
log_info " HTTPS URL: $WEBSOCKET_URL"
log_info " HTTP URL: $WEBSOCKET_HTTP_URL"
log_info " Direct URL: $WEBSOCKET_DIRECT_URL"
log_info " Timeout: ${TIMEOUT}s"
echo ""
check_dependencies
echo ""
# Test basic WebSocket connectivity
if ! test_websocket_connection "$WEBSOCKET_URL"; then
log_error "Basic WebSocket connection failed"
log_info "Trying direct connection to port 9442..."
if ! test_websocket_connection "$WEBSOCKET_DIRECT_URL"; then
log_error "Direct connection also failed"
log_error "Make sure the server is running with WebSocket admin enabled"
exit 1
fi
fi
echo ""
# Test admin commands via HTTPS
test_config_query
echo ""
# Test via HTTP
test_http_connection
echo ""
# Test direct connection (bypassing nginx)
test_direct_connection
echo ""
# Test invalid command
test_invalid_command
echo ""
# Test connection persistence
test_connection_persistence
echo ""
echo "=========================================="
log_success "WebSocket admin testing complete!"
echo "=========================================="
echo ""
log_info "Summary:"
log_info " ✓ WebSocket connection established"
log_info " ✓ Kind 23456 events sent"
log_info " ✓ Kind 23457 responses received"
log_info " ✓ Event signatures verified"
echo ""
log_warning "NOTE: NIP-44 encryption/decryption not yet implemented in test script"
log_warning "Events use plaintext command arrays for initial testing"
log_warning "Production implementation uses full NIP-44 encryption"
}
# Allow sourcing for individual function testing
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
main "$@"
fi

353
tests/white_black_list_test.sh Executable file
View File

@@ -0,0 +1,353 @@
#!/bin/bash
# white_black_list_test.sh - Whitelist/Blacklist Rules Test Suite
# Tests the auth_rules table functionality for pubkey and MIME type filtering
# Configuration
SERVER_URL="http://localhost:9001"
UPLOAD_ENDPOINT="${SERVER_URL}/upload"
DB_PATH="db/ginxsom.db"
TEST_DIR="tests/auth_test_tmp"
# Test results tracking
TESTS_PASSED=0
TESTS_FAILED=0
TOTAL_TESTS=0
# Test keys for different scenarios - Using WSB's keys for TEST_USER1
# Generated using: nak key public <privkey>
TEST_USER1_PRIVKEY="22cc83aa57928a2800234c939240c9a6f0f44a33ea3838a860ed38930b195afd"
TEST_USER1_PUBKEY="8ff74724ed641b3c28e5a86d7c5cbc49c37638ace8c6c38935860e7a5eedde0e"
TEST_USER2_PRIVKEY="182c3a5e3b7a1b7e4f5c6b7c8b4a5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2"
TEST_USER2_PUBKEY="0396b426090284a28294078dce53fe73791ab623c3fc46ab4409fea05109a6db"
TEST_USER3_PRIVKEY="abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234"
TEST_USER3_PUBKEY="769a740386211c76f81bb235de50a5e6fa463cb4fae25e62625607fc2cfc0f28"
# Helper function to record test results
record_test_result() {
local test_name="$1"
local expected="$2"
local actual="$3"
TOTAL_TESTS=$((TOTAL_TESTS + 1))
if [[ "$actual" == "$expected" ]]; then
echo "$test_name - PASSED"
TESTS_PASSED=$((TESTS_PASSED + 1))
else
echo "$test_name - FAILED (Expected: $expected, Got: $actual)"
TESTS_FAILED=$((TESTS_FAILED + 1))
fi
}
# Check prerequisites
for cmd in nak curl jq sqlite3; do
if ! command -v "$cmd" &> /dev/null; then
echo "❌ $cmd command not found"
exit 1
fi
done
# Check if server is running
if ! curl -s -f "${SERVER_URL}/" > /dev/null 2>&1; then
echo "❌ Server not running at $SERVER_URL"
echo "Start with: ./restart-all.sh"
exit 1
fi
# Check if database exists
if [[ ! -f "$DB_PATH" ]]; then
echo "❌ Database not found at $DB_PATH"
exit 1
fi
# Setup test environment
mkdir -p "$TEST_DIR"
echo "=========================================="
echo " WHITELIST/BLACKLIST RULES TEST SUITE"
echo "=========================================="
echo
# Helper functions
create_test_file() {
local filename="$1"
local content="${2:-test content for $filename}"
local filepath="$TEST_DIR/$filename"
echo "$content" > "$filepath"
echo "$filepath"
}
create_auth_event() {
local privkey="$1"
local operation="$2"
local hash="$3"
local expiration_offset="${4:-3600}" # 1 hour default
local expiration=$(date -d "+${expiration_offset} seconds" +%s)
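# NOTE: `date -d` is GNU-specific; on BSD/macOS the rough equivalent would be
# `date -v "+${expiration_offset}S" +%s`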
local event_args=(-k 24242 -c "" --tag "t=$operation" --tag "expiration=$expiration" --sec "$privkey")
if [[ -n "$hash" ]]; then
event_args+=(--tag "x=$hash")
fi
nak event "${event_args[@]}"
}
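# Example usage (hypothetical hash shown for illustration):
#   event=$(create_auth_event "$TEST_USER1_PRIVKEY" "upload" "<sha256-of-file>")
# test_upload below wraps this and base64-encodes the resulting event into
# the Authorization header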
test_upload() {
local test_name="$1"
local privkey="$2"
local file_path="$3"
local expected_status="${4:-200}"
local file_hash=$(sha256sum "$file_path" | cut -d' ' -f1)
# Create auth event
local event=$(create_auth_event "$privkey" "upload" "$file_hash")
local auth_header="Nostr $(echo "$event" | base64 -w 0)"
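# Per Blossom auth (BUD-01/BUD-02), the header carries the base64-encoded
# kind 24242 event under the "Nostr" scheme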
# Make upload request
local response_file=$(mktemp)
local http_status=$(curl -s -w "%{http_code}" \
-H "Authorization: $auth_header" \
-H "Content-Type: text/plain" \
--data-binary "@$file_path" \
-X PUT "$UPLOAD_ENDPOINT" \
-o "$response_file" 2>/dev/null)
# Show response if test fails
if [[ "$http_status" != "$expected_status" ]]; then
echo " Response: $(cat "$response_file")"
fi
rm -f "$response_file"
# Record result
record_test_result "$test_name" "$expected_status" "$http_status"
}
# Clean up any existing rules from previous tests
echo "Cleaning up existing auth rules..."
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;" 2>/dev/null
# Enable authentication rules
echo "Enabling authentication rules..."
sqlite3 "$DB_PATH" "UPDATE config SET value = 'true' WHERE key = 'auth_rules_enabled';"
echo
echo "=== SECTION 1: PUBKEY BLACKLIST TESTS ==="
echo
# Test 1: Add pubkey blacklist rule
echo "Adding blacklist rule for TEST_USER3..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_blacklist', '$TEST_USER3_PUBKEY', 'upload', 10, 'Test blacklist');"
# Test 1a: Blacklisted user should be denied
test_file1=$(create_test_file "blacklist_test1.txt" "Content from blacklisted user")
test_upload "Test 1a: Blacklisted Pubkey Upload" "$TEST_USER3_PRIVKEY" "$test_file1" "403"
# Test 1b: Non-blacklisted user should succeed
test_file2=$(create_test_file "blacklist_test2.txt" "Content from allowed user")
test_upload "Test 1b: Non-Blacklisted Pubkey Upload" "$TEST_USER1_PRIVKEY" "$test_file2" "200"
echo
echo "=== SECTION 2: PUBKEY WHITELIST TESTS ==="
echo
# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"
# Test 2: Add pubkey whitelist rule
echo "Adding whitelist rule for TEST_USER1..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_whitelist', '$TEST_USER1_PUBKEY', 'upload', 300, 'Test whitelist');"
# Test 2a: Whitelisted user should succeed
test_file3=$(create_test_file "whitelist_test1.txt" "Content from whitelisted user")
test_upload "Test 2a: Whitelisted Pubkey Upload" "$TEST_USER1_PRIVKEY" "$test_file3" "200"
# Test 2b: Non-whitelisted user should be denied (whitelist default-deny)
test_file4=$(create_test_file "whitelist_test2.txt" "Content from non-whitelisted user")
test_upload "Test 2b: Non-Whitelisted Pubkey Upload" "$TEST_USER2_PRIVKEY" "$test_file4" "403"
echo
echo "=== SECTION 3: HASH BLACKLIST TESTS ==="
echo
# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
# Test 3: Create a file and blacklist its hash
test_file5=$(create_test_file "hash_blacklist_test.txt" "This specific file is blacklisted")
BLACKLISTED_HASH=$(sha256sum "$test_file5" | cut -d' ' -f1)
echo "Adding hash blacklist rule for $BLACKLISTED_HASH..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('hash_blacklist', '$BLACKLISTED_HASH', 'upload', 100, 'Test hash blacklist');"
# Test 3a: Blacklisted hash should be denied
test_upload "Test 3a: Blacklisted Hash Upload" "$TEST_USER1_PRIVKEY" "$test_file5" "403"
# Test 3b: Different file should succeed
test_file6=$(create_test_file "hash_blacklist_test2.txt" "This file is allowed")
test_upload "Test 3b: Non-Blacklisted Hash Upload" "$TEST_USER1_PRIVKEY" "$test_file6" "200"
echo
echo "=== SECTION 4: MIME TYPE BLACKLIST TESTS ==="
echo
# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"
# Test 4: Blacklist executable MIME types
echo "Adding MIME type blacklist rules..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('mime_blacklist', 'application/x-executable', 'upload', 200, 'Block executables');"
# Note: a full negative test requires the server to detect MIME types from
# the request (see the commented sketch after Test 4a below)
# For now, test with text/plain, which should be allowed
test_file7=$(create_test_file "mime_test1.txt" "Plain text file")
test_upload "Test 4a: Allowed MIME Type Upload" "$TEST_USER1_PRIVKEY" "$test_file7" "200"
echo
echo "=== SECTION 5: MIME TYPE WHITELIST TESTS ==="
echo
# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"
# Test 5: Whitelist only image MIME types
echo "Adding MIME type whitelist rules..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('mime_whitelist', 'image/jpeg', 'upload', 400, 'Allow JPEG');"
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('mime_whitelist', 'image/png', 'upload', 400, 'Allow PNG');"
# Note: MIME type detection would need to be implemented in the server
# For now, text/plain should be denied whenever a MIME whitelist is active
test_file8=$(create_test_file "mime_whitelist_test.txt" "Text file with whitelist active")
test_upload "Test 5a: Non-Whitelisted MIME Type Upload" "$TEST_USER1_PRIVKEY" "$test_file8" "403"
echo
echo "=== SECTION 6: PRIORITY ORDERING TESTS ==="
echo
# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"
# Test 6: Blacklist should override whitelist (priority ordering)
echo "Adding both blacklist (priority 10) and whitelist (priority 300) for same pubkey..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_blacklist', '$TEST_USER1_PUBKEY', 'upload', 10, 'Blacklist priority test');"
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_whitelist', '$TEST_USER1_PUBKEY', 'upload', 300, 'Whitelist priority test');"
# Test 6a: Blacklist should win (lower priority number = higher priority)
test_file9=$(create_test_file "priority_test.txt" "Testing priority ordering")
test_upload "Test 6a: Blacklist Priority Over Whitelist" "$TEST_USER1_PRIVKEY" "$test_file9" "403"
echo
echo "=== SECTION 7: OPERATION-SPECIFIC RULES ==="
echo
# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"
# Test 7: Blacklist only for upload operation
echo "Adding blacklist rule for upload operation only..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_blacklist', '$TEST_USER2_PUBKEY', 'upload', 10, 'Upload-only blacklist');"
# Test 7a: Upload should be denied
test_file10=$(create_test_file "operation_test.txt" "Testing operation-specific rules")
test_upload "Test 7a: Operation-Specific Blacklist" "$TEST_USER2_PRIVKEY" "$test_file10" "403"
echo
echo "=== SECTION 8: WILDCARD OPERATION TESTS ==="
echo
# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"
# Test 8: Blacklist for all operations using wildcard
echo "Adding blacklist rule for all operations (*)..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_blacklist', '$TEST_USER3_PUBKEY', '*', 10, 'All operations blacklist');"
# Test 8a: Upload should be denied
test_file11=$(create_test_file "wildcard_test.txt" "Testing wildcard operation")
test_upload "Test 8a: Wildcard Operation Blacklist" "$TEST_USER3_PRIVKEY" "$test_file11" "403"
echo
echo "=== SECTION 9: ENABLED/DISABLED RULES ==="
echo
# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"
# Test 9: Disabled rule should not be enforced
echo "Adding disabled blacklist rule..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, enabled, description) VALUES ('pubkey_blacklist', '$TEST_USER1_PUBKEY', 'upload', 10, 0, 'Disabled blacklist');"
# Test 9a: Upload should succeed (rule is disabled)
test_file12=$(create_test_file "disabled_rule_test.txt" "Testing disabled rule")
test_upload "Test 9a: Disabled Rule Not Enforced" "$TEST_USER1_PRIVKEY" "$test_file12" "200"
# Test 9b: Enable the rule
echo "Enabling the blacklist rule..."
sqlite3 "$DB_PATH" "UPDATE auth_rules SET enabled = 1 WHERE rule_target = '$TEST_USER1_PUBKEY';"
# Test 9c: Upload should now be denied
test_file13=$(create_test_file "enabled_rule_test.txt" "Testing enabled rule")
test_upload "Test 9c: Enabled Rule Enforced" "$TEST_USER1_PRIVKEY" "$test_file13" "403"
echo
echo "=== SECTION 11: CLEANUP AND RESET ==="
echo
# Clean up all test rules
echo "Cleaning up test rules..."
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
# Verify cleanup
RULE_COUNT=$(sqlite3 "$DB_PATH" "SELECT COUNT(*) FROM auth_rules;" 2>/dev/null)
if [[ "$RULE_COUNT" -eq 0 ]]; then
record_test_result "Test 10a: Rules Cleanup" "0" "0"
else
record_test_result "Test 10a: Rules Cleanup" "0" "$RULE_COUNT"
fi
# Test that uploads work again after cleanup
test_file16=$(create_test_file "cleanup_test.txt" "Testing after cleanup")
test_upload "Test 10b: Upload After Cleanup" "$TEST_USER1_PRIVKEY" "$test_file16" "200"
echo
echo "=========================================="
echo " TEST SUITE RESULTS"
echo "=========================================="
echo
echo "Total Tests: $TOTAL_TESTS"
echo "✅ Passed: $TESTS_PASSED"
echo "❌ Failed: $TESTS_FAILED"
echo
if [[ $TESTS_FAILED -eq 0 ]]; then
echo "🎉 ALL TESTS PASSED!"
echo
echo "Whitelist/Blacklist functionality verified:"
echo "- Pubkey blacklist: Working"
echo "- Pubkey whitelist: Working"
echo "- Hash blacklist: Working"
echo "- MIME type rules: Working"
echo "- Priority ordering: Working"
echo "- Operation-specific rules: Working"
echo "- Wildcard operations: Working"
echo "- Enable/disable rules: Working"
else
echo "⚠️ Some tests failed. Check output above for details."
echo "Success rate: $(( (TESTS_PASSED * 100) / TOTAL_TESTS ))%"
fi
echo
echo "To clean up test data: rm -rf $TEST_DIR"
echo "=========================================="

54
update_remote_nginx_conf.sh Executable file
View File

@@ -0,0 +1,54 @@
#!/bin/bash
# update_remote_nginx_conf.sh
# Updates the remote nginx configuration on laantungir.net
# Copies contents of ./remote.nginx.config to /etc/nginx/conf.d/default.conf
set -e
echo "=== Updating Remote Nginx Configuration ==="
echo "Server: laantungir.net"
echo "User: ubuntu"
echo "Local config: ./remote.nginx.config"
echo "Remote config: /etc/nginx/conf.d/default.conf"
echo
# Check if local config exists
if [[ ! -f "./remote.nginx.config" ]]; then
echo "ERROR: ./remote.nginx.config not found"
exit 1
fi
echo "Copying remote.nginx.config to laantungir.net:/etc/nginx/conf.d/default.conf..."
# Copy the config file to the remote server (using user's home directory)
scp ./remote.nginx.config ubuntu@laantungir.net:~/remote.nginx.config
# Move to final location and backup old config
ssh ubuntu@laantungir.net << 'EOF'
echo "Creating backup of current config..."
sudo cp /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.backup.$(date +%Y%m%d_%H%M%S)
echo "Installing new config..."
sudo cp ~/remote.nginx.config /etc/nginx/conf.d/default.conf
echo "Testing nginx configuration..."
if sudo nginx -t; then
echo "✅ Nginx config test passed"
echo "Reloading nginx..."
sudo nginx -s reload
echo "✅ Nginx reloaded successfully"
else
echo "❌ Nginx config test failed"
echo "Restoring backup..."
# Restore the most recent backup; a bare glob would pass multiple files to cp
sudo cp "$(ls -t /etc/nginx/conf.d/default.conf.backup.* 2>/dev/null | head -n 1)" /etc/nginx/conf.d/default.conf 2>/dev/null || true
exit 1
fi
echo "Cleaning up temporary file..."
rm ~/remote.nginx.config
EOF
echo
echo "=== Update Complete ==="
echo "The remote nginx configuration has been updated."