Compare commits
7 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 455aab1eac | |
| | 533c7f29f2 | |
| | 35f8385508 | |
| | fe2495f897 | |
| | 30e4408b28 | |
| | e43dd5c64f | |
| | bb18ffcdce | |

42.md (109 lines deleted)
@@ -1,109 +0,0 @@

NIP-42
======

Authentication of clients to relays
-----------------------------------

`draft` `optional`

This NIP defines a way for clients to authenticate to relays by signing an ephemeral event.

## Motivation

A relay may want to require clients to authenticate to access restricted resources. For example,

- A relay may request payment or other forms of whitelisting to publish events -- this can naïvely be achieved by limiting publication to events signed by the whitelisted key, but with this NIP they may choose to accept any events as long as they are published from an authenticated user;
- A relay may limit access to `kind: 4` DMs to only the parties involved in the chat exchange, and for that it may require authentication before clients can query for that kind;
- A relay may limit subscriptions of any kind to paying users or users whitelisted through any other means, and require authentication.

## Definitions

### New client-relay protocol messages

This NIP defines a new message, `AUTH`, which relays CAN send when they support authentication and clients can send to relays when they want to authenticate. When sent by relays the message has the following form:

```
["AUTH", <challenge-string>]
```

And, when sent by clients, the following form:

```
["AUTH", <signed-event-json>]
```

Clients MAY provide signed events from multiple pubkeys in a sequence of `AUTH` messages. Relays MUST treat all pubkeys as authenticated accordingly.

`AUTH` messages sent by clients MUST be answered with an `OK` message, like any `EVENT` message.

### Canonical authentication event

The signed event is an ephemeral event not meant to be published or queried; it must be of `kind: 22242` and it should have at least two tags, one for the relay URL and one for the challenge string as received from the relay. Relays MUST exclude `kind: 22242` events from being broadcasted to any client. `created_at` should be the current time. Example:

```jsonc
{
  "kind": 22242,
  "tags": [
    ["relay", "wss://relay.example.com/"],
    ["challenge", "challengestringhere"]
  ],
  // other fields...
}
```
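As an illustration only (this construction is not part of the NIP), assembling such an event with the cJSON library might look like the sketch below; the `id`, `pubkey`, and `sig` fields are assumed to be filled in by a separate event-signing routine.

```c
// Sketch: building the kind 22242 AUTH event payload with cJSON.
// id/pubkey/sig are assumed to be added by a separate signing step.
#include <time.h>
#include "cJSON.h"

static cJSON *build_auth_event(const char *relay_url, const char *challenge) {
    cJSON *event = cJSON_CreateObject();
    cJSON_AddNumberToObject(event, "kind", 22242);
    cJSON_AddNumberToObject(event, "created_at", (double)time(NULL));
    cJSON_AddStringToObject(event, "content", "");

    cJSON *tags = cJSON_AddArrayToObject(event, "tags");

    cJSON *relay_tag = cJSON_CreateArray();
    cJSON_AddItemToArray(relay_tag, cJSON_CreateString("relay"));
    cJSON_AddItemToArray(relay_tag, cJSON_CreateString(relay_url));
    cJSON_AddItemToArray(tags, relay_tag);

    cJSON *challenge_tag = cJSON_CreateArray();
    cJSON_AddItemToArray(challenge_tag, cJSON_CreateString("challenge"));
    cJSON_AddItemToArray(challenge_tag, cJSON_CreateString(challenge));
    cJSON_AddItemToArray(tags, challenge_tag);

    return event; // caller signs it and sends ["AUTH", <event>]
}
```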
### `OK` and `CLOSED` machine-readable prefixes

This NIP defines two new prefixes that can be used in `OK` (in response to event writes by clients) and `CLOSED` (in response to rejected subscriptions by clients):

- `"auth-required: "` - for when a client has not performed `AUTH` and the relay requires that to fulfill the query or write the event.
- `"restricted: "` - for when a client has already performed `AUTH` but the key used to perform it is still not allowed by the relay or is exceeding its authorization.

## Protocol flow

At any moment the relay may send an `AUTH` message to the client containing a challenge. The challenge is valid for the duration of the connection or until another challenge is sent by the relay. The client MAY decide to send its `AUTH` event at any point and the authenticated session is valid afterwards for the duration of the connection.

### `auth-required` in response to a `REQ` message

Given that a relay is likely to require clients to perform authentication only for certain jobs, like answering a `REQ` or accepting an `EVENT` write, these are some expected common flows:

```
relay: ["AUTH", "<challenge>"]
client: ["REQ", "sub_1", {"kinds": [4]}]
relay: ["CLOSED", "sub_1", "auth-required: we can't serve DMs to unauthenticated users"]
client: ["AUTH", {"id": "abcdef...", ...}]
client: ["AUTH", {"id": "abcde2...", ...}]
relay: ["OK", "abcdef...", true, ""]
relay: ["OK", "abcde2...", true, ""]
client: ["REQ", "sub_1", {"kinds": [4]}]
relay: ["EVENT", "sub_1", {...}]
relay: ["EVENT", "sub_1", {...}]
relay: ["EVENT", "sub_1", {...}]
relay: ["EVENT", "sub_1", {...}]
...
```

In this case, the `AUTH` message from the relay could be sent right as the client connects or it can be sent immediately before the `CLOSED` is sent. The only requirement is that _the client must have a stored challenge associated with that relay_ so it can act upon that in response to the `auth-required` `CLOSED` message.

### `auth-required` in response to an `EVENT` message

The same flow is valid for when a client wants to write an `EVENT` to the relay, except now the relay sends back an `OK` message instead of a `CLOSED` message:

```
relay: ["AUTH", "<challenge>"]
client: ["EVENT", {"id": "012345...", ...}]
relay: ["OK", "012345...", false, "auth-required: we only accept events from registered users"]
client: ["AUTH", {"id": "abcdef...", ...}]
relay: ["OK", "abcdef...", true, ""]
client: ["EVENT", {"id": "012345...", ...}]
relay: ["OK", "012345...", true, ""]
```

## Signed Event Verification

To verify `AUTH` messages, relays must ensure:

- that the `kind` is `22242`;
- that the event `created_at` is close to the current time (e.g. within ~10 minutes);
- that the `"challenge"` tag matches the challenge sent before;
- that the `"relay"` tag matches the relay URL:
  - URL normalization techniques can be applied. For most cases just checking if the domain name is correct should be enough.
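A minimal sketch of those relay-side checks with cJSON follows; it is illustrative, not normative. It compares the challenge and relay values exactly (a real relay may normalize URLs) and assumes the usual event signature verification happens elsewhere.

```c
// Sketch: checking kind, created_at freshness, and the challenge/relay tags.
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "cJSON.h"

static int auth_event_ok(const cJSON *ev, const char *expected_challenge,
                         const char *relay_url) {
    const cJSON *kind = cJSON_GetObjectItemCaseSensitive(ev, "kind");
    const cJSON *created_at = cJSON_GetObjectItemCaseSensitive(ev, "created_at");
    if (!cJSON_IsNumber(kind) || kind->valueint != 22242) return 0;
    if (!cJSON_IsNumber(created_at) ||
        labs((long)time(NULL) - (long)created_at->valuedouble) > 600) return 0;

    int challenge_ok = 0, relay_ok = 0;
    const cJSON *tag = NULL;
    cJSON_ArrayForEach(tag, cJSON_GetObjectItemCaseSensitive(ev, "tags")) {
        const char *name = cJSON_GetStringValue(cJSON_GetArrayItem((cJSON *)tag, 0));
        const char *val  = cJSON_GetStringValue(cJSON_GetArrayItem((cJSON *)tag, 1));
        if (!name || !val) continue;
        if (!strcmp(name, "challenge") && !strcmp(val, expected_challenge)) challenge_ok = 1;
        if (!strcmp(name, "relay") && !strcmp(val, relay_url)) relay_ok = 1;
    }
    return challenge_ok && relay_ok; // signature check assumed elsewhere
}
```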
STATIC_MUSL_GUIDE.md (612 lines, new file)
@@ -0,0 +1,612 @@

# Static MUSL Build Guide for C Programs

## Overview

This guide explains how to build truly portable static binaries using Alpine Linux and MUSL libc. These binaries have **zero runtime dependencies** and work on any Linux distribution without modification.

This guide is specifically tailored for C programs that use:
- **nostr_core_lib** - Nostr protocol implementation
- **nostr_login_lite** - Nostr authentication library
- Common dependencies: libwebsockets, OpenSSL, SQLite, curl, secp256k1

## Why MUSL Static Binaries?

### Advantages Over glibc

| Feature | MUSL Static | glibc Static | glibc Dynamic |
|---------|-------------|--------------|---------------|
| **Portability** | ✓ Any Linux | ⚠ glibc only | ✗ Requires matching libs |
| **Binary Size** | ~7-10 MB | ~12-15 MB | ~2-3 MB |
| **Dependencies** | None | NSS libs | Many system libs |
| **Deployment** | Single file | Single file + NSS | Binary + libraries |
| **Compatibility** | Universal | glibc version issues | Library version hell |

### Key Benefits

1. **True Portability**: Works on Alpine, Ubuntu, Debian, CentOS, Arch, etc.
2. **No Library Hell**: No `GLIBC_2.XX not found` errors
3. **Simple Deployment**: Just copy one file
4. **Reproducible Builds**: Same Docker image = same binary
5. **Security**: No dependency on system libraries with vulnerabilities

## Quick Start

### Prerequisites

- Docker installed and running
- Your C project with source code
- Internet connection for downloading dependencies

### Basic Build Process

```bash
# 1. Copy the Dockerfile template (see below)
cp /path/to/c-relay/Dockerfile.alpine-musl ./Dockerfile.static

# 2. Customize for your project (see Customization section)
vim Dockerfile.static

# 3. Build the static binary
docker build --platform linux/amd64 -f Dockerfile.static -t my-app-builder .

# 4. Extract the binary
docker create --name temp-container my-app-builder
docker cp temp-container:/build/my_app_static ./my_app_static
docker rm temp-container

# 5. Verify it's static
ldd ./my_app_static  # Should show "not a dynamic executable"
```
## Dockerfile Template

Here's a complete Dockerfile template you can customize for your project:

```dockerfile
# Alpine-based MUSL static binary builder
# Produces truly portable binaries with zero runtime dependencies

FROM alpine:3.19 AS builder

# Install build dependencies
RUN apk add --no-cache \
    build-base \
    musl-dev \
    git \
    cmake \
    pkgconfig \
    autoconf \
    automake \
    libtool \
    openssl-dev \
    openssl-libs-static \
    zlib-dev \
    zlib-static \
    curl-dev \
    curl-static \
    sqlite-dev \
    sqlite-static \
    linux-headers \
    wget \
    bash

WORKDIR /build

# Build libsecp256k1 static (required for Nostr)
RUN cd /tmp && \
    git clone https://github.com/bitcoin-core/secp256k1.git && \
    cd secp256k1 && \
    ./autogen.sh && \
    ./configure --enable-static --disable-shared --prefix=/usr \
        CFLAGS="-fPIC" && \
    make -j$(nproc) && \
    make install && \
    rm -rf /tmp/secp256k1

# Build libwebsockets static (if needed for WebSocket support)
RUN cd /tmp && \
    git clone --depth 1 --branch v4.3.3 https://github.com/warmcat/libwebsockets.git && \
    cd libwebsockets && \
    mkdir build && cd build && \
    cmake .. \
        -DLWS_WITH_STATIC=ON \
        -DLWS_WITH_SHARED=OFF \
        -DLWS_WITH_SSL=ON \
        -DLWS_WITHOUT_TESTAPPS=ON \
        -DLWS_WITHOUT_TEST_SERVER=ON \
        -DLWS_WITHOUT_TEST_CLIENT=ON \
        -DLWS_WITHOUT_TEST_PING=ON \
        -DLWS_WITH_HTTP2=OFF \
        -DLWS_WITH_LIBUV=OFF \
        -DLWS_WITH_LIBEVENT=OFF \
        -DLWS_IPV6=ON \
        -DCMAKE_BUILD_TYPE=Release \
        -DCMAKE_INSTALL_PREFIX=/usr \
        -DCMAKE_C_FLAGS="-fPIC" && \
    make -j$(nproc) && \
    make install && \
    rm -rf /tmp/libwebsockets

# Copy git configuration for submodules
COPY .gitmodules /build/.gitmodules
COPY .git /build/.git

# Initialize submodules
RUN git submodule update --init --recursive

# Copy and build nostr_core_lib
COPY nostr_core_lib /build/nostr_core_lib/
RUN cd nostr_core_lib && \
    chmod +x build.sh && \
    sed -i 's/CFLAGS="-Wall -Wextra -std=c99 -fPIC -O2"/CFLAGS="-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 -Wall -Wextra -std=c99 -fPIC -O2"/' build.sh && \
    rm -f *.o *.a 2>/dev/null || true && \
    ./build.sh --nips=1,6,13,17,19,44,59

# Copy and build nostr_login_lite (if used)
# COPY nostr_login_lite /build/nostr_login_lite/
# RUN cd nostr_login_lite && make static

# Copy your application source
COPY src/ /build/src/
COPY Makefile /build/Makefile

# Build your application with full static linking
RUN gcc -static -O2 -Wall -Wextra -std=c99 \
    -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 \
    -I. -Inostr_core_lib -Inostr_core_lib/nostr_core \
    -Inostr_core_lib/cjson -Inostr_core_lib/nostr_websocket \
    src/*.c \
    -o /build/my_app_static \
    nostr_core_lib/libnostr_core_x64.a \
    -lwebsockets -lssl -lcrypto -lsqlite3 -lsecp256k1 \
    -lcurl -lz -lpthread -lm -ldl && \
    strip /build/my_app_static

# Verify it's truly static
RUN echo "=== Binary Information ===" && \
    file /build/my_app_static && \
    ls -lh /build/my_app_static && \
    echo "=== Checking for dynamic dependencies ===" && \
    (ldd /build/my_app_static 2>&1 || echo "Binary is static")

# Output stage - just the binary
FROM scratch AS output
COPY --from=builder /build/my_app_static /my_app_static
```
## Customization Guide

### 1. Adjust Dependencies

**Add dependencies** by modifying the `apk add` section:

```dockerfile
RUN apk add --no-cache \
    build-base \
    musl-dev \
    # Add your dependencies here:
    libpng-dev \
    libpng-static \
    libjpeg-turbo-dev \
    libjpeg-turbo-static
```

**Remove unused dependencies** to speed up builds:
- Remove the `libwebsockets` section if you don't need WebSocket support
- Remove `sqlite` if you don't use databases
- Remove `curl` if you don't make HTTP requests

### 2. Configure nostr_core_lib NIPs

Specify which NIPs your application needs:

```bash
./build.sh --nips=1,6,19              # Minimal: Basic protocol, keys, bech32
./build.sh --nips=1,6,13,17,19,44,59  # Full: All common NIPs
./build.sh --nips=all                 # Everything available
```

**Common NIP combinations:**
- **Basic client**: `1,6,19` (events, keys, bech32)
- **With encryption**: `1,6,19,44` (add modern encryption)
- **With DMs**: `1,6,17,19,44,59` (add private messages)
- **Relay/server**: `1,6,13,17,19,42,44,59` (add PoW, auth)

### 3. Modify Compilation Flags

**For your application:**

```dockerfile
# Flag notes (kept above the command: a trailing "# comment" after a
# backslash line continuation would break the shell command):
#   -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0   REQUIRED for MUSL
#   -I. -Inostr_core_lib                    include paths
#   src/*.c                                 your source files
#   -o /build/my_app_static                 output binary
#   nostr_core_lib/libnostr_core_x64.a      Nostr library
#   -l...                                   link libraries
RUN gcc -static -O2 -Wall -Wextra -std=c99 \
    -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 \
    -I. -Inostr_core_lib \
    src/*.c \
    -o /build/my_app_static \
    nostr_core_lib/libnostr_core_x64.a \
    -lwebsockets -lssl -lcrypto \
    -lsqlite3 -lsecp256k1 -lcurl \
    -lz -lpthread -lm -ldl
```

**Debug build** (with symbols, no optimization):

```dockerfile
RUN gcc -static -g -O0 -DDEBUG \
    -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 \
    # ... rest of flags
```

### 4. Multi-Architecture Support

Build for different architectures:

```bash
# x86_64 (Intel/AMD)
docker build --platform linux/amd64 -f Dockerfile.static -t my-app-x86 .

# ARM64 (Apple Silicon, Raspberry Pi 4+)
docker build --platform linux/arm64 -f Dockerfile.static -t my-app-arm64 .
```
## Build Script Template

Create a `build_static.sh` script for convenience:

```bash
#!/bin/bash
set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
BUILD_DIR="$SCRIPT_DIR/build"
DOCKERFILE="$SCRIPT_DIR/Dockerfile.static"

# Detect architecture
ARCH=$(uname -m)
case "$ARCH" in
    x86_64)
        PLATFORM="linux/amd64"
        OUTPUT_NAME="my_app_static_x86_64"
        ;;
    aarch64|arm64)
        PLATFORM="linux/arm64"
        OUTPUT_NAME="my_app_static_arm64"
        ;;
    *)
        echo "Unknown architecture: $ARCH"
        exit 1
        ;;
esac

echo "Building for platform: $PLATFORM"
mkdir -p "$BUILD_DIR"

# Build Docker image
docker build \
    --platform "$PLATFORM" \
    -f "$DOCKERFILE" \
    -t my-app-builder:latest \
    --progress=plain \
    .

# Extract binary
CONTAINER_ID=$(docker create my-app-builder:latest)
docker cp "$CONTAINER_ID:/build/my_app_static" "$BUILD_DIR/$OUTPUT_NAME"
docker rm "$CONTAINER_ID"

chmod +x "$BUILD_DIR/$OUTPUT_NAME"

echo "✓ Build complete: $BUILD_DIR/$OUTPUT_NAME"
echo "✓ Size: $(du -h "$BUILD_DIR/$OUTPUT_NAME" | cut -f1)"

# Verify
if ldd "$BUILD_DIR/$OUTPUT_NAME" 2>&1 | grep -q "not a dynamic executable"; then
    echo "✓ Binary is fully static"
else
    echo "⚠ Warning: Binary may have dynamic dependencies"
fi
```

Make it executable:

```bash
chmod +x build_static.sh
./build_static.sh
```
## Common Issues and Solutions

### Issue 1: Fortification Errors

**Error:**
```
undefined reference to '__snprintf_chk'
undefined reference to '__fprintf_chk'
```

**Cause**: GCC's `-O2` enables fortification by default, which uses glibc-specific functions.

**Solution**: Add these flags to **all** compilation commands:
```bash
-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0
```

This must be applied to:
1. nostr_core_lib build.sh
2. Your application compilation
3. Any other libraries you build

### Issue 2: Missing Symbols from nostr_core_lib

**Error:**
```
undefined reference to 'nostr_create_event'
undefined reference to 'nostr_sign_event'
```

**Cause**: Required NIPs not included in the nostr_core_lib build.

**Solution**: Add the missing NIPs:
```bash
./build.sh --nips=1,6,19  # Add the NIPs you need
```

### Issue 3: Docker Permission Denied

**Error:**
```
permission denied while trying to connect to the Docker daemon socket
```

**Solution**:
```bash
sudo usermod -aG docker $USER
newgrp docker  # Or logout and login
```

### Issue 4: Binary Won't Run on Target System

**Checks**:
```bash
# 1. Verify it's static
ldd my_app_static  # Should show "not a dynamic executable"

# 2. Check architecture
file my_app_static  # Should match target system

# 3. Test on different distributions
docker run --rm -v $(pwd):/app alpine:latest /app/my_app_static --version
docker run --rm -v $(pwd):/app ubuntu:latest /app/my_app_static --version
```
## Project Structure Example

Organize your project for easy static builds:

```
my-nostr-app/
├── src/
│   ├── main.c
│   ├── handlers.c
│   └── utils.c
├── nostr_core_lib/      # Git submodule
├── nostr_login_lite/    # Git submodule (if used)
├── Dockerfile.static    # Static build Dockerfile
├── build_static.sh      # Build script
├── Makefile             # Regular build
└── README.md
```

### Makefile Integration

Add static build targets to your Makefile:

```makefile
# Regular dynamic build
all: my_app

my_app: src/*.c
	gcc -O2 src/*.c -o my_app \
		nostr_core_lib/libnostr_core_x64.a \
		-lssl -lcrypto -lsecp256k1 -lz -lpthread -lm

# Static MUSL build via Docker
static:
	./build_static.sh

# Clean
clean:
	rm -f my_app build/my_app_static_*

.PHONY: all static clean
```
## Deployment

### Single Binary Deployment

```bash
# Copy to server
scp build/my_app_static_x86_64 user@server:/opt/my-app/

# Run (no dependencies needed!)
ssh user@server
/opt/my-app/my_app_static_x86_64
```

### systemd Service

```ini
[Unit]
Description=My Nostr Application
After=network.target

[Service]
Type=simple
User=myapp
WorkingDirectory=/opt/my-app
ExecStart=/opt/my-app/my_app_static_x86_64
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

### Docker Container (Minimal)

```dockerfile
FROM scratch
COPY my_app_static_x86_64 /app
ENTRYPOINT ["/app"]
```

Build and run:
```bash
docker build -t my-app:latest .
docker run --rm my-app:latest --help
```

## Reusing c-relay Files

You can directly copy these files from c-relay:

### 1. Dockerfile.alpine-musl
```bash
cp /path/to/c-relay/Dockerfile.alpine-musl ./Dockerfile.static
```

Then customize:
- Change the binary name (line 125)
- Adjust the source files (lines 122-124)
- Modify the include paths (lines 120-121)

### 2. build_static.sh
```bash
cp /path/to/c-relay/build_static.sh ./
```

Then customize:
- Change the `OUTPUT_NAME` variable (lines 66, 70)
- Update the Docker image name (line 98)
- Modify the verification commands (lines 180-184)

### 3. .dockerignore (Optional)
```bash
cp /path/to/c-relay/.dockerignore ./
```

This helps speed up Docker builds by excluding unnecessary files.
## Best Practices

1. **Version Control**: Commit your Dockerfile and build script
2. **Tag Builds**: Include the git commit hash in the binary version (see the sketch below)
3. **Test Thoroughly**: Verify on multiple distributions
4. **Document Dependencies**: List required NIPs and libraries
5. **Automate**: Use CI/CD to build on every commit
6. **Archive Binaries**: Keep old versions for rollback
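For item 2, one common approach is to bake the commit hash in at compile time; this is a sketch rather than c-relay's actual mechanism, and the `GIT_HASH` macro name is an illustrative choice:

```c
// version.c - prints a build identifier set at compile time.
// Build with: gcc -DGIT_HASH="\"$(git rev-parse --short HEAD)\"" version.c -o version
#include <stdio.h>

#ifndef GIT_HASH
#define GIT_HASH "unknown"  /* fallback when the flag is not passed */
#endif

int main(void) {
    printf("my_app build %s\n", GIT_HASH);
    return 0;
}
```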
## Performance Comparison

| Metric | MUSL Static | glibc Dynamic |
|--------|-------------|---------------|
| Binary Size | 7-10 MB | 2-3 MB + libs |
| Startup Time | ~50ms | ~40ms |
| Memory Usage | Similar | Similar |
| Portability | ✓ Universal | ✗ System-dependent |
| Deployment | Single file | Binary + libraries |

## References

- [MUSL libc](https://musl.libc.org/)
- [Alpine Linux](https://alpinelinux.org/)
- [nostr_core_lib](https://github.com/chebizarro/nostr_core_lib)
- [Static Linking Best Practices](https://www.musl-libc.org/faq.html)
- [c-relay Implementation](./docs/musl_static_build.md)

## Example: Minimal Nostr Client

Here's a complete example of building a minimal Nostr client:

```c
// minimal_client.c
#include "nostr_core/nostr_core.h"
#include <stdio.h>
#include <stdlib.h>  /* for free() */

int main() {
    // Generate keypair
    char nsec[64], npub[64];
    nostr_generate_keypair(nsec, npub);

    printf("Generated keypair:\n");
    printf("Private key (nsec): %s\n", nsec);
    printf("Public key (npub): %s\n", npub);

    // Create event
    cJSON *event = nostr_create_event(1, "Hello, Nostr!", NULL);
    nostr_sign_event(event, nsec);

    char *json = cJSON_Print(event);
    printf("\nSigned event:\n%s\n", json);

    free(json);
    cJSON_Delete(event);
    return 0;
}
```

**Dockerfile.static:**
```dockerfile
FROM alpine:3.19 AS builder
RUN apk add --no-cache build-base musl-dev git autoconf automake libtool \
    openssl-dev openssl-libs-static zlib-dev zlib-static

WORKDIR /build

# Build secp256k1
RUN cd /tmp && git clone https://github.com/bitcoin-core/secp256k1.git && \
    cd secp256k1 && ./autogen.sh && \
    ./configure --enable-static --disable-shared --prefix=/usr CFLAGS="-fPIC" && \
    make -j$(nproc) && make install

# Copy and build nostr_core_lib
COPY nostr_core_lib /build/nostr_core_lib/
RUN cd nostr_core_lib && \
    sed -i 's/CFLAGS="-Wall/CFLAGS="-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 -Wall/' build.sh && \
    ./build.sh --nips=1,6,19

# Build application
COPY minimal_client.c /build/
RUN gcc -static -O2 -Wall -std=c99 \
    -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 \
    -Inostr_core_lib -Inostr_core_lib/nostr_core -Inostr_core_lib/cjson \
    minimal_client.c -o /build/minimal_client_static \
    nostr_core_lib/libnostr_core_x64.a \
    -lssl -lcrypto -lsecp256k1 -lz -lpthread -lm -ldl && \
    strip /build/minimal_client_static

FROM scratch
COPY --from=builder /build/minimal_client_static /minimal_client_static
```

**Build and run:**
```bash
docker build -f Dockerfile.static -t minimal-client .
docker create --name temp minimal-client
docker cp temp:/minimal_client_static ./
docker rm temp

./minimal_client_static
```

## Conclusion

Static MUSL binaries provide the best portability for C applications. While they're slightly larger than dynamic binaries, the benefits of zero dependencies and universal compatibility make them ideal for:

- Server deployments across different Linux distributions
- Embedded systems and IoT devices
- Docker containers (FROM scratch)
- Distribution to users without dependency management
- Long-term archival and reproducibility

Follow this guide to create portable, self-contained binaries for your Nostr applications!
BIN build/bud04.o (binary file not shown)
BIN build/bud08.o (binary file not shown)
BIN build/main.o (binary file not shown)
@@ -351,14 +351,33 @@ http {
            autoindex_format json;
        }

        # Root redirect
        # Root endpoint - Server info from FastCGI
        location = / {
            return 200 "Ginxsom Blossom Server\nEndpoints: GET /<sha256>, PUT /upload, GET /list/<pubkey>\nHealth: GET /health\n";
            add_header Content-Type text/plain;
            add_header Access-Control-Allow-Origin * always;
            add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH" always;
            add_header Access-Control-Allow-Headers "Authorization, Content-Type, Content-Length, Accept, Origin, User-Agent, DNT, Cache-Control, X-Mx-ReqToken, Keep-Alive, X-Requested-With, If-Modified-Since, *" always;
            add_header Access-Control-Max-Age 86400 always;
            if ($request_method !~ ^(GET)$) {
                return 405;
            }
            fastcgi_pass fastcgi_backend;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            fastcgi_param REQUEST_URI $request_uri;
            fastcgi_param DOCUMENT_URI $document_uri;
            fastcgi_param DOCUMENT_ROOT $document_root;
            fastcgi_param SERVER_PROTOCOL $server_protocol;
            fastcgi_param REQUEST_SCHEME $scheme;
            fastcgi_param HTTPS $https if_not_empty;
            fastcgi_param GATEWAY_INTERFACE CGI/1.1;
            fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
            fastcgi_param REMOTE_ADDR $remote_addr;
            fastcgi_param REMOTE_PORT $remote_port;
            fastcgi_param SERVER_ADDR $server_addr;
            fastcgi_param SERVER_PORT $server_port;
            fastcgi_param SERVER_NAME $server_name;
            fastcgi_param REDIRECT_STATUS 200;
            fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
            fastcgi_param HTTP_AUTHORIZATION $http_authorization;
        }
    }

@@ -683,14 +702,33 @@ http {
            autoindex_format json;
        }

        # Root redirect
        # Root endpoint - Server info from FastCGI
        location = / {
            return 200 "Ginxsom Blossom Server (HTTPS)\nEndpoints: GET /<sha256>, PUT /upload, GET /list/<pubkey>\nHealth: GET /health\n";
            add_header Content-Type text/plain;
            add_header Access-Control-Allow-Origin * always;
            add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH" always;
            add_header Access-Control-Allow-Headers "Authorization, Content-Type, Content-Length, Accept, Origin, User-Agent, DNT, Cache-Control, X-Mx-ReqToken, Keep-Alive, X-Requested-With, If-Modified-Since, *" always;
            add_header Access-Control-Max-Age 86400 always;
            if ($request_method !~ ^(GET)$) {
                return 405;
            }
            fastcgi_pass fastcgi_backend;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            fastcgi_param REQUEST_URI $request_uri;
            fastcgi_param DOCUMENT_URI $document_uri;
            fastcgi_param DOCUMENT_ROOT $document_root;
            fastcgi_param SERVER_PROTOCOL $server_protocol;
            fastcgi_param REQUEST_SCHEME $scheme;
            fastcgi_param HTTPS $https if_not_empty;
            fastcgi_param GATEWAY_INTERFACE CGI/1.1;
            fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
            fastcgi_param REMOTE_ADDR $remote_addr;
            fastcgi_param REMOTE_PORT $remote_port;
            fastcgi_param SERVER_ADDR $server_addr;
            fastcgi_param SERVER_PORT $server_port;
            fastcgi_param SERVER_NAME $server_name;
            fastcgi_param REDIRECT_STATUS 200;
            fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
            fastcgi_param HTTP_AUTHORIZATION $http_authorization;
        }
    }
}
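Because both server blocks forward the `Authorization` header through `fastcgi_param HTTP_AUTHORIZATION`, the FastCGI binary sees it as a per-request variable. A minimal sketch of reading it with libfcgi's fcgi_stdio wrapper (not the actual ginxsom handler; link with -lfcgi):

```c
// Sketch: a FastCGI accept loop that reads the forwarded Authorization header.
#include <fcgi_stdio.h>

int main(void) {
    while (FCGI_Accept() >= 0) {
        // nginx's "fastcgi_param HTTP_AUTHORIZATION $http_authorization"
        // arrives as an environment variable for each request
        const char *auth = getenv("HTTP_AUTHORIZATION");
        printf("Content-Type: text/plain\r\n\r\n");
        printf("Authorization header %s\n", auth ? "present" : "missing");
    }
    return 0;
}
```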
BIN db/ginxsom.db (binary file not shown)

db/migrations/001_add_auth_rules.sql (78 lines, new file)
@@ -0,0 +1,78 @@

-- Migration: Add authentication rules tables
-- Purpose: Enable whitelist/blacklist functionality for Ginxsom
-- Date: 2025-01-12

-- Enable foreign key constraints
PRAGMA foreign_keys = ON;

-- Authentication rules table for whitelist/blacklist functionality
CREATE TABLE IF NOT EXISTS auth_rules (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    rule_type TEXT NOT NULL,              -- 'pubkey_blacklist', 'pubkey_whitelist',
                                          -- 'hash_blacklist', 'mime_blacklist', 'mime_whitelist'
    rule_target TEXT NOT NULL,            -- The pubkey, hash, or MIME type to match
    operation TEXT NOT NULL DEFAULT '*',  -- 'upload', 'delete', 'list', or '*' for all
    enabled INTEGER NOT NULL DEFAULT 1,   -- 1 = enabled, 0 = disabled
    priority INTEGER NOT NULL DEFAULT 100,-- Lower number = higher priority
    description TEXT,                     -- Human-readable description
    created_by TEXT,                      -- Admin pubkey who created the rule
    created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
    updated_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),

    -- Constraints
    CHECK (rule_type IN ('pubkey_blacklist', 'pubkey_whitelist',
                         'hash_blacklist', 'mime_blacklist', 'mime_whitelist')),
    CHECK (operation IN ('upload', 'delete', 'list', '*')),
    CHECK (enabled IN (0, 1)),
    CHECK (priority >= 0),

    -- Unique constraint: one rule per type/target/operation combination
    UNIQUE(rule_type, rule_target, operation)
);

-- Indexes for performance optimization
CREATE INDEX IF NOT EXISTS idx_auth_rules_type_target ON auth_rules(rule_type, rule_target);
CREATE INDEX IF NOT EXISTS idx_auth_rules_operation ON auth_rules(operation);
CREATE INDEX IF NOT EXISTS idx_auth_rules_enabled ON auth_rules(enabled);
CREATE INDEX IF NOT EXISTS idx_auth_rules_priority ON auth_rules(priority);
CREATE INDEX IF NOT EXISTS idx_auth_rules_type_operation ON auth_rules(rule_type, operation, enabled);

-- Cache table for authentication decisions (5-minute TTL)
CREATE TABLE IF NOT EXISTS auth_rules_cache (
    cache_key TEXT PRIMARY KEY NOT NULL,  -- SHA-256 hash of request parameters
    decision INTEGER NOT NULL,            -- 1 = allow, 0 = deny
    reason TEXT,                          -- Reason for decision
    pubkey TEXT,                          -- Public key from request
    operation TEXT,                       -- Operation type
    resource_hash TEXT,                   -- Resource hash (if applicable)
    created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
    expires_at INTEGER NOT NULL,          -- Expiration timestamp

    CHECK (decision IN (0, 1))
);

-- Index for cache expiration cleanup
CREATE INDEX IF NOT EXISTS idx_auth_cache_expires ON auth_rules_cache(expires_at);
CREATE INDEX IF NOT EXISTS idx_auth_cache_pubkey ON auth_rules_cache(pubkey);

-- Insert example rules (commented out - uncomment to use)
-- Example: Blacklist a specific pubkey for uploads
-- INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description, created_by) VALUES
-- ('pubkey_blacklist', '79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798', 'upload', 10, 'Example blacklisted user', 'admin_pubkey_here');

-- Example: Whitelist a specific pubkey for all operations
-- INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description, created_by) VALUES
-- ('pubkey_whitelist', 'your_pubkey_here', '*', 300, 'Trusted user - all operations allowed', 'admin_pubkey_here');

-- Example: Blacklist executable MIME types
-- INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description, created_by) VALUES
-- ('mime_blacklist', 'application/x-executable', 'upload', 200, 'Block executable files', 'admin_pubkey_here'),
-- ('mime_blacklist', 'application/x-msdos-program', 'upload', 200, 'Block DOS executables', 'admin_pubkey_here'),
-- ('mime_blacklist', 'application/x-msdownload', 'upload', 200, 'Block Windows executables', 'admin_pubkey_here');

-- Example: Whitelist common image types
-- INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description, created_by) VALUES
-- ('mime_whitelist', 'image/jpeg', 'upload', 400, 'Allow JPEG images', 'admin_pubkey_here'),
-- ('mime_whitelist', 'image/png', 'upload', 400, 'Allow PNG images', 'admin_pubkey_here'),
-- ('mime_whitelist', 'image/gif', 'upload', 400, 'Allow GIF images', 'admin_pubkey_here'),
-- ('mime_whitelist', 'image/webp', 'upload', 400, 'Allow WebP images', 'admin_pubkey_here');
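For illustration, a migration runner could apply this file with the SQLite C API; this is a sketch under the paths used above, not the project's actual runner:

```c
// Sketch: read a migration script and execute it against the database.
// Build with: gcc apply_migration.c -lsqlite3 -o apply_migration
#include <stdio.h>
#include <stdlib.h>
#include <sqlite3.h>

int main(void) {
    FILE *f = fopen("db/migrations/001_add_auth_rules.sql", "rb");
    if (!f) { perror("open migration"); return 1; }
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    rewind(f);
    char *sql = malloc(len + 1);
    if (!sql || fread(sql, 1, len, f) != (size_t)len) { fclose(f); return 1; }
    sql[len] = '\0';
    fclose(f);

    sqlite3 *db = NULL;
    if (sqlite3_open("db/ginxsom.db", &db) != SQLITE_OK) return 1;

    char *err = NULL;
    // sqlite3_exec runs each statement of the script in order
    int rc = sqlite3_exec(db, sql, NULL, NULL, &err);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "migration failed: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    free(sql);
    return rc == SQLITE_OK ? 0 : 1;
}
```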
@@ -43,9 +43,59 @@ INSERT OR IGNORE INTO config (key, value, description) VALUES
('nip42_challenge_timeout', '600', 'NIP-42 challenge timeout in seconds'),
('nip42_time_tolerance', '300', 'NIP-42 timestamp tolerance in seconds');

-- Authentication rules table for whitelist/blacklist functionality
CREATE TABLE IF NOT EXISTS auth_rules (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    rule_type TEXT NOT NULL,              -- 'pubkey_blacklist', 'pubkey_whitelist',
                                          -- 'hash_blacklist', 'mime_blacklist', 'mime_whitelist'
    rule_target TEXT NOT NULL,            -- The pubkey, hash, or MIME type to match
    operation TEXT NOT NULL DEFAULT '*',  -- 'upload', 'delete', 'list', or '*' for all
    enabled INTEGER NOT NULL DEFAULT 1,   -- 1 = enabled, 0 = disabled
    priority INTEGER NOT NULL DEFAULT 100,-- Lower number = higher priority
    description TEXT,                     -- Human-readable description
    created_by TEXT,                      -- Admin pubkey who created the rule
    created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
    updated_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),

    -- Constraints
    CHECK (rule_type IN ('pubkey_blacklist', 'pubkey_whitelist',
                         'hash_blacklist', 'mime_blacklist', 'mime_whitelist')),
    CHECK (operation IN ('upload', 'delete', 'list', '*')),
    CHECK (enabled IN (0, 1)),
    CHECK (priority >= 0),

    -- Unique constraint: one rule per type/target/operation combination
    UNIQUE(rule_type, rule_target, operation)
);

-- Cache table for authentication decisions (5-minute TTL)
CREATE TABLE IF NOT EXISTS auth_rules_cache (
    cache_key TEXT PRIMARY KEY NOT NULL,  -- SHA-256 hash of request parameters
    decision INTEGER NOT NULL,            -- 1 = allow, 0 = deny
    reason TEXT,                          -- Reason for decision
    pubkey TEXT,                          -- Public key from request
    operation TEXT,                       -- Operation type
    resource_hash TEXT,                   -- Resource hash (if applicable)
    created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
    expires_at INTEGER NOT NULL,          -- Expiration timestamp

    CHECK (decision IN (0, 1))
);

-- Indexes for performance optimization
CREATE INDEX IF NOT EXISTS idx_auth_rules_type_target ON auth_rules(rule_type, rule_target);
CREATE INDEX IF NOT EXISTS idx_auth_rules_operation ON auth_rules(operation);
CREATE INDEX IF NOT EXISTS idx_auth_rules_enabled ON auth_rules(enabled);
CREATE INDEX IF NOT EXISTS idx_auth_rules_priority ON auth_rules(priority);
CREATE INDEX IF NOT EXISTS idx_auth_rules_type_operation ON auth_rules(rule_type, operation, enabled);

-- Index for cache expiration cleanup
CREATE INDEX IF NOT EXISTS idx_auth_cache_expires ON auth_rules_cache(expires_at);
CREATE INDEX IF NOT EXISTS idx_auth_cache_pubkey ON auth_rules_cache(pubkey);

-- View for storage statistics
CREATE VIEW IF NOT EXISTS storage_stats AS
SELECT
    COUNT(*) as total_blobs,
    SUM(size) as total_bytes,
    AVG(size) as avg_blob_size,
debug_auth.log (1785 lines; file diff suppressed because it is too large)

deploy_lt.sh (287 lines, new executable file)
@@ -0,0 +1,287 @@
#!/bin/bash
set -e

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

print_status() { echo -e "${BLUE}[INFO]${NC} $1"; }
print_success() { echo -e "${GREEN}[SUCCESS]${NC} $1"; }
print_warning() { echo -e "${YELLOW}[WARNING]${NC} $1"; }
print_error() { echo -e "${RED}[ERROR]${NC} $1"; }

# Configuration
REMOTE_HOST="laantungir.net"
REMOTE_USER="ubuntu"
REMOTE_DIR="/home/ubuntu/ginxsom"
REMOTE_DB_PATH="/home/ubuntu/ginxsom/db/ginxsom.db"
REMOTE_NGINX_CONFIG="/etc/nginx/conf.d/default.conf"
REMOTE_BINARY_PATH="/home/ubuntu/ginxsom/ginxsom.fcgi"
REMOTE_SOCKET="/tmp/ginxsom-fcgi.sock"
REMOTE_DATA_DIR="/var/www/html/blossom"

print_status "Starting deployment to $REMOTE_HOST..."

# Step 1: Build and prepare local binary
print_status "Building ginxsom binary..."
make clean && make
if [[ ! -f "build/ginxsom-fcgi" ]]; then
    print_error "Build failed - binary not found"
    exit 1
fi
print_success "Binary built successfully"

# Step 2: Setup remote environment first (before copying files)
print_status "Setting up remote environment..."
ssh $REMOTE_USER@$REMOTE_HOST << 'EOF'
set -e

# Create data directory if it doesn't exist (using existing /var/www/html/blossom)
sudo mkdir -p /var/www/html/blossom
sudo chown www-data:www-data /var/www/html/blossom
sudo chmod 755 /var/www/html/blossom

# Ensure socket directory exists
sudo mkdir -p /tmp
sudo chmod 755 /tmp

# Install required dependencies
echo "Installing required dependencies..."
sudo apt-get update
sudo apt-get install -y spawn-fcgi libfcgi-dev

# Stop any existing ginxsom processes
echo "Stopping existing ginxsom processes..."
sudo pkill -f ginxsom-fcgi || true
sudo rm -f /tmp/ginxsom-fcgi.sock || true

echo "Remote environment setup complete"
EOF

print_success "Remote environment configured"

# Step 3: Copy files to remote server
print_status "Copying files to remote server..."

# Copy entire project directory (excluding unnecessary files)
print_status "Copying entire ginxsom project..."
rsync -avz --exclude='.git' --exclude='build' --exclude='logs' --exclude='Trash' --exclude='blobs' --exclude='db/ginxsom.db' --no-g --no-o --no-perms --omit-dir-times . $REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR/

# Build on remote server to ensure compatibility
print_status "Building ginxsom on remote server..."
ssh $REMOTE_USER@$REMOTE_HOST "cd $REMOTE_DIR && make clean && make" || {
    print_error "Build failed on remote server"
    print_status "Checking what packages are actually installed..."
    ssh $REMOTE_USER@$REMOTE_HOST "dpkg -l | grep -E '(sqlite|fcgi)'"
    exit 1
}

# Copy binary to application directory
print_status "Copying ginxsom binary to application directory..."
ssh $REMOTE_USER@$REMOTE_HOST << EOF
# Stop any running process first
sudo pkill -f ginxsom-fcgi || true
sleep 1

# Remove old binary if it exists
rm -f $REMOTE_BINARY_PATH

# Copy new binary
cp $REMOTE_DIR/build/ginxsom-fcgi $REMOTE_BINARY_PATH
chmod +x $REMOTE_BINARY_PATH
chown ubuntu:ubuntu $REMOTE_BINARY_PATH

echo "Binary copied successfully"
EOF

# NOTE: Do NOT update nginx configuration automatically
# The deployment script should only update ginxsom binaries and do nothing else with the system
# Nginx configuration should be managed manually by the system administrator
print_status "Skipping nginx configuration update (manual control required)"

print_success "Files copied to remote server"

# Step 4: Setup remote environment (full dependency check)
print_status "Setting up remote environment..."

ssh $REMOTE_USER@$REMOTE_HOST << 'EOF'
set -e

# Create data directory if it doesn't exist (using existing /var/www/html/blossom)
sudo mkdir -p /var/www/html/blossom
sudo chown www-data:www-data /var/www/html/blossom
sudo chmod 755 /var/www/html/blossom

# Ensure socket directory exists
sudo mkdir -p /tmp
sudo chmod 755 /tmp

# Install required dependencies
echo "Installing required dependencies..."
sudo apt-get update 2>/dev/null || true  # Continue even if apt update has issues
sudo apt-get install -y spawn-fcgi libfcgi-dev libsqlite3-dev sqlite3 libcurl4-openssl-dev

# Verify installations
echo "Verifying installations..."
if ! dpkg -l libsqlite3-dev >/dev/null 2>&1; then
    echo "libsqlite3-dev not found, trying alternative..."
    sudo apt-get install -y libsqlite3-dev || {
        echo "Failed to install libsqlite3-dev"
        exit 1
    }
fi
if ! dpkg -l libfcgi-dev >/dev/null 2>&1; then
    echo "libfcgi-dev not found"
    exit 1
fi

# Check if sqlite3.h exists
if [ ! -f /usr/include/sqlite3.h ]; then
    echo "sqlite3.h not found in /usr/include/"
    find /usr -name "sqlite3.h" 2>/dev/null || echo "sqlite3.h not found anywhere"
    exit 1
fi

# Stop any existing ginxsom processes
echo "Stopping existing ginxsom processes..."
sudo pkill -f ginxsom-fcgi || true
sudo rm -f /tmp/ginxsom-fcgi.sock || true

echo "Remote environment setup complete"
EOF

print_success "Remote environment configured"

# Step 5: Setup database directory and migrate database
print_status "Setting up database directory..."

ssh $REMOTE_USER@$REMOTE_HOST << EOF
# Create db directory if it doesn't exist
mkdir -p $REMOTE_DIR/db

# Backup current database if it exists in old location
if [ -f /var/www/html/blossom/ginxsom.db ]; then
    echo "Backing up existing database..."
    cp /var/www/html/blossom/ginxsom.db /var/www/html/blossom/ginxsom.db.backup.\$(date +%Y%m%d_%H%M%S)

    # Migrate database to new location if not already there
    if [ ! -f $REMOTE_DB_PATH ]; then
        echo "Migrating database to new location..."
        cp /var/www/html/blossom/ginxsom.db $REMOTE_DB_PATH
    else
        echo "Database already exists at new location"
    fi
elif [ ! -f $REMOTE_DB_PATH ]; then
    echo "No existing database found - will be created on first run"
else
    echo "Database already exists at $REMOTE_DB_PATH"
fi

# Set proper permissions - www-data needs write access to db directory for SQLite journal files
sudo chown -R www-data:www-data $REMOTE_DIR/db
sudo chmod 755 $REMOTE_DIR/db
sudo chmod 644 $REMOTE_DB_PATH 2>/dev/null || true

# Allow www-data to access the application directory for spawn-fcgi chdir
chmod 755 $REMOTE_DIR

echo "Database directory setup complete"
EOF

print_success "Database directory configured"

# Step 6: Start ginxsom FastCGI process
print_status "Starting ginxsom FastCGI process..."

ssh $REMOTE_USER@$REMOTE_HOST << EOF
# Clean up any existing socket
sudo rm -f $REMOTE_SOCKET

# Start FastCGI process with explicit paths
echo "Starting ginxsom FastCGI with configuration:"
echo "  Working directory: $REMOTE_DIR"
echo "  Binary: $REMOTE_BINARY_PATH"
echo "  Database: $REMOTE_DB_PATH"
echo "  Storage: $REMOTE_DATA_DIR"

sudo spawn-fcgi -M 666 -u www-data -g www-data -s $REMOTE_SOCKET -U www-data -G www-data -d $REMOTE_DIR -- $REMOTE_BINARY_PATH --db-path "$REMOTE_DB_PATH" --storage-dir "$REMOTE_DATA_DIR"

# Give it a moment to start
sleep 2

# Verify process is running
if pgrep -f "ginxsom-fcgi" > /dev/null; then
    echo "FastCGI process started successfully"
    echo "PID: \$(pgrep -f ginxsom-fcgi)"
else
    echo "Process not found by pgrep, but socket exists - this may be normal for FastCGI"
    echo "Checking socket..."
    ls -la $REMOTE_SOCKET
    echo "Checking if binary exists and is executable..."
    ls -la $REMOTE_BINARY_PATH
    echo "Testing if we can connect to the socket..."
    # Try to test the FastCGI connection
    if command -v cgi-fcgi >/dev/null 2>&1; then
        echo "Testing FastCGI connection..."
        SCRIPT_NAME=/health SCRIPT_FILENAME=$REMOTE_BINARY_PATH REQUEST_METHOD=GET cgi-fcgi -bind -connect $REMOTE_SOCKET 2>/dev/null | head -5 || echo "Connection test failed"
    else
        echo "cgi-fcgi not available for testing"
    fi
    # Don't exit - the socket existing means spawn-fcgi worked
fi
EOF

if [ $? -eq 0 ]; then
    print_success "FastCGI process started"
else
    print_error "Failed to start FastCGI process"
    exit 1
fi

# Step 7: Test nginx configuration and reload
print_status "Testing and reloading nginx..."

ssh $REMOTE_USER@$REMOTE_HOST << 'EOF'
# Test nginx configuration
if sudo nginx -t; then
    echo "Nginx configuration test passed"
    sudo nginx -s reload
    echo "Nginx reloaded successfully"
else
    echo "Nginx configuration test failed"
    exit 1
fi
EOF

print_success "Nginx reloaded"

# Step 8: Test deployment
print_status "Testing deployment..."

# Test health endpoint
echo "Testing health endpoint..."
if curl -k -s --max-time 10 "https://blossom.laantungir.net/health" | grep -q "OK"; then
    print_success "Health check passed"
else
    print_warning "Health check failed - checking response..."
    curl -k -v --max-time 10 "https://blossom.laantungir.net/health" 2>&1 | head -10
fi

# Test basic endpoints
echo "Testing root endpoint..."
if curl -k -s --max-time 10 "https://blossom.laantungir.net/" | grep -q "Ginxsom"; then
    print_success "Root endpoint responding"
else
    print_warning "Root endpoint not responding as expected - checking response..."
    curl -k -v --max-time 10 "https://blossom.laantungir.net/" 2>&1 | head -10
fi

print_success "Deployment to $REMOTE_HOST completed!"
print_status "Ginxsom should now be available at: https://blossom.laantungir.net"
print_status "Test endpoints:"
echo "  Health: curl -k https://blossom.laantungir.net/health"
echo "  Root:   curl -k https://blossom.laantungir.net/"
echo "  List:   curl -k https://blossom.laantungir.net/list"
docs/AUTH_RULES_IMPLEMENTATION_PLAN.md (496 lines, new file)
@@ -0,0 +1,496 @@

# Authentication Rules Implementation Plan

## Executive Summary

This document outlines the implementation plan for adding whitelist/blacklist functionality to the Ginxsom Blossom server. The authentication rules system is **already coded** in [`src/request_validator.c`](src/request_validator.c) but lacks the database schema to function. This plan focuses on completing the implementation by adding the missing database tables and Admin API endpoints.

## Current State Analysis

### ✅ Already Implemented
- **Nostr event validation** - Full cryptographic verification (NIP-42 and Blossom)
- **Rule evaluation engine** - Complete priority-based logic in [`check_database_auth_rules()`](src/request_validator.c:1309-1471)
- **Configuration system** - `auth_rules_enabled` flag in config table
- **Admin API framework** - Authentication and endpoint structure in place
- **Documentation** - Comprehensive flow diagrams in [`docs/AUTH_API.md`](docs/AUTH_API.md)

### ❌ Missing Components
- **Database schema** - `auth_rules` table doesn't exist
- **Cache table** - `auth_rules_cache` for performance optimization
- **Admin API endpoints** - CRUD operations for managing rules
- **Migration script** - Database schema updates
- **Test suite** - Validation of rule enforcement

## Database Schema Design

### 1. auth_rules Table

```sql
-- Authentication rules for whitelist/blacklist functionality
CREATE TABLE IF NOT EXISTS auth_rules (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    rule_type TEXT NOT NULL,              -- 'pubkey_blacklist', 'pubkey_whitelist',
                                          -- 'hash_blacklist', 'mime_blacklist', 'mime_whitelist'
    rule_target TEXT NOT NULL,            -- The pubkey, hash, or MIME type to match
    operation TEXT NOT NULL DEFAULT '*',  -- 'upload', 'delete', 'list', or '*' for all
    enabled INTEGER NOT NULL DEFAULT 1,   -- 1 = enabled, 0 = disabled
    priority INTEGER NOT NULL DEFAULT 100,-- Lower number = higher priority
    description TEXT,                     -- Human-readable description
    created_by TEXT,                      -- Admin pubkey who created the rule
    created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
    updated_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),

    -- Constraints
    CHECK (rule_type IN ('pubkey_blacklist', 'pubkey_whitelist',
                         'hash_blacklist', 'mime_blacklist', 'mime_whitelist')),
    CHECK (operation IN ('upload', 'delete', 'list', '*')),
    CHECK (enabled IN (0, 1)),
    CHECK (priority >= 0),

    -- Unique constraint: one rule per type/target/operation combination
    UNIQUE(rule_type, rule_target, operation)
);

-- Indexes for performance
CREATE INDEX IF NOT EXISTS idx_auth_rules_type_target ON auth_rules(rule_type, rule_target);
CREATE INDEX IF NOT EXISTS idx_auth_rules_operation ON auth_rules(operation);
CREATE INDEX IF NOT EXISTS idx_auth_rules_enabled ON auth_rules(enabled);
CREATE INDEX IF NOT EXISTS idx_auth_rules_priority ON auth_rules(priority);
```

### 2. auth_rules_cache Table

```sql
-- Cache for authentication decisions (5-minute TTL)
CREATE TABLE IF NOT EXISTS auth_rules_cache (
    cache_key TEXT PRIMARY KEY NOT NULL,  -- SHA-256 hash of request parameters
    decision INTEGER NOT NULL,            -- 1 = allow, 0 = deny
    reason TEXT,                          -- Reason for decision
    pubkey TEXT,                          -- Public key from request
    operation TEXT,                       -- Operation type
    resource_hash TEXT,                   -- Resource hash (if applicable)
    created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
    expires_at INTEGER NOT NULL,          -- Expiration timestamp

    CHECK (decision IN (0, 1))
);

-- Index for cache expiration cleanup
CREATE INDEX IF NOT EXISTS idx_auth_cache_expires ON auth_rules_cache(expires_at);
```
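The `cache_key` is described as a SHA-256 hash of the request parameters; one plausible derivation is sketched below with OpenSSL, where the field order and `|` separator are illustrative assumptions, not taken from the source:

```c
// Sketch: cache_key = hex(SHA-256("pubkey|operation|resource_hash")).
// Link with -lcrypto. Separator and field order are assumptions.
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

static void auth_cache_key(const char *pubkey, const char *operation,
                           const char *resource_hash, char out[65]) {
    char buf[512];
    snprintf(buf, sizeof buf, "%s|%s|%s",
             pubkey, operation, resource_hash ? resource_hash : "");

    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256((const unsigned char *)buf, strlen(buf), digest);

    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        sprintf(out + 2 * i, "%02x", digest[i]);
    out[64] = '\0';
}
```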
### 3. Rule Type Definitions

| Rule Type | Purpose | Target Format | Priority Range |
|-----------|---------|---------------|----------------|
| `pubkey_blacklist` | Block specific users | 64-char hex pubkey | 1-99 (highest) |
| `hash_blacklist` | Block specific files | 64-char hex SHA-256 | 100-199 |
| `mime_blacklist` | Block file types | MIME type string | 200-299 |
| `pubkey_whitelist` | Allow specific users | 64-char hex pubkey | 300-399 |
| `mime_whitelist` | Allow file types | MIME type string | 400-499 |

### 4. Operation Types

- `upload` - File upload operations
- `delete` - File deletion operations
- `list` - File listing operations
- `*` - All operations (wildcard)
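Because lower priority numbers win, the whole decision reduces to one ordered lookup: the first enabled rule matching the operation (or `*`) and the target decides, with whitelist types allowing and blacklist types denying. A simplified sketch of that query (not the actual `check_database_auth_rules()` implementation):

```c
// Sketch: first matching rule by ascending priority decides the request.
// Returns 1 = allow, 0 = deny, -1 = no rule matched (caller applies default).
#include <string.h>
#include <sqlite3.h>

static int evaluate_auth_rules(sqlite3 *db, const char *pubkey,
                               const char *operation, const char *mime) {
    const char *sql =
        "SELECT rule_type FROM auth_rules "
        "WHERE enabled = 1 "
        "  AND (operation = ?1 OR operation = '*') "
        "  AND rule_target IN (?2, ?3) "
        "ORDER BY priority ASC LIMIT 1;";

    sqlite3_stmt *stmt = NULL;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK) return -1;
    sqlite3_bind_text(stmt, 1, operation, -1, SQLITE_STATIC);
    sqlite3_bind_text(stmt, 2, pubkey, -1, SQLITE_STATIC);
    sqlite3_bind_text(stmt, 3, mime ? mime : "", -1, SQLITE_STATIC);

    int result = -1;
    if (sqlite3_step(stmt) == SQLITE_ROW) {
        const char *type = (const char *)sqlite3_column_text(stmt, 0);
        result = strstr(type, "whitelist") ? 1 : 0;  // whitelist allows, blacklist denies
    }
    sqlite3_finalize(stmt);
    return result;
}
```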
## Admin API Endpoints
|
||||
|
||||
### GET /api/rules
|
||||
**Purpose**: List all authentication rules with filtering
|
||||
**Authentication**: Required (admin pubkey)
|
||||
**Query Parameters**:
|
||||
- `rule_type` (optional): Filter by rule type
|
||||
- `operation` (optional): Filter by operation
|
||||
- `enabled` (optional): Filter by enabled status (true/false)
|
||||
- `limit` (default: 100): Number of rules to return
|
||||
- `offset` (default: 0): Pagination offset
|
||||
|
||||
**Response**:
|
||||
```json
|
||||
{
|
||||
"status": "success",
|
||||
"data": {
|
||||
"rules": [
|
||||
{
|
||||
"id": 1,
|
||||
"rule_type": "pubkey_blacklist",
|
||||
"rule_target": "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798",
|
||||
"operation": "upload",
|
||||
"enabled": true,
|
||||
"priority": 10,
|
||||
"description": "Blocked spammer account",
|
||||
"created_by": "admin_pubkey_here",
|
||||
"created_at": 1704067200,
|
||||
"updated_at": 1704067200
|
||||
}
|
||||
],
|
||||
"total": 1,
|
||||
"limit": 100,
|
||||
"offset": 0
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### POST /api/rules
|
||||
**Purpose**: Create a new authentication rule
|
||||
**Authentication**: Required (admin pubkey)
|
||||
**Request Body**:
|
||||
```json
|
||||
{
|
||||
"rule_type": "pubkey_blacklist",
|
||||
"rule_target": "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798",
|
||||
"operation": "upload",
|
||||
"priority": 10,
|
||||
"description": "Blocked spammer account"
|
||||
}
|
||||
```
|
||||
|
||||
**Response**:
|
||||
```json
|
||||
{
|
||||
"status": "success",
|
||||
"message": "Rule created successfully",
|
||||
"data": {
|
||||
"id": 1,
|
||||
"rule_type": "pubkey_blacklist",
|
||||
"rule_target": "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798",
|
||||
"operation": "upload",
|
||||
"enabled": true,
|
||||
"priority": 10,
|
||||
"description": "Blocked spammer account",
|
||||
"created_at": 1704067200
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### PUT /api/rules/:id
|
||||
**Purpose**: Update an existing rule
|
||||
**Authentication**: Required (admin pubkey)
|
||||
**Request Body**:
|
||||
```json
|
||||
{
|
||||
"enabled": false,
|
||||
"priority": 20,
|
||||
"description": "Updated description"
|
||||
}
|
||||
```
|
||||
|
||||
**Response**:
|
||||
```json
|
||||
{
|
||||
"status": "success",
|
||||
"message": "Rule updated successfully",
|
||||
"data": {
|
||||
"id": 1,
|
||||
"updated_fields": ["enabled", "priority", "description"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### DELETE /api/rules/:id

**Purpose**: Delete an authentication rule

**Authentication**: Required (admin pubkey)

**Response**:

```json
{
  "status": "success",
  "message": "Rule deleted successfully",
  "data": {
    "id": 1
  }
}
```

### POST /api/rules/clear-cache

**Purpose**: Clear the authentication rules cache

**Authentication**: Required (admin pubkey)

**Response**:

```json
{
  "status": "success",
  "message": "Authentication cache cleared",
  "data": {
    "entries_cleared": 42
  }
}
```

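Because rule lookups are served from a 5-minute cache (see Performance Considerations below), a bulk rule edit can be followed by an explicit cache clear so the changes apply immediately. A sketch:

```bash
# Hypothetical: make freshly edited rules take effect at once rather than
# waiting out the 5-minute cache TTL.
curl -s -X POST "$HOST/api/rules/clear-cache" \
  -H "Authorization: Nostr $AUTH_EVENT"
```
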
### GET /api/rules/test

**Purpose**: Test if a specific request would be allowed

**Authentication**: Required (admin pubkey)

**Query Parameters**:

- `pubkey` (required): Public key to test
- `operation` (required): Operation type (upload/delete/list)
- `hash` (optional): Resource hash
- `mime` (optional): MIME type

**Response**:

```json
{
  "status": "success",
  "data": {
    "allowed": false,
    "reason": "Public key blacklisted",
    "matched_rule": {
      "id": 1,
      "rule_type": "pubkey_blacklist",
      "description": "Blocked spammer account"
    }
  }
}
```

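Putting the endpoints together, a create-then-verify workflow might look like this (placeholder host and auth header as before):

```bash
# Hypothetical workflow: create a blacklist rule, then dry-run a request
# against it without performing a real upload.
HOST=https://blossom.example.com

curl -s -X POST "$HOST/api/rules" \
  -H "Authorization: Nostr $AUTH_EVENT" \
  -H "Content-Type: application/json" \
  -d '{
    "rule_type": "pubkey_blacklist",
    "rule_target": "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798",
    "operation": "upload",
    "priority": 10,
    "description": "Blocked spammer account"
  }'

# Expect "allowed": false with the matched rule echoed back.
curl -s "$HOST/api/rules/test?pubkey=79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798&operation=upload" \
  -H "Authorization: Nostr $AUTH_EVENT"
```
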
## Implementation Phases

### Phase 1: Database Schema (Priority: HIGH)

**Estimated Time**: 2-4 hours

**Tasks**:

1. Create migration script `db/migrations/001_add_auth_rules.sql`
2. Add `auth_rules` table with indexes
3. Add `auth_rules_cache` table with indexes
4. Create migration runner script
5. Test migration on clean database
6. Test migration on existing database

**Deliverables**:

- Migration SQL script
- Migration runner bash script
- Migration documentation

**Validation**:

- Verify tables created successfully
- Verify indexes exist
- Verify constraints work correctly
- Test with sample data

### Phase 2: Admin API Endpoints (Priority: HIGH)

**Estimated Time**: 6-8 hours

**Tasks**:

1. Implement `GET /api/rules` endpoint
2. Implement `POST /api/rules` endpoint
3. Implement `PUT /api/rules/:id` endpoint
4. Implement `DELETE /api/rules/:id` endpoint
5. Implement `POST /api/rules/clear-cache` endpoint
6. Implement `GET /api/rules/test` endpoint
7. Add input validation for all endpoints
8. Add error handling and logging

**Deliverables**:

- C implementation in `src/admin_api.c`
- Header declarations in `src/ginxsom.h`
- API documentation updates

**Validation**:

- Test each endpoint with valid data
- Test error cases (invalid input, missing auth, etc.)
- Verify database operations work correctly
- Check response formats match specification

### Phase 3: Integration & Testing (Priority: HIGH)

**Estimated Time**: 4-6 hours

**Tasks**:

1. Create comprehensive test suite
2. Test rule creation and enforcement
3. Test cache functionality
4. Test priority ordering
5. Test whitelist default-deny behavior
6. Test performance with many rules
7. Document test scenarios

**Deliverables**:

- Test script `tests/auth_rules_test.sh`
- Performance benchmarks
- Test documentation

**Validation**:

- All test cases pass
- Performance meets requirements (<3ms per request)
- Cache hit rate >80% under load
- No memory leaks detected

### Phase 4: Documentation & Examples (Priority: MEDIUM)

**Estimated Time**: 2-3 hours

**Tasks**:

1. Update [`docs/AUTH_API.md`](docs/AUTH_API.md) with rule management
2. Create usage examples
3. Document common patterns (blocking users, allowing file types)
4. Create migration guide for existing deployments
5. Add troubleshooting section

**Deliverables**:

- Updated documentation
- Example scripts
- Migration guide
- Troubleshooting guide

## Code Changes Required

### 1. src/request_validator.c

**Status**: ✅ Already implemented - NO CHANGES NEEDED

The rule evaluation logic is complete in [`check_database_auth_rules()`](src/request_validator.c:1309-1471). Once the database tables exist, this code will work immediately.

### 2. src/admin_api.c

**Status**: ❌ Needs new endpoints

Add new functions:

```c
// Rule management endpoints
int handle_get_rules(FCGX_Request *request);
int handle_create_rule(FCGX_Request *request);
int handle_update_rule(FCGX_Request *request);
int handle_delete_rule(FCGX_Request *request);
int handle_clear_cache(FCGX_Request *request);
int handle_test_rule(FCGX_Request *request);
```

### 3. src/ginxsom.h

**Status**: ❌ Needs new declarations

Add function prototypes for new admin endpoints.

### 4. db/schema.sql

**Status**: ❌ Needs new tables

Add `auth_rules` and `auth_rules_cache` table definitions.

## Migration Strategy

### For New Installations

1. Run updated `db/init.sh` which includes new tables
2. No additional steps needed

### For Existing Installations

1. Create backup: `cp db/ginxsom.db db/ginxsom.db.backup`
2. Run migration: `sqlite3 db/ginxsom.db < db/migrations/001_add_auth_rules.sql`
3. Verify migration: `sqlite3 db/ginxsom.db ".schema auth_rules"`
4. Restart server to load new schema

### Rollback Procedure

1. Stop server
2. Restore backup: `cp db/ginxsom.db.backup db/ginxsom.db`
3. Restart server

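For existing installations, the three steps can be wrapped in one script so the backup always happens before the migration. A sketch; paths and the migration filename match the plan above:

```bash
#!/bin/sh
# Sketch: apply the auth-rules migration with an automatic backup and
# restore-on-failure.
set -e

DB=db/ginxsom.db
MIGRATION=db/migrations/001_add_auth_rules.sql

cp "$DB" "$DB.backup"            # rollback point
sqlite3 "$DB" < "$MIGRATION"     # add auth_rules + auth_rules_cache

# Verify the new table exists; otherwise restore the backup.
if sqlite3 "$DB" ".schema auth_rules" | grep -q auth_rules; then
    echo "Migration OK"
else
    echo "Migration failed - restoring backup"
    cp "$DB.backup" "$DB"
    exit 1
fi
```
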
## Performance Considerations

### Cache Strategy

- **5-minute TTL** balances freshness with performance
- **SHA-256 cache keys** prevent collision attacks
- **Automatic cleanup** of expired entries every 5 minutes
- **Cache hit target**: >80% under normal load

### Database Optimization

- **Indexes on all query columns** for fast lookups
- **Prepared statements** prevent SQL injection
- **Single connection** with proper cleanup
- **Query optimization** for rule evaluation order

### Expected Performance

- **Cache hit**: ~100μs (SQLite SELECT)
- **Cache miss**: ~2.4ms (full validation + rule checks)
- **Rule creation**: ~50ms (INSERT + cache invalidation)
- **Rule update**: ~30ms (UPDATE + cache invalidation)

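A crude way to sanity-check these numbers once the system is running is to time repeated calls to the dry-run endpoint, where everything after the first request should be served from cache (a sketch; placeholder host and auth header as before):

```bash
# Hypothetical latency probe: 100 identical requests; the first one pays the
# full-validation cost, the rest should hit the rules cache.
HOST=https://blossom.example.com
PK=79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798

time for i in $(seq 100); do
    curl -s -o /dev/null "$HOST/api/rules/test?pubkey=$PK&operation=upload" \
      -H "Authorization: Nostr $AUTH_EVENT"
done
```

Network overhead dominates such a probe, so it only shows the relative improvement from caching, not the raw ~100μs cache-hit figure.
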
## Security Considerations

### Input Validation

- Validate all rule_type values against enum
- Validate pubkey format (64 hex chars)
- Validate hash format (64 hex chars)
- Validate MIME type format
- Sanitize description text

### Authorization

- All rule management requires admin pubkey
- Verify Nostr event signatures
- Check event expiration
- Log all rule changes with admin pubkey

### Attack Mitigation

- **Rule flooding**: Limit total rules per type
- **Cache poisoning**: Cryptographic cache keys
- **Priority manipulation**: Validate priority ranges
- **Whitelist bypass**: Default-deny when whitelist exists

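The input-validation rules can be smoke-tested by sending deliberately malformed requests and expecting a rejection; the exact error body is up to the implementation, so the checks below are illustrative:

```bash
# Hypothetical negative tests: both requests should be rejected.
HOST=https://blossom.example.com

# rule_target too short for a pubkey rule (must be 64 hex chars)
curl -s -X POST "$HOST/api/rules" \
  -H "Authorization: Nostr $AUTH_EVENT" \
  -H "Content-Type: application/json" \
  -d '{"rule_type":"pubkey_blacklist","rule_target":"abc123","operation":"upload","priority":10}'

# unknown rule_type (must be one of the five defined types)
curl -s -X POST "$HOST/api/rules" \
  -H "Authorization: Nostr $AUTH_EVENT" \
  -H "Content-Type: application/json" \
  -d '{"rule_type":"ip_blacklist","rule_target":"1.2.3.4","operation":"upload","priority":10}'
```
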
## Testing Strategy

### Unit Tests

- Rule creation with valid data
- Rule creation with invalid data
- Rule update operations
- Rule deletion
- Cache operations
- Priority ordering

### Integration Tests

- End-to-end request flow
- Multiple rules interaction
- Cache hit/miss scenarios
- Whitelist default-deny behavior
- Performance under load

### Security Tests

- Invalid admin pubkey rejection
- Expired event rejection
- SQL injection attempts
- Cache poisoning attempts
- Priority bypass attempts

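A possible skeleton for `tests/auth_rules_test.sh`, covering one case from each group (the helper and the expected substrings are illustrative, not the final assertion format):

```bash
#!/bin/sh
# Sketch of tests/auth_rules_test.sh with two representative cases.
HOST=${HOST:-http://localhost:9001}

assert_contains() { # $1=name  $2=expected substring  $3=actual output
    case "$3" in
        *"$2"*) echo "PASS: $1" ;;
        *)      echo "FAIL: $1"; echo "$3"; exit 1 ;;
    esac
}

# Unit: creating a rule with a malformed pubkey must fail validation.
out=$(curl -s -X POST "$HOST/api/rules" \
    -H "Authorization: Nostr $AUTH_EVENT" \
    -H "Content-Type: application/json" \
    -d '{"rule_type":"pubkey_blacklist","rule_target":"xyz","operation":"upload","priority":10}')
assert_contains "reject short pubkey" "error" "$out"

# Integration: a blacklisted pubkey must be denied by the dry-run endpoint.
out=$(curl -s "$HOST/api/rules/test?pubkey=$BLOCKED_PK&operation=upload" \
    -H "Authorization: Nostr $AUTH_EVENT")
assert_contains "blacklist enforced" "false" "$out"
```
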
## Success Criteria

### Functional Requirements

- ✅ Rules can be created via Admin API
- ✅ Rules can be updated via Admin API
- ✅ Rules can be deleted via Admin API
- ✅ Rules are enforced during request validation
- ✅ Cache improves performance significantly
- ✅ Priority ordering works correctly
- ✅ Whitelist default-deny works correctly

### Performance Requirements

- ✅ Cache hit latency <200μs
- ✅ Full validation latency <3ms
- ✅ Cache hit rate >80% under load
- ✅ No memory leaks
- ✅ Database queries optimized

### Security Requirements

- ✅ Admin authentication required
- ✅ Input validation prevents injection
- ✅ Audit logging of all changes
- ✅ Cache keys prevent poisoning
- ✅ Whitelist bypass prevented

## Timeline Estimate

| Phase | Duration | Dependencies |
|-------|----------|--------------|
| Phase 1: Database Schema | 2-4 hours | None |
| Phase 2: Admin API | 6-8 hours | Phase 1 |
| Phase 3: Testing | 4-6 hours | Phase 2 |
| Phase 4: Documentation | 2-3 hours | Phase 3 |
| **Total** | **14-21 hours** | Sequential |

## Next Steps

1. **Review this plan** with stakeholders
2. **Create Phase 1 migration script** in `db/migrations/`
3. **Test migration** on development database
4. **Implement Phase 2 endpoints** in `src/admin_api.c`
5. **Create test suite** in `tests/auth_rules_test.sh`
6. **Update documentation** in `docs/`
7. **Deploy to production** with migration guide

## Conclusion

The authentication rules system is **90% complete** - the core logic exists and is well-tested. This implementation plan focuses on the final 10%: adding database tables and Admin API endpoints. The work is straightforward, well-scoped, and can be completed in 2-3 days of focused development.

The system will provide powerful whitelist/blacklist functionality while maintaining the performance and security characteristics already present in the codebase.

356 docs/PRODUCTION_MIGRATION_PLAN.md (new file)
@@ -0,0 +1,356 @@

# Production Directory Structure Migration Plan

## Overview

This document outlines the plan to migrate the ginxsom production deployment from the current configuration to a new, more organized directory structure.

## Current Configuration (As-Is)

```
Binary Location:    /var/www/html/blossom/ginxsom.fcgi
Database Location:  /var/www/html/blossom/ginxsom.db
Data Directory:     /var/www/html/blossom/
Working Directory:  /var/www/html/blossom/ (set via spawn-fcgi -d)
Socket:             /tmp/ginxsom-fcgi.sock
```

**Issues with Current Setup:**

1. Binary and database are mixed with data files in a web-accessible directory
2. The database path is hardcoded as the relative path `db/ginxsom.db`, but the database actually sits at the root of the working directory
3. No separation between application files and user data
4. Security concern: application files live in the web root

## Target Configuration (To-Be)

```
Binary Location:    /home/ubuntu/ginxsom/ginxsom.fcgi
Database Location:  /home/ubuntu/ginxsom/db/ginxsom.db
Data Directory:     /var/www/html/blossom/
Working Directory:  /home/ubuntu/ginxsom/ (set via spawn-fcgi -d)
Socket:             /tmp/ginxsom-fcgi.sock
```

**Benefits of New Setup:**

1. Application files separated from user data
2. Database in proper subdirectory structure
3. Application files outside web root (better security)
4. Clear separation of concerns
5. Easier backup and maintenance

## Directory Structure

### Application Directory: `/home/ubuntu/ginxsom/`

```
/home/ubuntu/ginxsom/
├── ginxsom.fcgi          # FastCGI binary
├── db/
│   └── ginxsom.db        # SQLite database
├── build/                # Build artifacts (from rsync)
├── src/                  # Source code (from rsync)
├── include/              # Headers (from rsync)
├── config/               # Config files (from rsync)
└── scripts/              # Utility scripts (from rsync)
```

### Data Directory: `/var/www/html/blossom/`

```
/var/www/html/blossom/
├── <sha256>.jpg          # User uploaded files
├── <sha256>.png
├── <sha256>.mp4
└── ...
```

## Command-Line Arguments

The ginxsom binary supports these arguments (from [`src/main.c`](src/main.c:1488-1509)):

```bash
--db-path PATH      # Database file path (default: db/ginxsom.db)
--storage-dir DIR   # Storage directory for files (default: .)
--help, -h          # Show help message
```

## Migration Steps

### 1. Update deploy_lt.sh Configuration

Update the configuration variables in [`deploy_lt.sh`](deploy_lt.sh:16-23):

```bash
# Configuration
REMOTE_HOST="laantungir.net"
REMOTE_USER="ubuntu"
REMOTE_DIR="/home/ubuntu/ginxsom"
REMOTE_DB_PATH="/home/ubuntu/ginxsom/db/ginxsom.db"
REMOTE_NGINX_CONFIG="/etc/nginx/conf.d/default.conf"
REMOTE_BINARY_PATH="/home/ubuntu/ginxsom/ginxsom.fcgi"
REMOTE_SOCKET="/tmp/ginxsom-fcgi.sock"
REMOTE_DATA_DIR="/var/www/html/blossom"
```

### 2. Update Binary Deployment

Modify the binary copy section (lines 82-97) to use the new path:

```bash
# Copy binary to application directory (not web directory)
print_status "Copying ginxsom binary to application directory..."
ssh $REMOTE_USER@$REMOTE_HOST << EOF
# Stop any running process first
sudo pkill -f ginxsom-fcgi || true
sleep 1

# Remove old binary if it exists
rm -f $REMOTE_BINARY_PATH

# Copy new binary
cp $REMOTE_DIR/build/ginxsom-fcgi $REMOTE_BINARY_PATH
chmod +x $REMOTE_BINARY_PATH
chown ubuntu:ubuntu $REMOTE_BINARY_PATH

echo "Binary copied successfully"
EOF
```

### 3. Create Database Directory Structure

Add database setup before starting FastCGI:

```bash
# Setup database directory
print_status "Setting up database directory..."
ssh $REMOTE_USER@$REMOTE_HOST << EOF
# Create db directory if it doesn't exist
mkdir -p $REMOTE_DIR/db

# Copy database if it exists in old location
if [ -f /var/www/html/blossom/ginxsom.db ]; then
    echo "Migrating database from old location..."
    cp /var/www/html/blossom/ginxsom.db $REMOTE_DB_PATH
elif [ ! -f $REMOTE_DB_PATH ]; then
    echo "Initializing new database..."
    # Database will be created by application on first run
fi

# Set proper permissions
chown -R ubuntu:ubuntu $REMOTE_DIR/db
chmod 755 $REMOTE_DIR/db
chmod 644 $REMOTE_DB_PATH 2>/dev/null || true

echo "Database directory setup complete"
EOF
```

### 4. Update spawn-fcgi Command

Modify the FastCGI startup (line 164) to include command-line arguments:

```bash
# Start FastCGI process with explicit paths
echo "Starting ginxsom FastCGI..."
sudo spawn-fcgi \
    -M 666 \
    -u www-data \
    -g www-data \
    -s $REMOTE_SOCKET \
    -U www-data \
    -G www-data \
    -d $REMOTE_DIR \
    -- $REMOTE_BINARY_PATH \
    --db-path "$REMOTE_DB_PATH" \
    --storage-dir "$REMOTE_DATA_DIR"
```

**Key Changes:**

- `-d $REMOTE_DIR`: Sets working directory to `/home/ubuntu/ginxsom/`
- `--db-path "$REMOTE_DB_PATH"`: Explicit database path
- `--storage-dir "$REMOTE_DATA_DIR"`: Explicit data directory

### 5. Verify Permissions

Ensure proper permissions for all directories:

```bash
# Application directory - owned by ubuntu
sudo chown -R ubuntu:ubuntu /home/ubuntu/ginxsom
sudo chmod 755 /home/ubuntu/ginxsom
sudo chmod +x /home/ubuntu/ginxsom/ginxsom.fcgi

# Database directory - readable by www-data
sudo chmod 755 /home/ubuntu/ginxsom/db
sudo chmod 644 /home/ubuntu/ginxsom/db/ginxsom.db

# Data directory - writable by www-data
sudo chown -R www-data:www-data /var/www/html/blossom
sudo chmod 755 /var/www/html/blossom
```

## Path Resolution Logic

### How Paths Work with spawn-fcgi -d Option

When spawn-fcgi starts the FastCGI process:

1. **Working Directory**: Set to `/home/ubuntu/ginxsom/` via `-d` option
2. **Relative Paths**: Resolved from working directory
3. **Absolute Paths**: Used as-is

### Default Behavior (Without Arguments)

From [`src/main.c`](src/main.c:30-31):

```c
char g_db_path[MAX_PATH_LEN] = "db/ginxsom.db";  // Relative to working dir
char g_storage_dir[MAX_PATH_LEN] = ".";          // Current working dir
```

With working directory `/home/ubuntu/ginxsom/`:

- Database: `/home/ubuntu/ginxsom/db/ginxsom.db` ✓
- Storage: `/home/ubuntu/ginxsom/` ✗ (wrong - we want `/var/www/html/blossom/`)

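This can be confirmed from a shell by reproducing what `spawn-fcgi -d` does before exec'ing the binary (a quick sanity check, not part of the deployment):

```bash
# Emulate the child's working directory and check how each path resolves.
cd /home/ubuntu/ginxsom          # what spawn-fcgi -d sets for the process
ls db/ginxsom.db                 # relative default db path resolves here
ls -d /var/www/html/blossom      # absolute --storage-dir is unaffected by cwd
```
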
### With Command-Line Arguments

```bash
--db-path "/home/ubuntu/ginxsom/db/ginxsom.db"
--storage-dir "/var/www/html/blossom"
```

Result:

- Database: `/home/ubuntu/ginxsom/db/ginxsom.db` ✓
- Storage: `/var/www/html/blossom/` ✓

## Testing Plan

### 1. Pre-Migration Verification

```bash
# Check current setup
ssh ubuntu@laantungir.net "
echo 'Current binary location:'
ls -la /var/www/html/blossom/ginxsom.fcgi

echo 'Current database location:'
ls -la /var/www/html/blossom/ginxsom.db

echo 'Current process:'
ps aux | grep ginxsom-fcgi | grep -v grep
"
```

### 2. Post-Migration Verification

```bash
# Check new setup
ssh ubuntu@laantungir.net "
echo 'New binary location:'
ls -la /home/ubuntu/ginxsom/ginxsom.fcgi

echo 'New database location:'
ls -la /home/ubuntu/ginxsom/db/ginxsom.db

echo 'Data directory:'
ls -la /var/www/html/blossom/ | head -10

echo 'Process working directory:'
sudo ls -la /proc/\$(pgrep -f ginxsom.fcgi)/cwd

echo 'Process command line:'
ps aux | grep ginxsom-fcgi | grep -v grep
"
```

### 3. Functional Testing

```bash
# Test health endpoint
curl -k https://blossom.laantungir.net/health

# Test file upload
./tests/file_put_production.sh

# Test file retrieval
curl -k -I https://blossom.laantungir.net/<sha256>

# Test list endpoint
curl -k https://blossom.laantungir.net/list/<pubkey>
```

## Rollback Plan

If migration fails:

1. **Stop new process:**
   ```bash
   sudo pkill -f ginxsom-fcgi
   ```

2. **Restore old binary location:**
   ```bash
   sudo cp /home/ubuntu/ginxsom/build/ginxsom-fcgi /var/www/html/blossom/ginxsom.fcgi
   sudo chown www-data:www-data /var/www/html/blossom/ginxsom.fcgi
   ```

3. **Restart with old configuration:**
   ```bash
   sudo spawn-fcgi -M 666 -u www-data -g www-data \
       -s /tmp/ginxsom-fcgi.sock \
       -U www-data -G www-data \
       -d /var/www/html/blossom \
       /var/www/html/blossom/ginxsom.fcgi
   ```

## Additional Considerations

### 1. Database Backup

Before migration, backup the current database:

```bash
ssh ubuntu@laantungir.net "
cp /var/www/html/blossom/ginxsom.db /var/www/html/blossom/ginxsom.db.backup
"
```

### 2. NIP-94 Origin Configuration

After migration, update [`src/bud08.c`](src/bud08.c) to return the production domain:

```c
void nip94_get_origin(char *origin, size_t origin_size) {
    snprintf(origin, origin_size, "https://blossom.laantungir.net");
}
```

### 3. Monitoring

Monitor logs after migration:

```bash
# Application logs
ssh ubuntu@laantungir.net "sudo journalctl -u nginx -f"

# FastCGI process
ssh ubuntu@laantungir.net "ps aux | grep ginxsom-fcgi"
```

## Success Criteria

Migration is successful when:

1. ✓ Binary running from `/home/ubuntu/ginxsom/ginxsom.fcgi`
2. ✓ Database accessible at `/home/ubuntu/ginxsom/db/ginxsom.db`
3. ✓ Files stored in `/var/www/html/blossom/`
4. ✓ Health endpoint returns 200 OK
5. ✓ File upload works correctly
6. ✓ File retrieval works correctly
7. ✓ Database queries succeed
8. ✓ No permission errors in logs

## Timeline

1. **Preparation**: Update deploy_lt.sh script (15 minutes)
2. **Backup**: Backup current database (5 minutes)
3. **Migration**: Run updated deployment script (10 minutes)
4. **Testing**: Verify all endpoints (15 minutes)
5. **Monitoring**: Watch for issues (30 minutes)

**Total Estimated Time**: ~75 minutes

## References

- Current deployment script: [`deploy_lt.sh`](deploy_lt.sh)
- Main application: [`src/main.c`](src/main.c)
- Command-line parsing: [`src/main.c:1488-1509`](src/main.c:1488-1509)
- Global configuration: [`src/main.c:30-31`](src/main.c:30-31)
- Database operations: [`src/main.c:333-385`](src/main.c:333-385)

@@ -33,6 +33,10 @@
#define DEFAULT_MAX_BLOBS_PER_USER 1000
#define DEFAULT_RATE_LIMIT 10

/* Global configuration variables */
extern char g_db_path[MAX_PATH_LEN];
extern char g_storage_dir[MAX_PATH_LEN];

/* Error codes */
typedef enum {
  GINXSOM_OK = 0,

@@ -131,21 +131,48 @@ increment_version() {
    export NEW_VERSION
}

# Function to update version in header file
update_version_in_header() {
    local version="$1"
    print_status "Updating version in src/ginxsom.h to $version..."

    # Extract version components (remove 'v' prefix)
    local version_no_v=${version#v}

    # Parse major.minor.patch using regex
    if [[ $version_no_v =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
        local major=${BASH_REMATCH[1]}
        local minor=${BASH_REMATCH[2]}
        local patch=${BASH_REMATCH[3]}

        # Update the header file
        sed -i "s/#define VERSION_MAJOR [0-9]\+/#define VERSION_MAJOR $major/" src/ginxsom.h
        sed -i "s/#define VERSION_MINOR [0-9]\+/#define VERSION_MINOR $minor/" src/ginxsom.h
        sed -i "s/#define VERSION_PATCH [0-9]\+/#define VERSION_PATCH $patch/" src/ginxsom.h
        sed -i "s/#define VERSION \"v[0-9]\+\.[0-9]\+\.[0-9]\+\"/#define VERSION \"$version\"/" src/ginxsom.h

        print_success "Updated version in header file"
    else
        print_error "Invalid version format: $version"
        exit 1
    fi
}

# Function to compile the Ginxsom project
compile_project() {
    print_status "Compiling Ginxsom FastCGI server..."

    # Clean previous build
    if make clean > /dev/null 2>&1; then
        print_success "Cleaned previous build"
    else
        print_warning "Clean failed or no Makefile found"
    fi

    # Compile the project
    if make > /dev/null 2>&1; then
        print_success "Ginxsom compiled successfully"

        # Verify the binary was created
        if [[ -f "build/ginxsom-fcgi" ]]; then
            print_success "Binary created: build/ginxsom-fcgi"
@@ -390,9 +417,12 @@ main() {
        git tag "$NEW_VERSION" > /dev/null 2>&1
    fi

    # Update version in header file
    update_version_in_header "$NEW_VERSION"

    # Compile project
    compile_project

    # Build release binary
    build_release_binary

@@ -423,9 +453,12 @@ main() {
        git tag "$NEW_VERSION" > /dev/null 2>&1
    fi

    # Update version in header file
    update_version_in_header "$NEW_VERSION"

    # Compile project
    compile_project

    # Commit and push (but skip tag creation since we already did it)
    git_commit_and_push_no_tag

376 remote.nginx.config (new file)
@@ -0,0 +1,376 @@

# FastCGI upstream configuration
upstream ginxsom_backend {
    server unix:/tmp/ginxsom-fcgi.sock;
}

# Main domains
server {
    if ($host = laantungir.net) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name laantungir.com www.laantungir.com laantungir.net www.laantungir.net laantungir.org www.laantungir.org;

    root /var/www/html;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/html;
    }
}

# Main domains HTTPS - using the main certificate
server {
    listen 443 ssl;
    server_name laantungir.com www.laantungir.com laantungir.net www.laantungir.net laantungir.org www.laantungir.org;
    ssl_certificate /etc/letsencrypt/live/laantungir.net/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/laantungir.net/privkey.pem; # managed by Certbot

    root /var/www/html;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/html;
    }
}

# Blossom subdomains HTTP - redirect to HTTPS (keep for ACME)
server {
    listen 80;
    server_name blossom.laantungir.com blossom.laantungir.net blossom.laantungir.org;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$server_name$request_uri;
    }
}

# Blossom subdomains HTTPS - ginxsom FastCGI
server {
    listen 443 ssl;
    server_name blossom.laantungir.com blossom.laantungir.net blossom.laantungir.org;

    ssl_certificate /etc/letsencrypt/live/git.laantungir.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/git.laantungir.net/privkey.pem;

    # Security headers
    add_header X-Content-Type-Options nosniff always;
    add_header X-Frame-Options DENY always;
    add_header X-XSS-Protection "1; mode=block" always;

    # CORS for Blossom protocol
    add_header Access-Control-Allow-Origin * always;
    add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH" always;
    add_header Access-Control-Allow-Headers "Authorization, Content-Type, Content-Length, Accept, Origin, User-Agent, DNT, Cache-Control, X-Mx-ReqToken, Keep-Alive, X-Requested-With, If-Modified-Since, *" always;
    add_header Access-Control-Max-Age 86400 always;

    # Root directory for blob storage
    root /var/www/html/blossom;

    # Maximum upload size
    client_max_body_size 100M;

    # OPTIONS preflight handler
    if ($request_method = OPTIONS) {
        return 204;
    }

    # PUT /upload - File uploads
    location = /upload {
        if ($request_method !~ ^(PUT|HEAD)$) {
            return 405;
        }
        fastcgi_pass ginxsom_backend;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
    }

    # GET /list/<pubkey> - List user blobs
    location ~ "^/list/([a-f0-9]{64})$" {
        if ($request_method !~ ^(GET)$) {
            return 405;
        }
        fastcgi_pass ginxsom_backend;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
    }

    # PUT /mirror - Mirror content
    location = /mirror {
        if ($request_method !~ ^(PUT)$) {
            return 405;
        }
        fastcgi_pass ginxsom_backend;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
    }

    # PUT /report - Report content
    location = /report {
        if ($request_method !~ ^(PUT)$) {
            return 405;
        }
        fastcgi_pass ginxsom_backend;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
    }

    # GET /auth - NIP-42 challenges
    location = /auth {
        if ($request_method !~ ^(GET)$) {
            return 405;
        }
        fastcgi_pass ginxsom_backend;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
    }

    # Admin API
    location /api/ {
        if ($request_method !~ ^(GET|PUT)$) {
            return 405;
        }
        fastcgi_pass ginxsom_backend;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
    }

    # Blob serving - SHA256 patterns
    location ~ "^/([a-f0-9]{64})(\.[a-zA-Z0-9]+)?$" {
        # Handle DELETE via rewrite
        if ($request_method = DELETE) {
            rewrite ^/(.*)$ /fcgi-delete/$1 last;
        }

        # Route HEAD to FastCGI
        if ($request_method = HEAD) {
            rewrite ^/(.*)$ /fcgi-head/$1 last;
        }

        # GET requests - serve files directly
        if ($request_method != GET) {
            return 405;
        }

        try_files /$1.txt /$1.jpg /$1.jpeg /$1.png /$1.webp /$1.gif /$1.pdf /$1.mp4 /$1.mp3 /$1.md =404;

        # Cache headers
        add_header Cache-Control "public, max-age=31536000, immutable";
    }

    # Internal FastCGI handlers
    location ~ "^/fcgi-delete/([a-f0-9]{64}).*$" {
        internal;
        fastcgi_pass ginxsom_backend;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
        fastcgi_param REQUEST_URI /$1;
    }

    location ~ "^/fcgi-head/([a-f0-9]{64}).*$" {
        internal;
        fastcgi_pass ginxsom_backend;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
        fastcgi_param REQUEST_URI /$1;
    }

    # Health check
    location /health {
        access_log off;
        return 200 "OK\n";
        add_header Content-Type text/plain;
        add_header Access-Control-Allow-Origin * always;
        add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH" always;
        add_header Access-Control-Allow-Headers "Authorization, Content-Type, Content-Length, Accept, Origin, User-Agent, DNT, Cache-Control, X-Mx-ReqToken, Keep-Alive, X-Requested-With, If-Modified-Since, *" always;
        add_header Access-Control-Max-Age 86400 always;
    }

    # Default location - Server info from FastCGI
    location / {
        if ($request_method !~ ^(GET)$) {
            return 405;
        }
        fastcgi_pass ginxsom_backend;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
    }
}

server {
    listen 80;
    server_name relay.laantungir.com relay.laantungir.net relay.laantungir.org;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        proxy_pass http://127.0.0.1:8888;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
        proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
        proxy_connect_timeout 60s;
        proxy_buffering off;
        proxy_request_buffering off;
        gzip off;
    }
}

# Relay HTTPS - proxy to c-relay
server {
    listen 443 ssl;
    server_name relay.laantungir.com relay.laantungir.net relay.laantungir.org;

    ssl_certificate /etc/letsencrypt/live/blossom.laantungir.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blossom.laantungir.net/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8888;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
        proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
        proxy_connect_timeout 60s;
        proxy_buffering off;
        proxy_request_buffering off;
        gzip off;
    }
}

# Git subdomains HTTP - redirect to HTTPS
server {
    listen 80;
    server_name git.laantungir.com git.laantungir.net git.laantungir.org;

    # Allow larger file uploads for Git releases
    client_max_body_size 50M;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$server_name$request_uri;
    }
}

# Auth subdomains HTTP - redirect to HTTPS
server {
    listen 80;
    server_name auth.laantungir.com auth.laantungir.net auth.laantungir.org;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
    }
}

# Git subdomains HTTPS - proxy to gitea
server {
    listen 443 ssl;
    server_name git.laantungir.com git.laantungir.net git.laantungir.org;

    # Allow larger file uploads for Git releases
    client_max_body_size 50M;

    ssl_certificate /etc/letsencrypt/live/git.laantungir.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/git.laantungir.net/privkey.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_buffering off;
        proxy_request_buffering off;
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
        proxy_connect_timeout 60s;
        gzip off;
        # proxy_set_header Sec-WebSocket-Extensions ;
        proxy_set_header Host $host;
        proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
        proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}

# Auth subdomains HTTPS - proxy to nostr-auth
server {
    listen 443 ssl;
    server_name auth.laantungir.com auth.laantungir.net auth.laantungir.org;

    ssl_certificate /etc/letsencrypt/live/git.laantungir.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/git.laantungir.net/privkey.pem;

    location / {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_buffering off;
        proxy_request_buffering off;
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
        proxy_connect_timeout 60s;
        gzip off;
        # proxy_set_header Sec-WebSocket-Extensions ;
        proxy_set_header Host $host;
        proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
        proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}

@@ -168,7 +168,7 @@ export GINX_DEBUG=1

# Start FastCGI application with proper logging (daemonized but with redirected streams)
echo "FastCGI starting at $(date)" >> logs/app/stderr.log
spawn-fcgi -s "$SOCKET_PATH" -M 666 -u "$USER" -g "$USER" -f "$FCGI_BINARY" -P "$PID_FILE" 1>>logs/app/stdout.log 2>>logs/app/stderr.log
spawn-fcgi -s "$SOCKET_PATH" -M 666 -u "$USER" -g "$USER" -P "$PID_FILE" -- "$FCGI_BINARY" --storage-dir blobs 1>>logs/app/stdout.log 2>>logs/app/stderr.log

if [ $? -eq 0 ] && [ -f "$PID_FILE" ]; then
    PID=$(cat "$PID_FILE")
@@ -250,6 +250,8 @@ else
fi

echo -e "\n${GREEN}=== Restart sequence complete ===${NC}"
echo -e "${YELLOW}Server should be available at: http://localhost:9001${NC}"
echo -e "${YELLOW}To stop all processes, run: nginx -p . -c $NGINX_CONFIG -s stop && kill \$(cat $PID_FILE 2>/dev/null)${NC}"
echo -e "${YELLOW}To monitor logs, check: logs/error.log, logs/access.log, and logs/fcgi-stderr.log${NC}"
echo -e "${YELLOW}To monitor logs, check: logs/nginx/error.log, logs/nginx/access.log, logs/app/stderr.log, logs/app/stdout.log${NC}"
echo -e "\n${YELLOW}Server is available at:${NC}"
echo -e "  ${GREEN}HTTP:${NC}  http://localhost:9001"
echo -e "  ${GREEN}HTTPS:${NC} https://localhost:9443"

14 src/bud04.c
@@ -426,9 +426,17 @@ void handle_mirror_request(void) {
  // Determine file extension from Content-Type using centralized mapping
  const char* extension = mime_to_extension(content_type_final);

  // Save file to blobs directory
  char filepath[512];
  snprintf(filepath, sizeof(filepath), "blobs/%s%s", sha256_hex, extension);
  // Save file to storage directory using global g_storage_dir variable
  char filepath[4096];
  int filepath_len = snprintf(filepath, sizeof(filepath), "%s/%s%s", g_storage_dir, sha256_hex, extension);
  if (filepath_len >= (int)sizeof(filepath)) {
    free_mirror_download(download);
    send_error_response(500, "file_error",
                        "File path too long",
                        "Internal server error during file path construction");
    log_request("PUT", "/mirror", uploader_pubkey ? "authenticated" : "anonymous", 500);
    return;
  }

  FILE* outfile = fopen(filepath, "wb");
  if (!outfile) {

67 src/bud08.c
@@ -24,7 +24,7 @@ int nip94_is_enabled(void) {
    return 1; // Default enabled on DB error
  }

  const char* sql = "SELECT value FROM server_config WHERE key = 'nip94_enabled'";
  const char* sql = "SELECT value FROM config WHERE key = 'nip94_enabled'";
  rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
  if (rc == SQLITE_OK) {
    rc = sqlite3_step(stmt);
@@ -44,40 +44,53 @@ int nip94_get_origin(char* out, size_t out_size) {
  if (!out || out_size == 0) {
    return 0;
  }

  // Check database config first for custom origin
  sqlite3* db;
  sqlite3_stmt* stmt;
  int rc;

  rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
  if (rc) {
    // Default on DB error
    strncpy(out, "http://localhost:9001", out_size - 1);
    out[out_size - 1] = '\0';
  if (rc == SQLITE_OK) {
    const char* sql = "SELECT value FROM config WHERE key = 'cdn_origin'";
    rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
      rc = sqlite3_step(stmt);
      if (rc == SQLITE_ROW) {
        const char* value = (const char*)sqlite3_column_text(stmt, 0);
        if (value) {
          strncpy(out, value, out_size - 1);
          out[out_size - 1] = '\0';
          sqlite3_finalize(stmt);
          sqlite3_close(db);
          return 1;
        }
      }
      sqlite3_finalize(stmt);
    }
    sqlite3_close(db);
  }

  // Check if request came over HTTPS (nginx sets HTTPS=on for SSL requests)
  const char* https_env = getenv("HTTPS");
  const char* server_name = getenv("SERVER_NAME");

  // Use production domain if SERVER_NAME is set and not localhost
  if (server_name && strcmp(server_name, "localhost") != 0) {
    if (https_env && strcmp(https_env, "on") == 0) {
      snprintf(out, out_size, "https://%s", server_name);
    } else {
      snprintf(out, out_size, "http://%s", server_name);
    }
    return 1;
  }

  const char* sql = "SELECT value FROM server_config WHERE key = 'cdn_origin'";
  rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
  if (rc == SQLITE_OK) {
    rc = sqlite3_step(stmt);
    if (rc == SQLITE_ROW) {
      const char* value = (const char*)sqlite3_column_text(stmt, 0);
      if (value) {
        strncpy(out, value, out_size - 1);
        out[out_size - 1] = '\0';
        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 1;
      }
    }
    sqlite3_finalize(stmt);
  // Fallback to localhost for development
  if (https_env && strcmp(https_env, "on") == 0) {
    strncpy(out, "https://localhost:9443", out_size - 1);
  } else {
    strncpy(out, "http://localhost:9001", out_size - 1);
  }

  sqlite3_close(db);

  // Default fallback
  strncpy(out, "http://localhost:9001", out_size - 1);
  out[out_size - 1] = '\0';
  return 1;
}

@@ -7,6 +7,11 @@

#ifndef GINXSOM_H
#define GINXSOM_H
// Version information (auto-updated by build system)
#define VERSION_MAJOR 0
#define VERSION_MINOR 1
#define VERSION_PATCH 7
#define VERSION "v0.1.7"

#include <stddef.h>
#include <stdint.h>
@@ -30,6 +35,10 @@ extern sqlite3* db;
int init_database(void);
void close_database(void);

// Global configuration variables (defined in main.c)
extern char g_db_path[4096];
extern char g_storage_dir[4096];

// SHA-256 extraction and validation
const char* extract_sha256_from_uri(const char* uri);

512 src/main.c
@@ -7,6 +7,7 @@
#include "ginxsom.h"
#include "../nostr_core_lib/nostr_core/nostr_common.h"
#include "../nostr_core_lib/nostr_core/utils.h"
#include <getopt.h>
#include <curl/curl.h>
#include <fcgi_stdio.h>
#include <sqlite3.h>
@@ -22,11 +23,15 @@
// Debug macros removed

#define MAX_SHA256_LEN 65
#define MAX_PATH_LEN 512
#define MAX_PATH_LEN 4096
#define MAX_MIME_LEN 128

// Database path
#define DB_PATH "db/ginxsom.db"
// Configuration variables - can be overridden via command line
char g_db_path[MAX_PATH_LEN] = "db/ginxsom.db";
char g_storage_dir[MAX_PATH_LEN] = ".";

// Use global configuration variables
#define DB_PATH g_db_path

// Configuration system implementation

@@ -35,22 +40,6 @@
#include <sys/stat.h>
#include <unistd.h>

// ===== UNUSED CODE - SAFE TO REMOVE AFTER TESTING =====
// Server configuration structure
/*
typedef struct {
  char admin_pubkey[256];
  char admin_enabled[8];
  int config_loaded;
} server_config_t;

// Global configuration instance
static server_config_t g_server_config = {0};

// Global server private key (stored in memory only for security)
static char server_private_key[128] = {0};
*/
// ===== END UNUSED CODE =====

// Function to get XDG config directory
const char *get_config_dir(char *buffer, size_t buffer_size) {

@@ -70,240 +59,6 @@ const char *get_config_dir(char *buffer, size_t buffer_size) {
  return ".config/ginxsom";
}

/*
// ===== UNUSED CODE - SAFE TO REMOVE AFTER TESTING =====
// Load server configuration from database or create defaults
int initialize_server_config(void) {
  sqlite3 *db = NULL;
  sqlite3_stmt *stmt = NULL;
  int rc;

  memset(&g_server_config, 0, sizeof(g_server_config));

  // Open database
  rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
  if (rc != SQLITE_OK) {
    fprintf(stderr, "CONFIG: Could not open database for config: %s\n",
            sqlite3_errmsg(db));
    // Config database doesn't exist - leave config uninitialized
    g_server_config.config_loaded = 0;
    return 0;
  }

  // Load admin_pubkey
  const char *sql = "SELECT value FROM config WHERE key = ?";
  rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
  if (rc == SQLITE_OK) {
    sqlite3_bind_text(stmt, 1, "admin_pubkey", -1, SQLITE_STATIC);
    rc = sqlite3_step(stmt);
    if (rc == SQLITE_ROW) {
      const char *value = (const char *)sqlite3_column_text(stmt, 0);
      if (value) {
        strncpy(g_server_config.admin_pubkey, value,
                sizeof(g_server_config.admin_pubkey) - 1);
      }
    }
    sqlite3_finalize(stmt);
  }

  // Load admin_enabled
  rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
  if (rc == SQLITE_OK) {
    sqlite3_bind_text(stmt, 1, "admin_enabled", -1, SQLITE_STATIC);
    rc = sqlite3_step(stmt);
    if (rc == SQLITE_ROW) {
      const char *value = (const char *)sqlite3_column_text(stmt, 0);
      if (value && strcmp(value, "true") == 0) {
        strcpy(g_server_config.admin_enabled, "true");
      } else {
        strcpy(g_server_config.admin_enabled, "false");
      }
    }
    sqlite3_finalize(stmt);
  }

  sqlite3_close(db);

  g_server_config.config_loaded = 1;
  fprintf(stderr, "CONFIG: Server configuration loaded\n");
  return 1;
}
// ===== END UNUSED CODE =====
*/

/*
// File-based configuration system
// Config file path resolution
int get_config_file_path(char *path, size_t path_size) {
  const char *home = getenv("HOME");
  const char *xdg_config = getenv("XDG_CONFIG_HOME");

  if (xdg_config) {
    snprintf(path, path_size, "%s/ginxsom/ginxsom_config_event.json",
             xdg_config);
  } else if (home) {
    snprintf(path, path_size, "%s/.config/ginxsom/ginxsom_config_event.json",
             home);
  } else {
    return 0;
  }
  return 1;
}
*/

/*
// Load and validate config event
int load_server_config(const char *config_path) {
  FILE *file = fopen(config_path, "r");
  if (!file) {
    return 0; // Config file doesn't exist
  }

  // Read entire file
  fseek(file, 0, SEEK_END);
  long file_size = ftell(file);
  fseek(file, 0, SEEK_SET);

  char *json_data = malloc(file_size + 1);
  if (!json_data) {
    fclose(file);
    return 0;
  }

  fread(json_data, 1, file_size, file);
  json_data[file_size] = '\0';
  fclose(file);

  // Parse and validate JSON event
  cJSON *event = cJSON_Parse(json_data);
  free(json_data);

  if (!event) {
    fprintf(stderr, "Invalid JSON in config file\n");
    return 0;
  }

  // Validate event structure and signature
  if (nostr_validate_event(event) != NOSTR_SUCCESS) {
    fprintf(stderr, "Invalid or corrupted config event\n");
    cJSON_Delete(event);
    return 0;
  }

  // Extract configuration and apply to server
  int result = apply_config_from_event(event);
  cJSON_Delete(event);

  return result;
}
*/

/*
// Extract config from validated event and apply to server
int apply_config_from_event(cJSON *event) {
  sqlite3 *db;
  sqlite3_stmt *stmt;
  int rc;

  // Open database for config storage
  rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READWRITE, NULL);
  if (rc) {
    fprintf(stderr, "Failed to open database for config\n");
    return 0;
  }

  // Extract admin pubkey from event
  cJSON *pubkey_json = cJSON_GetObjectItem(event, "pubkey");
  if (!pubkey_json || !cJSON_IsString(pubkey_json)) {
    sqlite3_close(db);
    return 0;
  }
  const char *admin_pubkey = cJSON_GetStringValue(pubkey_json);

  // Store admin pubkey in database
  const char *insert_sql = "INSERT OR REPLACE INTO config (key, value, "
                           "description) VALUES (?, ?, ?)";
  rc = sqlite3_prepare_v2(db, insert_sql, -1, &stmt, NULL);
  if (rc == SQLITE_OK) {
    sqlite3_bind_text(stmt, 1, "admin_pubkey", -1, SQLITE_STATIC);
    sqlite3_bind_text(stmt, 2, admin_pubkey, -1, SQLITE_STATIC);
    sqlite3_bind_text(stmt, 3, "Admin public key from config event", -1,
                      SQLITE_STATIC);
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);
  }

  // Extract server private key and store securely (in memory only)
  cJSON *tags = cJSON_GetObjectItem(event, "tags");
  if (tags && cJSON_IsArray(tags)) {
    cJSON *tag = NULL;
    cJSON_ArrayForEach(tag, tags) {
      if (!cJSON_IsArray(tag))
        continue;

      cJSON *tag_name = cJSON_GetArrayItem(tag, 0);
      cJSON *tag_value = cJSON_GetArrayItem(tag, 1);

      if (!tag_name || !cJSON_IsString(tag_name) || !tag_value ||
          !cJSON_IsString(tag_value))
        continue;

      const char *key = cJSON_GetStringValue(tag_name);
      const char *value = cJSON_GetStringValue(tag_value);

      if (strcmp(key, "server_privkey") == 0) {
        // Store server private key in global variable (memory only)
        // strncpy(server_private_key, value, sizeof(server_private_key) - 1);
        // server_private_key[sizeof(server_private_key) - 1] = '\0';
      } else {
        // Store other config values in database
        rc = sqlite3_prepare_v2(db, insert_sql, -1, &stmt, NULL);
        if (rc == SQLITE_OK) {
          sqlite3_bind_text(stmt, 1, key, -1, SQLITE_STATIC);
          sqlite3_bind_text(stmt, 2, value, -1, SQLITE_STATIC);
          sqlite3_bind_text(stmt, 3, "From config event", -1, SQLITE_STATIC);
          sqlite3_step(stmt);
          sqlite3_finalize(stmt);
        }
      }
    }
  }

  sqlite3_close(db);
  return 1;
}
*/

/*
// Interactive setup runner
int run_interactive_setup(const char *config_path) {
  printf("\n=== Ginxsom First-Time Setup Required ===\n");
  printf("No configuration found at: %s\n\n", config_path);
  printf("Options:\n");
  printf("1. Run interactive setup wizard\n");
  printf("2. Exit and create config manually\n");
  printf("Choice (1/2): ");

  char choice[10];
  if (!fgets(choice, sizeof(choice), stdin)) {
    return 1;
  }

  if (choice[0] == '1') {
    // Run setup script
    char script_path[512];
    snprintf(script_path, sizeof(script_path), "./scripts/setup.sh \"%s\"",
             config_path);
    return system(script_path);
  } else {
    printf("\nManual setup instructions:\n");
    printf("1. Run: ./scripts/generate_config.sh\n");
    printf("2. Place signed config at: %s\n", config_path);
    printf("3. Restart ginxsom\n");
    return 1;
  }
}
*/

// Function declarations
|
||||
void handle_options_request(void);
|
||||
@@ -434,7 +189,23 @@ int file_exists_with_type(const char *sha256, const char *mime_type) {
|
||||
char filepath[MAX_PATH_LEN];
|
||||
const char *extension = mime_to_extension(mime_type);
|
||||
|
||||
snprintf(filepath, sizeof(filepath), "blobs/%s%s", sha256, extension);
|
||||
// Construct path safely
|
||||
size_t dir_len = strlen(g_storage_dir);
|
||||
size_t sha_len = strlen(sha256);
|
||||
size_t ext_len = strlen(extension);
|
||||
size_t total_len = dir_len + 1 + sha_len + ext_len + 1; // +1 for /, +1 for null
|
||||
|
||||
if (total_len > sizeof(filepath)) {
|
||||
fprintf(stderr, "WARNING: File path too long for buffer: %s/%s%s\n", g_storage_dir, sha256, extension);
|
||||
return 0;
|
||||
}
|
||||
|
||||
// Build path manually to avoid compiler warnings
|
||||
memcpy(filepath, g_storage_dir, dir_len);
|
||||
filepath[dir_len] = '/';
|
||||
memcpy(filepath + dir_len + 1, sha256, sha_len);
|
||||
memcpy(filepath + dir_len + 1 + sha_len, extension, ext_len);
|
||||
filepath[total_len - 1] = '\0';
|
||||
|
||||
struct stat st;
|
||||
int result = stat(filepath, &st);
|
||||
@@ -524,18 +295,6 @@ const char *extract_sha256_from_uri(const char *uri) {
|
||||
return sha256_buffer;
|
||||
}
|
||||
|
||||
/////////////////////////////////////////////////////////////////////////////////////////
|
||||
/////////////////////////////////////////////////////////////////////////////////////////
|
||||
// BUD 02 - Upload & Authentication
|
||||
/////////////////////////////////////////////////////////////////////////////////////////
|
||||
/////////////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
|
||||
/////////////////////////////////////////////////////////////////////////////////////////
|
||||
/////////////////////////////////////////////////////////////////////////////////////////
|
||||
// AUTHENTICATION RULES SYSTEM (4.1.2)
|
||||
/////////////////////////////////////////////////////////////////////////////////////////
|
||||
/////////////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
/////////////////////////////////////////////////////////////////////////////////////////
|
||||
/////////////////////////////////////////////////////////////////////////////////////////
|
||||
@@ -911,7 +670,23 @@ void handle_delete_request_with_validation(const char *sha256, nostr_request_res
  }

  char filepath[MAX_PATH_LEN];
  snprintf(filepath, sizeof(filepath), "blobs/%s%s", sha256, extension);
  // Construct path safely
  size_t dir_len = strlen(g_storage_dir);
  size_t sha_len = strlen(sha256);
  size_t ext_len = strlen(extension);
  size_t total_len = dir_len + 1 + sha_len + ext_len + 1; // +1 for /, +1 for null

  if (total_len > sizeof(filepath)) {
    fprintf(stderr, "WARNING: File path too long for buffer: %s/%s%s\n", g_storage_dir, sha256, extension);
    filepath[0] = '\0'; // keep the buffer defined so the unlink below fails cleanly
    // Continue anyway - unlink will fail gracefully
  } else {
    // Build path manually to avoid compiler warnings
    memcpy(filepath, g_storage_dir, dir_len);
    filepath[dir_len] = '/';
    memcpy(filepath + dir_len + 1, sha256, sha_len);
    memcpy(filepath + dir_len + 1 + sha_len, extension, ext_len);
    filepath[total_len - 1] = '\0';
  }

  // Delete the physical file
  if (unlink(filepath) != 0) {
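With the hypothetical `build_blob_path()` sketched after the first hunk, the overflow branch here would collapse to a single guard. A self-contained sketch of the same control flow (`MAX_PATH_LEN` value is an assumption; the real constant lives in the server's headers):

```c
#include <stdio.h>
#include <unistd.h>

#define MAX_PATH_LEN 4096 /* assumption: placeholder for the server's constant */

/* Hypothetical wrapper reusing build_blob_path() from the earlier sketch.
 * Returns 0 on success, -1 if the path could not be built or unlink failed. */
static int delete_blob(const char *storage_dir, const char *sha256,
                       const char *extension) {
    char filepath[MAX_PATH_LEN];
    if (!build_blob_path(filepath, sizeof(filepath),
                         storage_dir, sha256, extension)) {
        fprintf(stderr, "WARNING: File path too long for buffer: %s/%s%s\n",
                storage_dir, sha256, extension);
        return -1; /* nothing to unlink: the blob cannot exist under this name */
    }
    return unlink(filepath) == 0 ? 0 : -1;
}
```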
@@ -1029,9 +804,29 @@ void handle_upload_request(void) {
  // Determine file extension from Content-Type using centralized mapping
  const char *extension = mime_to_extension(content_type);

  // Save file to blobs directory with SHA-256 + extension
  // Save file to storage directory with SHA-256 + extension
  char filepath[MAX_PATH_LEN];
  snprintf(filepath, sizeof(filepath), "blobs/%s%s", sha256_hex, extension);
  // Construct path safely
  size_t dir_len = strlen(g_storage_dir);
  size_t sha_len = strlen(sha256_hex);
  size_t ext_len = strlen(extension);
  size_t total_len = dir_len + 1 + sha_len + ext_len + 1; // +1 for /, +1 for null

  if (total_len > sizeof(filepath)) {
    fprintf(stderr, "WARNING: File path too long for buffer: %s/%s%s\n", g_storage_dir, sha256_hex, extension);
    printf("Status: 500 Internal Server Error\r\n");
    printf("Content-Type: text/plain\r\n\r\n");
    printf("File path too long\n");
    return;
  }

  // Build path manually to avoid compiler warnings
  memcpy(filepath, g_storage_dir, dir_len);
  filepath[dir_len] = '/';
  memcpy(filepath + dir_len + 1, sha256_hex, sha_len);
  memcpy(filepath + dir_len + 1 + sha_len, extension, ext_len);
  filepath[total_len - 1] = '\0';

  FILE *outfile = fopen(filepath, "wb");
  if (!outfile) {
@@ -1280,9 +1075,29 @@ void handle_upload_request_with_validation(nostr_request_result_t* validation_re
  // Determine file extension from Content-Type using centralized mapping
  const char *extension = mime_to_extension(content_type);

  // Save file to blobs directory with SHA-256 + extension
  // Save file to storage directory with SHA-256 + extension
  char filepath[MAX_PATH_LEN];
  snprintf(filepath, sizeof(filepath), "blobs/%s%s", sha256_hex, extension);
  // Construct path safely
  size_t dir_len = strlen(g_storage_dir);
  size_t sha_len = strlen(sha256_hex);
  size_t ext_len = strlen(extension);
  size_t total_len = dir_len + 1 + sha_len + ext_len + 1; // +1 for /, +1 for null

  if (total_len > sizeof(filepath)) {
    fprintf(stderr, "WARNING: File path too long for buffer: %s/%s%s\n", g_storage_dir, sha256_hex, extension);
    printf("Status: 500 Internal Server Error\r\n");
    printf("Content-Type: text/plain\r\n\r\n");
    printf("File path too long\n");
    return;
  }

  // Build path manually to avoid compiler warnings
  memcpy(filepath, g_storage_dir, dir_len);
  filepath[dir_len] = '/';
  memcpy(filepath + dir_len + 1, sha256_hex, sha_len);
  memcpy(filepath + dir_len + 1 + sha_len, extension, ext_len);
  filepath[total_len - 1] = '\0';

  FILE *outfile = fopen(filepath, "wb");
  if (!outfile) {
@@ -1480,7 +1295,31 @@ void handle_auth_challenge_request(void) {
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////

int main(void) {
int main(int argc, char *argv[]) {
  // Parse command line arguments
  for (int i = 1; i < argc; i++) {
    if (strcmp(argv[i], "--db-path") == 0 && i + 1 < argc) {
      strncpy(g_db_path, argv[i + 1], sizeof(g_db_path) - 1);
      i++; // Skip next argument
    } else if (strcmp(argv[i], "--storage-dir") == 0 && i + 1 < argc) {
      strncpy(g_storage_dir, argv[i + 1], sizeof(g_storage_dir) - 1);
      i++; // Skip next argument
    } else if (strcmp(argv[i], "--help") == 0 || strcmp(argv[i], "-h") == 0) {
      printf("Usage: %s [options]\n", argv[0]);
      printf("Options:\n");
      printf("  --db-path PATH      Database file path (default: db/ginxsom.db)\n");
      printf("  --storage-dir DIR   Storage directory for files (default: blobs)\n");
      printf("  --help, -h          Show this help message\n");
      return 0;
    } else {
      fprintf(stderr, "Unknown option: %s\n", argv[i]);
      fprintf(stderr, "Use --help for usage information\n");
      return 1;
    }
  }

  fprintf(stderr, "STARTUP: Using database path: %s\n", g_db_path);
  fprintf(stderr, "STARTUP: Using storage directory: %s\n", g_storage_dir);

  // Initialize server configuration and identity
  // Try file-based config first, then fall back to database config
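Note that `strncpy(dst, src, sizeof(dst) - 1)` as used above leaves the copy NUL-terminated only because `g_db_path` and `g_storage_dir` are zero-initialized globals. A sketch of a variant that terminates explicitly, independent of how the destination was initialized (hypothetical name, not in the diff):

```c
#include <string.h>

/* Hypothetical bounded copy: strncpy() omits the NUL whenever src fills the
 * buffer, so write the terminator explicitly. */
static void copy_arg(char *dst, size_t dst_size, const char *src) {
    strncpy(dst, src, dst_size - 1);
    dst[dst_size - 1] = '\0';
}
```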
@@ -1551,11 +1390,66 @@ if (!config_loaded /* && !initialize_server_config() */) {
/////////////////////////////////////////////////////////////////////
// CENTRALIZED REQUEST VALIDATION SYSTEM
/////////////////////////////////////////////////////////////////////


  // Special case: Root endpoint is public and doesn't require authentication
  if (strcmp(request_method, "GET") == 0 && strcmp(request_uri, "/") == 0) {
    // Handle GET / requests - Server info endpoint
    printf("Status: 200 OK\r\n");
    printf("Content-Type: application/json\r\n\r\n");
    printf("{\n");
    printf("  \"server\": \"ginxsom\",\n");
    printf("  \"version\": \"%s\",\n", VERSION);
    printf("  \"description\": \"Ginxsom Blossom Server\",\n");
    printf("  \"endpoints\": {\n");
    printf("    \"blob_get\": \"GET /<sha256>\",\n");
    printf("    \"blob_head\": \"HEAD /<sha256>\",\n");
    printf("    \"upload\": \"PUT /upload\",\n");
    printf("    \"upload_requirements\": \"HEAD /upload\",\n");
    printf("    \"list\": \"GET /list/<pubkey>\",\n");
    printf("    \"delete\": \"DELETE /<sha256>\",\n");
    printf("    \"mirror\": \"PUT /mirror\",\n");
    printf("    \"report\": \"PUT /report\",\n");
    printf("    \"health\": \"GET /health\",\n");
    printf("    \"auth\": \"GET /auth\"\n");
    printf("  },\n");
    printf("  \"supported_buds\": [\n");
    printf("    \"BUD-01\",\n");
    printf("    \"BUD-02\",\n");
    printf("    \"BUD-04\",\n");
    printf("    \"BUD-06\",\n");
    printf("    \"BUD-08\",\n");
    printf("    \"BUD-09\"\n");
    printf("  ],\n");
    printf("  \"limits\": {\n");
    printf("    \"max_upload_size\": 104857600,\n");
    printf("    \"supported_mime_types\": [\n");
    printf("      \"image/jpeg\",\n");
    printf("      \"image/png\",\n");
    printf("      \"image/webp\",\n");
    printf("      \"image/gif\",\n");
    printf("      \"video/mp4\",\n");
    printf("      \"video/webm\",\n");
    printf("      \"audio/mpeg\",\n");
    printf("      \"audio/ogg\",\n");
    printf("      \"text/plain\",\n");
    printf("      \"application/pdf\"\n");
    printf("    ]\n");
    printf("  },\n");
    printf("  \"authentication\": {\n");
    printf("    \"required_for_upload\": false,\n");
    printf("    \"required_for_delete\": true,\n");
    printf("    \"required_for_list\": false,\n");
    printf("    \"nip42_enabled\": true\n");
    printf("  }\n");
    printf("}\n");
    log_request("GET", "/", "server_info", 200);
    continue;
  }

  // Determine operation from request method and URI
  const char *operation = "unknown";
  const char *resource_hash = NULL;


  if (strcmp(request_method, "HEAD") == 0 && strcmp(request_uri, "/upload") == 0) {
    operation = "head_upload";
  } else if (strcmp(request_method, "HEAD") == 0) {
@@ -1614,6 +1508,20 @@ if (!config_loaded /* && !initialize_server_config() */) {
  // For other operations, validation failure means auth failure
  const char *message = result.reason[0] ? result.reason : "Authentication failed";
  const char *details = "Authentication validation failed";

  // Determine appropriate status code based on violation type
  int status_code = 401; // Default: Unauthorized (no auth or invalid auth)
  const char *violation_type = nostr_request_validator_get_last_violation_type();

  // If auth rules denied the request, use 403 Forbidden instead of 401 Unauthorized
  if (violation_type && (
      strcmp(violation_type, "pubkey_blacklist") == 0 ||
      strcmp(violation_type, "hash_blacklist") == 0 ||
      strcmp(violation_type, "whitelist_violation") == 0 ||
      strcmp(violation_type, "mime_blacklist") == 0 ||
      strcmp(violation_type, "mime_whitelist_violation") == 0)) {
    status_code = 403; // Forbidden: authenticated but not authorized
  }

  // Always include event JSON in details when auth header is provided for debugging
  if (auth_header) {
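The branching above maps rule-level violations to 403 (authenticated but not authorized) and everything else to 401 (not authenticated, or invalid auth). The same decision as a table-driven sketch (hypothetical function name; violation strings taken from the diff):

```c
#include <string.h>

/* Sketch: 403 for authorization-rule violations, 401 otherwise. */
static int status_for_violation(const char *violation_type) {
    static const char *forbidden[] = {
        "pubkey_blacklist", "hash_blacklist", "whitelist_violation",
        "mime_blacklist", "mime_whitelist_violation",
    };
    if (violation_type) {
        for (size_t i = 0; i < sizeof(forbidden) / sizeof(forbidden[0]); i++) {
            if (strcmp(violation_type, forbidden[i]) == 0)
                return 403;
        }
    }
    return 401;
}
```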
@@ -1632,8 +1540,8 @@ if (!config_loaded /* && !initialize_server_config() */) {
    }
  }

  send_error_response(401, "authentication_failed", message, details);
  log_request(request_method, request_uri, "auth_failed", 401);
  send_error_response(status_code, "authentication_failed", message, details);
  log_request(request_method, request_uri, "auth_failed", status_code);
  continue;
}
}
@@ -1723,6 +1631,58 @@ if (!config_loaded /* && !initialize_server_config() */) {
        "Pubkey must be 64 hex characters");
      log_request("GET", request_uri, "none", 400);
    }
  } else if (strcmp(request_method, "GET") == 0 &&
             strcmp(request_uri, "/") == 0) {
    // Handle GET / requests - Server info endpoint
    printf("Status: 200 OK\r\n");
    printf("Content-Type: application/json\r\n\r\n");
    printf("{\n");
    printf("  \"server\": \"ginxsom\",\n");
    printf("  \"version\": \"%s\",\n", VERSION);
    printf("  \"description\": \"Ginxsom Blossom Server\",\n");
    printf("  \"endpoints\": {\n");
    printf("    \"blob_get\": \"GET /<sha256>\",\n");
    printf("    \"blob_head\": \"HEAD /<sha256>\",\n");
    printf("    \"upload\": \"PUT /upload\",\n");
    printf("    \"upload_requirements\": \"HEAD /upload\",\n");
    printf("    \"list\": \"GET /list/<pubkey>\",\n");
    printf("    \"delete\": \"DELETE /<sha256>\",\n");
    printf("    \"mirror\": \"PUT /mirror\",\n");
    printf("    \"report\": \"PUT /report\",\n");
    printf("    \"health\": \"GET /health\",\n");
    printf("    \"auth\": \"GET /auth\"\n");
    printf("  },\n");
    printf("  \"supported_buds\": [\n");
    printf("    \"BUD-01\",\n");
    printf("    \"BUD-02\",\n");
    printf("    \"BUD-04\",\n");
    printf("    \"BUD-06\",\n");
    printf("    \"BUD-08\",\n");
    printf("    \"BUD-09\"\n");
    printf("  ],\n");
    printf("  \"limits\": {\n");
    printf("    \"max_upload_size\": 104857600,\n");
    printf("    \"supported_mime_types\": [\n");
    printf("      \"image/jpeg\",\n");
    printf("      \"image/png\",\n");
    printf("      \"image/webp\",\n");
    printf("      \"image/gif\",\n");
    printf("      \"video/mp4\",\n");
    printf("      \"video/webm\",\n");
    printf("      \"audio/mpeg\",\n");
    printf("      \"audio/ogg\",\n");
    printf("      \"text/plain\",\n");
    printf("      \"application/pdf\"\n");
    printf("    ]\n");
    printf("  },\n");
    printf("  \"authentication\": {\n");
    printf("    \"required_for_upload\": false,\n");
    printf("    \"required_for_delete\": true,\n");
    printf("    \"required_for_list\": false,\n");
    printf("    \"nip42_enabled\": true\n");
    printf("  }\n");
    printf("}\n");
    log_request("GET", "/", "server_info", 200);
  } else if (strcmp(request_method, "GET") == 0 &&
             strcmp(request_uri, "/auth") == 0) {
    // Handle GET /auth requests using the existing handler

@@ -810,8 +810,17 @@ int nostr_validate_unified_request(const nostr_unified_request_t *request,
      "checking database rules\n");

  // Check database rules for authorization
  // For Blossom uploads, use hash from event 'x' tag instead of URI
  const char *hash_for_rules = request->resource_hash;
  if (event_kind == 24242 && strlen(expected_hash_from_event) == 64) {
    hash_for_rules = expected_hash_from_event;
    char hash_msg[256];
    sprintf(hash_msg, "VALIDATOR_DEBUG: Using hash from Blossom event for rules: %.16s...\n", hash_for_rules);
    validator_debug_log(hash_msg);
  }

  int rules_result = check_database_auth_rules(
      extracted_pubkey, request->operation, request->resource_hash);
      extracted_pubkey, request->operation, hash_for_rules);
  if (rules_result != NOSTR_SUCCESS) {
    validator_debug_log(
        "VALIDATOR_DEBUG: STEP 14 FAILED - Database rules denied request\n");
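A compressed restatement of the selection rule this hunk introduces, as a self-contained sketch (hypothetical function name; the variable names mirror the diff): for a Blossom authorization event (kind 24242) carrying a full 64-character `x` tag, rule checks are keyed on the hash the signer committed to rather than the one parsed from the URI.

```c
#include <string.h>

/* Sketch: prefer the hash the client signed over the URI-derived one. */
static const char *select_hash_for_rules(int event_kind,
                                         const char *expected_hash_from_event,
                                         const char *uri_hash) {
    if (event_kind == 24242 && expected_hash_from_event &&
        strlen(expected_hash_from_event) == 64)
        return expected_hash_from_event;
    return uri_hash;
}
```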
@@ -1334,9 +1343,10 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
  }

  // Step 1: Check pubkey blacklist (highest priority)
  // Match both exact operation and wildcard '*'
  const char *blacklist_sql =
      "SELECT rule_type, description FROM auth_rules WHERE rule_type = "
      "'pubkey_blacklist' AND rule_target = ? AND operation = ? AND enabled = "
      "'pubkey_blacklist' AND rule_target = ? AND (operation = ? OR operation = '*') AND enabled = "
      "1 ORDER BY priority LIMIT 1";
  rc = sqlite3_prepare_v2(db, blacklist_sql, -1, &stmt, NULL);
  if (rc == SQLITE_OK) {
@@ -1369,9 +1379,10 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,

  // Step 2: Check hash blacklist
  if (resource_hash) {
    // Match both exact operation and wildcard '*'
    const char *hash_blacklist_sql =
        "SELECT rule_type, description FROM auth_rules WHERE rule_type = "
        "'hash_blacklist' AND rule_target = ? AND operation = ? AND enabled = "
        "'hash_blacklist' AND rule_target = ? AND (operation = ? OR operation = '*') AND enabled = "
        "1 ORDER BY priority LIMIT 1";
    rc = sqlite3_prepare_v2(db, hash_blacklist_sql, -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
@@ -1408,9 +1419,10 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
  }

  // Step 3: Check pubkey whitelist
  // Match both exact operation and wildcard '*'
  const char *whitelist_sql =
      "SELECT rule_type, description FROM auth_rules WHERE rule_type = "
      "'pubkey_whitelist' AND rule_target = ? AND operation = ? AND enabled = "
      "'pubkey_whitelist' AND rule_target = ? AND (operation = ? OR operation = '*') AND enabled = "
      "1 ORDER BY priority LIMIT 1";
  rc = sqlite3_prepare_v2(db, whitelist_sql, -1, &stmt, NULL);
  if (rc == SQLITE_OK) {
@@ -1436,9 +1448,10 @@ static int check_database_auth_rules(const char *pubkey, const char *operation,
      "not whitelisted\n");

  // Step 4: Check if any whitelist rules exist - if yes, deny by default
  // Match both exact operation and wildcard '*'
  const char *whitelist_exists_sql =
      "SELECT COUNT(*) FROM auth_rules WHERE rule_type = 'pubkey_whitelist' "
      "AND operation = ? AND enabled = 1 LIMIT 1";
      "AND (operation = ? OR operation = '*') AND enabled = 1 LIMIT 1";
  rc = sqlite3_prepare_v2(db, whitelist_exists_sql, -1, &stmt, NULL);
  if (rc == SQLITE_OK) {
    sqlite3_bind_text(stmt, 1, operation ? operation : "", -1, SQLITE_STATIC);
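All four hunks make the same change: the `operation` match becomes `(operation = ? OR operation = '*')` so a single rule row can cover every operation. A self-contained sketch of one such lookup (hypothetical function name; table and column names from the diff; error handling trimmed):

```c
#include <sqlite3.h>

/* Sketch of the wildcard-aware blacklist lookup the hunks above switch to.
 * Returns 1 if a matching enabled rule row exists, 0 otherwise. */
static int pubkey_is_blacklisted(sqlite3 *db, const char *pubkey,
                                 const char *operation) {
    static const char *sql =
        "SELECT 1 FROM auth_rules WHERE rule_type = 'pubkey_blacklist' "
        "AND rule_target = ? AND (operation = ? OR operation = '*') "
        "AND enabled = 1 ORDER BY priority LIMIT 1";
    sqlite3_stmt *stmt = NULL;
    int hit = 0;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
        sqlite3_bind_text(stmt, 1, pubkey, -1, SQLITE_STATIC);
        sqlite3_bind_text(stmt, 2, operation ? operation : "", -1, SQLITE_STATIC);
        hit = (sqlite3_step(stmt) == SQLITE_ROW);
    }
    sqlite3_finalize(stmt); /* safe no-op when stmt is NULL */
    return hit;
}
```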
7 test_blob_1762770636.txt Normal file
@@ -0,0 +1,7 @@
Test blob content for Ginxsom Blossom server (PRODUCTION)
Timestamp: 2025-11-10T06:30:36-04:00
Random data: bb7d8d5206aadf4ecd48829f9674454c160bae68e98c6ce5f7f678f997dbe86a
Test message: Hello from production test!

This file is used to test the upload functionality
of the Ginxsom Blossom server on blossom.laantungir.net
1 test_file.txt Normal file
@@ -0,0 +1 @@
test file content
1 tests/auth_test_tmp/blacklist_test1.txt Normal file
@@ -0,0 +1 @@
Content from blacklisted user
1 tests/auth_test_tmp/blacklist_test2.txt Normal file
@@ -0,0 +1 @@
Content from allowed user
1 tests/auth_test_tmp/cache_test1.txt Normal file
@@ -0,0 +1 @@
First request - cache miss
1 tests/auth_test_tmp/cache_test2.txt Normal file
@@ -0,0 +1 @@
Second request - cache hit
1 tests/auth_test_tmp/cleanup_test.txt Normal file
@@ -0,0 +1 @@
Testing after cleanup
1 tests/auth_test_tmp/disabled_rule_test.txt Normal file
@@ -0,0 +1 @@
Testing disabled rule
1 tests/auth_test_tmp/enabled_rule_test.txt Normal file
@@ -0,0 +1 @@
Testing enabled rule
1 tests/auth_test_tmp/hash_blacklist_test.txt Normal file
@@ -0,0 +1 @@
This specific file is blacklisted
1 tests/auth_test_tmp/hash_blacklist_test2.txt Normal file
@@ -0,0 +1 @@
This file is allowed
1 tests/auth_test_tmp/mime_test1.txt Normal file
@@ -0,0 +1 @@
Plain text file
1 tests/auth_test_tmp/mime_whitelist_test.txt Normal file
@@ -0,0 +1 @@
Text file with whitelist active
1 tests/auth_test_tmp/operation_test.txt Normal file
@@ -0,0 +1 @@
Testing operation-specific rules
1 tests/auth_test_tmp/priority_test.txt Normal file
@@ -0,0 +1 @@
Testing priority ordering
1 tests/auth_test_tmp/test.txt Normal file
@@ -0,0 +1 @@
test content
1 tests/auth_test_tmp/whitelist_test1.txt Normal file
@@ -0,0 +1 @@
Content from whitelisted user
1 tests/auth_test_tmp/whitelist_test2.txt Normal file
@@ -0,0 +1 @@
Content from non-whitelisted user
1 tests/auth_test_tmp/wildcard_test.txt Normal file
@@ -0,0 +1 @@
Testing wildcard operation
@@ -6,7 +6,8 @@
set -e  # Exit on any error

# Configuration
SERVER_URL="http://localhost:9001"
# SERVER_URL="http://localhost:9001"
SERVER_URL="https://localhost:9443"
UPLOAD_ENDPOINT="${SERVER_URL}/upload"
TEST_FILE="test_blob_$(date +%s).txt"
CLEANUP_FILES=()
@@ -87,7 +88,7 @@ check_prerequisites() {
check_server() {
    log_info "Checking if server is running..."

    if curl -s -f "${SERVER_URL}/health" > /dev/null 2>&1; then
    if curl -k -s -f "${SERVER_URL}/health" > /dev/null 2>&1; then
        log_success "Server is running at ${SERVER_URL}"
    else
        log_error "Server is not responding at ${SERVER_URL}"
@@ -168,7 +169,7 @@ perform_upload() {
    CLEANUP_FILES+=("${RESPONSE_FILE}")

    # Perform the upload with verbose output
    HTTP_STATUS=$(curl -s -w "%{http_code}" \
    HTTP_STATUS=$(curl -k -s -w "%{http_code}" \
        -X PUT \
        -H "Authorization: ${AUTH_HEADER}" \
        -H "Content-Type: text/plain" \
@@ -217,7 +218,7 @@ test_retrieval() {

    RETRIEVAL_URL="${SERVER_URL}/${HASH}"

    if curl -s -f "${RETRIEVAL_URL}" > /dev/null 2>&1; then
    if curl -k -s -f "${RETRIEVAL_URL}" > /dev/null 2>&1; then
        log_success "File can be retrieved at: ${RETRIEVAL_URL}"
    else
        log_warning "File not yet available for retrieval (expected if upload processing not implemented)"
265 tests/file_put_production.sh Executable file
@@ -0,0 +1,265 @@
#!/bin/bash

# file_put_production.sh - Test script for production Ginxsom Blossom server
# Tests upload functionality on blossom.laantungir.net

set -e  # Exit on any error

# Configuration
SERVER_URL="https://blossom.laantungir.net"
UPLOAD_ENDPOINT="${SERVER_URL}/upload"
TEST_FILE="test_blob_$(date +%s).txt"
CLEANUP_FILES=()

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Cleanup function
cleanup() {
    echo -e "${YELLOW}Cleaning up temporary files...${NC}"
    for file in "${CLEANUP_FILES[@]}"; do
        if [[ -f "$file" ]]; then
            rm -f "$file"
            echo "Removed: $file"
        fi
    done
}

# Set up cleanup on exit
trap cleanup EXIT

# Helper functions
log_info() {
    echo -e "${BLUE}[INFO]${NC} $1"
}

log_success() {
    echo -e "${GREEN}[SUCCESS]${NC} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

log_warning() {
    echo -e "${YELLOW}[WARNING]${NC} $1"
}

# Check prerequisites
check_prerequisites() {
    log_info "Checking prerequisites..."

    # Check if nak is installed
    if ! command -v nak &> /dev/null; then
        log_error "nak command not found. Please install nak first."
        log_info "Install with: go install github.com/fiatjaf/nak@latest"
        exit 1
    fi
    log_success "nak is installed"

    # Check if curl is available
    if ! command -v curl &> /dev/null; then
        log_error "curl command not found. Please install curl."
        exit 1
    fi
    log_success "curl is available"

    # Check if sha256sum is available
    if ! command -v sha256sum &> /dev/null; then
        log_error "sha256sum command not found."
        exit 1
    fi
    log_success "sha256sum is available"

    # Check if base64 is available
    if ! command -v base64 &> /dev/null; then
        log_error "base64 command not found."
        exit 1
    fi
    log_success "base64 is available"
}

# Check if server is running
check_server() {
    log_info "Checking if server is running..."

    if curl -s -f "${SERVER_URL}/health" > /dev/null 2>&1; then
        log_success "Server is running at ${SERVER_URL}"
    else
        log_error "Server is not responding at ${SERVER_URL}"
        exit 1
    fi
}

# Create test file
create_test_file() {
    log_info "Creating test file: ${TEST_FILE}"

    # Create test content with timestamp and random data
    cat > "${TEST_FILE}" << EOF
Test blob content for Ginxsom Blossom server (PRODUCTION)
Timestamp: $(date -Iseconds)
Random data: $(openssl rand -hex 32)
Test message: Hello from production test!

This file is used to test the upload functionality
of the Ginxsom Blossom server on blossom.laantungir.net
EOF

    CLEANUP_FILES+=("${TEST_FILE}")
    log_success "Created test file with $(wc -c < "${TEST_FILE}") bytes"
}

# Calculate file hash
calculate_hash() {
    log_info "Calculating SHA-256 hash..."

    HASH=$(sha256sum "${TEST_FILE}" | cut -d' ' -f1)

    log_success "Data to hash: ${TEST_FILE}"
    log_success "File hash: ${HASH}"
}

# Generate nostr event
generate_nostr_event() {
    log_info "Generating kind 24242 nostr event with nak..."

    # Calculate expiration time (1 hour from now)
    EXPIRATION=$(date -d '+1 hour' +%s)

    # Generate the event using nak
    EVENT_JSON=$(nak event -k 24242 -c "" \
        -t "t=upload" \
        -t "x=${HASH}" \
        -t "expiration=${EXPIRATION}")

    if [[ -z "$EVENT_JSON" ]]; then
        log_error "Failed to generate nostr event"
        exit 1
    fi

    log_success "Generated nostr event"
    echo "Event JSON: $EVENT_JSON"
}

# Create authorization header
create_auth_header() {
    log_info "Creating authorization header..."

    # Base64 encode the event (without newlines)
    AUTH_B64=$(echo -n "$EVENT_JSON" | base64 -w 0)
    AUTH_HEADER="Nostr ${AUTH_B64}"

    log_success "Created authorization header"
    echo "Auth header length: ${#AUTH_HEADER} characters"
}

# Perform upload
perform_upload() {
    log_info "Performing upload to ${UPLOAD_ENDPOINT}..."

    # Create temporary file for response
    RESPONSE_FILE=$(mktemp)
    CLEANUP_FILES+=("${RESPONSE_FILE}")

    # Perform the upload with verbose output
    HTTP_STATUS=$(curl -s -w "%{http_code}" \
        -X PUT \
        -H "Authorization: ${AUTH_HEADER}" \
        -H "Content-Type: text/plain" \
        -H "Content-Disposition: attachment; filename=\"${TEST_FILE}\"" \
        --data-binary "@${TEST_FILE}" \
        "${UPLOAD_ENDPOINT}" \
        -o "${RESPONSE_FILE}")

    echo "HTTP Status: ${HTTP_STATUS}"
    echo "Response body:"
    cat "${RESPONSE_FILE}"
    echo

    # Check response
    case "${HTTP_STATUS}" in
        200)
            log_success "Upload successful!"
            ;;
        201)
            log_success "Upload successful (created)!"
            ;;
        400)
            log_error "Bad request - check the event format"
            ;;
        401)
            log_error "Unauthorized - authentication failed"
            ;;
        405)
            log_error "Method not allowed - check nginx configuration"
            ;;
        413)
            log_error "Payload too large"
            ;;
        501)
            log_warning "Upload endpoint not yet implemented (expected for now)"
            ;;
        *)
            log_error "Upload failed with HTTP status: ${HTTP_STATUS}"
            ;;
    esac
}

# Test file retrieval
test_retrieval() {
    log_info "Testing file retrieval..."

    RETRIEVAL_URL="${SERVER_URL}/${HASH}"

    if curl -s -f "${RETRIEVAL_URL}" > /dev/null 2>&1; then
        log_success "File can be retrieved at: ${RETRIEVAL_URL}"

        # Download and verify
        DOWNLOADED_FILE=$(mktemp)
        CLEANUP_FILES+=("${DOWNLOADED_FILE}")
        curl -s "${RETRIEVAL_URL}" -o "${DOWNLOADED_FILE}"

        DOWNLOADED_HASH=$(sha256sum "${DOWNLOADED_FILE}" | cut -d' ' -f1)
        if [[ "${DOWNLOADED_HASH}" == "${HASH}" ]]; then
            log_success "Downloaded file hash matches! Verification successful."
        else
            log_error "Hash mismatch! Expected: ${HASH}, Got: ${DOWNLOADED_HASH}"
        fi
    else
        log_warning "File not yet available for retrieval"
    fi
}

# Main execution
main() {
    echo "=== Ginxsom Blossom Production Upload Test ==="
    echo "Server: ${SERVER_URL}"
    echo "Timestamp: $(date -Iseconds)"
    echo

    check_prerequisites
    check_server
    create_test_file
    calculate_hash
    generate_nostr_event
    create_auth_header
    perform_upload
    test_retrieval

    echo
    log_info "Test completed!"
    echo "Summary:"
    echo "  Test file: ${TEST_FILE}"
    echo "  File hash: ${HASH}"
    echo "  Server: ${SERVER_URL}"
    echo "  Upload endpoint: ${UPLOAD_ENDPOINT}"
    echo "  Retrieval URL: ${SERVER_URL}/${HASH}"
}

# Run main function
main "$@"
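The `create_auth_header` step above builds the header as `"Nostr " + base64(event JSON)` with no line wrapping (`base64 -w 0`). For reference, a sketch of the same construction in C via OpenSSL's `EVP_EncodeBlock` (hypothetical function name, not part of the repo):

```c
#include <openssl/evp.h>
#include <string.h>

/* Hypothetical sketch: write "Nostr <base64(event_json)>" into out.
 * Returns 1 on success, 0 if out is too small. */
static int make_auth_header(char *out, size_t out_size, const char *event_json) {
    size_t n = strlen(event_json);
    size_t b64_len = 4 * ((n + 2) / 3); /* EVP_EncodeBlock output length */

    if (6 + b64_len + 1 > out_size)     /* "Nostr " + base64 + NUL */
        return 0;

    memcpy(out, "Nostr ", 6);
    EVP_EncodeBlock((unsigned char *)(out + 6),
                    (const unsigned char *)event_json, (int)n);
    return 1;
}
```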
@@ -3,18 +3,31 @@
# Mirror Test Script for BUD-04
# Tests the PUT /mirror endpoint with a sample PNG file and NIP-42 authentication

# ============================================================================
# CONFIGURATION - Choose your target Blossom server
# ============================================================================

# Local server (uncomment to use)
BLOSSOM_SERVER="http://localhost:9001"

# Remote server (uncomment to use)
#BLOSSOM_SERVER="https://blossom.laantungir.net"

# ============================================================================

# Test URL - PNG file with known SHA-256 hash
TEST_URL="https://laantungir.github.io/img_repo/24308d48eb498b593e55a87b6300ccffdea8432babc0bb898b1eff21ebbb72de.png"
EXPECTED_HASH="24308d48eb498b593e55a87b6300ccffdea8432babc0bb898b1eff21ebbb72de"

echo "=== BUD-04 Mirror Endpoint Test with Authentication ==="
echo "Blossom Server: $BLOSSOM_SERVER"
echo "Target URL: $TEST_URL"
echo "Expected Hash: $EXPECTED_HASH"
echo ""

# Get a fresh challenge from the server
echo "=== Getting Authentication Challenge ==="
challenge=$(curl -s "http://localhost:9001/auth" | jq -r '.challenge')
challenge=$(curl -s "$BLOSSOM_SERVER/auth" | jq -r '.challenge')
if [ "$challenge" = "null" ] || [ -z "$challenge" ]; then
    echo "❌ Failed to get challenge from server"
    exit 1
@@ -48,7 +61,7 @@ RESPONSE=$(curl -s -w "\nHTTP_CODE:%{http_code}\n" \
    -H "Authorization: $auth_header" \
    -H "Content-Type: application/json" \
    -d "$JSON_BODY" \
    http://localhost:9001/mirror)
    "$BLOSSOM_SERVER/mirror")

echo "Response:"
echo "$RESPONSE"
@@ -65,9 +78,9 @@ if [ "$HTTP_CODE" = "200" ]; then
    # Try to access the mirrored blob
    echo ""
    echo "=== Verifying Mirrored Blob ==="
    echo "Attempting to fetch: http://localhost:9001/$EXPECTED_HASH.png"
    echo "Attempting to fetch: $BLOSSOM_SERVER/$EXPECTED_HASH.png"

    BLOB_RESPONSE=$(curl -s -w "HTTP_CODE:%{http_code}" -I "http://localhost:9001/$EXPECTED_HASH.png")
    BLOB_RESPONSE=$(curl -s -w "HTTP_CODE:%{http_code}" -I "$BLOSSOM_SERVER/$EXPECTED_HASH.png")
    BLOB_HTTP_CODE=$(echo "$BLOB_RESPONSE" | grep "HTTP_CODE:" | cut -d: -f2)

    if [ "$BLOB_HTTP_CODE" = "200" ]; then
@@ -82,7 +95,7 @@ if [ "$HTTP_CODE" = "200" ]; then
    # Test HEAD request for metadata
    echo ""
    echo "=== Testing HEAD Request ==="
    HEAD_RESPONSE=$(curl -s -w "HTTP_CODE:%{http_code}" -I -X HEAD "http://localhost:9001/$EXPECTED_HASH")
    HEAD_RESPONSE=$(curl -s -w "HTTP_CODE:%{http_code}" -I -X HEAD "$BLOSSOM_SERVER/$EXPECTED_HASH")
    HEAD_HTTP_CODE=$(echo "$HEAD_RESPONSE" | grep "HTTP_CODE:" | cut -d: -f2)

    if [ "$HEAD_HTTP_CODE" = "200" ]; then
392 tests/white_black_list_test.sh Executable file
@@ -0,0 +1,392 @@
#!/bin/bash

# white_black_list_test.sh - Whitelist/Blacklist Rules Test Suite
# Tests the auth_rules table functionality for pubkey and MIME type filtering

# Configuration
SERVER_URL="http://localhost:9001"
UPLOAD_ENDPOINT="${SERVER_URL}/upload"
DB_PATH="db/ginxsom.db"
TEST_DIR="tests/auth_test_tmp"

# Test results tracking
TESTS_PASSED=0
TESTS_FAILED=0
TOTAL_TESTS=0

# Test keys for different scenarios
# Generated using: nak key public <privkey>
TEST_USER1_PRIVKEY="5c0c523f52a5b6fad39ed2403092df8cebc36318b39383bca6c00808626fab3a"
TEST_USER1_PUBKEY="87d3561f19b74adbe8bf840682992466068830a9d8c36b4a0c99d36f826cb6cb"

TEST_USER2_PRIVKEY="182c3a5e3b7a1b7e4f5c6b7c8b4a5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2"
TEST_USER2_PUBKEY="0396b426090284a28294078dce53fe73791ab623c3fc46ab4409fea05109a6db"

TEST_USER3_PRIVKEY="abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234"
TEST_USER3_PUBKEY="769a740386211c76f81bb235de50a5e6fa463cb4fae25e62625607fc2cfc0f28"

# Helper function to record test results
record_test_result() {
    local test_name="$1"
    local expected="$2"
    local actual="$3"

    TOTAL_TESTS=$((TOTAL_TESTS + 1))

    if [[ "$actual" == "$expected" ]]; then
        echo "✅ $test_name - PASSED"
        TESTS_PASSED=$((TESTS_PASSED + 1))
    else
        echo "❌ $test_name - FAILED (Expected: $expected, Got: $actual)"
        TESTS_FAILED=$((TESTS_FAILED + 1))
    fi
}

# Check prerequisites
for cmd in nak curl jq sqlite3; do
    if ! command -v $cmd &> /dev/null; then
        echo "❌ $cmd command not found"
        exit 1
    fi
done

# Check if server is running
if ! curl -s -f "${SERVER_URL}/" > /dev/null 2>&1; then
    echo "❌ Server not running at $SERVER_URL"
    echo "Start with: ./restart-all.sh"
    exit 1
fi

# Check if database exists
if [[ ! -f "$DB_PATH" ]]; then
    echo "❌ Database not found at $DB_PATH"
    exit 1
fi

# Setup test environment
mkdir -p "$TEST_DIR"

echo "=========================================="
echo "  WHITELIST/BLACKLIST RULES TEST SUITE"
echo "=========================================="
echo

# Helper functions
create_test_file() {
    local filename="$1"
    local content="${2:-test content for $filename}"
    local filepath="$TEST_DIR/$filename"
    echo "$content" > "$filepath"
    echo "$filepath"
}

create_auth_event() {
    local privkey="$1"
    local operation="$2"
    local hash="$3"
    local expiration_offset="${4:-3600}" # 1 hour default

    local expiration=$(date -d "+${expiration_offset} seconds" +%s)

    local event_args=(-k 24242 -c "" --tag "t=$operation" --tag "expiration=$expiration" --sec "$privkey")

    if [[ -n "$hash" ]]; then
        event_args+=(--tag "x=$hash")
    fi

    nak event "${event_args[@]}"
}

test_upload() {
    local test_name="$1"
    local privkey="$2"
    local file_path="$3"
    local expected_status="${4:-200}"

    local file_hash=$(sha256sum "$file_path" | cut -d' ' -f1)

    # Create auth event
    local event=$(create_auth_event "$privkey" "upload" "$file_hash")
    local auth_header="Nostr $(echo "$event" | base64 -w 0)"

    # Make upload request
    local response_file=$(mktemp)
    local http_status=$(curl -s -w "%{http_code}" \
        -H "Authorization: $auth_header" \
        -H "Content-Type: text/plain" \
        --data-binary "@$file_path" \
        -X PUT "$UPLOAD_ENDPOINT" \
        -o "$response_file" 2>/dev/null)

    # Show response if test fails
    if [[ "$http_status" != "$expected_status" ]]; then
        echo "  Response: $(cat "$response_file")"
    fi

    rm -f "$response_file"

    # Record result
    record_test_result "$test_name" "$expected_status" "$http_status"
}

# Clean up any existing rules from previous tests
echo "Cleaning up existing auth rules..."
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;" 2>/dev/null
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;" 2>/dev/null

# Enable authentication rules
echo "Enabling authentication rules..."
sqlite3 "$DB_PATH" "UPDATE config SET value = 'true' WHERE key = 'auth_rules_enabled';"

echo
echo "=== SECTION 1: PUBKEY BLACKLIST TESTS ==="
echo

# Test 1: Add pubkey blacklist rule
echo "Adding blacklist rule for TEST_USER3..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_blacklist', '$TEST_USER3_PUBKEY', 'upload', 10, 'Test blacklist');"

# Test 1a: Blacklisted user should be denied
test_file1=$(create_test_file "blacklist_test1.txt" "Content from blacklisted user")
test_upload "Test 1a: Blacklisted Pubkey Upload" "$TEST_USER3_PRIVKEY" "$test_file1" "403"

# Test 1b: Non-blacklisted user should succeed
test_file2=$(create_test_file "blacklist_test2.txt" "Content from allowed user")
test_upload "Test 1b: Non-Blacklisted Pubkey Upload" "$TEST_USER1_PRIVKEY" "$test_file2" "200"

echo
echo "=== SECTION 2: PUBKEY WHITELIST TESTS ==="
echo

# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"

# Test 2: Add pubkey whitelist rule
echo "Adding whitelist rule for TEST_USER1..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_whitelist', '$TEST_USER1_PUBKEY', 'upload', 300, 'Test whitelist');"

# Test 2a: Whitelisted user should succeed
test_file3=$(create_test_file "whitelist_test1.txt" "Content from whitelisted user")
test_upload "Test 2a: Whitelisted Pubkey Upload" "$TEST_USER1_PRIVKEY" "$test_file3" "200"

# Test 2b: Non-whitelisted user should be denied (whitelist default-deny)
test_file4=$(create_test_file "whitelist_test2.txt" "Content from non-whitelisted user")
test_upload "Test 2b: Non-Whitelisted Pubkey Upload" "$TEST_USER2_PRIVKEY" "$test_file4" "403"

echo
echo "=== SECTION 3: HASH BLACKLIST TESTS ==="
echo

# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"

# Test 3: Create a file and blacklist its hash
test_file5=$(create_test_file "hash_blacklist_test.txt" "This specific file is blacklisted")
BLACKLISTED_HASH=$(sha256sum "$test_file5" | cut -d' ' -f1)

echo "Adding hash blacklist rule for $BLACKLISTED_HASH..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('hash_blacklist', '$BLACKLISTED_HASH', 'upload', 100, 'Test hash blacklist');"

# Test 3a: Blacklisted hash should be denied
test_upload "Test 3a: Blacklisted Hash Upload" "$TEST_USER1_PRIVKEY" "$test_file5" "403"

# Test 3b: Different file should succeed
test_file6=$(create_test_file "hash_blacklist_test2.txt" "This file is allowed")
test_upload "Test 3b: Non-Blacklisted Hash Upload" "$TEST_USER1_PRIVKEY" "$test_file6" "200"

echo
echo "=== SECTION 4: MIME TYPE BLACKLIST TESTS ==="
echo

# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"

# Test 4: Blacklist executable MIME types
echo "Adding MIME type blacklist rules..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('mime_blacklist', 'application/x-executable', 'upload', 200, 'Block executables');"

# Note: This test would require the server to detect MIME types from file content
# For now, we'll test with text/plain which should be allowed
test_file7=$(create_test_file "mime_test1.txt" "Plain text file")
test_upload "Test 4a: Allowed MIME Type Upload" "$TEST_USER1_PRIVKEY" "$test_file7" "200"

echo
echo "=== SECTION 5: MIME TYPE WHITELIST TESTS ==="
echo

# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"

# Test 5: Whitelist only image MIME types
echo "Adding MIME type whitelist rules..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('mime_whitelist', 'image/jpeg', 'upload', 400, 'Allow JPEG');"
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('mime_whitelist', 'image/png', 'upload', 400, 'Allow PNG');"

# Note: MIME type detection would need to be implemented in the server
# For now, text/plain should be denied if whitelist exists
test_file8=$(create_test_file "mime_whitelist_test.txt" "Text file with whitelist active")
test_upload "Test 5a: Non-Whitelisted MIME Type Upload" "$TEST_USER1_PRIVKEY" "$test_file8" "403"

echo
echo "=== SECTION 6: PRIORITY ORDERING TESTS ==="
echo

# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"

# Test 6: Blacklist should override whitelist (priority ordering)
echo "Adding both blacklist (priority 10) and whitelist (priority 300) for same pubkey..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_blacklist', '$TEST_USER1_PUBKEY', 'upload', 10, 'Blacklist priority test');"
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_whitelist', '$TEST_USER1_PUBKEY', 'upload', 300, 'Whitelist priority test');"

# Test 6a: Blacklist should win (lower priority number = higher priority)
test_file9=$(create_test_file "priority_test.txt" "Testing priority ordering")
test_upload "Test 6a: Blacklist Priority Over Whitelist" "$TEST_USER1_PRIVKEY" "$test_file9" "403"

echo
echo "=== SECTION 7: OPERATION-SPECIFIC RULES ==="
echo

# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"

# Test 7: Blacklist only for upload operation
echo "Adding blacklist rule for upload operation only..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_blacklist', '$TEST_USER2_PUBKEY', 'upload', 10, 'Upload-only blacklist');"

# Test 7a: Upload should be denied
test_file10=$(create_test_file "operation_test.txt" "Testing operation-specific rules")
test_upload "Test 7a: Operation-Specific Blacklist" "$TEST_USER2_PRIVKEY" "$test_file10" "403"

echo
echo "=== SECTION 8: WILDCARD OPERATION TESTS ==="
echo

# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"

# Test 8: Blacklist for all operations using wildcard
echo "Adding blacklist rule for all operations (*)..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_blacklist', '$TEST_USER3_PUBKEY', '*', 10, 'All operations blacklist');"

# Test 8a: Upload should be denied
test_file11=$(create_test_file "wildcard_test.txt" "Testing wildcard operation")
test_upload "Test 8a: Wildcard Operation Blacklist" "$TEST_USER3_PRIVKEY" "$test_file11" "403"

echo
echo "=== SECTION 9: ENABLED/DISABLED RULES ==="
echo

# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"

# Test 9: Disabled rule should not be enforced
echo "Adding disabled blacklist rule..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, enabled, description) VALUES ('pubkey_blacklist', '$TEST_USER1_PUBKEY', 'upload', 10, 0, 'Disabled blacklist');"

# Test 9a: Upload should succeed (rule is disabled)
test_file12=$(create_test_file "disabled_rule_test.txt" "Testing disabled rule")
test_upload "Test 9a: Disabled Rule Not Enforced" "$TEST_USER1_PRIVKEY" "$test_file12" "200"

# Test 9b: Enable the rule
echo "Enabling the blacklist rule..."
sqlite3 "$DB_PATH" "UPDATE auth_rules SET enabled = 1 WHERE rule_target = '$TEST_USER1_PUBKEY';"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;" # Clear cache

# Test 9c: Upload should now be denied
test_file13=$(create_test_file "enabled_rule_test.txt" "Testing enabled rule")
test_upload "Test 9c: Enabled Rule Enforced" "$TEST_USER1_PRIVKEY" "$test_file13" "403"

echo
echo "=== SECTION 10: CACHE FUNCTIONALITY ==="
echo

# Clean rules
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"

# Test 10: Add a blacklist rule and verify cache is populated
echo "Adding blacklist rule to test caching..."
sqlite3 "$DB_PATH" "INSERT INTO auth_rules (rule_type, rule_target, operation, priority, description) VALUES ('pubkey_blacklist', '$TEST_USER2_PUBKEY', 'upload', 10, 'Cache test');"

# Test 10a: First request (cache miss)
test_file14=$(create_test_file "cache_test1.txt" "First request - cache miss")
test_upload "Test 10a: First Request (Cache Miss)" "$TEST_USER2_PRIVKEY" "$test_file14" "403"

# Test 10b: Second request (should hit cache)
test_file15=$(create_test_file "cache_test2.txt" "Second request - cache hit")
test_upload "Test 10b: Second Request (Cache Hit)" "$TEST_USER2_PRIVKEY" "$test_file15" "403"

# Test 10c: Verify cache entry exists
CACHE_COUNT=$(sqlite3 "$DB_PATH" "SELECT COUNT(*) FROM auth_rules_cache WHERE pubkey = '$TEST_USER2_PUBKEY';" 2>/dev/null)
if [[ "$CACHE_COUNT" -gt 0 ]]; then
    record_test_result "Test 10c: Cache Entry Created" "1" "1"
else
    record_test_result "Test 10c: Cache Entry Created" "1" "0"
fi

echo
echo "=== SECTION 11: CLEANUP AND RESET ==="
echo

# Clean up all test rules
echo "Cleaning up test rules..."
sqlite3 "$DB_PATH" "DELETE FROM auth_rules;"
sqlite3 "$DB_PATH" "DELETE FROM auth_rules_cache;"

# Verify cleanup
RULE_COUNT=$(sqlite3 "$DB_PATH" "SELECT COUNT(*) FROM auth_rules;" 2>/dev/null)
if [[ "$RULE_COUNT" -eq 0 ]]; then
    record_test_result "Test 11a: Rules Cleanup" "0" "0"
else
    record_test_result "Test 11a: Rules Cleanup" "0" "$RULE_COUNT"
fi

CACHE_COUNT=$(sqlite3 "$DB_PATH" "SELECT COUNT(*) FROM auth_rules_cache;" 2>/dev/null)
if [[ "$CACHE_COUNT" -eq 0 ]]; then
    record_test_result "Test 11b: Cache Cleanup" "0" "0"
else
    record_test_result "Test 11b: Cache Cleanup" "0" "$CACHE_COUNT"
fi

# Test that uploads work again after cleanup
test_file16=$(create_test_file "cleanup_test.txt" "Testing after cleanup")
test_upload "Test 11c: Upload After Cleanup" "$TEST_USER1_PRIVKEY" "$test_file16" "200"

echo
echo "=========================================="
echo "  TEST SUITE RESULTS"
echo "=========================================="
echo
echo "Total Tests: $TOTAL_TESTS"
echo "✅ Passed: $TESTS_PASSED"
echo "❌ Failed: $TESTS_FAILED"
echo
if [[ $TESTS_FAILED -eq 0 ]]; then
    echo "🎉 ALL TESTS PASSED!"
    echo
    echo "Whitelist/Blacklist functionality verified:"
    echo "- Pubkey blacklist: Working"
    echo "- Pubkey whitelist: Working"
    echo "- Hash blacklist: Working"
    echo "- MIME type rules: Working"
    echo "- Priority ordering: Working"
    echo "- Operation-specific rules: Working"
    echo "- Wildcard operations: Working"
    echo "- Enable/disable rules: Working"
    echo "- Cache functionality: Working"
else
    echo "⚠️  Some tests failed. Check output above for details."
    echo "Success rate: $(( (TESTS_PASSED * 100) / TOTAL_TESTS ))%"
fi
echo
echo "To clean up test data: rm -rf $TEST_DIR"
echo "=========================================="