6 Commits

Author     SHA1        Message                              Date
Your Name  533c7f29f2  v0.1.6 - Just catching up            2025-11-11 17:02:14 -04:00
Your Name  35f8385508  v0.1.5 - Make versioning system      2025-11-11 07:16:33 -04:00
Your Name  fe2495f897  v0.1.4 - Make response at root JSON  2025-11-11 07:08:27 -04:00
Your Name  30e4408b28  v0.1.3 - Implement https             2025-11-09 19:57:45 -04:00
Your Name  e43dd5c64f  v0.1.2 - .                           2025-10-18 17:38:56 -04:00
Your Name  bb18ffcdce  v0.1.1 - Cleaning things up.         2025-10-16 15:24:41 -04:00
39 changed files with 2306 additions and 2229 deletions

42.md (deleted, 109 lines)

@@ -1,109 +0,0 @@
NIP-42
======
Authentication of clients to relays
-----------------------------------
`draft` `optional`
This NIP defines a way for clients to authenticate to relays by signing an ephemeral event.
## Motivation
A relay may want to require clients to authenticate to access restricted resources. For example,
- A relay may request payment or other forms of whitelisting to publish events -- this can naïvely be achieved by limiting publication to events signed by the whitelisted key, but with this NIP they may choose to accept any events as long as they are published from an authenticated user;
- A relay may limit access to `kind: 4` DMs to only the parties involved in the chat exchange, and for that it may require authentication before clients can query for that kind.
- A relay may limit subscriptions of any kind to paying users or users whitelisted through any other means, and require authentication.
## Definitions
### New client-relay protocol messages
This NIP defines a new message, `AUTH`, which relays CAN send when they support authentication and clients can send to relays when they want to authenticate. When sent by relays the message has the following form:
```
["AUTH", <challenge-string>]
```
And, when sent by clients, the following form:
```
["AUTH", <signed-event-json>]
```
Clients MAY provide signed events from multiple pubkeys in a sequence of `AUTH` messages. Relays MUST treat all pubkeys as authenticated accordingly.
`AUTH` messages sent by clients MUST be answered with an `OK` message, like any `EVENT` message.
### Canonical authentication event
The signed event is an ephemeral event not meant to be published or queried; it must be of `kind: 22242` and it should have at least two tags, one for the relay URL and one for the challenge string as received from the relay. Relays MUST exclude `kind: 22242` events from being broadcast to any client. `created_at` should be the current time. Example:
```jsonc
{
"kind": 22242,
"tags": [
["relay", "wss://relay.example.com/"],
["challenge", "challengestringhere"]
],
// other fields...
}
```
### `OK` and `CLOSED` machine-readable prefixes
This NIP defines two new prefixes that can be used in `OK` (in response to event writes by clients) and `CLOSED` (in response to rejected subscriptions by clients):
- `"auth-required: "` - for when a client has not performed `AUTH` and the relay requires that to fulfill the query or write the event.
- `"restricted: "` - for when a client has already performed `AUTH` but the key used to perform it is still not allowed by the relay or is exceeding its authorization.
## Protocol flow
At any moment the relay may send an `AUTH` message to the client containing a challenge. The challenge is valid for the duration of the connection or until another challenge is sent by the relay. The client MAY decide to send its `AUTH` event at any point and the authenticated session is valid afterwards for the duration of the connection.
### `auth-required` in response to a `REQ` message
Given that a relay is likely to require clients to perform authentication only for certain jobs, like answering a `REQ` or accepting an `EVENT` write, these are some expected common flows:
```
relay: ["AUTH", "<challenge>"]
client: ["REQ", "sub_1", {"kinds": [4]}]
relay: ["CLOSED", "sub_1", "auth-required: we can't serve DMs to unauthenticated users"]
client: ["AUTH", {"id": "abcdef...", ...}]
client: ["AUTH", {"id": "abcde2...", ...}]
relay: ["OK", "abcdef...", true, ""]
relay: ["OK", "abcde2...", true, ""]
client: ["REQ", "sub_1", {"kinds": [4]}]
relay: ["EVENT", "sub_1", {...}]
relay: ["EVENT", "sub_1", {...}]
relay: ["EVENT", "sub_1", {...}]
relay: ["EVENT", "sub_1", {...}]
...
```
In this case, the `AUTH` message from the relay could be sent right as the client connects or it can be sent immediately before the `CLOSED` is sent. The only requirement is that _the client must have a stored challenge associated with that relay_ so it can act upon that in response to the `auth-required` `CLOSED` message.
### `auth-required` in response to an `EVENT` message
The same flow is valid for when a client wants to write an `EVENT` to the relay, except now the relay sends back an `OK` message instead of a `CLOSED` message:
```
relay: ["AUTH", "<challenge>"]
client: ["EVENT", {"id": "012345...", ...}]
relay: ["OK", "012345...", false, "auth-required: we only accept events from registered users"]
client: ["AUTH", {"id": "abcdef...", ...}]
relay: ["OK", "abcdef...", true, ""]
client: ["EVENT", {"id": "012345...", ...}]
relay: ["OK", "012345...", true, ""]
```
## Signed Event Verification
To verify `AUTH` messages, relays must ensure:
- that the `kind` is `22242`;
- that the event `created_at` is close to the current time (e.g. within ~10 minutes);
- that the `"challenge"` tag matches the challenge sent before;
- that the `"relay"` tag matches the relay URL:
- URL normalization techniques can be applied; in most cases, just checking that the domain name is correct should be enough.


STATIC_MUSL_GUIDE.md (new file, 612 lines)

@@ -0,0 +1,612 @@
# Static MUSL Build Guide for C Programs
## Overview
This guide explains how to build truly portable static binaries using Alpine Linux and MUSL libc. These binaries have **zero runtime dependencies** and work on any Linux distribution without modification.
This guide is specifically tailored for C programs that use:
- **nostr_core_lib** - Nostr protocol implementation
- **nostr_login_lite** - Nostr authentication library
- Common dependencies: libwebsockets, OpenSSL, SQLite, curl, secp256k1
## Why MUSL Static Binaries?
### Advantages Over glibc
| Feature | MUSL Static | glibc Static | glibc Dynamic |
|---------|-------------|--------------|---------------|
| **Portability** | ✓ Any Linux | ⚠ glibc only | ✗ Requires matching libs |
| **Binary Size** | ~7-10 MB | ~12-15 MB | ~2-3 MB |
| **Dependencies** | None | NSS libs | Many system libs |
| **Deployment** | Single file | Single file + NSS | Binary + libraries |
| **Compatibility** | Universal | glibc version issues | Library version hell |
### Key Benefits
1. **True Portability**: Works on Alpine, Ubuntu, Debian, CentOS, Arch, etc.
2. **No Library Hell**: No `GLIBC_2.XX not found` errors
3. **Simple Deployment**: Just copy one file
4. **Reproducible Builds**: Same Docker image = same binary
5. **Security**: No dependency on system libraries with vulnerabilities
## Quick Start
### Prerequisites
- Docker installed and running
- Your C project with source code
- Internet connection for downloading dependencies
### Basic Build Process
```bash
# 1. Copy the Dockerfile template (see below)
cp /path/to/c-relay/Dockerfile.alpine-musl ./Dockerfile.static
# 2. Customize for your project (see Customization section)
vim Dockerfile.static
# 3. Build the static binary
docker build --platform linux/amd64 -f Dockerfile.static -t my-app-builder .
# 4. Extract the binary
docker create --name temp-container my-app-builder
docker cp temp-container:/build/my_app_static ./my_app_static
docker rm temp-container
# 5. Verify it's static
ldd ./my_app_static # Should show "not a dynamic executable"
```
## Dockerfile Template
Here's a complete Dockerfile template you can customize for your project:
```dockerfile
# Alpine-based MUSL static binary builder
# Produces truly portable binaries with zero runtime dependencies
FROM alpine:3.19 AS builder
# Install build dependencies
RUN apk add --no-cache \
build-base \
musl-dev \
git \
cmake \
pkgconfig \
autoconf \
automake \
libtool \
openssl-dev \
openssl-libs-static \
zlib-dev \
zlib-static \
curl-dev \
curl-static \
sqlite-dev \
sqlite-static \
linux-headers \
wget \
bash
WORKDIR /build
# Build libsecp256k1 static (required for Nostr)
RUN cd /tmp && \
git clone https://github.com/bitcoin-core/secp256k1.git && \
cd secp256k1 && \
./autogen.sh && \
./configure --enable-static --disable-shared --prefix=/usr \
CFLAGS="-fPIC" && \
make -j$(nproc) && \
make install && \
rm -rf /tmp/secp256k1
# Build libwebsockets static (if needed for WebSocket support)
RUN cd /tmp && \
git clone --depth 1 --branch v4.3.3 https://github.com/warmcat/libwebsockets.git && \
cd libwebsockets && \
mkdir build && cd build && \
cmake .. \
-DLWS_WITH_STATIC=ON \
-DLWS_WITH_SHARED=OFF \
-DLWS_WITH_SSL=ON \
-DLWS_WITHOUT_TESTAPPS=ON \
-DLWS_WITHOUT_TEST_SERVER=ON \
-DLWS_WITHOUT_TEST_CLIENT=ON \
-DLWS_WITHOUT_TEST_PING=ON \
-DLWS_WITH_HTTP2=OFF \
-DLWS_WITH_LIBUV=OFF \
-DLWS_WITH_LIBEVENT=OFF \
-DLWS_IPV6=ON \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=/usr \
-DCMAKE_C_FLAGS="-fPIC" && \
make -j$(nproc) && \
make install && \
rm -rf /tmp/libwebsockets
# Copy git configuration for submodules
COPY .gitmodules /build/.gitmodules
COPY .git /build/.git
# Initialize submodules
RUN git submodule update --init --recursive
# Copy and build nostr_core_lib
COPY nostr_core_lib /build/nostr_core_lib/
RUN cd nostr_core_lib && \
chmod +x build.sh && \
sed -i 's/CFLAGS="-Wall -Wextra -std=c99 -fPIC -O2"/CFLAGS="-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 -Wall -Wextra -std=c99 -fPIC -O2"/' build.sh && \
rm -f *.o *.a 2>/dev/null || true && \
./build.sh --nips=1,6,13,17,19,44,59
# Copy and build nostr_login_lite (if used)
# COPY nostr_login_lite /build/nostr_login_lite/
# RUN cd nostr_login_lite && make static
# Copy your application source
COPY src/ /build/src/
COPY Makefile /build/Makefile
# Build your application with full static linking
RUN gcc -static -O2 -Wall -Wextra -std=c99 \
-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 \
-I. -Inostr_core_lib -Inostr_core_lib/nostr_core \
-Inostr_core_lib/cjson -Inostr_core_lib/nostr_websocket \
src/*.c \
-o /build/my_app_static \
nostr_core_lib/libnostr_core_x64.a \
-lwebsockets -lssl -lcrypto -lsqlite3 -lsecp256k1 \
-lcurl -lz -lpthread -lm -ldl && \
strip /build/my_app_static
# Verify it's truly static
RUN echo "=== Binary Information ===" && \
file /build/my_app_static && \
ls -lh /build/my_app_static && \
echo "=== Checking for dynamic dependencies ===" && \
(ldd /build/my_app_static 2>&1 || echo "Binary is static")
# Output stage - just the binary
FROM scratch AS output
COPY --from=builder /build/my_app_static /my_app_static
```
## Customization Guide
### 1. Adjust Dependencies
**Add dependencies** by modifying the `apk add` section:
```dockerfile
RUN apk add --no-cache \
build-base \
musl-dev \
# Add your dependencies here:
libpng-dev \
libpng-static \
libjpeg-turbo-dev \
libjpeg-turbo-static
```
**Remove unused dependencies** to speed up builds:
- Remove `libwebsockets` section if you don't need WebSocket support
- Remove `sqlite` if you don't use databases
- Remove `curl` if you don't make HTTP requests
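For example, a minimal client that needs only TLS and HTTP could trim the package list to something like this (package names as used above; adjust to your actual needs):
```bash
apk add --no-cache \
    build-base \
    musl-dev \
    openssl-dev \
    openssl-libs-static \
    curl-dev \
    curl-static \
    zlib-dev \
    zlib-static
```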
### 2. Configure nostr_core_lib NIPs
Specify which NIPs your application needs:
```bash
./build.sh --nips=1,6,19 # Minimal: Basic protocol, keys, bech32
./build.sh --nips=1,6,13,17,19,44,59 # Full: All common NIPs
./build.sh --nips=all # Everything available
```
**Common NIP combinations:**
- **Basic client**: `1,6,19` (events, keys, bech32)
- **With encryption**: `1,6,19,44` (add modern encryption)
- **With DMs**: `1,6,17,19,44,59` (add private messages)
- **Relay/server**: `1,6,13,17,19,42,44,59` (add PoW, auth)
### 3. Modify Compilation Flags
**For your application:**
```dockerfile
RUN gcc -static -O2 -Wall -Wextra -std=c99 \
    -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 \
    -I. -Inostr_core_lib \
    src/*.c \
    -o /build/my_app_static \
    nostr_core_lib/libnostr_core_x64.a \
    -lwebsockets -lssl -lcrypto \
    -lsqlite3 -lsecp256k1 -lcurl \
    -lz -lpthread -lm -ldl
```
Flag notes (inline comments after a `\` line continuation would break the shell command, so they are listed here instead):
- `-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0`: required for MUSL
- `-I. -Inostr_core_lib`: include paths
- `src/*.c`: your source files
- `-o /build/my_app_static`: output binary
- `nostr_core_lib/libnostr_core_x64.a`: the Nostr static library
- `-lwebsockets -lssl -lcrypto -lsqlite3 -lsecp256k1 -lcurl -lz -lpthread -lm -ldl`: link libraries
**Debug build** (with symbols, no optimization):
```dockerfile
RUN gcc -static -g -O0 -DDEBUG \
-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 \
# ... rest of flags
```
### 4. Multi-Architecture Support
Build for different architectures:
```bash
# x86_64 (Intel/AMD)
docker build --platform linux/amd64 -f Dockerfile.static -t my-app-x86 .
# ARM64 (Apple Silicon, Raspberry Pi 4+)
docker build --platform linux/arm64 -f Dockerfile.static -t my-app-arm64 .
```
## Build Script Template
Create a `build_static.sh` script for convenience:
```bash
#!/bin/bash
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
BUILD_DIR="$SCRIPT_DIR/build"
DOCKERFILE="$SCRIPT_DIR/Dockerfile.static"
# Detect architecture
ARCH=$(uname -m)
case "$ARCH" in
x86_64)
PLATFORM="linux/amd64"
OUTPUT_NAME="my_app_static_x86_64"
;;
aarch64|arm64)
PLATFORM="linux/arm64"
OUTPUT_NAME="my_app_static_arm64"
;;
*)
echo "Unknown architecture: $ARCH"
exit 1
;;
esac
echo "Building for platform: $PLATFORM"
mkdir -p "$BUILD_DIR"
# Build Docker image
docker build \
--platform "$PLATFORM" \
-f "$DOCKERFILE" \
-t my-app-builder:latest \
--progress=plain \
.
# Extract binary
CONTAINER_ID=$(docker create my-app-builder:latest)
docker cp "$CONTAINER_ID:/build/my_app_static" "$BUILD_DIR/$OUTPUT_NAME"
docker rm "$CONTAINER_ID"
chmod +x "$BUILD_DIR/$OUTPUT_NAME"
echo "✓ Build complete: $BUILD_DIR/$OUTPUT_NAME"
echo "✓ Size: $(du -h "$BUILD_DIR/$OUTPUT_NAME" | cut -f1)"
# Verify
if ldd "$BUILD_DIR/$OUTPUT_NAME" 2>&1 | grep -q "not a dynamic executable"; then
echo "✓ Binary is fully static"
else
echo "⚠ Warning: Binary may have dynamic dependencies"
fi
```
Make it executable:
```bash
chmod +x build_static.sh
./build_static.sh
```
## Common Issues and Solutions
### Issue 1: Fortification Errors
**Error:**
```
undefined reference to '__snprintf_chk'
undefined reference to '__fprintf_chk'
```
**Cause**: Fortification (`_FORTIFY_SOURCE`), which many toolchains enable by default at `-O2`, pulls in glibc-specific `*_chk` functions (such as `__snprintf_chk`) that MUSL does not provide.
**Solution**: Add these flags to **all** compilation commands:
```bash
-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0
```
This must be applied to:
1. nostr_core_lib build.sh
2. Your application compilation
3. Any other libraries you build
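For a dependency built via its own Makefile, one common way to inject the flags is through `CFLAGS` on the command line, assuming that Makefile honors it:
```bash
make CFLAGS="-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 -O2 -fPIC"
```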
### Issue 2: Missing Symbols from nostr_core_lib
**Error:**
```
undefined reference to 'nostr_create_event'
undefined reference to 'nostr_sign_event'
```
**Cause**: Required NIPs not included in nostr_core_lib build.
**Solution**: Add missing NIPs:
```bash
./build.sh --nips=1,6,19 # Add the NIPs you need
```
### Issue 3: Docker Permission Denied
**Error:**
```
permission denied while trying to connect to the Docker daemon socket
```
**Solution**:
```bash
sudo usermod -aG docker $USER
newgrp docker # Or logout and login
```
### Issue 4: Binary Won't Run on Target System
**Checks**:
```bash
# 1. Verify it's static
ldd my_app_static # Should show "not a dynamic executable"
# 2. Check architecture
file my_app_static # Should match target system
# 3. Test on different distributions
docker run --rm -v $(pwd):/app alpine:latest /app/my_app_static --version
docker run --rm -v $(pwd):/app ubuntu:latest /app/my_app_static --version
```
## Project Structure Example
Organize your project for easy static builds:
```
my-nostr-app/
├── src/
│ ├── main.c
│ ├── handlers.c
│ └── utils.c
├── nostr_core_lib/ # Git submodule
├── nostr_login_lite/ # Git submodule (if used)
├── Dockerfile.static # Static build Dockerfile
├── build_static.sh # Build script
├── Makefile # Regular build
└── README.md
```
### Makefile Integration
Add static build targets to your Makefile:
```makefile
# Regular dynamic build
all: my_app
my_app: src/*.c
gcc -O2 src/*.c -o my_app \
nostr_core_lib/libnostr_core_x64.a \
-lssl -lcrypto -lsecp256k1 -lz -lpthread -lm
# Static MUSL build via Docker
static:
./build_static.sh
# Clean
clean:
rm -f my_app build/my_app_static_*
.PHONY: all static clean
```
## Deployment
### Single Binary Deployment
```bash
# Copy to server
scp build/my_app_static_x86_64 user@server:/opt/my-app/
# Run (no dependencies needed!)
ssh user@server
/opt/my-app/my_app_static_x86_64
```
### SystemD Service
```ini
[Unit]
Description=My Nostr Application
After=network.target
[Service]
Type=simple
User=myapp
WorkingDirectory=/opt/my-app
ExecStart=/opt/my-app/my_app_static_x86_64
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
```
### Docker Container (Minimal)
```dockerfile
FROM scratch
COPY my_app_static_x86_64 /app
ENTRYPOINT ["/app"]
```
Build and run:
```bash
docker build -t my-app:latest .
docker run --rm my-app:latest --help
```
## Reusing c-relay Files
You can directly copy these files from c-relay:
### 1. Dockerfile.alpine-musl
```bash
cp /path/to/c-relay/Dockerfile.alpine-musl ./Dockerfile.static
```
Then customize:
- Change binary name (line 125)
- Adjust source files (line 122-124)
- Modify include paths (line 120-121)
### 2. build_static.sh
```bash
cp /path/to/c-relay/build_static.sh ./
```
Then customize:
- Change `OUTPUT_NAME` variable (lines 66, 70)
- Update Docker image name (line 98)
- Modify verification commands (lines 180-184)
### 3. .dockerignore (Optional)
```bash
cp /path/to/c-relay/.dockerignore ./
```
Helps speed up Docker builds by excluding unnecessary files.
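A reasonable starting point, mirroring the excludes used by the deploy script in this repository (adjust for your project):
```bash
cat > .dockerignore <<'EOF'
.git
build/
logs/
Trash/
blobs/
*.o
*.a
EOF
```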
## Best Practices
1. **Version Control**: Commit your Dockerfile and build script
2. **Tag Builds**: Include git commit hash in binary version
3. **Test Thoroughly**: Verify on multiple distributions
4. **Document Dependencies**: List required NIPs and libraries
5. **Automate**: Use CI/CD to build on every commit
6. **Archive Binaries**: Keep old versions for rollback
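For item 2, a minimal sketch of embedding the commit hash at build time; `BUILD_COMMIT` is a hypothetical define that your code would have to read and print:
```bash
# Embed the current commit as a preprocessor define (BUILD_COMMIT is hypothetical;
# your code must print it, e.g. in --version output). Library flags omitted for
# brevity; use the full gcc command shown earlier.
GIT_HASH=$(git rev-parse --short HEAD)
gcc -static -O2 -DBUILD_COMMIT="\"$GIT_HASH\"" src/*.c -o my_app_static
```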
## Performance Comparison
| Metric | MUSL Static | glibc Dynamic |
|--------|-------------|---------------|
| Binary Size | 7-10 MB | 2-3 MB + libs |
| Startup Time | ~50ms | ~40ms |
| Memory Usage | Similar | Similar |
| Portability | ✓ Universal | ✗ System-dependent |
| Deployment | Single file | Binary + libraries |
## References
- [MUSL libc](https://musl.libc.org/)
- [Alpine Linux](https://alpinelinux.org/)
- [nostr_core_lib](https://github.com/chebizarro/nostr_core_lib)
- [Static Linking Best Practices](https://www.musl-libc.org/faq.html)
- [c-relay Implementation](./docs/musl_static_build.md)
## Example: Minimal Nostr Client
Here's a complete example of building a minimal Nostr client:
```c
// minimal_client.c
#include "nostr_core/nostr_core.h"
#include <stdio.h>
#include <stdlib.h> /* for free() */
int main() {
// Generate keypair
char nsec[64], npub[64];
nostr_generate_keypair(nsec, npub);
printf("Generated keypair:\n");
printf("Private key (nsec): %s\n", nsec);
printf("Public key (npub): %s\n", npub);
// Create event
cJSON *event = nostr_create_event(1, "Hello, Nostr!", NULL);
nostr_sign_event(event, nsec);
char *json = cJSON_Print(event);
printf("\nSigned event:\n%s\n", json);
free(json);
cJSON_Delete(event);
return 0;
}
```
**Dockerfile.static:**
```dockerfile
FROM alpine:3.19 AS builder
RUN apk add --no-cache build-base musl-dev git autoconf automake libtool \
openssl-dev openssl-libs-static zlib-dev zlib-static
WORKDIR /build
# Build secp256k1
RUN cd /tmp && git clone https://github.com/bitcoin-core/secp256k1.git && \
cd secp256k1 && ./autogen.sh && \
./configure --enable-static --disable-shared --prefix=/usr CFLAGS="-fPIC" && \
make -j$(nproc) && make install
# Copy and build nostr_core_lib
COPY nostr_core_lib /build/nostr_core_lib/
RUN cd nostr_core_lib && \
sed -i 's/CFLAGS="-Wall/CFLAGS="-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 -Wall/' build.sh && \
./build.sh --nips=1,6,19
# Build application
COPY minimal_client.c /build/
RUN gcc -static -O2 -Wall -std=c99 \
-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 \
-Inostr_core_lib -Inostr_core_lib/nostr_core -Inostr_core_lib/cjson \
minimal_client.c -o /build/minimal_client_static \
nostr_core_lib/libnostr_core_x64.a \
-lssl -lcrypto -lsecp256k1 -lz -lpthread -lm -ldl && \
strip /build/minimal_client_static
FROM scratch
COPY --from=builder /build/minimal_client_static /minimal_client_static
```
**Build and run:**
```bash
docker build -f Dockerfile.static -t minimal-client .
docker create --name temp minimal-client
docker cp temp:/minimal_client_static ./
docker rm temp
./minimal_client_static
```
## Conclusion
Static MUSL binaries provide the best portability for C applications. While they're slightly larger than dynamic binaries, the benefits of zero dependencies and universal compatibility make them ideal for:
- Server deployments across different Linux distributions
- Embedded systems and IoT devices
- Docker containers (FROM scratch)
- Distribution to users without dependency management
- Long-term archival and reproducibility
Follow this guide to create portable, self-contained binaries for your Nostr applications!


Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.


@@ -131,21 +131,48 @@ increment_version() {
export NEW_VERSION
}
# Function to update version in header file
update_version_in_header() {
local version="$1"
print_status "Updating version in src/ginxsom.h to $version..."
# Extract version components (remove 'v' prefix)
local version_no_v=${version#v}
# Parse major.minor.patch using regex
if [[ $version_no_v =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
local major=${BASH_REMATCH[1]}
local minor=${BASH_REMATCH[2]}
local patch=${BASH_REMATCH[3]}
# Update the header file
sed -i "s/#define VERSION_MAJOR [0-9]\+/#define VERSION_MAJOR $major/" src/ginxsom.h
sed -i "s/#define VERSION_MINOR [0-9]\+/#define VERSION_MINOR $minor/" src/ginxsom.h
sed -i "s/#define VERSION_PATCH [0-9]\+/#define VERSION_PATCH $patch/" src/ginxsom.h
sed -i "s/#define VERSION \"v[0-9]\+\.[0-9]\+\.[0-9]\+\"/#define VERSION \"$version\"/" src/ginxsom.h
print_success "Updated version in header file"
else
print_error "Invalid version format: $version"
exit 1
fi
}
# Function to compile the Ginxsom project
compile_project() {
print_status "Compiling Ginxsom FastCGI server..."
# Clean previous build
if make clean > /dev/null 2>&1; then
print_success "Cleaned previous build"
else
print_warning "Clean failed or no Makefile found"
fi
# Compile the project
if make > /dev/null 2>&1; then
print_success "Ginxsom compiled successfully"
# Verify the binary was created
if [[ -f "build/ginxsom-fcgi" ]]; then
print_success "Binary created: build/ginxsom-fcgi"
@@ -390,9 +417,12 @@ main() {
git tag "$NEW_VERSION" > /dev/null 2>&1
fi
# Update version in header file
update_version_in_header "$NEW_VERSION"
# Compile project
compile_project
# Build release binary
build_release_binary
@@ -423,9 +453,12 @@ main() {
git tag "$NEW_VERSION" > /dev/null 2>&1
fi
# Update version in header file
update_version_in_header "$NEW_VERSION"
# Compile project
compile_project
# Commit and push (but skip tag creation since we already did it)
git_commit_and_push_no_tag


@@ -351,14 +351,33 @@ http {
autoindex_format json;
}
# Root redirect
# Root endpoint - Server info from FastCGI
location = / {
return 200 "Ginxsom Blossom Server\nEndpoints: GET /<sha256>, PUT /upload, GET /list/<pubkey>\nHealth: GET /health\n";
add_header Content-Type text/plain;
add_header Access-Control-Allow-Origin * always;
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH" always;
add_header Access-Control-Allow-Headers "Authorization, Content-Type, Content-Length, Accept, Origin, User-Agent, DNT, Cache-Control, X-Mx-ReqToken, Keep-Alive, X-Requested-With, If-Modified-Since, *" always;
add_header Access-Control-Max-Age 86400 always;
if ($request_method !~ ^(GET)$) {
return 405;
}
fastcgi_pass fastcgi_backend;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param REQUEST_SCHEME $scheme;
fastcgi_param HTTPS $https if_not_empty;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
fastcgi_param REDIRECT_STATUS 200;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
fastcgi_param HTTP_AUTHORIZATION $http_authorization;
}
}
@@ -683,14 +702,33 @@ http {
autoindex_format json;
}
# Root redirect
# Root endpoint - Server info from FastCGI
location = / {
return 200 "Ginxsom Blossom Server (HTTPS)\nEndpoints: GET /<sha256>, PUT /upload, GET /list/<pubkey>\nHealth: GET /health\n";
add_header Content-Type text/plain;
add_header Access-Control-Allow-Origin * always;
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH" always;
add_header Access-Control-Allow-Headers "Authorization, Content-Type, Content-Length, Accept, Origin, User-Agent, DNT, Cache-Control, X-Mx-ReqToken, Keep-Alive, X-Requested-With, If-Modified-Since, *" always;
add_header Access-Control-Max-Age 86400 always;
if ($request_method !~ ^(GET)$) {
return 405;
}
fastcgi_pass fastcgi_backend;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param REQUEST_SCHEME $scheme;
fastcgi_param HTTPS $https if_not_empty;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
fastcgi_param REDIRECT_STATUS 200;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
fastcgi_param HTTP_AUTHORIZATION $http_authorization;
}
}
}

Binary file not shown.

File diff suppressed because it is too large.

deploy_lt.sh (new executable file, 287 lines)

@@ -0,0 +1,287 @@
#!/bin/bash
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
print_status() { echo -e "${BLUE}[INFO]${NC} $1"; }
print_success() { echo -e "${GREEN}[SUCCESS]${NC} $1"; }
print_warning() { echo -e "${YELLOW}[WARNING]${NC} $1"; }
print_error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Configuration
REMOTE_HOST="laantungir.net"
REMOTE_USER="ubuntu"
REMOTE_DIR="/home/ubuntu/ginxsom"
REMOTE_DB_PATH="/home/ubuntu/ginxsom/db/ginxsom.db"
REMOTE_NGINX_CONFIG="/etc/nginx/conf.d/default.conf"
REMOTE_BINARY_PATH="/home/ubuntu/ginxsom/ginxsom.fcgi"
REMOTE_SOCKET="/tmp/ginxsom-fcgi.sock"
REMOTE_DATA_DIR="/var/www/html/blossom"
print_status "Starting deployment to $REMOTE_HOST..."
# Step 1: Build and prepare local binary
print_status "Building ginxsom binary..."
make clean && make
if [[ ! -f "build/ginxsom-fcgi" ]]; then
print_error "Build failed - binary not found"
exit 1
fi
print_success "Binary built successfully"
# Step 2: Setup remote environment first (before copying files)
print_status "Setting up remote environment..."
ssh $REMOTE_USER@$REMOTE_HOST << 'EOF'
set -e
# Create data directory if it doesn't exist (using existing /var/www/html/blossom)
sudo mkdir -p /var/www/html/blossom
sudo chown www-data:www-data /var/www/html/blossom
sudo chmod 755 /var/www/html/blossom
# Ensure socket directory exists
sudo mkdir -p /tmp
sudo chmod 755 /tmp
# Install required dependencies
echo "Installing required dependencies..."
sudo apt-get update
sudo apt-get install -y spawn-fcgi libfcgi-dev
# Stop any existing ginxsom processes
echo "Stopping existing ginxsom processes..."
sudo pkill -f ginxsom-fcgi || true
sudo rm -f /tmp/ginxsom-fcgi.sock || true
echo "Remote environment setup complete"
EOF
print_success "Remote environment configured"
# Step 3: Copy files to remote server
print_status "Copying files to remote server..."
# Copy entire project directory (excluding unnecessary files)
print_status "Copying entire ginxsom project..."
rsync -avz --exclude='.git' --exclude='build' --exclude='logs' --exclude='Trash' --exclude='blobs' --exclude='db/ginxsom.db' --no-g --no-o --no-perms --omit-dir-times . $REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR/
# Build on remote server to ensure compatibility
print_status "Building ginxsom on remote server..."
ssh $REMOTE_USER@$REMOTE_HOST "cd $REMOTE_DIR && make clean && make" || {
print_error "Build failed on remote server"
print_status "Checking what packages are actually installed..."
ssh $REMOTE_USER@$REMOTE_HOST "dpkg -l | grep -E '(sqlite|fcgi)'"
exit 1
}
# Copy binary to application directory
print_status "Copying ginxsom binary to application directory..."
ssh $REMOTE_USER@$REMOTE_HOST << EOF
# Stop any running process first
sudo pkill -f ginxsom-fcgi || true
sleep 1
# Remove old binary if it exists
rm -f $REMOTE_BINARY_PATH
# Copy new binary
cp $REMOTE_DIR/build/ginxsom-fcgi $REMOTE_BINARY_PATH
chmod +x $REMOTE_BINARY_PATH
chown ubuntu:ubuntu $REMOTE_BINARY_PATH
echo "Binary copied successfully"
EOF
# NOTE: Do NOT update nginx configuration automatically
# The deployment script should only update ginxsom binaries and do nothing else with the system
# Nginx configuration should be managed manually by the system administrator
print_status "Skipping nginx configuration update (manual control required)"
print_success "Files copied to remote server"
# Step 3b: Verify remote environment and dependencies (after the remote build)
print_status "Verifying remote environment and dependencies..."
ssh $REMOTE_USER@$REMOTE_HOST << 'EOF'
set -e
# Create data directory if it doesn't exist (using existing /var/www/html/blossom)
sudo mkdir -p /var/www/html/blossom
sudo chown www-data:www-data /var/www/html/blossom
sudo chmod 755 /var/www/html/blossom
# Ensure socket directory exists
sudo mkdir -p /tmp
sudo chmod 755 /tmp
# Install required dependencies
echo "Installing required dependencies..."
sudo apt-get update 2>/dev/null || true # Continue even if apt update has issues
sudo apt-get install -y spawn-fcgi libfcgi-dev libsqlite3-dev sqlite3 libcurl4-openssl-dev
# Verify installations
echo "Verifying installations..."
if ! dpkg -l libsqlite3-dev >/dev/null 2>&1; then
echo "libsqlite3-dev not found, trying alternative..."
sudo apt-get install -y libsqlite3-dev || {
echo "Failed to install libsqlite3-dev"
exit 1
}
fi
if ! dpkg -l libfcgi-dev >/dev/null 2>&1; then
echo "libfcgi-dev not found"
exit 1
fi
# Check if sqlite3.h exists
if [ ! -f /usr/include/sqlite3.h ]; then
echo "sqlite3.h not found in /usr/include/"
find /usr -name "sqlite3.h" 2>/dev/null || echo "sqlite3.h not found anywhere"
exit 1
fi
# Stop any existing ginxsom processes
echo "Stopping existing ginxsom processes..."
sudo pkill -f ginxsom-fcgi || true
sudo rm -f /tmp/ginxsom-fcgi.sock || true
echo "Remote environment setup complete"
EOF
print_success "Remote environment configured"
# Step 4: Setup database directory and migrate database
print_status "Setting up database directory..."
ssh $REMOTE_USER@$REMOTE_HOST << EOF
# Create db directory if it doesn't exist
mkdir -p $REMOTE_DIR/db
# Backup current database if it exists in old location
if [ -f /var/www/html/blossom/ginxsom.db ]; then
echo "Backing up existing database..."
cp /var/www/html/blossom/ginxsom.db /var/www/html/blossom/ginxsom.db.backup.\$(date +%Y%m%d_%H%M%S)
# Migrate database to new location if not already there
if [ ! -f $REMOTE_DB_PATH ]; then
echo "Migrating database to new location..."
cp /var/www/html/blossom/ginxsom.db $REMOTE_DB_PATH
else
echo "Database already exists at new location"
fi
elif [ ! -f $REMOTE_DB_PATH ]; then
echo "No existing database found - will be created on first run"
else
echo "Database already exists at $REMOTE_DB_PATH"
fi
# Set proper permissions - www-data needs write access to db directory for SQLite journal files
sudo chown -R www-data:www-data $REMOTE_DIR/db
sudo chmod 755 $REMOTE_DIR/db
sudo chmod 644 $REMOTE_DB_PATH 2>/dev/null || true
# Allow www-data to access the application directory for spawn-fcgi chdir
chmod 755 $REMOTE_DIR
echo "Database directory setup complete"
EOF
print_success "Database directory configured"
# Step 5: Start ginxsom FastCGI process
print_status "Starting ginxsom FastCGI process..."
ssh $REMOTE_USER@$REMOTE_HOST << EOF
# Clean up any existing socket
sudo rm -f $REMOTE_SOCKET
# Start FastCGI process with explicit paths
echo "Starting ginxsom FastCGI with configuration:"
echo " Working directory: $REMOTE_DIR"
echo " Binary: $REMOTE_BINARY_PATH"
echo " Database: $REMOTE_DB_PATH"
echo " Storage: $REMOTE_DATA_DIR"
sudo spawn-fcgi -M 666 -u www-data -g www-data -s $REMOTE_SOCKET -U www-data -G www-data -d $REMOTE_DIR -- $REMOTE_BINARY_PATH --db-path "$REMOTE_DB_PATH" --storage-dir "$REMOTE_DATA_DIR"
# Give it a moment to start
sleep 2
# Verify process is running
if pgrep -f "ginxsom-fcgi" > /dev/null; then
echo "FastCGI process started successfully"
echo "PID: \$(pgrep -f ginxsom-fcgi)"
else
echo "Process not found by pgrep, but socket exists - this may be normal for FastCGI"
echo "Checking socket..."
ls -la $REMOTE_SOCKET
echo "Checking if binary exists and is executable..."
ls -la $REMOTE_BINARY_PATH
echo "Testing if we can connect to the socket..."
# Try to test the FastCGI connection
if command -v cgi-fcgi >/dev/null 2>&1; then
echo "Testing FastCGI connection..."
SCRIPT_NAME=/health SCRIPT_FILENAME=$REMOTE_BINARY_PATH REQUEST_METHOD=GET cgi-fcgi -bind -connect $REMOTE_SOCKET 2>/dev/null | head -5 || echo "Connection test failed"
else
echo "cgi-fcgi not available for testing"
fi
# Don't exit - the socket existing means spawn-fcgi worked
fi
EOF
if [ $? -eq 0 ]; then
print_success "FastCGI process started"
else
print_error "Failed to start FastCGI process"
exit 1
fi
# Step 6: Test nginx configuration and reload
print_status "Testing and reloading nginx..."
ssh $REMOTE_USER@$REMOTE_HOST << 'EOF'
# Test nginx configuration
if sudo nginx -t; then
echo "Nginx configuration test passed"
sudo nginx -s reload
echo "Nginx reloaded successfully"
else
echo "Nginx configuration test failed"
exit 1
fi
EOF
print_success "Nginx reloaded"
# Step 7: Test deployment
print_status "Testing deployment..."
# Test health endpoint
echo "Testing health endpoint..."
if curl -k -s --max-time 10 "https://blossom.laantungir.net/health" | grep -q "OK"; then
print_success "Health check passed"
else
print_warning "Health check failed - checking response..."
curl -k -v --max-time 10 "https://blossom.laantungir.net/health" 2>&1 | head -10
fi
# Test basic endpoints
echo "Testing root endpoint..."
if curl -k -s --max-time 10 "https://blossom.laantungir.net/" | grep -q "Ginxsom"; then
print_success "Root endpoint responding"
else
print_warning "Root endpoint not responding as expected - checking response..."
curl -k -v --max-time 10 "https://blossom.laantungir.net/" 2>&1 | head -10
fi
print_success "Deployment to $REMOTE_HOST completed!"
print_status "Ginxsom should now be available at: https://blossom.laantungir.net"
print_status "Test endpoints:"
echo " Health: curl -k https://blossom.laantungir.net/health"
echo " Root: curl -k https://blossom.laantungir.net/"
echo " List: curl -k https://blossom.laantungir.net/list"


@@ -0,0 +1,356 @@
# Production Directory Structure Migration Plan
## Overview
This document outlines the plan to migrate the ginxsom production deployment from the current configuration to a new, more organized directory structure.
## Current Configuration (As-Is)
```
Binary Location: /var/www/html/blossom/ginxsom.fcgi
Database Location: /var/www/html/blossom/ginxsom.db
Data Directory: /var/www/html/blossom/
Working Directory: /var/www/html/blossom/ (set via spawn-fcgi -d)
Socket: /tmp/ginxsom-fcgi.sock
```
**Issues with Current Setup:**
1. Binary and database mixed with data files in web-accessible directory
2. Database path hardcoded as relative path `db/ginxsom.db` but database is at root of working directory
3. No separation between application files and user data
4. Security concern: application files in web root
## Target Configuration (To-Be)
```
Binary Location: /home/ubuntu/ginxsom/ginxsom.fcgi
Database Location: /home/ubuntu/ginxsom/db/ginxsom.db
Data Directory: /var/www/html/blossom/
Working Directory: /home/ubuntu/ginxsom/ (set via spawn-fcgi -d)
Socket: /tmp/ginxsom-fcgi.sock
```
**Benefits of New Setup:**
1. Application files separated from user data
2. Database in proper subdirectory structure
3. Application files outside web root (better security)
4. Clear separation of concerns
5. Easier backup and maintenance
## Directory Structure
### Application Directory: `/home/ubuntu/ginxsom/`
```
/home/ubuntu/ginxsom/
├── ginxsom.fcgi # FastCGI binary
├── db/
│ └── ginxsom.db # SQLite database
├── build/ # Build artifacts (from rsync)
├── src/ # Source code (from rsync)
├── include/ # Headers (from rsync)
├── config/ # Config files (from rsync)
└── scripts/ # Utility scripts (from rsync)
```
### Data Directory: `/var/www/html/blossom/`
```
/var/www/html/blossom/
├── <sha256>.jpg # User uploaded files
├── <sha256>.png
├── <sha256>.mp4
└── ...
```
## Command-Line Arguments
The ginxsom binary supports these arguments (from [`src/main.c`](src/main.c:1488-1509)):
```bash
--db-path PATH # Database file path (default: db/ginxsom.db)
--storage-dir DIR # Storage directory for files (default: .)
--help, -h # Show help message
```
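A quick sanity check that the deployed binary accepts these arguments is to invoke its documented help output:
```bash
/home/ubuntu/ginxsom/ginxsom.fcgi --help
```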
## Migration Steps
### 1. Update deploy_lt.sh Configuration
Update the configuration variables in [`deploy_lt.sh`](deploy_lt.sh:16-23):
```bash
# Configuration
REMOTE_HOST="laantungir.net"
REMOTE_USER="ubuntu"
REMOTE_DIR="/home/ubuntu/ginxsom"
REMOTE_DB_PATH="/home/ubuntu/ginxsom/db/ginxsom.db"
REMOTE_NGINX_CONFIG="/etc/nginx/conf.d/default.conf"
REMOTE_BINARY_PATH="/home/ubuntu/ginxsom/ginxsom.fcgi"
REMOTE_SOCKET="/tmp/ginxsom-fcgi.sock"
REMOTE_DATA_DIR="/var/www/html/blossom"
```
### 2. Update Binary Deployment
Modify the binary copy section (lines 82-97) to use new path:
```bash
# Copy binary to application directory (not web directory)
print_status "Copying ginxsom binary to application directory..."
ssh $REMOTE_USER@$REMOTE_HOST << EOF
# Stop any running process first
sudo pkill -f ginxsom-fcgi || true
sleep 1
# Remove old binary if it exists
rm -f $REMOTE_BINARY_PATH
# Copy new binary
cp $REMOTE_DIR/build/ginxsom-fcgi $REMOTE_BINARY_PATH
chmod +x $REMOTE_BINARY_PATH
chown ubuntu:ubuntu $REMOTE_BINARY_PATH
echo "Binary copied successfully"
EOF
```
### 3. Create Database Directory Structure
Add database setup before starting FastCGI:
```bash
# Setup database directory
print_status "Setting up database directory..."
ssh $REMOTE_USER@$REMOTE_HOST << EOF
# Create db directory if it doesn't exist
mkdir -p $REMOTE_DIR/db
# Copy database if it exists in old location
if [ -f /var/www/html/blossom/ginxsom.db ]; then
echo "Migrating database from old location..."
cp /var/www/html/blossom/ginxsom.db $REMOTE_DB_PATH
elif [ ! -f $REMOTE_DB_PATH ]; then
echo "Initializing new database..."
# Database will be created by application on first run
fi
# Set proper permissions
chown -R ubuntu:ubuntu $REMOTE_DIR/db
chmod 755 $REMOTE_DIR/db
chmod 644 $REMOTE_DB_PATH 2>/dev/null || true
echo "Database directory setup complete"
EOF
```
### 4. Update spawn-fcgi Command
Modify the FastCGI startup (line 164) to include command-line arguments:
```bash
# Start FastCGI process with explicit paths
echo "Starting ginxsom FastCGI..."
sudo spawn-fcgi \
-M 666 \
-u www-data \
-g www-data \
-s $REMOTE_SOCKET \
-U www-data \
-G www-data \
-d $REMOTE_DIR \
-- $REMOTE_BINARY_PATH \
--db-path "$REMOTE_DB_PATH" \
--storage-dir "$REMOTE_DATA_DIR"
```
**Key Changes:**
- `-d $REMOTE_DIR`: Sets working directory to `/home/ubuntu/ginxsom/`
- `--db-path "$REMOTE_DB_PATH"`: Explicit database path
- `--storage-dir "$REMOTE_DATA_DIR"`: Explicit data directory
### 5. Verify Permissions
Ensure proper permissions for all directories:
```bash
# Application directory - owned by ubuntu
sudo chown -R ubuntu:ubuntu /home/ubuntu/ginxsom
sudo chmod 755 /home/ubuntu/ginxsom
sudo chmod +x /home/ubuntu/ginxsom/ginxsom.fcgi
# Database directory - readable by www-data
sudo chmod 755 /home/ubuntu/ginxsom/db
sudo chmod 644 /home/ubuntu/ginxsom/db/ginxsom.db
# Data directory - writable by www-data
sudo chown -R www-data:www-data /var/www/html/blossom
sudo chmod 755 /var/www/html/blossom
```
## Path Resolution Logic
### How Paths Work with spawn-fcgi -d Option
When spawn-fcgi starts the FastCGI process:
1. **Working Directory**: Set to `/home/ubuntu/ginxsom/` via `-d` option
2. **Relative Paths**: Resolved from working directory
3. **Absolute Paths**: Used as-is
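A shell illustration of the same resolution rules:
```bash
cd /home/ubuntu/ginxsom            # what spawn-fcgi -d sets as the working directory
realpath db/ginxsom.db             # relative -> /home/ubuntu/ginxsom/db/ginxsom.db
realpath /var/www/html/blossom     # absolute paths are used as-is
```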
### Default Behavior (Without Arguments)
From [`src/main.c`](src/main.c:30-31):
```c
char g_db_path[MAX_PATH_LEN] = "db/ginxsom.db"; // Relative to working dir
char g_storage_dir[MAX_PATH_LEN] = "."; // Current working dir
```
With working directory `/home/ubuntu/ginxsom/`:
- Database: `/home/ubuntu/ginxsom/db/ginxsom.db`
- Storage: `/home/ubuntu/ginxsom/` ✗ (wrong - we want `/var/www/html/blossom/`)
### With Command-Line Arguments
```bash
--db-path "/home/ubuntu/ginxsom/db/ginxsom.db"
--storage-dir "/var/www/html/blossom"
```
Result:
- Database: `/home/ubuntu/ginxsom/db/ginxsom.db`
- Storage: `/var/www/html/blossom/`
## Testing Plan
### 1. Pre-Migration Verification
```bash
# Check current setup
ssh ubuntu@laantungir.net "
echo 'Current binary location:'
ls -la /var/www/html/blossom/ginxsom.fcgi
echo 'Current database location:'
ls -la /var/www/html/blossom/ginxsom.db
echo 'Current process:'
ps aux | grep ginxsom-fcgi | grep -v grep
"
```
### 2. Post-Migration Verification
```bash
# Check new setup
ssh ubuntu@laantungir.net "
echo 'New binary location:'
ls -la /home/ubuntu/ginxsom/ginxsom.fcgi
echo 'New database location:'
ls -la /home/ubuntu/ginxsom/db/ginxsom.db
echo 'Data directory:'
ls -la /var/www/html/blossom/ | head -10
echo 'Process working directory:'
sudo ls -la /proc/\$(pgrep -f ginxsom.fcgi)/cwd
echo 'Process command line:'
ps aux | grep ginxsom-fcgi | grep -v grep
"
```
### 3. Functional Testing
```bash
# Test health endpoint
curl -k https://blossom.laantungir.net/health
# Test file upload
./tests/file_put_production.sh
# Test file retrieval
curl -k -I https://blossom.laantungir.net/<sha256>
# Test list endpoint
curl -k https://blossom.laantungir.net/list/<pubkey>
```
## Rollback Plan
If migration fails:
1. **Stop new process:**
```bash
sudo pkill -f ginxsom-fcgi
```
2. **Restore old binary location:**
```bash
sudo cp /home/ubuntu/ginxsom/build/ginxsom-fcgi /var/www/html/blossom/ginxsom.fcgi
sudo chown www-data:www-data /var/www/html/blossom/ginxsom.fcgi
```
3. **Restart with old configuration:**
```bash
sudo spawn-fcgi -M 666 -u www-data -g www-data \
-s /tmp/ginxsom-fcgi.sock \
-U www-data -G www-data \
-d /var/www/html/blossom \
/var/www/html/blossom/ginxsom.fcgi
```
## Additional Considerations
### 1. Database Backup
Before migration, backup the current database:
```bash
ssh ubuntu@laantungir.net "
cp /var/www/html/blossom/ginxsom.db /var/www/html/blossom/ginxsom.db.backup
"
```
### 2. NIP-94 Origin Configuration
After migration, update [`src/bud08.c`](src/bud08.c) to return production domain:
```c
int nip94_get_origin(char* out, size_t out_size) {
    snprintf(out, out_size, "https://blossom.laantungir.net");
    return 1;
}
```
### 3. Monitoring
Monitor logs after migration:
```bash
# Application logs
ssh ubuntu@laantungir.net "sudo journalctl -u nginx -f"
# FastCGI process
ssh ubuntu@laantungir.net "ps aux | grep ginxsom-fcgi"
```
## Success Criteria
Migration is successful when:
1. ✓ Binary running from `/home/ubuntu/ginxsom/ginxsom.fcgi`
2. ✓ Database accessible at `/home/ubuntu/ginxsom/db/ginxsom.db`
3. ✓ Files stored in `/var/www/html/blossom/`
4. ✓ Health endpoint returns 200 OK
5. ✓ File upload works correctly
6. ✓ File retrieval works correctly
7. ✓ Database queries succeed
8. ✓ No permission errors in logs
## Timeline
1. **Preparation**: Update deploy_lt.sh script (15 minutes)
2. **Backup**: Backup current database (5 minutes)
3. **Migration**: Run updated deployment script (10 minutes)
4. **Testing**: Verify all endpoints (15 minutes)
5. **Monitoring**: Watch for issues (30 minutes)
**Total Estimated Time**: ~75 minutes
## References
- Current deployment script: [`deploy_lt.sh`](deploy_lt.sh)
- Main application: [`src/main.c`](src/main.c)
- Command-line parsing: [`src/main.c:1488-1509`](src/main.c:1488-1509)
- Global configuration: [`src/main.c:30-31`](src/main.c:30-31)
- Database operations: [`src/main.c:333-385`](src/main.c:333-385)


@@ -33,6 +33,10 @@
#define DEFAULT_MAX_BLOBS_PER_USER 1000
#define DEFAULT_RATE_LIMIT 10
/* Global configuration variables */
extern char g_db_path[MAX_PATH_LEN];
extern char g_storage_dir[MAX_PATH_LEN];
/* Error codes */
typedef enum {
GINXSOM_OK = 0,

remote.nginx.config (new file, 376 lines)

@@ -0,0 +1,376 @@
# FastCGI upstream configuration
upstream ginxsom_backend {
server unix:/tmp/ginxsom-fcgi.sock;
}
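# Note: the proxy blocks below reference $connection_upgrade. If it is not already
# defined at the http level (e.g. in nginx.conf), a map like the following is needed:
# map $http_upgrade $connection_upgrade {
#     default upgrade;
#     ''      close;
# }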
# Main domains
server {
if ($host = laantungir.net) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name laantungir.com www.laantungir.com laantungir.net www.laantungir.net laantungir.org www.laantungir.org;
root /var/www/html;
index index.html index.htm;
location / {
try_files $uri $uri/ =404;
}
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /var/www/html;
}
}
# Main domains HTTPS - using the main certificate
server {
listen 443 ssl;
server_name laantungir.com www.laantungir.com laantungir.net www.laantungir.net laantungir.org www.laantungir.org;
ssl_certificate /etc/letsencrypt/live/laantungir.net/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/laantungir.net/privkey.pem; # managed by Certbot
root /var/www/html;
index index.html index.htm;
location / {
try_files $uri $uri/ =404;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /var/www/html;
}
}
# Blossom subdomains HTTP - redirect to HTTPS (keep for ACME)
server {
listen 80;
server_name blossom.laantungir.com blossom.laantungir.net blossom.laantungir.org;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$server_name$request_uri;
}
}
# Blossom subdomains HTTPS - ginxsom FastCGI
server {
listen 443 ssl;
server_name blossom.laantungir.com blossom.laantungir.net blossom.laantungir.org;
ssl_certificate /etc/letsencrypt/live/git.laantungir.net/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/git.laantungir.net/privkey.pem;
# Security headers
add_header X-Content-Type-Options nosniff always;
add_header X-Frame-Options DENY always;
add_header X-XSS-Protection "1; mode=block" always;
# CORS for Blossom protocol
add_header Access-Control-Allow-Origin * always;
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH" always;
add_header Access-Control-Allow-Headers "Authorization, Content-Type, Content-Length, Accept, Origin, User-Agent, DNT, Cache-Control, X-Mx-ReqToken, Keep-Alive, X-Requested-With, If-Modified-Since, *" always;
add_header Access-Control-Max-Age 86400 always;
# Root directory for blob storage
root /var/www/html/blossom;
# Maximum upload size
client_max_body_size 100M;
# OPTIONS preflight handler
if ($request_method = OPTIONS) {
return 204;
}
# PUT /upload - File uploads
location = /upload {
if ($request_method !~ ^(PUT|HEAD)$) {
return 405;
}
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
}
# GET /list/<pubkey> - List user blobs
location ~ "^/list/([a-f0-9]{64})$" {
if ($request_method !~ ^(GET)$) {
return 405;
}
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
}
# PUT /mirror - Mirror content
location = /mirror {
if ($request_method !~ ^(PUT)$) {
return 405;
}
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
}
# PUT /report - Report content
location = /report {
if ($request_method !~ ^(PUT)$) {
return 405;
}
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
}
# GET /auth - NIP-42 challenges
location = /auth {
if ($request_method !~ ^(GET)$) {
return 405;
}
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
}
# Admin API
location /api/ {
if ($request_method !~ ^(GET|PUT)$) {
return 405;
}
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
}
# Blob serving - SHA256 patterns
location ~ "^/([a-f0-9]{64})(\.[a-zA-Z0-9]+)?$" {
# Handle DELETE via rewrite
if ($request_method = DELETE) {
rewrite ^/(.*)$ /fcgi-delete/$1 last;
}
# Route HEAD to FastCGI
if ($request_method = HEAD) {
rewrite ^/(.*)$ /fcgi-head/$1 last;
}
# GET requests - serve files directly
if ($request_method != GET) {
return 405;
}
try_files /$1.txt /$1.jpg /$1.jpeg /$1.png /$1.webp /$1.gif /$1.pdf /$1.mp4 /$1.mp3 /$1.md =404;
# Cache headers
add_header Cache-Control "public, max-age=31536000, immutable";
}
# Internal FastCGI handlers
location ~ "^/fcgi-delete/([a-f0-9]{64}).*$" {
internal;
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
fastcgi_param REQUEST_URI /$1;
}
location ~ "^/fcgi-head/([a-f0-9]{64}).*$" {
internal;
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
fastcgi_param REQUEST_URI /$1;
}
# Health check
location /health {
access_log off;
return 200 "OK\n";
add_header Content-Type text/plain;
add_header Access-Control-Allow-Origin * always;
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH" always;
add_header Access-Control-Allow-Headers "Authorization, Content-Type, Content-Length, Accept, Origin, User-Agent, DNT, Cache-Control, X-Mx-ReqToken, Keep-Alive, X-Requested-With, If-Modified-Since, *" always;
add_header Access-Control-Max-Age 86400 always;
}
# Default location - Server info from FastCGI
location / {
if ($request_method !~ ^(GET)$) {
return 405;
}
fastcgi_pass ginxsom_backend;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/ginxsom.fcgi;
}
}
server {
listen 80;
server_name relay.laantungir.com relay.laantungir.net relay.laantungir.org;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
proxy_pass http://127.0.0.1:8888;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
proxy_connect_timeout 60s;
proxy_buffering off;
proxy_request_buffering off;
gzip off;
}
}
# Relay HTTPS - proxy to c-relay
server {
listen 443 ssl;
server_name relay.laantungir.com relay.laantungir.net relay.laantungir.org;
ssl_certificate /etc/letsencrypt/live/blossom.laantungir.net/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/blossom.laantungir.net/privkey.pem;
location / {
proxy_pass http://127.0.0.1:8888;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
proxy_connect_timeout 60s;
proxy_buffering off;
proxy_request_buffering off;
gzip off;
}
}
# Git subdomains HTTP - redirect to HTTPS
server {
listen 80;
server_name git.laantungir.com git.laantungir.net git.laantungir.org;
# Allow larger file uploads for Git releases
client_max_body_size 50M;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$server_name$request_uri;
}
}
# Auth subdomains HTTP - redirect to HTTPS
server {
listen 80;
server_name auth.laantungir.com auth.laantungir.net auth.laantungir.org;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$server_name$request_uri;
}
}
# Git subdomains HTTPS - proxy to gitea
server {
listen 443 ssl;
server_name git.laantungir.com git.laantungir.net git.laantungir.org;
# Allow larger file uploads for Git releases
client_max_body_size 50M;
ssl_certificate /etc/letsencrypt/live/git.laantungir.net/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/git.laantungir.net/privkey.pem;
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_buffering off;
proxy_request_buffering off;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
proxy_connect_timeout 60s;
gzip off;
# proxy_set_header Sec-WebSocket-Extensions ;
proxy_set_header Host $host;
proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
# Auth subdomains HTTPS - proxy to nostr-auth
server {
listen 443 ssl;
server_name auth.laantungir.com auth.laantungir.net auth.laantungir.org;
ssl_certificate /etc/letsencrypt/live/git.laantungir.net/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/git.laantungir.net/privkey.pem;
location / {
proxy_pass http://localhost:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_buffering off;
proxy_request_buffering off;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
proxy_connect_timeout 60s;
gzip off;
# proxy_set_header Sec-WebSocket-Extensions ;
proxy_set_header Host $host;
proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}


@@ -168,7 +168,7 @@ export GINX_DEBUG=1
# Start FastCGI application with proper logging (daemonized but with redirected streams)
echo "FastCGI starting at $(date)" >> logs/app/stderr.log
spawn-fcgi -s "$SOCKET_PATH" -M 666 -u "$USER" -g "$USER" -f "$FCGI_BINARY" -P "$PID_FILE" 1>>logs/app/stdout.log 2>>logs/app/stderr.log
spawn-fcgi -s "$SOCKET_PATH" -M 666 -u "$USER" -g "$USER" -P "$PID_FILE" -- "$FCGI_BINARY" --storage-dir blobs 1>>logs/app/stdout.log 2>>logs/app/stderr.log
if [ $? -eq 0 ] && [ -f "$PID_FILE" ]; then
PID=$(cat "$PID_FILE")
@@ -250,6 +250,8 @@ else
fi
echo -e "\n${GREEN}=== Restart sequence complete ===${NC}"
echo -e "${YELLOW}Server should be available at: http://localhost:9001${NC}"
echo -e "${YELLOW}To stop all processes, run: nginx -p . -c $NGINX_CONFIG -s stop && kill \$(cat $PID_FILE 2>/dev/null)${NC}"
echo -e "${YELLOW}To monitor logs, check: logs/error.log, logs/access.log, and logs/fcgi-stderr.log${NC}"
echo -e "${YELLOW}To monitor logs, check: logs/nginx/error.log, logs/nginx/access.log, logs/app/stderr.log, logs/app/stdout.log${NC}"
echo -e "\n${YELLOW}Server is available at:${NC}"
echo -e " ${GREEN}HTTP:${NC} http://localhost:9001"
echo -e " ${GREEN}HTTPS:${NC} https://localhost:9443"


@@ -426,9 +426,17 @@ void handle_mirror_request(void) {
// Determine file extension from Content-Type using centralized mapping
const char* extension = mime_to_extension(content_type_final);
// Save file to blobs directory
char filepath[512];
snprintf(filepath, sizeof(filepath), "blobs/%s%s", sha256_hex, extension);
// Save file to storage directory using global g_storage_dir variable
char filepath[4096];
int filepath_len = snprintf(filepath, sizeof(filepath), "%s/%s%s", g_storage_dir, sha256_hex, extension);
if (filepath_len >= (int)sizeof(filepath)) {
free_mirror_download(download);
send_error_response(500, "file_error",
"File path too long",
"Internal server error during file path construction");
log_request("PUT", "/mirror", uploader_pubkey ? "authenticated" : "anonymous", 500);
return;
}
FILE* outfile = fopen(filepath, "wb");
if (!outfile) {
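The new length check leans on the C99 `snprintf` contract: the return value is the length the formatted string would have had, so a result at or above the buffer size signals truncation, and a negative result signals an encoding error (worth rejecting as well). A minimal sketch of the idiom, with illustrative names:

```c
#include <stdio.h>

/* Hedged sketch: build "<dir>/<hash><ext>" and reject anything that would
 * not fit, rather than silently writing to a truncated path. */
int build_blob_path(char *buf, size_t buf_size,
                    const char *dir, const char *hash, const char *ext) {
    int n = snprintf(buf, buf_size, "%s/%s%s", dir, hash, ext);
    return (n >= 0 && (size_t)n < buf_size) ? 0 : -1;
}
```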

View File

@@ -24,7 +24,7 @@ int nip94_is_enabled(void) {
return 1; // Default enabled on DB error
}
const char* sql = "SELECT value FROM server_config WHERE key = 'nip94_enabled'";
const char* sql = "SELECT value FROM config WHERE key = 'nip94_enabled'";
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
rc = sqlite3_step(stmt);
@@ -44,40 +44,53 @@ int nip94_get_origin(char* out, size_t out_size) {
if (!out || out_size == 0) {
return 0;
}
// Check database config first for custom origin
sqlite3* db;
sqlite3_stmt* stmt;
int rc;
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
if (rc) {
// Default on DB error
strncpy(out, "http://localhost:9001", out_size - 1);
out[out_size - 1] = '\0';
if (rc == SQLITE_OK) {
const char* sql = "SELECT value FROM config WHERE key = 'cdn_origin'";
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
rc = sqlite3_step(stmt);
if (rc == SQLITE_ROW) {
const char* value = (const char*)sqlite3_column_text(stmt, 0);
if (value) {
strncpy(out, value, out_size - 1);
out[out_size - 1] = '\0';
sqlite3_finalize(stmt);
sqlite3_close(db);
return 1;
}
}
sqlite3_finalize(stmt);
}
sqlite3_close(db);
}
// Check if request came over HTTPS (nginx sets HTTPS=on for SSL requests)
const char* https_env = getenv("HTTPS");
const char* server_name = getenv("SERVER_NAME");
// Use production domain if SERVER_NAME is set and not localhost
if (server_name && strcmp(server_name, "localhost") != 0) {
if (https_env && strcmp(https_env, "on") == 0) {
snprintf(out, out_size, "https://%s", server_name);
} else {
snprintf(out, out_size, "http://%s", server_name);
}
return 1;
}
const char* sql = "SELECT value FROM server_config WHERE key = 'cdn_origin'";
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
rc = sqlite3_step(stmt);
if (rc == SQLITE_ROW) {
const char* value = (const char*)sqlite3_column_text(stmt, 0);
if (value) {
strncpy(out, value, out_size - 1);
out[out_size - 1] = '\0';
sqlite3_finalize(stmt);
sqlite3_close(db);
return 1;
}
}
sqlite3_finalize(stmt);
// Fallback to localhost for development
if (https_env && strcmp(https_env, "on") == 0) {
strncpy(out, "https://localhost:9443", out_size - 1);
} else {
strncpy(out, "http://localhost:9001", out_size - 1);
}
sqlite3_close(db);
// Default fallback
strncpy(out, "http://localhost:9001", out_size - 1);
out[out_size - 1] = '\0';
return 1;
}
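Because removed and added lines are interleaved in the hunk above, the new resolution order is easy to misread. As rewritten, `nip94_get_origin` tries the `cdn_origin` key in the `config` table first, then the `SERVER_NAME`/`HTTPS` environment nginx sets, then a localhost default. A hedged condensation of the environment steps (names match the hunk; not a drop-in replacement):

```c
/* Condensed sketch of the fallback order the new code implements. */
const char *https = getenv("HTTPS");       /* nginx sets "on" for TLS */
const char *name  = getenv("SERVER_NAME");
int tls = https && strcmp(https, "on") == 0;
if (name && strcmp(name, "localhost") != 0)
    snprintf(out, out_size, "%s://%s", tls ? "https" : "http", name);
else
    snprintf(out, out_size, "%s", tls ? "https://localhost:9443"
                                      : "http://localhost:9001");
```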

View File

@@ -7,6 +7,11 @@
#ifndef GINXSOM_H
#define GINXSOM_H
// Version information (auto-updated by build system)
#define VERSION_MAJOR 0
#define VERSION_MINOR 1
#define VERSION_PATCH 6
#define VERSION "v0.1.6"
#include <stddef.h>
#include <stdint.h>
@@ -30,6 +35,10 @@ extern sqlite3* db;
int init_database(void);
void close_database(void);
// Global configuration variables (defined in main.c)
extern char g_db_path[4096];
extern char g_storage_dir[4096];
// SHA-256 extraction and validation
const char* extract_sha256_from_uri(const char* uri);

View File

@@ -7,6 +7,7 @@
#include "ginxsom.h"
#include "../nostr_core_lib/nostr_core/nostr_common.h"
#include "../nostr_core_lib/nostr_core/utils.h"
#include <getopt.h>
#include <curl/curl.h>
#include <fcgi_stdio.h>
#include <sqlite3.h>
@@ -22,11 +23,15 @@
// Debug macros removed
#define MAX_SHA256_LEN 65
#define MAX_PATH_LEN 512
#define MAX_PATH_LEN 4096
#define MAX_MIME_LEN 128
// Database path
#define DB_PATH "db/ginxsom.db"
// Configuration variables - can be overridden via command line
char g_db_path[MAX_PATH_LEN] = "db/ginxsom.db";
char g_storage_dir[MAX_PATH_LEN] = ".";
// Use global configuration variables
#define DB_PATH g_db_path
// Configuration system implementation
@@ -35,22 +40,6 @@
#include <sys/stat.h>
#include <unistd.h>
// ===== UNUSED CODE - SAFE TO REMOVE AFTER TESTING =====
// Server configuration structure
/*
typedef struct {
char admin_pubkey[256];
char admin_enabled[8];
int config_loaded;
} server_config_t;
// Global configuration instance
static server_config_t g_server_config = {0};
// Global server private key (stored in memory only for security)
static char server_private_key[128] = {0};
*/
// ===== END UNUSED CODE =====
// Function to get XDG config directory
const char *get_config_dir(char *buffer, size_t buffer_size) {
@@ -70,240 +59,6 @@ const char *get_config_dir(char *buffer, size_t buffer_size) {
return ".config/ginxsom";
}
/*
// ===== UNUSED CODE - SAFE TO REMOVE AFTER TESTING =====
// Load server configuration from database or create defaults
int initialize_server_config(void) {
sqlite3 *db = NULL;
sqlite3_stmt *stmt = NULL;
int rc;
memset(&g_server_config, 0, sizeof(g_server_config));
// Open database
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READONLY, NULL);
if (rc != SQLITE_OK) {
fprintf(stderr, "CONFIG: Could not open database for config: %s\n",
sqlite3_errmsg(db));
// Config database doesn't exist - leave config uninitialized
g_server_config.config_loaded = 0;
return 0;
}
// Load admin_pubkey
const char *sql = "SELECT value FROM config WHERE key = ?";
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, "admin_pubkey", -1, SQLITE_STATIC);
rc = sqlite3_step(stmt);
if (rc == SQLITE_ROW) {
const char *value = (const char *)sqlite3_column_text(stmt, 0);
if (value) {
strncpy(g_server_config.admin_pubkey, value,
sizeof(g_server_config.admin_pubkey) - 1);
}
}
sqlite3_finalize(stmt);
}
// Load admin_enabled
rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, "admin_enabled", -1, SQLITE_STATIC);
rc = sqlite3_step(stmt);
if (rc == SQLITE_ROW) {
const char *value = (const char *)sqlite3_column_text(stmt, 0);
if (value && strcmp(value, "true") == 0) {
strcpy(g_server_config.admin_enabled, "true");
} else {
strcpy(g_server_config.admin_enabled, "false");
}
}
sqlite3_finalize(stmt);
}
sqlite3_close(db);
g_server_config.config_loaded = 1;
fprintf(stderr, "CONFIG: Server configuration loaded\n");
return 1;
}
// ===== END UNUSED CODE =====
*/
/*
// File-based configuration system
// Config file path resolution
int get_config_file_path(char *path, size_t path_size) {
const char *home = getenv("HOME");
const char *xdg_config = getenv("XDG_CONFIG_HOME");
if (xdg_config) {
snprintf(path, path_size, "%s/ginxsom/ginxsom_config_event.json",
xdg_config);
} else if (home) {
snprintf(path, path_size, "%s/.config/ginxsom/ginxsom_config_event.json",
home);
} else {
return 0;
}
return 1;
}
*/
/*
// Load and validate config event
int load_server_config(const char *config_path) {
FILE *file = fopen(config_path, "r");
if (!file) {
return 0; // Config file doesn't exist
}
// Read entire file
fseek(file, 0, SEEK_END);
long file_size = ftell(file);
fseek(file, 0, SEEK_SET);
char *json_data = malloc(file_size + 1);
if (!json_data) {
fclose(file);
return 0;
}
fread(json_data, 1, file_size, file);
json_data[file_size] = '\0';
fclose(file);
// Parse and validate JSON event
cJSON *event = cJSON_Parse(json_data);
free(json_data);
if (!event) {
fprintf(stderr, "Invalid JSON in config file\n");
return 0;
}
// Validate event structure and signature
if (nostr_validate_event(event) != NOSTR_SUCCESS) {
fprintf(stderr, "Invalid or corrupted config event\n");
cJSON_Delete(event);
return 0;
}
// Extract configuration and apply to server
int result = apply_config_from_event(event);
cJSON_Delete(event);
return result;
}
*/
/*
// Extract config from validated event and apply to server
int apply_config_from_event(cJSON *event) {
sqlite3 *db;
sqlite3_stmt *stmt;
int rc;
// Open database for config storage
rc = sqlite3_open_v2(DB_PATH, &db, SQLITE_OPEN_READWRITE, NULL);
if (rc) {
fprintf(stderr, "Failed to open database for config\n");
return 0;
}
// Extract admin pubkey from event
cJSON *pubkey_json = cJSON_GetObjectItem(event, "pubkey");
if (!pubkey_json || !cJSON_IsString(pubkey_json)) {
sqlite3_close(db);
return 0;
}
const char *admin_pubkey = cJSON_GetStringValue(pubkey_json);
// Store admin pubkey in database
const char *insert_sql = "INSERT OR REPLACE INTO config (key, value, "
"description) VALUES (?, ?, ?)";
rc = sqlite3_prepare_v2(db, insert_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, "admin_pubkey", -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, admin_pubkey, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 3, "Admin public key from config event", -1,
SQLITE_STATIC);
sqlite3_step(stmt);
sqlite3_finalize(stmt);
}
// Extract server private key and store securely (in memory only)
cJSON *tags = cJSON_GetObjectItem(event, "tags");
if (tags && cJSON_IsArray(tags)) {
cJSON *tag = NULL;
cJSON_ArrayForEach(tag, tags) {
if (!cJSON_IsArray(tag))
continue;
cJSON *tag_name = cJSON_GetArrayItem(tag, 0);
cJSON *tag_value = cJSON_GetArrayItem(tag, 1);
if (!tag_name || !cJSON_IsString(tag_name) || !tag_value ||
!cJSON_IsString(tag_value))
continue;
const char *key = cJSON_GetStringValue(tag_name);
const char *value = cJSON_GetStringValue(tag_value);
if (strcmp(key, "server_privkey") == 0) {
// Store server private key in global variable (memory only)
// strncpy(server_private_key, value, sizeof(server_private_key) - 1);
// server_private_key[sizeof(server_private_key) - 1] = '\0';
} else {
// Store other config values in database
rc = sqlite3_prepare_v2(db, insert_sql, -1, &stmt, NULL);
if (rc == SQLITE_OK) {
sqlite3_bind_text(stmt, 1, key, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, value, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 3, "From config event", -1, SQLITE_STATIC);
sqlite3_step(stmt);
sqlite3_finalize(stmt);
}
}
}
}
sqlite3_close(db);
return 1;
}
*/
/*
// Interactive setup runner
int run_interactive_setup(const char *config_path) {
printf("\n=== Ginxsom First-Time Setup Required ===\n");
printf("No configuration found at: %s\n\n", config_path);
printf("Options:\n");
printf("1. Run interactive setup wizard\n");
printf("2. Exit and create config manually\n");
printf("Choice (1/2): ");
char choice[10];
if (!fgets(choice, sizeof(choice), stdin)) {
return 1;
}
if (choice[0] == '1') {
// Run setup script
char script_path[512];
snprintf(script_path, sizeof(script_path), "./scripts/setup.sh \"%s\"",
config_path);
return system(script_path);
} else {
printf("\nManual setup instructions:\n");
printf("1. Run: ./scripts/generate_config.sh\n");
printf("2. Place signed config at: %s\n", config_path);
printf("3. Restart ginxsom\n");
return 1;
}
}
*/
// Function declarations
void handle_options_request(void);
@@ -434,7 +189,23 @@ int file_exists_with_type(const char *sha256, const char *mime_type) {
char filepath[MAX_PATH_LEN];
const char *extension = mime_to_extension(mime_type);
snprintf(filepath, sizeof(filepath), "blobs/%s%s", sha256, extension);
// Construct path safely
size_t dir_len = strlen(g_storage_dir);
size_t sha_len = strlen(sha256);
size_t ext_len = strlen(extension);
size_t total_len = dir_len + 1 + sha_len + ext_len + 1; // +1 for /, +1 for null
if (total_len > sizeof(filepath)) {
fprintf(stderr, "WARNING: File path too long for buffer: %s/%s%s\n", g_storage_dir, sha256, extension);
return 0;
}
// Build path manually to avoid compiler warnings
memcpy(filepath, g_storage_dir, dir_len);
filepath[dir_len] = '/';
memcpy(filepath + dir_len + 1, sha256, sha_len);
memcpy(filepath + dir_len + 1 + sha_len, extension, ext_len);
filepath[total_len - 1] = '\0';
struct stat st;
int result = stat(filepath, &st);
@@ -524,18 +295,6 @@ const char *extract_sha256_from_uri(const char *uri) {
return sha256_buffer;
}
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
// BUD 02 - Upload & Authentication
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
// AUTHENTICATION RULES SYSTEM (4.1.2)
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
@@ -911,7 +670,23 @@ void handle_delete_request_with_validation(const char *sha256, nostr_request_res
}
char filepath[MAX_PATH_LEN];
snprintf(filepath, sizeof(filepath), "blobs/%s%s", sha256, extension);
// Construct path safely
size_t dir_len = strlen(g_storage_dir);
size_t sha_len = strlen(sha256);
size_t ext_len = strlen(extension);
size_t total_len = dir_len + 1 + sha_len + ext_len + 1; // +1 for /, +1 for null
if (total_len > sizeof(filepath)) {
fprintf(stderr, "WARNING: File path too long for buffer: %s/%s%s\n", g_storage_dir, sha256, extension);
filepath[0] = '\0'; // Path would overflow; empty it so the unlink below fails cleanly instead of reading an uninitialized buffer
} else {
// Build path manually to avoid compiler warnings
memcpy(filepath, g_storage_dir, dir_len);
filepath[dir_len] = '/';
memcpy(filepath + dir_len + 1, sha256, sha_len);
memcpy(filepath + dir_len + 1 + sha_len, extension, ext_len);
filepath[total_len - 1] = '\0';
}
// Delete the physical file
if (unlink(filepath) != 0) {
@@ -1029,9 +804,29 @@ void handle_upload_request(void) {
// Determine file extension from Content-Type using centralized mapping
const char *extension = mime_to_extension(content_type);
// Save file to blobs directory with SHA-256 + extension
// Save file to storage directory with SHA-256 + extension
char filepath[MAX_PATH_LEN];
snprintf(filepath, sizeof(filepath), "blobs/%s%s", sha256_hex, extension);
// Construct path safely
size_t dir_len = strlen(g_storage_dir);
size_t sha_len = strlen(sha256_hex);
size_t ext_len = strlen(extension);
size_t total_len = dir_len + 1 + sha_len + ext_len + 1; // +1 for /, +1 for null
if (total_len > sizeof(filepath)) {
fprintf(stderr, "WARNING: File path too long for buffer: %s/%s%s\n", g_storage_dir, sha256_hex, extension);
printf("Status: 500 Internal Server Error\r\n");
printf("Content-Type: text/plain\r\n\r\n");
printf("File path too long\n");
return;
}
// Build path manually to avoid compiler warnings
memcpy(filepath, g_storage_dir, dir_len);
filepath[dir_len] = '/';
memcpy(filepath + dir_len + 1, sha256_hex, sha_len);
memcpy(filepath + dir_len + 1 + sha_len, extension, ext_len);
filepath[total_len - 1] = '\0';
FILE *outfile = fopen(filepath, "wb");
if (!outfile) {
@@ -1280,9 +1075,29 @@ void handle_upload_request_with_validation(nostr_request_result_t* validation_re
// Determine file extension from Content-Type using centralized mapping
const char *extension = mime_to_extension(content_type);
// Save file to blobs directory with SHA-256 + extension
// Save file to storage directory with SHA-256 + extension
char filepath[MAX_PATH_LEN];
snprintf(filepath, sizeof(filepath), "blobs/%s%s", sha256_hex, extension);
// Construct path safely
size_t dir_len = strlen(g_storage_dir);
size_t sha_len = strlen(sha256_hex);
size_t ext_len = strlen(extension);
size_t total_len = dir_len + 1 + sha_len + ext_len + 1; // +1 for /, +1 for null
if (total_len > sizeof(filepath)) {
fprintf(stderr, "WARNING: File path too long for buffer: %s/%s%s\n", g_storage_dir, sha256_hex, extension);
printf("Status: 500 Internal Server Error\r\n");
printf("Content-Type: text/plain\r\n\r\n");
printf("File path too long\n");
return;
}
// Build path manually to avoid compiler warnings
memcpy(filepath, g_storage_dir, dir_len);
filepath[dir_len] = '/';
memcpy(filepath + dir_len + 1, sha256_hex, sha_len);
memcpy(filepath + dir_len + 1 + sha_len, extension, ext_len);
filepath[total_len - 1] = '\0';
FILE *outfile = fopen(filepath, "wb");
if (!outfile) {
@@ -1480,7 +1295,31 @@ void handle_auth_challenge_request(void) {
/////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////
int main(void) {
int main(int argc, char *argv[]) {
// Parse command line arguments
for (int i = 1; i < argc; i++) {
if (strcmp(argv[i], "--db-path") == 0 && i + 1 < argc) {
strncpy(g_db_path, argv[i + 1], sizeof(g_db_path) - 1);
i++; // Skip next argument
} else if (strcmp(argv[i], "--storage-dir") == 0 && i + 1 < argc) {
strncpy(g_storage_dir, argv[i + 1], sizeof(g_storage_dir) - 1);
i++; // Skip next argument
} else if (strcmp(argv[i], "--help") == 0 || strcmp(argv[i], "-h") == 0) {
printf("Usage: %s [options]\n", argv[0]);
printf("Options:\n");
printf(" --db-path PATH Database file path (default: db/ginxsom.db)\n");
printf(" --storage-dir DIR Storage directory for files (default: blobs)\n");
printf(" --help, -h Show this help message\n");
return 0;
} else {
fprintf(stderr, "Unknown option: %s\n", argv[i]);
fprintf(stderr, "Use --help for usage information\n");
return 1;
}
}
fprintf(stderr, "STARTUP: Using database path: %s\n", g_db_path);
fprintf(stderr, "STARTUP: Using storage directory: %s\n", g_storage_dir);
// Initialize server configuration and identity
// Try file-based config first, then fall back to database config
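The hand-rolled argv loop above works; since this commit also adds `#include <getopt.h>`, the same flags could be expressed with `getopt_long`. A hedged alternative sketch, not the committed code:

```c
#include <getopt.h>
#include <stdio.h>
#include "ginxsom.h" /* g_db_path, g_storage_dir */

static const struct option long_opts[] = {
    {"db-path",     required_argument, NULL, 'd'},
    {"storage-dir", required_argument, NULL, 's'},
    {"help",        no_argument,       NULL, 'h'},
    {NULL, 0, NULL, 0}
};

/* Returns 0 to continue startup, 1 on usage error, 2 after printing help. */
int parse_args(int argc, char *argv[]) {
    int opt;
    while ((opt = getopt_long(argc, argv, "h", long_opts, NULL)) != -1) {
        switch (opt) {
        case 'd': snprintf(g_db_path, sizeof(g_db_path), "%s", optarg); break;
        case 's': snprintf(g_storage_dir, sizeof(g_storage_dir), "%s", optarg); break;
        case 'h': printf("Usage: %s [--db-path PATH] [--storage-dir DIR]\n", argv[0]); return 2;
        default:  return 1;
        }
    }
    return 0;
}
```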
@@ -1551,11 +1390,66 @@ if (!config_loaded /* && !initialize_server_config() */) {
/////////////////////////////////////////////////////////////////////
// CENTRALIZED REQUEST VALIDATION SYSTEM
/////////////////////////////////////////////////////////////////////
// Special case: Root endpoint is public and doesn't require authentication
if (strcmp(request_method, "GET") == 0 && strcmp(request_uri, "/") == 0) {
// Handle GET / requests - Server info endpoint
printf("Status: 200 OK\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\n");
printf(" \"server\": \"ginxsom\",\n");
printf(" \"version\": \"%s\",\n", VERSION);
printf(" \"description\": \"Ginxsom Blossom Server\",\n");
printf(" \"endpoints\": {\n");
printf(" \"blob_get\": \"GET /<sha256>\",\n");
printf(" \"blob_head\": \"HEAD /<sha256>\",\n");
printf(" \"upload\": \"PUT /upload\",\n");
printf(" \"upload_requirements\": \"HEAD /upload\",\n");
printf(" \"list\": \"GET /list/<pubkey>\",\n");
printf(" \"delete\": \"DELETE /<sha256>\",\n");
printf(" \"mirror\": \"PUT /mirror\",\n");
printf(" \"report\": \"PUT /report\",\n");
printf(" \"health\": \"GET /health\",\n");
printf(" \"auth\": \"GET /auth\"\n");
printf(" },\n");
printf(" \"supported_buds\": [\n");
printf(" \"BUD-01\",\n");
printf(" \"BUD-02\",\n");
printf(" \"BUD-04\",\n");
printf(" \"BUD-06\",\n");
printf(" \"BUD-08\",\n");
printf(" \"BUD-09\"\n");
printf(" ],\n");
printf(" \"limits\": {\n");
printf(" \"max_upload_size\": 104857600,\n");
printf(" \"supported_mime_types\": [\n");
printf(" \"image/jpeg\",\n");
printf(" \"image/png\",\n");
printf(" \"image/webp\",\n");
printf(" \"image/gif\",\n");
printf(" \"video/mp4\",\n");
printf(" \"video/webm\",\n");
printf(" \"audio/mpeg\",\n");
printf(" \"audio/ogg\",\n");
printf(" \"text/plain\",\n");
printf(" \"application/pdf\"\n");
printf(" ]\n");
printf(" },\n");
printf(" \"authentication\": {\n");
printf(" \"required_for_upload\": false,\n");
printf(" \"required_for_delete\": true,\n");
printf(" \"required_for_list\": false,\n");
printf(" \"nip42_enabled\": true\n");
printf(" }\n");
printf("}\n");
log_request("GET", "/", "server_info", 200);
continue;
}
// Determine operation from request method and URI
const char *operation = "unknown";
const char *resource_hash = NULL;
if (strcmp(request_method, "HEAD") == 0 && strcmp(request_uri, "/upload") == 0) {
operation = "head_upload";
} else if (strcmp(request_method, "HEAD") == 0) {
@@ -1723,6 +1617,58 @@ if (!config_loaded /* && !initialize_server_config() */) {
"Pubkey must be 64 hex characters");
log_request("GET", request_uri, "none", 400);
}
} else if (strcmp(request_method, "GET") == 0 &&
strcmp(request_uri, "/") == 0) {
// Handle GET / requests - Server info endpoint
printf("Status: 200 OK\r\n");
printf("Content-Type: application/json\r\n\r\n");
printf("{\n");
printf(" \"server\": \"ginxsom\",\n");
printf(" \"version\": \"%s\",\n", VERSION);
printf(" \"description\": \"Ginxsom Blossom Server\",\n");
printf(" \"endpoints\": {\n");
printf(" \"blob_get\": \"GET /<sha256>\",\n");
printf(" \"blob_head\": \"HEAD /<sha256>\",\n");
printf(" \"upload\": \"PUT /upload\",\n");
printf(" \"upload_requirements\": \"HEAD /upload\",\n");
printf(" \"list\": \"GET /list/<pubkey>\",\n");
printf(" \"delete\": \"DELETE /<sha256>\",\n");
printf(" \"mirror\": \"PUT /mirror\",\n");
printf(" \"report\": \"PUT /report\",\n");
printf(" \"health\": \"GET /health\",\n");
printf(" \"auth\": \"GET /auth\"\n");
printf(" },\n");
printf(" \"supported_buds\": [\n");
printf(" \"BUD-01\",\n");
printf(" \"BUD-02\",\n");
printf(" \"BUD-04\",\n");
printf(" \"BUD-06\",\n");
printf(" \"BUD-08\",\n");
printf(" \"BUD-09\"\n");
printf(" ],\n");
printf(" \"limits\": {\n");
printf(" \"max_upload_size\": 104857600,\n");
printf(" \"supported_mime_types\": [\n");
printf(" \"image/jpeg\",\n");
printf(" \"image/png\",\n");
printf(" \"image/webp\",\n");
printf(" \"image/gif\",\n");
printf(" \"video/mp4\",\n");
printf(" \"video/webm\",\n");
printf(" \"audio/mpeg\",\n");
printf(" \"audio/ogg\",\n");
printf(" \"text/plain\",\n");
printf(" \"application/pdf\"\n");
printf(" ]\n");
printf(" },\n");
printf(" \"authentication\": {\n");
printf(" \"required_for_upload\": false,\n");
printf(" \"required_for_delete\": true,\n");
printf(" \"required_for_list\": false,\n");
printf(" \"nip42_enabled\": true\n");
printf(" }\n");
printf("}\n");
log_request("GET", "/", "server_info", 200);
} else if (strcmp(request_method, "GET") == 0 &&
strcmp(request_uri, "/auth") == 0) {
// Handle GET /auth requests using the existing handler
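A quick smoke test for the new root endpoint against the dev HTTPS listener (port and `-k` taken from the test scripts below; `jq` is already a dependency of the mirror test):

```bash
# -k tolerates the self-signed dev cert; production certs should validate normally.
curl -ks https://localhost:9443/ | jq '{server, version, supported_buds}'
```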

7
test_blob_1762770636.txt Normal file
View File

@@ -0,0 +1,7 @@
Test blob content for Ginxsom Blossom server (PRODUCTION)
Timestamp: 2025-11-10T06:30:36-04:00
Random data: bb7d8d5206aadf4ecd48829f9674454c160bae68e98c6ce5f7f678f997dbe86a
Test message: Hello from production test!
This file is used to test the upload functionality
of the Ginxsom Blossom server on blossom.laantungir.net

1
test_file.txt Normal file
View File

@@ -0,0 +1 @@
test file content

View File

@@ -6,7 +6,8 @@
set -e # Exit on any error
# Configuration
SERVER_URL="http://localhost:9001"
# SERVER_URL="http://localhost:9001"
SERVER_URL="https://localhost:9443"
UPLOAD_ENDPOINT="${SERVER_URL}/upload"
TEST_FILE="test_blob_$(date +%s).txt"
CLEANUP_FILES=()
@@ -87,7 +88,7 @@ check_prerequisites() {
check_server() {
log_info "Checking if server is running..."
if curl -s -f "${SERVER_URL}/health" > /dev/null 2>&1; then
if curl -k -s -f "${SERVER_URL}/health" > /dev/null 2>&1; then
log_success "Server is running at ${SERVER_URL}"
else
log_error "Server is not responding at ${SERVER_URL}"
@@ -168,7 +169,7 @@ perform_upload() {
CLEANUP_FILES+=("${RESPONSE_FILE}")
# Perform the upload with verbose output
HTTP_STATUS=$(curl -s -w "%{http_code}" \
HTTP_STATUS=$(curl -k -s -w "%{http_code}" \
-X PUT \
-H "Authorization: ${AUTH_HEADER}" \
-H "Content-Type: text/plain" \
@@ -217,7 +218,7 @@ test_retrieval() {
RETRIEVAL_URL="${SERVER_URL}/${HASH}"
if curl -s -f "${RETRIEVAL_URL}" > /dev/null 2>&1; then
if curl -k -s -f "${RETRIEVAL_URL}" > /dev/null 2>&1; then
log_success "File can be retrieved at: ${RETRIEVAL_URL}"
else
log_warning "File not yet available for retrieval (expected if upload processing not implemented)"

265
tests/file_put_production.sh Executable file
View File

@@ -0,0 +1,265 @@
#!/bin/bash
# file_put_production.sh - Test script for production Ginxsom Blossom server
# Tests upload functionality on blossom.laantungir.net
set -e # Exit on any error
# Configuration
SERVER_URL="https://blossom.laantungir.net"
UPLOAD_ENDPOINT="${SERVER_URL}/upload"
TEST_FILE="test_blob_$(date +%s).txt"
CLEANUP_FILES=()
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Cleanup function
cleanup() {
echo -e "${YELLOW}Cleaning up temporary files...${NC}"
for file in "${CLEANUP_FILES[@]}"; do
if [[ -f "$file" ]]; then
rm -f "$file"
echo "Removed: $file"
fi
done
}
# Set up cleanup on exit
trap cleanup EXIT
# Helper functions
log_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
# Check prerequisites
check_prerequisites() {
log_info "Checking prerequisites..."
# Check if nak is installed
if ! command -v nak &> /dev/null; then
log_error "nak command not found. Please install nak first."
log_info "Install with: go install github.com/fiatjaf/nak@latest"
exit 1
fi
log_success "nak is installed"
# Check if curl is available
if ! command -v curl &> /dev/null; then
log_error "curl command not found. Please install curl."
exit 1
fi
log_success "curl is available"
# Check if sha256sum is available
if ! command -v sha256sum &> /dev/null; then
log_error "sha256sum command not found."
exit 1
fi
log_success "sha256sum is available"
# Check if base64 is available
if ! command -v base64 &> /dev/null; then
log_error "base64 command not found."
exit 1
fi
log_success "base64 is available"
}
# Check if server is running
check_server() {
log_info "Checking if server is running..."
if curl -s -f "${SERVER_URL}/health" > /dev/null 2>&1; then
log_success "Server is running at ${SERVER_URL}"
else
log_error "Server is not responding at ${SERVER_URL}"
exit 1
fi
}
# Create test file
create_test_file() {
log_info "Creating test file: ${TEST_FILE}"
# Create test content with timestamp and random data
cat > "${TEST_FILE}" << EOF
Test blob content for Ginxsom Blossom server (PRODUCTION)
Timestamp: $(date -Iseconds)
Random data: $(openssl rand -hex 32)
Test message: Hello from production test!
This file is used to test the upload functionality
of the Ginxsom Blossom server on blossom.laantungir.net
EOF
CLEANUP_FILES+=("${TEST_FILE}")
log_success "Created test file with $(wc -c < "${TEST_FILE}") bytes"
}
# Calculate file hash
calculate_hash() {
log_info "Calculating SHA-256 hash..."
HASH=$(sha256sum "${TEST_FILE}" | cut -d' ' -f1)
log_success "Data to hash: ${TEST_FILE}"
log_success "File hash: ${HASH}"
}
# Generate nostr event
generate_nostr_event() {
log_info "Generating kind 24242 nostr event with nak..."
# Calculate expiration time (1 hour from now)
EXPIRATION=$(date -d '+1 hour' +%s)
# Generate the event using nak
EVENT_JSON=$(nak event -k 24242 -c "" \
-t "t=upload" \
-t "x=${HASH}" \
-t "expiration=${EXPIRATION}")
if [[ -z "$EVENT_JSON" ]]; then
log_error "Failed to generate nostr event"
exit 1
fi
log_success "Generated nostr event"
echo "Event JSON: $EVENT_JSON"
}
# Create authorization header
create_auth_header() {
log_info "Creating authorization header..."
# Base64 encode the event (without newlines)
AUTH_B64=$(echo -n "$EVENT_JSON" | base64 -w 0)
AUTH_HEADER="Nostr ${AUTH_B64}"
log_success "Created authorization header"
echo "Auth header length: ${#AUTH_HEADER} characters"
}
# Perform upload
perform_upload() {
log_info "Performing upload to ${UPLOAD_ENDPOINT}..."
# Create temporary file for response
RESPONSE_FILE=$(mktemp)
CLEANUP_FILES+=("${RESPONSE_FILE}")
# Perform the upload with verbose output
HTTP_STATUS=$(curl -s -w "%{http_code}" \
-X PUT \
-H "Authorization: ${AUTH_HEADER}" \
-H "Content-Type: text/plain" \
-H "Content-Disposition: attachment; filename=\"${TEST_FILE}\"" \
--data-binary "@${TEST_FILE}" \
"${UPLOAD_ENDPOINT}" \
-o "${RESPONSE_FILE}")
echo "HTTP Status: ${HTTP_STATUS}"
echo "Response body:"
cat "${RESPONSE_FILE}"
echo
# Check response
case "${HTTP_STATUS}" in
200)
log_success "Upload successful!"
;;
201)
log_success "Upload successful (created)!"
;;
400)
log_error "Bad request - check the event format"
;;
401)
log_error "Unauthorized - authentication failed"
;;
405)
log_error "Method not allowed - check nginx configuration"
;;
413)
log_error "Payload too large"
;;
501)
log_warning "Upload endpoint not yet implemented (expected for now)"
;;
*)
log_error "Upload failed with HTTP status: ${HTTP_STATUS}"
;;
esac
}
# Test file retrieval
test_retrieval() {
log_info "Testing file retrieval..."
RETRIEVAL_URL="${SERVER_URL}/${HASH}"
if curl -s -f "${RETRIEVAL_URL}" > /dev/null 2>&1; then
log_success "File can be retrieved at: ${RETRIEVAL_URL}"
# Download and verify
DOWNLOADED_FILE=$(mktemp)
CLEANUP_FILES+=("${DOWNLOADED_FILE}")
curl -s "${RETRIEVAL_URL}" -o "${DOWNLOADED_FILE}"
DOWNLOADED_HASH=$(sha256sum "${DOWNLOADED_FILE}" | cut -d' ' -f1)
if [[ "${DOWNLOADED_HASH}" == "${HASH}" ]]; then
log_success "Downloaded file hash matches! Verification successful."
else
log_error "Hash mismatch! Expected: ${HASH}, Got: ${DOWNLOADED_HASH}"
fi
else
log_warning "File not yet available for retrieval"
fi
}
# Main execution
main() {
echo "=== Ginxsom Blossom Production Upload Test ==="
echo "Server: ${SERVER_URL}"
echo "Timestamp: $(date -Iseconds)"
echo
check_prerequisites
check_server
create_test_file
calculate_hash
generate_nostr_event
create_auth_header
perform_upload
test_retrieval
echo
log_info "Test completed!"
echo "Summary:"
echo " Test file: ${TEST_FILE}"
echo " File hash: ${HASH}"
echo " Server: ${SERVER_URL}"
echo " Upload endpoint: ${UPLOAD_ENDPOINT}"
echo " Retrieval URL: ${SERVER_URL}/${HASH}"
}
# Run main function
main "$@"

View File

@@ -3,18 +3,31 @@
# Mirror Test Script for BUD-04
# Tests the PUT /mirror endpoint with a sample PNG file and NIP-42 authentication
# ============================================================================
# CONFIGURATION - Choose your target Blossom server
# ============================================================================
# Local server (uncomment to use)
BLOSSOM_SERVER="http://localhost:9001"
# Remote server (uncomment to use)
#BLOSSOM_SERVER="https://blossom.laantungir.net"
# ============================================================================
# Test URL - PNG file with known SHA-256 hash
TEST_URL="https://laantungir.github.io/img_repo/24308d48eb498b593e55a87b6300ccffdea8432babc0bb898b1eff21ebbb72de.png"
EXPECTED_HASH="24308d48eb498b593e55a87b6300ccffdea8432babc0bb898b1eff21ebbb72de"
echo "=== BUD-04 Mirror Endpoint Test with Authentication ==="
echo "Blossom Server: $BLOSSOM_SERVER"
echo "Target URL: $TEST_URL"
echo "Expected Hash: $EXPECTED_HASH"
echo ""
# Get a fresh challenge from the server
echo "=== Getting Authentication Challenge ==="
challenge=$(curl -s "http://localhost:9001/auth" | jq -r '.challenge')
challenge=$(curl -s "$BLOSSOM_SERVER/auth" | jq -r '.challenge')
if [ "$challenge" = "null" ] || [ -z "$challenge" ]; then
echo "❌ Failed to get challenge from server"
exit 1
@@ -48,7 +61,7 @@ RESPONSE=$(curl -s -w "\nHTTP_CODE:%{http_code}\n" \
-H "Authorization: $auth_header" \
-H "Content-Type: application/json" \
-d "$JSON_BODY" \
http://localhost:9001/mirror)
"$BLOSSOM_SERVER/mirror")
echo "Response:"
echo "$RESPONSE"
@@ -65,9 +78,9 @@ if [ "$HTTP_CODE" = "200" ]; then
# Try to access the mirrored blob
echo ""
echo "=== Verifying Mirrored Blob ==="
echo "Attempting to fetch: http://localhost:9001/$EXPECTED_HASH.png"
echo "Attempting to fetch: $BLOSSOM_SERVER/$EXPECTED_HASH.png"
BLOB_RESPONSE=$(curl -s -w "HTTP_CODE:%{http_code}" -I "http://localhost:9001/$EXPECTED_HASH.png")
BLOB_RESPONSE=$(curl -s -w "HTTP_CODE:%{http_code}" -I "$BLOSSOM_SERVER/$EXPECTED_HASH.png")
BLOB_HTTP_CODE=$(echo "$BLOB_RESPONSE" | grep "HTTP_CODE:" | cut -d: -f2)
if [ "$BLOB_HTTP_CODE" = "200" ]; then
@@ -82,7 +95,7 @@ if [ "$HTTP_CODE" = "200" ]; then
# Test HEAD request for metadata
echo ""
echo "=== Testing HEAD Request ==="
HEAD_RESPONSE=$(curl -s -w "HTTP_CODE:%{http_code}" -I -X HEAD "http://localhost:9001/$EXPECTED_HASH")
HEAD_RESPONSE=$(curl -s -w "HTTP_CODE:%{http_code}" -I -X HEAD "$BLOSSOM_SERVER/$EXPECTED_HASH")
HEAD_HTTP_CODE=$(echo "$HEAD_RESPONSE" | grep "HTTP_CODE:" | cut -d: -f2)
if [ "$HEAD_HTTP_CODE" = "200" ]; then