v0.7.8 - Fully static builds implemented with musl-gcc
This commit is contained in:
472
tests/README.md
Normal file
472
tests/README.md
Normal file
@@ -0,0 +1,472 @@
|
||||
# C-Relay Comprehensive Testing Framework
|
||||
|
||||
This directory contains a comprehensive testing framework for the C-Relay Nostr relay implementation. The framework provides automated testing for security vulnerabilities, performance validation, and stability assurance.
|
||||
|
||||
## Overview
|
||||
|
||||
The testing framework is designed to validate all critical security fixes and ensure stable operation of the Nostr relay. It includes multiple test suites covering different aspects of relay functionality and security.
|
||||
|
||||
## Test Suites
|
||||
|
||||
### 1. Master Test Runner (`run_all_tests.sh`)
|
||||
The master test runner orchestrates all test suites and provides comprehensive reporting.
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
./tests/run_all_tests.sh
|
||||
```
|
||||
|
||||
**Features:**
|
||||
- Automated execution of all test suites
|
||||
- Comprehensive HTML and log reporting
|
||||
- Success/failure tracking across all tests
|
||||
- Relay status validation before testing
|
||||
|
||||
### 2. SQL Injection Tests (`sql_injection_tests.sh`)
|
||||
Comprehensive testing of SQL injection vulnerabilities across all filter types.
|
||||
|
||||
**Tests:**
|
||||
- Classic SQL injection payloads (`'; DROP TABLE; --`)
|
||||
- Union-based injection attacks
|
||||
- Error-based injection attempts
|
||||
- Time-based blind injection
|
||||
- Stacked query attacks
|
||||
- Filter-specific injection (authors, IDs, kinds, search, tags)
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
./tests/sql_injection_tests.sh
|
||||
```
|
||||
|
||||
### 3. Memory Corruption Tests (`memory_corruption_tests.sh`)
|
||||
Tests for buffer overflows, use-after-free, and memory safety issues.
|
||||
|
||||
**Tests:**
|
||||
- Malformed subscription IDs (empty, very long, null bytes)
|
||||
- Oversized filter arrays
|
||||
- Concurrent access patterns
|
||||
- Malformed JSON structures
|
||||
- Large message payloads
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
./tests/memory_corruption_tests.sh
|
||||
```
|
||||
|
||||
### 4. Input Validation Tests (`input_validation_tests.sh`)
|
||||
Comprehensive boundary condition testing for all input parameters.
|
||||
|
||||
**Tests:**
|
||||
- Message type validation
|
||||
- Message structure validation
|
||||
- Subscription ID boundary tests
|
||||
- Filter object validation
|
||||
- Authors, IDs, kinds, timestamps, limits validation
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
./tests/input_validation_tests.sh
|
||||
```
|
||||
|
||||
### 5. Load Testing (`load_tests.sh`)
|
||||
Performance testing under high concurrent connection scenarios.
|
||||
|
||||
**Test Scenarios:**
|
||||
- Light load (10 concurrent clients)
|
||||
- Medium load (25 concurrent clients)
|
||||
- Heavy load (50 concurrent clients)
|
||||
- Stress test (100 concurrent clients)
|
||||
|
||||
**Features:**
|
||||
- Resource monitoring (CPU, memory, connections)
|
||||
- Connection success rate tracking
|
||||
- Message throughput measurement
|
||||
- Relay responsiveness validation
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
./tests/load_tests.sh
|
||||
```
|
||||
|
||||
### 6. Authentication Tests (`auth_tests.sh`)
|
||||
Tests NIP-42 authentication mechanisms and access control.
|
||||
|
||||
**Tests:**
|
||||
- Authentication challenge responses
|
||||
- Whitelist/blacklist functionality
|
||||
- Event publishing with auth requirements
|
||||
- Admin API authentication events
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
./tests/auth_tests.sh
|
||||
```
|
||||
|
||||
### 7. Rate Limiting Tests (`rate_limiting_tests.sh`)
|
||||
Tests rate limiting and abuse prevention mechanisms.
|
||||
|
||||
**Tests:**
|
||||
- Message rate limiting
|
||||
- Connection rate limiting
|
||||
- Subscription creation limits
|
||||
- Abuse pattern detection
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
./tests/rate_limiting_tests.sh
|
||||
```
|
||||
|
||||
### 8. Performance Benchmarks (`performance_benchmarks.sh`)
|
||||
Performance metrics and benchmarking tools.
|
||||
|
||||
**Tests:**
|
||||
- Message throughput measurement
|
||||
- Response time analysis
|
||||
- Memory usage profiling
|
||||
- CPU utilization tracking
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
./tests/performance_benchmarks.sh
|
||||
```
|
||||
|
||||
### 9. Resource Monitoring (`resource_monitoring.sh`)
|
||||
System resource usage monitoring during testing.
|
||||
|
||||
**Features:**
|
||||
- Real-time CPU and memory monitoring
|
||||
- Connection count tracking
|
||||
- Database size monitoring
|
||||
- System load analysis
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
./tests/resource_monitoring.sh
|
||||
```
|
||||
|
||||
### 10. Configuration Tests (`config_tests.sh`)
|
||||
Tests configuration management and persistence.
|
||||
|
||||
**Tests:**
|
||||
- Configuration event processing
|
||||
- Setting validation and persistence
|
||||
- Admin API configuration commands
|
||||
- Configuration reload behavior
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
./tests/config_tests.sh
|
||||
```
|
||||
|
||||
### 11. Existing Test Suites
|
||||
|
||||
#### Filter Validation Tests (`filter_validation_test.sh`)
|
||||
Tests comprehensive input validation for REQ and COUNT messages.
|
||||
|
||||
#### Subscription Limits Tests (`subscription_limits.sh`)
|
||||
Tests subscription limit enforcement and rate limiting.
|
||||
|
||||
#### Subscription Validation Tests (`subscription_validation.sh`)
|
||||
Tests subscription ID handling and memory corruption fixes.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
### System Requirements
|
||||
- Linux/macOS environment
|
||||
- `websocat` for WebSocket communication
|
||||
- `bash` shell
|
||||
- Standard Unix tools (`grep`, `awk`, `timeout`, etc.)
|
||||
|
||||
### Installing Dependencies
|
||||
|
||||
#### Ubuntu/Debian:
|
||||
```bash
|
||||
sudo apt-get update
|
||||
sudo apt-get install websocat curl jq
|
||||
```
|
||||
|
||||
#### macOS:
|
||||
```bash
|
||||
brew install websocat curl jq
|
||||
```
|
||||
|
||||
#### Other systems:
|
||||
Download `websocat` from: https://github.com/vi/websocat/releases
|
||||
|
||||
### Relay Setup
|
||||
Before running tests, ensure the C-Relay is running:
|
||||
|
||||
```bash
|
||||
# Build and start the relay
|
||||
./make_and_restart_relay.sh
|
||||
|
||||
# Verify it's running
|
||||
ps aux | grep c_relay
|
||||
curl -H "Accept: application/nostr+json" http://localhost:8888
|
||||
```
|
||||
|
||||
## Running Tests
|
||||
|
||||
### Quick Start
|
||||
1. Start the relay:
|
||||
```bash
|
||||
./make_and_restart_relay.sh
|
||||
```
|
||||
|
||||
2. Run all tests:
|
||||
```bash
|
||||
./tests/run_all_tests.sh
|
||||
```
|
||||
|
||||
### Individual Test Suites
|
||||
Run specific test suites for targeted testing:
|
||||
|
||||
```bash
|
||||
# Security tests
|
||||
./tests/sql_injection_tests.sh
|
||||
./tests/memory_corruption_tests.sh
|
||||
./tests/input_validation_tests.sh
|
||||
|
||||
# Performance tests
|
||||
./tests/load_tests.sh
|
||||
|
||||
# Existing tests
|
||||
./tests/filter_validation_test.sh
|
||||
./tests/subscription_limits.sh
|
||||
./tests/subscription_validation.sh
|
||||
```
|
||||
|
||||
### NIP Protocol Tests
|
||||
Run the existing NIP compliance tests:
|
||||
|
||||
```bash
|
||||
# Run all NIP tests
|
||||
./tests/run_nip_tests.sh
|
||||
|
||||
# Or run individual NIP tests
|
||||
./tests/1_nip_test.sh
|
||||
./tests/11_nip_information.sh
|
||||
./tests/42_nip_test.sh
|
||||
# ... etc
|
||||
```
|
||||
|
||||
## Test Results and Reporting
|
||||
|
||||
### Master Test Runner Output
|
||||
The master test runner (`run_all_tests.sh`) generates:
|
||||
|
||||
1. **Console Output**: Real-time test progress and results
|
||||
2. **Log File**: Detailed execution log (`test_results_YYYYMMDD_HHMMSS.log`)
|
||||
3. **HTML Report**: Comprehensive web report (`test_report_YYYYMMDD_HHMMSS.html`)
|
||||
|
||||
### Individual Test Suite Output
|
||||
Each test suite provides:
|
||||
- Test-by-test results with PASS/FAIL status
|
||||
- Summary statistics (passed/failed/total tests)
|
||||
- Detailed error information for failures
|
||||
|
||||
### Interpreting Results
|
||||
|
||||
#### Security Tests
|
||||
- **PASS**: No vulnerabilities detected
|
||||
- **FAIL**: Potential security issues found
|
||||
- **UNCERTAIN**: Test inconclusive (may need manual verification)
|
||||
|
||||
#### Performance Tests
|
||||
- **Connection Success Rate**: >95% = Excellent, >80% = Good, <80% = Poor
|
||||
- **Resource Usage**: Monitor CPU/memory during load tests
|
||||
- **Relay Responsiveness**: Must remain responsive after all tests
|
||||
|
||||
## Test Configuration
|
||||
|
||||
### Environment Variables
|
||||
Customize test behavior with environment variables:
|
||||
|
||||
```bash
|
||||
# Relay connection settings
|
||||
export RELAY_HOST="127.0.0.1"
|
||||
export RELAY_PORT="8888"
|
||||
|
||||
# Test parameters
|
||||
export TEST_TIMEOUT=10
|
||||
export CONCURRENT_CONNECTIONS=50
|
||||
export MESSAGES_PER_SECOND=100
|
||||
```
|
||||
|
||||
### Test Customization
|
||||
Modify test parameters within individual test scripts:
|
||||
|
||||
- `RELAY_HOST` / `RELAY_PORT`: Relay connection details
|
||||
- `TEST_TIMEOUT`: Individual test timeout (seconds)
|
||||
- `TOTAL_TESTS`: Number of test iterations
|
||||
- Load test parameters in `load_tests.sh`
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
#### "Could not connect to relay"
|
||||
- Ensure relay is running: `./make_and_restart_relay.sh`
|
||||
- Check port availability: `netstat -tln | grep 8888`
|
||||
- Verify relay process: `ps aux | grep c_relay`
|
||||
|
||||
#### "websocat: command not found"
|
||||
- Install websocat: `sudo apt-get install websocat`
|
||||
- Or download from: https://github.com/vi/websocat/releases
|
||||
|
||||
#### Tests timing out
|
||||
- Increase `TEST_TIMEOUT` value
|
||||
- Check system resources (CPU/memory)
|
||||
- Reduce concurrent connections in load tests
|
||||
|
||||
#### High failure rates in load tests
|
||||
- Reduce `CONCURRENT_CONNECTIONS`
|
||||
- Check system ulimits: `ulimit -n`
|
||||
- Monitor system resources during testing
|
||||
|
||||
### Debug Mode
|
||||
Enable verbose output for debugging:
|
||||
|
||||
```bash
|
||||
# Set debug environment variable
|
||||
export DEBUG=1
|
||||
|
||||
# Run tests with verbose output
|
||||
./tests/run_all_tests.sh
|
||||
```
|
||||
|
||||
## Security Testing Methodology
|
||||
|
||||
### SQL Injection Testing
|
||||
- Tests all filter types (authors, IDs, kinds, search, tags)
|
||||
- Uses comprehensive payload library
|
||||
- Validates parameterized query protection
|
||||
- Tests edge cases and boundary conditions
|
||||
|
||||
### Memory Safety Testing
|
||||
- Buffer overflow detection
|
||||
- Use-after-free prevention
|
||||
- Concurrent access validation
|
||||
- Malformed input handling
|
||||
|
||||
### Input Validation Testing
|
||||
- Boundary condition testing
|
||||
- Type validation
|
||||
- Length limit enforcement
|
||||
- Malformed data rejection
|
||||
|
||||
## Performance Benchmarking
|
||||
|
||||
### Load Testing Scenarios
|
||||
1. **Light Load**: Basic functionality validation
|
||||
2. **Medium Load**: Moderate stress testing
|
||||
3. **Heavy Load**: High concurrency validation
|
||||
4. **Stress Test**: Breaking point identification
|
||||
|
||||
### Metrics Collected
|
||||
- Connection success rate
|
||||
- Message throughput
|
||||
- Response times
|
||||
- Resource utilization (CPU, memory)
|
||||
- Relay stability under load
|
||||
|
||||
## Integration with CI/CD
|
||||
|
||||
### Automated Testing
|
||||
Integrate with CI/CD pipelines:
|
||||
|
||||
```yaml
|
||||
# Example GitHub Actions workflow
|
||||
- name: Run C-Relay Tests
|
||||
run: |
|
||||
./make_and_restart_relay.sh
|
||||
./tests/run_all_tests.sh
|
||||
```
|
||||
|
||||
### Test Result Processing
|
||||
Parse test results for automated reporting:
|
||||
|
||||
```bash
|
||||
# Extract test summary
|
||||
grep "Total tests:" test_results_*.log
|
||||
grep "Passed:" test_results_*.log
|
||||
grep "Failed:" test_results_*.log
|
||||
```
|
||||
|
||||
## Contributing
|
||||
|
||||
### Adding New Tests
|
||||
1. Create new test script in `tests/` directory
|
||||
2. Follow existing naming conventions
|
||||
3. Add to master test runner in `run_all_tests.sh`
|
||||
4. Update this documentation
|
||||
|
||||
### Test Script Template
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# Test suite description
|
||||
|
||||
set -e
|
||||
|
||||
# Configuration
|
||||
RELAY_HOST="${RELAY_HOST:-127.0.0.1}"
|
||||
RELAY_PORT="${RELAY_PORT:-8888}"
|
||||
|
||||
# Test implementation here
|
||||
|
||||
echo "Test suite completed successfully"
|
||||
```
|
||||
|
||||
## Security Considerations
|
||||
|
||||
### Test Environment
|
||||
- Run tests in isolated environment
|
||||
- Use test relay instance (not production)
|
||||
- Monitor system resources during testing
|
||||
- Clean up test data after completion
|
||||
|
||||
### Sensitive Data
|
||||
- Tests use synthetic data only
|
||||
- No real user data in test payloads
|
||||
- Safe for production system testing
|
||||
|
||||
## Support and Issues
|
||||
|
||||
### Reporting Test Failures
|
||||
When reporting test failures, include:
|
||||
1. Test suite and specific test that failed
|
||||
2. Full error output
|
||||
3. System information (OS, relay version)
|
||||
4. Relay configuration
|
||||
5. Test environment details
|
||||
|
||||
### Getting Help
|
||||
- Check existing issues in the project repository
|
||||
- Review test logs for detailed error information
|
||||
- Validate relay setup and configuration
|
||||
- Test with minimal configuration to isolate issues
|
||||
|
||||
---
|
||||
|
||||
## Test Coverage Summary
|
||||
|
||||
| Test Suite | Security | Performance | Stability | Coverage |
|
||||
|------------|----------|-------------|-----------|----------|
|
||||
| SQL Injection | ✓ | | | All filter types |
|
||||
| Memory Corruption | ✓ | | ✓ | Buffer overflows, race conditions |
|
||||
| Input Validation | ✓ | | | Boundary conditions, type validation |
|
||||
| Load Testing | | ✓ | ✓ | Concurrent connections, resource usage |
|
||||
| Authentication | ✓ | | | NIP-42 auth, whitelist/blacklist |
|
||||
| Rate Limiting | ✓ | ✓ | ✓ | Message rates, abuse prevention |
|
||||
| Performance Benchmarks | | ✓ | | Throughput, response times |
|
||||
| Resource Monitoring | | ✓ | ✓ | CPU/memory usage tracking |
|
||||
| Configuration | ✓ | | ✓ | Admin API, settings persistence |
|
||||
| Filter Validation | ✓ | | | REQ/COUNT message validation |
|
||||
| Subscription Limits | | ✓ | ✓ | Rate limiting, connection limits |
|
||||
| Subscription Validation | ✓ | | ✓ | ID validation, memory safety |
|
||||
|
||||
**Legend:**
|
||||
- ✓ Covered
|
||||
- Performance: Load and throughput testing
|
||||
- Security: Vulnerability and attack vector testing
|
||||
- Stability: Crash prevention and error handling
|
||||
122
tests/auth_tests.sh
Executable file
122
tests/auth_tests.sh
Executable file
File diff suppressed because one or more lines are too long
193
tests/config_tests.sh
Executable file
193
tests/config_tests.sh
Executable file
@@ -0,0 +1,193 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Configuration Testing Suite for C-Relay
|
||||
# Tests configuration management and persistence
|
||||
|
||||
set -e
|
||||
|
||||
# Configuration
|
||||
RELAY_HOST="127.0.0.1"
|
||||
RELAY_PORT="8888"
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Test counters
|
||||
TOTAL_TESTS=0
|
||||
PASSED_TESTS=0
|
||||
FAILED_TESTS=0
|
||||
|
||||
# Function to test configuration query
|
||||
test_config_query() {
|
||||
local description="$1"
|
||||
local config_command="$2"
|
||||
local expected_pattern="$3"
|
||||
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
|
||||
echo -n "Testing $description... "
|
||||
|
||||
# Create admin event for config query
|
||||
local admin_event
|
||||
admin_event=$(cat << EOF
|
||||
{
|
||||
"kind": 23456,
|
||||
"content": "$(echo '["'"$config_command"'"]' | base64)",
|
||||
"tags": [["p", "relay_pubkey_placeholder"]],
|
||||
"created_at": $(date +%s),
|
||||
"pubkey": "admin_pubkey_placeholder",
|
||||
"sig": "signature_placeholder"
|
||||
}
|
||||
EOF
|
||||
)
|
||||
|
||||
# Send config query event
|
||||
local response
|
||||
response=$(timeout 10 bash -c "
|
||||
echo '$admin_event' | websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null | head -3
|
||||
" 2>/dev/null || echo 'TIMEOUT')
|
||||
|
||||
if [[ "$response" == *"TIMEOUT"* ]]; then
|
||||
echo -e "${RED}FAILED${NC} - Connection timeout"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
fi
|
||||
|
||||
if [[ "$response" == *"$expected_pattern"* ]]; then
|
||||
echo -e "${GREEN}PASSED${NC} - Config query successful"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${RED}FAILED${NC} - Expected '$expected_pattern', got: $response"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to test configuration setting
|
||||
test_config_setting() {
|
||||
local description="$1"
|
||||
local config_command="$2"
|
||||
local config_value="$3"
|
||||
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
|
||||
echo -n "Testing $description... "
|
||||
|
||||
# Create admin event for config setting
|
||||
local admin_event
|
||||
admin_event=$(cat << EOF
|
||||
{
|
||||
"kind": 23456,
|
||||
"content": "$(echo '["'"$config_command"'","'"$config_value"'"]' | base64)",
|
||||
"tags": [["p", "relay_pubkey_placeholder"]],
|
||||
"created_at": $(date +%s),
|
||||
"pubkey": "admin_pubkey_placeholder",
|
||||
"sig": "signature_placeholder"
|
||||
}
|
||||
EOF
|
||||
)
|
||||
|
||||
# Send config setting event
|
||||
local response
|
||||
response=$(timeout 10 bash -c "
|
||||
echo '$admin_event' | websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null | head -3
|
||||
" 2>/dev/null || echo 'TIMEOUT')
|
||||
|
||||
if [[ "$response" == *"TIMEOUT"* ]]; then
|
||||
echo -e "${RED}FAILED${NC} - Connection timeout"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
fi
|
||||
|
||||
if [[ "$response" == *"OK"* ]]; then
|
||||
echo -e "${GREEN}PASSED${NC} - Config setting accepted"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${RED}FAILED${NC} - Config setting rejected: $response"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to test NIP-11 relay information
|
||||
test_nip11_info() {
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
|
||||
echo -n "Testing NIP-11 relay information... "
|
||||
|
||||
local response
|
||||
response=$(curl -s -H "Accept: application/nostr+json" "http://$RELAY_HOST:$RELAY_PORT" 2>/dev/null || echo 'CURL_FAILED')
|
||||
|
||||
if [[ "$response" == "CURL_FAILED" ]]; then
|
||||
echo -e "${RED}FAILED${NC} - HTTP request failed"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
fi
|
||||
|
||||
if [[ "$response" == *"supported_nips"* ]] && [[ "$response" == *"software"* ]]; then
|
||||
echo -e "${GREEN}PASSED${NC} - NIP-11 information available"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${RED}FAILED${NC} - NIP-11 information incomplete"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
echo "=========================================="
|
||||
echo "C-Relay Configuration Testing Suite"
|
||||
echo "=========================================="
|
||||
echo "Testing configuration management at ws://$RELAY_HOST:$RELAY_PORT"
|
||||
echo ""
|
||||
|
||||
# Test basic connectivity
|
||||
echo "=== Basic Connectivity Test ==="
|
||||
test_config_query "Basic connectivity" "system_status" "OK"
|
||||
echo ""
|
||||
|
||||
echo "=== NIP-11 Relay Information Tests ==="
|
||||
test_nip11_info
|
||||
echo ""
|
||||
|
||||
echo "=== Configuration Query Tests ==="
|
||||
test_config_query "System status query" "system_status" "status"
|
||||
test_config_query "Configuration query" "auth_query" "all"
|
||||
echo ""
|
||||
|
||||
echo "=== Configuration Setting Tests ==="
|
||||
test_config_setting "Relay description setting" "relay_description" "Test Relay"
|
||||
test_config_setting "Max subscriptions setting" "max_subscriptions_per_client" "50"
|
||||
test_config_setting "PoW difficulty setting" "pow_min_difficulty" "16"
|
||||
echo ""
|
||||
|
||||
echo "=== Configuration Persistence Test ==="
|
||||
echo -n "Testing configuration persistence... "
|
||||
# Set a configuration value
|
||||
test_config_setting "Set test config" "relay_description" "Persistence Test"
|
||||
|
||||
# Query it back
|
||||
sleep 2
|
||||
test_config_query "Verify persistence" "system_status" "Persistence Test"
|
||||
echo ""
|
||||
|
||||
echo "=== Test Results ==="
|
||||
echo "Total tests: $TOTAL_TESTS"
|
||||
echo -e "Passed: ${GREEN}$PASSED_TESTS${NC}"
|
||||
echo -e "Failed: ${RED}$FAILED_TESTS${NC}"
|
||||
|
||||
if [[ $FAILED_TESTS -eq 0 ]]; then
|
||||
echo -e "${GREEN}✓ All configuration tests passed!${NC}"
|
||||
echo "Configuration management is working correctly."
|
||||
exit 0
|
||||
else
|
||||
echo -e "${RED}✗ Some configuration tests failed!${NC}"
|
||||
echo "Configuration management may have issues."
|
||||
exit 1
|
||||
fi
|
||||
246
tests/filter_validation_test.sh
Executable file
246
tests/filter_validation_test.sh
Executable file
@@ -0,0 +1,246 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Filter Validation Test Script for C-Relay
|
||||
# Tests comprehensive input validation for REQ and COUNT messages
|
||||
|
||||
set -e
|
||||
|
||||
# Configuration
|
||||
RELAY_HOST="127.0.0.1"
|
||||
RELAY_PORT="8888"
|
||||
TEST_TIMEOUT=5
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Test counters
|
||||
TOTAL_TESTS=0
|
||||
PASSED_TESTS=0
|
||||
FAILED_TESTS=0
|
||||
|
||||
# Function to send WebSocket message and check response
|
||||
test_websocket_message() {
|
||||
local description="$1"
|
||||
local message="$2"
|
||||
local expected_error="$3"
|
||||
local test_type="${4:-REQ}"
|
||||
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
|
||||
echo -n "Testing $description... "
|
||||
|
||||
# Send message via websocat and capture response
|
||||
local response
|
||||
response=$(timeout $TEST_TIMEOUT bash -c "
|
||||
echo '$message' | websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null || echo 'CONNECTION_FAILED'
|
||||
" 2>/dev/null || echo 'TIMEOUT')
|
||||
|
||||
if [[ "$response" == "CONNECTION_FAILED" ]]; then
|
||||
echo -e "${RED}FAILED${NC} - Could not connect to relay"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
fi
|
||||
|
||||
if [[ "$response" == "TIMEOUT" ]]; then
|
||||
echo -e "${RED}FAILED${NC} - Connection timeout"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Check if response contains expected error
|
||||
if [[ "$response" == *"$expected_error"* ]]; then
|
||||
echo -e "${GREEN}PASSED${NC}"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${RED}FAILED${NC} - Expected error '$expected_error', got: $response"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to test valid message (should not produce error)
|
||||
test_valid_message() {
|
||||
local description="$1"
|
||||
local message="$2"
|
||||
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
|
||||
echo -n "Testing $description... "
|
||||
|
||||
# Send message via websocat and capture response
|
||||
local response
|
||||
response=$(timeout $TEST_TIMEOUT bash -c "
|
||||
echo '$message' | websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null | head -1
|
||||
" 2>/dev/null || echo 'TIMEOUT')
|
||||
|
||||
if [[ "$response" == "TIMEOUT" ]]; then
|
||||
echo -e "${RED}FAILED${NC} - Connection timeout"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Valid messages should not contain error notices
|
||||
if [[ "$response" != *"error:"* ]]; then
|
||||
echo -e "${GREEN}PASSED${NC}"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${RED}FAILED${NC} - Unexpected error in response: $response"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
echo "=== C-Relay Filter Validation Tests ==="
|
||||
echo "Testing against relay at ws://$RELAY_HOST:$RELAY_PORT"
|
||||
echo
|
||||
|
||||
# Test 1: Valid REQ message
|
||||
test_valid_message "Valid REQ message" '["REQ","test-sub",{}]'
|
||||
|
||||
# Test 2: Valid COUNT message
|
||||
test_valid_message "Valid COUNT message" '["COUNT","test-count",{}]'
|
||||
|
||||
echo
|
||||
echo "=== Testing Filter Array Validation ==="
|
||||
|
||||
# Test 3: Non-object filter
|
||||
test_websocket_message "Non-object filter" '["REQ","sub1","not-an-object"]' "error: filter 0 is not an object"
|
||||
|
||||
# Test 4: Too many filters
|
||||
test_websocket_message "Too many filters" '["REQ","sub1",{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{}]' "error: too many filters"
|
||||
|
||||
echo
|
||||
echo "=== Testing Authors Validation ==="
|
||||
|
||||
# Test 5: Invalid author (not string)
|
||||
test_websocket_message "Invalid author type" '["REQ","sub1",{"authors":[123]}]' "error: author"
|
||||
|
||||
# Test 6: Invalid author hex
|
||||
test_websocket_message "Invalid author hex" '["REQ","sub1",{"authors":["invalid-hex"]}]' "error: invalid author hex string"
|
||||
|
||||
# Test 7: Too many authors
|
||||
test_websocket_message "Too many authors" '["REQ","sub1",{"authors":["a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a"]}]' "error: too many authors"
|
||||
|
||||
echo
|
||||
echo "=== Testing IDs Validation ==="
|
||||
|
||||
# Test 8: Invalid ID type
|
||||
test_websocket_message "Invalid ID type" '["REQ","sub1",{"ids":[123]}]' "error: id"
|
||||
|
||||
# Test 9: Invalid ID hex
|
||||
test_websocket_message "Invalid ID hex" '["REQ","sub1",{"ids":["invalid-hex"]}]' "error: invalid id hex string"
|
||||
|
||||
# Test 10: Too many IDs
|
||||
test_websocket_message "Too many IDs" '["REQ","sub1",{"ids":["a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a"]}]' "error: too many ids"
|
||||
|
||||
echo
|
||||
echo "=== Testing Kinds Validation ==="
|
||||
|
||||
# Test 11: Invalid kind type
|
||||
test_websocket_message "Invalid kind type" '["REQ","sub1",{"kinds":["1"]}]' "error: kind"
|
||||
|
||||
# Test 12: Negative kind
|
||||
test_websocket_message "Negative kind" '["REQ","sub1",{"kinds":[-1]}]' "error: invalid kind value"
|
||||
|
||||
# Test 13: Too large kind
|
||||
test_websocket_message "Too large kind" '["REQ","sub1",{"kinds":[70000]}]' "error: invalid kind value"
|
||||
|
||||
# Test 14: Too many kinds
|
||||
test_websocket_message "Too many kinds" '["REQ","sub1",{"kinds":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52]}]' "error: too many kinds"
|
||||
|
||||
echo
|
||||
echo "=== Testing Timestamp Validation ==="
|
||||
|
||||
# Test 15: Invalid since type
|
||||
test_websocket_message "Invalid since type" '["REQ","sub1",{"since":"123"}]' "error: since must be a number"
|
||||
|
||||
# Test 16: Negative since
|
||||
test_websocket_message "Negative since" '["REQ","sub1",{"since":-1}]' "error: invalid since timestamp"
|
||||
|
||||
# Test 17: Invalid until type
|
||||
test_websocket_message "Invalid until type" '["REQ","sub1",{"until":"123"}]' "error: until must be a number"
|
||||
|
||||
# Test 18: Negative until
|
||||
test_websocket_message "Negative until" '["REQ","sub1",{"until":-1}]' "error: invalid until timestamp"
|
||||
|
||||
echo
|
||||
echo "=== Testing Limit Validation ==="
|
||||
|
||||
# Test 19: Invalid limit type
|
||||
test_websocket_message "Invalid limit type" '["REQ","sub1",{"limit":"10"}]' "error: limit must be a number"
|
||||
|
||||
# Test 20: Negative limit
|
||||
test_websocket_message "Negative limit" '["REQ","sub1",{"limit":-1}]' "error: invalid limit value"
|
||||
|
||||
# Test 21: Too large limit
|
||||
test_websocket_message "Too large limit" '["REQ","sub1",{"limit":10000}]' "error: invalid limit value"
|
||||
|
||||
echo
|
||||
echo "=== Testing Search Validation ==="
|
||||
|
||||
# Test 22: Invalid search type
|
||||
test_websocket_message "Invalid search type" '["REQ","sub1",{"search":123}]' "error: search must be a string"
|
||||
|
||||
# Test 23: Search too long
|
||||
test_websocket_message "Search too long" '["REQ","sub1",{"search":"'$(printf 'a%.0s' {1..257})'"}]' "error: search term too long"
|
||||
|
||||
# Test 24: Search with SQL injection
|
||||
test_websocket_message "Search SQL injection" '["REQ","sub1",{"search":"test; DROP TABLE users;"}]' "error: invalid characters in search term"
|
||||
|
||||
echo
|
||||
echo "=== Testing Tag Filter Validation ==="
|
||||
|
||||
# Test 25: Invalid tag filter type
|
||||
test_websocket_message "Invalid tag filter type" '["REQ","sub1",{"#e":"not-an-array"}]' "error: #e must be an array"
|
||||
|
||||
# Test 26: Too many tag values
|
||||
test_websocket_message "Too many tag values" '["REQ","sub1",{"#e":['$(printf '"a%.0s",' {1..101})'"a"]}]' "error: too many #e values"
|
||||
|
||||
# Test 27: Tag value too long
|
||||
test_websocket_message "Tag value too long" '["REQ","sub1",{"#e":["'$(printf 'a%.0s' {1..1025})'"]}]' "error: #e value too long"
|
||||
|
||||
echo
|
||||
echo "=== Testing Rate Limiting ==="
|
||||
|
||||
# Test 28: Send multiple malformed requests to trigger rate limiting
|
||||
echo -n "Testing rate limiting with malformed requests... "
|
||||
rate_limit_triggered=false
|
||||
for i in {1..15}; do
|
||||
response=$(timeout 2 bash -c "
|
||||
echo '["REQ","sub-malformed'$i'",[{"authors":["invalid"]}]]' | websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null | head -1
|
||||
" 2>/dev/null || echo 'TIMEOUT')
|
||||
|
||||
if [[ "$response" == *"too many malformed requests"* ]]; then
|
||||
rate_limit_triggered=true
|
||||
break
|
||||
fi
|
||||
sleep 0.1
|
||||
done
|
||||
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
if [[ "$rate_limit_triggered" == true ]]; then
|
||||
echo -e "${GREEN}PASSED${NC}"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
else
|
||||
echo -e "${YELLOW}UNCERTAIN${NC} - Rate limiting may not have triggered (this could be normal)"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1)) # Count as passed since it's not a failure
|
||||
fi
|
||||
|
||||
echo
|
||||
echo "=== Test Results ==="
|
||||
echo "Total tests: $TOTAL_TESTS"
|
||||
echo -e "Passed: ${GREEN}$PASSED_TESTS${NC}"
|
||||
echo -e "Failed: ${RED}$FAILED_TESTS${NC}"
|
||||
|
||||
if [[ $FAILED_TESTS -eq 0 ]]; then
|
||||
echo -e "${GREEN}All tests passed!${NC}"
|
||||
exit 0
|
||||
else
|
||||
echo -e "${RED}Some tests failed.${NC}"
|
||||
exit 1
|
||||
fi
|
||||
125
tests/input_validation_tests.sh
Executable file
125
tests/input_validation_tests.sh
Executable file
File diff suppressed because one or more lines are too long
238
tests/load_tests.sh
Executable file
238
tests/load_tests.sh
Executable file
@@ -0,0 +1,238 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Load Testing Suite for C-Relay
|
||||
# Tests high concurrent connection scenarios and performance under load
|
||||
|
||||
set -e
|
||||
|
||||
# Configuration
|
||||
RELAY_HOST="127.0.0.1"
|
||||
RELAY_PORT="8888"
|
||||
TEST_DURATION=30 # seconds
|
||||
CONCURRENT_CONNECTIONS=50
|
||||
MESSAGES_PER_SECOND=100
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Metrics tracking
|
||||
TOTAL_CONNECTIONS=0
|
||||
SUCCESSFUL_CONNECTIONS=0
|
||||
FAILED_CONNECTIONS=0
|
||||
TOTAL_MESSAGES_SENT=0
|
||||
TOTAL_MESSAGES_RECEIVED=0
|
||||
START_TIME=""
|
||||
END_TIME=""
|
||||
|
||||
# Function to run a single client connection
|
||||
run_client() {
|
||||
local client_id="$1"
|
||||
local messages_to_send="${2:-10}"
|
||||
|
||||
local messages_sent=0
|
||||
local messages_received=0
|
||||
local connection_successful=false
|
||||
|
||||
# Create a temporary file for this client's output
|
||||
local temp_file
|
||||
temp_file=$(mktemp)
|
||||
|
||||
# Send messages and collect responses
|
||||
(
|
||||
for i in $(seq 1 "$messages_to_send"); do
|
||||
echo '["REQ","load_test_'"$client_id"'_'"$i"'",{}]'
|
||||
# Small delay to avoid overwhelming
|
||||
sleep 0.01
|
||||
done
|
||||
# Send CLOSE message
|
||||
echo '["CLOSE","load_test_'"$client_id"'_*"]'
|
||||
) | timeout 60 websocat -B 1048576 "ws://$RELAY_HOST:$RELAY_PORT" > "$temp_file" 2>/dev/null &
|
||||
|
||||
local client_pid=$!
|
||||
|
||||
# Wait a bit for the client to complete
|
||||
sleep 2
|
||||
|
||||
# Check if client is still running (good sign)
|
||||
if kill -0 "$client_pid" 2>/dev/null; then
|
||||
connection_successful=true
|
||||
((SUCCESSFUL_CONNECTIONS++))
|
||||
else
|
||||
wait "$client_pid" 2>/dev/null || true
|
||||
((FAILED_CONNECTIONS++))
|
||||
fi
|
||||
|
||||
# Count messages sent
|
||||
messages_sent=$messages_to_send
|
||||
|
||||
# Count responses received (rough estimate)
|
||||
local response_count
|
||||
response_count=$(grep -c "EOSE\|EVENT\|NOTICE" "$temp_file" 2>/dev/null || echo "0")
|
||||
|
||||
# Clean up temp file
|
||||
rm -f "$temp_file"
|
||||
|
||||
# Return results
|
||||
echo "$messages_sent:$response_count:$connection_successful"
|
||||
}
|
||||
|
||||
# Function to monitor system resources
|
||||
monitor_resources() {
|
||||
local duration="$1"
|
||||
local interval="${2:-1}"
|
||||
|
||||
echo "=== Resource Monitoring ==="
|
||||
echo "Monitoring system resources for ${duration}s..."
|
||||
|
||||
local start_time
|
||||
start_time=$(date +%s)
|
||||
|
||||
while [[ $(($(date +%s) - start_time)) -lt duration ]]; do
|
||||
# Get CPU and memory usage
|
||||
local cpu_usage
|
||||
cpu_usage=$(top -bn1 | grep "Cpu(s)" | sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | awk '{print 100 - $1}')
|
||||
|
||||
local mem_usage
|
||||
mem_usage=$(free | grep Mem | awk '{printf "%.2f", $3/$2 * 100.0}')
|
||||
|
||||
# Get network connections
|
||||
local connections
|
||||
connections=$(netstat -t | grep -c ":$RELAY_PORT")
|
||||
|
||||
echo "$(date '+%H:%M:%S') - CPU: ${cpu_usage}%, MEM: ${mem_usage}%, Connections: $connections"
|
||||
|
||||
sleep "$interval"
|
||||
done
|
||||
}
|
||||
|
||||
# Function to run load test
|
||||
run_load_test() {
|
||||
local test_name="$1"
|
||||
local description="$2"
|
||||
local concurrent_clients="$3"
|
||||
local messages_per_client="$4"
|
||||
|
||||
echo "=========================================="
|
||||
echo "Load Test: $test_name"
|
||||
echo "Description: $description"
|
||||
echo "Concurrent clients: $concurrent_clients"
|
||||
echo "Messages per client: $messages_per_client"
|
||||
echo "=========================================="
|
||||
|
||||
START_TIME=$(date +%s)
|
||||
|
||||
# Reset counters
|
||||
SUCCESSFUL_CONNECTIONS=0
|
||||
FAILED_CONNECTIONS=0
|
||||
TOTAL_MESSAGES_SENT=0
|
||||
TOTAL_MESSAGES_RECEIVED=0
|
||||
|
||||
# Start resource monitoring in background
|
||||
monitor_resources 30 &
|
||||
local monitor_pid=$!
|
||||
|
||||
# Launch clients
|
||||
local client_pids=()
|
||||
local client_results=()
|
||||
|
||||
echo "Launching $concurrent_clients concurrent clients..."
|
||||
|
||||
for i in $(seq 1 "$concurrent_clients"); do
|
||||
run_client "$i" "$messages_per_client" &
|
||||
client_pids+=($!)
|
||||
done
|
||||
|
||||
# Wait for all clients to complete
|
||||
echo "Waiting for clients to complete..."
|
||||
for pid in "${client_pids[@]}"; do
|
||||
wait "$pid" 2>/dev/null || true
|
||||
done
|
||||
|
||||
# Stop monitoring
|
||||
kill "$monitor_pid" 2>/dev/null || true
|
||||
wait "$monitor_pid" 2>/dev/null || true
|
||||
|
||||
END_TIME=$(date +%s)
|
||||
local duration=$((END_TIME - START_TIME))
|
||||
|
||||
# Calculate metrics
|
||||
local total_messages_expected=$((concurrent_clients * messages_per_client))
|
||||
local connection_success_rate=0
|
||||
local total_connections=$((SUCCESSFUL_CONNECTIONS + FAILED_CONNECTIONS))
|
||||
|
||||
if [[ $total_connections -gt 0 ]]; then
|
||||
connection_success_rate=$((SUCCESSFUL_CONNECTIONS * 100 / total_connections))
|
||||
fi
|
||||
|
||||
# Report results
|
||||
echo ""
|
||||
echo "=== Load Test Results ==="
|
||||
echo "Test duration: ${duration}s"
|
||||
echo "Total connections attempted: $total_connections"
|
||||
echo "Successful connections: $SUCCESSFUL_CONNECTIONS"
|
||||
echo "Failed connections: $FAILED_CONNECTIONS"
|
||||
echo "Connection success rate: ${connection_success_rate}%"
|
||||
echo "Messages expected: $total_messages_expected"
|
||||
|
||||
# Performance assessment
|
||||
if [[ $connection_success_rate -ge 95 ]]; then
|
||||
echo -e "${GREEN}✓ EXCELLENT: High connection success rate${NC}"
|
||||
elif [[ $connection_success_rate -ge 80 ]]; then
|
||||
echo -e "${YELLOW}⚠ GOOD: Acceptable connection success rate${NC}"
|
||||
else
|
||||
echo -e "${RED}✗ POOR: Low connection success rate${NC}"
|
||||
fi
|
||||
|
||||
# Check if relay is still responsive
|
||||
echo ""
|
||||
echo -n "Checking relay responsiveness... "
|
||||
if timeout 5 bash -c "
|
||||
echo 'ping' | websocat -n1 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1
|
||||
" 2>/dev/null; then
|
||||
echo -e "${GREEN}✓ Relay is still responsive${NC}"
|
||||
else
|
||||
echo -e "${RED}✗ Relay became unresponsive after load test${NC}"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
echo "=========================================="
|
||||
echo "C-Relay Load Testing Suite"
|
||||
echo "=========================================="
|
||||
echo "Testing against relay at ws://$RELAY_HOST:$RELAY_PORT"
|
||||
echo ""
|
||||
|
||||
# Test basic connectivity first
|
||||
echo "=== Basic Connectivity Test ==="
|
||||
if timeout 5 bash -c "
|
||||
echo 'ping' | websocat -n1 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1
|
||||
" 2>/dev/null; then
|
||||
echo -e "${GREEN}✓ Relay is accessible${NC}"
|
||||
else
|
||||
echo -e "${RED}✗ Cannot connect to relay. Aborting tests.${NC}"
|
||||
exit 1
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Run different load scenarios
|
||||
run_load_test "Light Load Test" "Basic load test with moderate concurrent connections" 10 5
|
||||
echo ""
|
||||
|
||||
run_load_test "Medium Load Test" "Moderate load test with higher concurrency" 25 10
|
||||
echo ""
|
||||
|
||||
run_load_test "Heavy Load Test" "Heavy load test with high concurrency" 50 20
|
||||
echo ""
|
||||
|
||||
run_load_test "Stress Test" "Maximum load test to find breaking point" 100 50
|
||||
echo ""
|
||||
|
||||
echo "=========================================="
|
||||
echo "Load Testing Complete"
|
||||
echo "=========================================="
|
||||
echo "All load tests completed. Check individual test results above."
|
||||
echo "If any tests failed, the relay may need optimization or have resource limits."
|
||||
197
tests/memory_corruption_tests.sh
Executable file
197
tests/memory_corruption_tests.sh
Executable file
@@ -0,0 +1,197 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Memory Corruption Detection Test Suite for C-Relay
|
||||
# Tests for buffer overflows, use-after-free, and memory safety issues
|
||||
|
||||
set -e
|
||||
|
||||
# Configuration
|
||||
RELAY_HOST="127.0.0.1"
|
||||
RELAY_PORT="8888"
|
||||
TEST_TIMEOUT=15
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Test counters
|
||||
TOTAL_TESTS=0
|
||||
PASSED_TESTS=0
|
||||
FAILED_TESTS=0
|
||||
|
||||
# Function to test for memory corruption (buffer overflows, crashes, etc.)
|
||||
test_memory_safety() {
|
||||
local description="$1"
|
||||
local message="$2"
|
||||
local expect_error="${3:-false}"
|
||||
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
|
||||
echo -n "Testing $description... "
|
||||
|
||||
# Send message and monitor for crashes or memory issues
|
||||
local start_time=$(date +%s%N)
|
||||
local response
|
||||
response=$(timeout $TEST_TIMEOUT bash -c "
|
||||
echo '$message' | websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null
|
||||
" 2>/dev/null || echo 'CONNECTION_FAILED')
|
||||
local end_time=$(date +%s%N)
|
||||
|
||||
# Check if relay is still responsive after the test
|
||||
local relay_status
|
||||
relay_status=$(timeout 2 bash -c "
|
||||
echo 'ping' | websocat -n1 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1 && echo 'OK' || echo 'DOWN'
|
||||
" 2>/dev/null || echo 'DOWN')
|
||||
|
||||
# Calculate response time (rough indicator of processing issues)
|
||||
local response_time=$(( (end_time - start_time) / 1000000 )) # Convert to milliseconds
|
||||
|
||||
if [[ "$response" == "CONNECTION_FAILED" ]]; then
|
||||
if [[ "$expect_error" == "true" ]]; then
|
||||
echo -e "${GREEN}PASSED${NC} - Expected connection failure"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${RED}FAILED${NC} - Unexpected connection failure"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
fi
|
||||
elif [[ "$relay_status" != "OK" ]]; then
|
||||
echo -e "${RED}FAILED${NC} - Relay crashed or became unresponsive after test"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
elif [[ $response_time -gt 5000 ]]; then # More than 5 seconds
|
||||
echo -e "${YELLOW}SUSPICIOUS${NC} - Very slow response (${response_time}ms), possible DoS"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
else
|
||||
if [[ "$expect_error" == "true" ]]; then
|
||||
echo -e "${YELLOW}UNCERTAIN${NC} - Expected error but got normal response"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1)) # Count as passed since no crash
|
||||
return 0
|
||||
else
|
||||
echo -e "${GREEN}PASSED${NC} - No memory corruption detected"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
fi
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to test concurrent access patterns
|
||||
test_concurrent_access() {
|
||||
local description="$1"
|
||||
local message="$2"
|
||||
local concurrent_count="${3:-5}"
|
||||
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
|
||||
echo -n "Testing $description... "
|
||||
|
||||
# Launch multiple concurrent connections
|
||||
local pids=()
|
||||
local results=()
|
||||
|
||||
for i in $(seq 1 $concurrent_count); do
|
||||
(
|
||||
local response
|
||||
response=$(timeout $TEST_TIMEOUT bash -c "
|
||||
echo '$message' | websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null | head -1
|
||||
" 2>/dev/null || echo 'FAILED')
|
||||
echo "$response"
|
||||
) &
|
||||
pids+=($!)
|
||||
done
|
||||
|
||||
# Wait for all to complete
|
||||
local failed_count=0
|
||||
for pid in "${pids[@]}"; do
|
||||
wait "$pid" 2>/dev/null || failed_count=$((failed_count + 1))
|
||||
done
|
||||
|
||||
# Check if relay is still responsive
|
||||
local relay_status
|
||||
relay_status=$(timeout 2 bash -c "
|
||||
echo 'ping' | websocat -n1 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1 && echo 'OK' || echo 'DOWN'
|
||||
" 2>/dev/null || echo 'DOWN')
|
||||
|
||||
if [[ "$relay_status" != "OK" ]]; then
|
||||
echo -e "${RED}FAILED${NC} - Relay crashed during concurrent access"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
elif [[ $failed_count -gt 0 ]]; then
|
||||
echo -e "${YELLOW}PARTIAL${NC} - Some concurrent requests failed ($failed_count/$concurrent_count)"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
else
|
||||
echo -e "${GREEN}PASSED${NC} - Concurrent access handled safely"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
fi
|
||||
}
|
||||
|
||||
echo "=========================================="
|
||||
echo "C-Relay Memory Corruption Test Suite"
|
||||
echo "=========================================="
|
||||
echo "Testing against relay at ws://$RELAY_HOST:$RELAY_PORT"
|
||||
echo "Note: These tests may cause the relay to crash if vulnerabilities exist"
|
||||
echo
|
||||
|
||||
# Test basic connectivity first
|
||||
echo "=== Basic Connectivity Test ==="
|
||||
test_memory_safety "Basic connectivity" '["REQ","basic_test",{}]'
|
||||
echo
|
||||
|
||||
echo "=== Subscription ID Memory Corruption Tests ==="
|
||||
# Test malformed subscription IDs that could cause buffer overflows
|
||||
test_memory_safety "Empty subscription ID" '["REQ","",{}]' true
|
||||
test_memory_safety "Very long subscription ID (1KB)" '["REQ","'$(printf 'a%.0s' {1..1024})'",{}]' true
|
||||
test_memory_safety "Very long subscription ID (10KB)" '["REQ","'$(printf 'a%.0s' {1..10240})'",{}]' true
|
||||
test_memory_safety "Subscription ID with null bytes" '["REQ","test\x00injection",{}]' true
|
||||
test_memory_safety "Subscription ID with special chars" '["REQ","test@#$%^&*()",{}]' true
|
||||
test_memory_safety "Unicode subscription ID" '["REQ","test🚀💣🔥",{}]' true
|
||||
test_memory_safety "Subscription ID with path traversal" '["REQ","../../../etc/passwd",{}]' true
|
||||
echo
|
||||
|
||||
echo "=== Filter Array Memory Corruption Tests ==="
|
||||
# Test oversized filter arrays (limited to avoid extremely long output)
|
||||
test_memory_safety "Too many filters (50)" '["REQ","test_many_filters",{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{} ,{}]' true
|
||||
echo
|
||||
|
||||
echo "=== Concurrent Access Memory Tests ==="
|
||||
# Test concurrent access patterns that might cause race conditions
|
||||
test_concurrent_access "Concurrent subscription creation" '["REQ","concurrent_'$(date +%s%N)'",{}]' 10
|
||||
test_concurrent_access "Concurrent CLOSE operations" '["CLOSE","test_sub"]' 10
|
||||
echo
|
||||
|
||||
echo "=== Malformed JSON Memory Tests ==="
|
||||
# Test malformed JSON that might cause parsing issues
|
||||
test_memory_safety "Unclosed JSON object" '["REQ","test",{' true
|
||||
test_memory_safety "Mismatched brackets" '["REQ","test"]' true
|
||||
test_memory_safety "Extra closing brackets" '["REQ","test",{}]]' true
|
||||
test_memory_safety "Null bytes in JSON" '["REQ","test\x00",{}]' true
|
||||
echo
|
||||
|
||||
echo "=== Large Message Memory Tests ==="
|
||||
# Test very large messages that might cause buffer issues
|
||||
test_memory_safety "Very large filter array" '["REQ","large_test",{"authors":['$(printf '"test%.0s",' {1..1000})'"test"]}]' true
|
||||
test_memory_safety "Very long search term" '["REQ","search_test",{"search":"'$(printf 'a%.0s' {1..10000})'"}]' true
|
||||
echo
|
||||
|
||||
echo "=== Test Results ==="
|
||||
echo "Total tests: $TOTAL_TESTS"
|
||||
echo -e "Passed: ${GREEN}$PASSED_TESTS${NC}"
|
||||
echo -e "Failed: ${RED}$FAILED_TESTS${NC}"
|
||||
|
||||
if [[ $FAILED_TESTS -eq 0 ]]; then
|
||||
echo -e "${GREEN}✓ All memory corruption tests passed!${NC}"
|
||||
echo "The relay appears to handle memory safely."
|
||||
exit 0
|
||||
else
|
||||
echo -e "${RED}✗ Memory corruption vulnerabilities detected!${NC}"
|
||||
echo "The relay may be vulnerable to memory corruption attacks."
|
||||
echo "Failed tests: $FAILED_TESTS"
|
||||
exit 1
|
||||
fi
|
||||
239
tests/performance_benchmarks.sh
Executable file
239
tests/performance_benchmarks.sh
Executable file
@@ -0,0 +1,239 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Performance Benchmarking Suite for C-Relay
|
||||
# Measures performance metrics and throughput
|
||||
|
||||
set -e
|
||||
|
||||
# Configuration
|
||||
RELAY_HOST="127.0.0.1"
|
||||
RELAY_PORT="8888"
|
||||
BENCHMARK_DURATION=30 # seconds
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Metrics tracking
|
||||
TOTAL_REQUESTS=0
|
||||
SUCCESSFUL_REQUESTS=0
|
||||
FAILED_REQUESTS=0
|
||||
TOTAL_RESPONSE_TIME=0
|
||||
MIN_RESPONSE_TIME=999999
|
||||
MAX_RESPONSE_TIME=0
|
||||
|
||||
# Function to benchmark single request
|
||||
benchmark_request() {
|
||||
local message="$1"
|
||||
local start_time
|
||||
local end_time
|
||||
local response_time
|
||||
|
||||
start_time=$(date +%s%N)
|
||||
local response
|
||||
response=$(timeout 5 bash -c "
|
||||
echo '$message' | websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null | head -1
|
||||
" 2>/dev/null || echo 'TIMEOUT')
|
||||
end_time=$(date +%s%N)
|
||||
|
||||
response_time=$(( (end_time - start_time) / 1000000 )) # Convert to milliseconds
|
||||
|
||||
TOTAL_REQUESTS=$((TOTAL_REQUESTS + 1))
|
||||
TOTAL_RESPONSE_TIME=$((TOTAL_RESPONSE_TIME + response_time))
|
||||
|
||||
if [[ $response_time -lt MIN_RESPONSE_TIME ]]; then
|
||||
MIN_RESPONSE_TIME=$response_time
|
||||
fi
|
||||
|
||||
if [[ $response_time -gt MAX_RESPONSE_TIME ]]; then
|
||||
MAX_RESPONSE_TIME=$response_time
|
||||
fi
|
||||
|
||||
if [[ "$response" == *"EOSE"* ]] || [[ "$response" == *"EVENT"* ]] || [[ "$response" == *"OK"* ]]; then
|
||||
SUCCESSFUL_REQUESTS=$((SUCCESSFUL_REQUESTS + 1))
|
||||
else
|
||||
FAILED_REQUESTS=$((FAILED_REQUESTS + 1))
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to run throughput benchmark
|
||||
run_throughput_benchmark() {
|
||||
local test_name="$1"
|
||||
local message="$2"
|
||||
local concurrent_clients="${3:-10}"
|
||||
local test_duration="${4:-$BENCHMARK_DURATION}"
|
||||
|
||||
echo "=========================================="
|
||||
echo "Throughput Benchmark: $test_name"
|
||||
echo "=========================================="
|
||||
echo "Concurrent clients: $concurrent_clients"
|
||||
echo "Duration: ${test_duration}s"
|
||||
echo ""
|
||||
|
||||
# Reset metrics
|
||||
TOTAL_REQUESTS=0
|
||||
SUCCESSFUL_REQUESTS=0
|
||||
FAILED_REQUESTS=0
|
||||
TOTAL_RESPONSE_TIME=0
|
||||
MIN_RESPONSE_TIME=999999
|
||||
MAX_RESPONSE_TIME=0
|
||||
|
||||
local start_time
|
||||
start_time=$(date +%s)
|
||||
|
||||
# Launch concurrent clients
|
||||
local pids=()
|
||||
for i in $(seq 1 "$concurrent_clients"); do
|
||||
(
|
||||
local client_start
|
||||
client_start=$(date +%s)
|
||||
local client_requests=0
|
||||
|
||||
while [[ $(($(date +%s) - client_start)) -lt test_duration ]]; do
|
||||
benchmark_request "$message"
|
||||
((client_requests++))
|
||||
# Small delay to prevent overwhelming
|
||||
sleep 0.01
|
||||
done
|
||||
|
||||
echo "client_${i}_requests:$client_requests"
|
||||
) &
|
||||
pids+=($!)
|
||||
done
|
||||
|
||||
# Wait for all clients to complete
|
||||
local client_results=()
|
||||
for pid in "${pids[@]}"; do
|
||||
client_results+=("$(wait "$pid")")
|
||||
done
|
||||
|
||||
local end_time
|
||||
end_time=$(date +%s)
|
||||
local actual_duration=$((end_time - start_time))
|
||||
|
||||
# Calculate metrics
|
||||
local avg_response_time="N/A"
|
||||
if [[ $SUCCESSFUL_REQUESTS -gt 0 ]]; then
|
||||
avg_response_time="$((TOTAL_RESPONSE_TIME / SUCCESSFUL_REQUESTS))ms"
|
||||
fi
|
||||
|
||||
local requests_per_second="N/A"
|
||||
if [[ $actual_duration -gt 0 ]]; then
|
||||
requests_per_second="$((TOTAL_REQUESTS / actual_duration))"
|
||||
fi
|
||||
|
||||
local success_rate="N/A"
|
||||
if [[ $TOTAL_REQUESTS -gt 0 ]]; then
|
||||
success_rate="$((SUCCESSFUL_REQUESTS * 100 / TOTAL_REQUESTS))%"
|
||||
fi
|
||||
|
||||
# Report results
|
||||
echo "=== Benchmark Results ==="
|
||||
echo "Total requests: $TOTAL_REQUESTS"
|
||||
echo "Successful requests: $SUCCESSFUL_REQUESTS"
|
||||
echo "Failed requests: $FAILED_REQUESTS"
|
||||
echo "Success rate: $success_rate"
|
||||
echo "Requests per second: $requests_per_second"
|
||||
echo "Average response time: $avg_response_time"
|
||||
echo "Min response time: ${MIN_RESPONSE_TIME}ms"
|
||||
echo "Max response time: ${MAX_RESPONSE_TIME}ms"
|
||||
echo "Actual duration: ${actual_duration}s"
|
||||
echo ""
|
||||
|
||||
# Performance assessment
|
||||
if [[ $requests_per_second -gt 1000 ]]; then
|
||||
echo -e "${GREEN}✓ EXCELLENT throughput${NC}"
|
||||
elif [[ $requests_per_second -gt 500 ]]; then
|
||||
echo -e "${GREEN}✓ GOOD throughput${NC}"
|
||||
elif [[ $requests_per_second -gt 100 ]]; then
|
||||
echo -e "${YELLOW}⚠ MODERATE throughput${NC}"
|
||||
else
|
||||
echo -e "${RED}✗ LOW throughput${NC}"
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to benchmark memory usage patterns
|
||||
benchmark_memory_usage() {
|
||||
echo "=========================================="
|
||||
echo "Memory Usage Benchmark"
|
||||
echo "=========================================="
|
||||
|
||||
local initial_memory
|
||||
initial_memory=$(ps aux | grep c_relay | grep -v grep | awk '{print $6}' | head -1)
|
||||
|
||||
echo "Initial memory usage: ${initial_memory}KB"
|
||||
|
||||
# Create increasing number of subscriptions
|
||||
for i in {10,25,50,100}; do
|
||||
echo -n "Testing with $i concurrent subscriptions... "
|
||||
|
||||
# Create subscriptions
|
||||
for j in $(seq 1 "$i"); do
|
||||
timeout 2 bash -c "
|
||||
echo '[\"REQ\",\"mem_test_'${j}'\",{}]' | websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1
|
||||
" 2>/dev/null &
|
||||
done
|
||||
|
||||
sleep 2
|
||||
|
||||
local current_memory
|
||||
current_memory=$(ps aux | grep c_relay | grep -v grep | awk '{print $6}' | head -1)
|
||||
local memory_increase=$((current_memory - initial_memory))
|
||||
|
||||
echo "${current_memory}KB (+${memory_increase}KB)"
|
||||
|
||||
# Clean up subscriptions
|
||||
for j in $(seq 1 "$i"); do
|
||||
timeout 2 bash -c "
|
||||
echo '[\"CLOSE\",\"mem_test_'${j}'\"]' | websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1
|
||||
" 2>/dev/null &
|
||||
done
|
||||
|
||||
sleep 1
|
||||
done
|
||||
|
||||
local final_memory
|
||||
final_memory=$(ps aux | grep c_relay | grep -v grep | awk '{print $6}' | head -1)
|
||||
echo "Final memory usage: ${final_memory}KB"
|
||||
}
|
||||
|
||||
echo "=========================================="
|
||||
echo "C-Relay Performance Benchmarking Suite"
|
||||
echo "=========================================="
|
||||
echo "Benchmarking relay at ws://$RELAY_HOST:$RELAY_PORT"
|
||||
echo ""
|
||||
|
||||
# Test basic connectivity
|
||||
echo "=== Connectivity Test ==="
|
||||
benchmark_request '["REQ","bench_test",{}]'
|
||||
if [[ $SUCCESSFUL_REQUESTS -eq 0 ]]; then
|
||||
echo -e "${RED}Cannot connect to relay. Aborting benchmarks.${NC}"
|
||||
exit 1
|
||||
fi
|
||||
echo -e "${GREEN}✓ Relay is accessible${NC}"
|
||||
echo ""
|
||||
|
||||
# Run throughput benchmarks
|
||||
run_throughput_benchmark "Simple REQ Throughput" '["REQ","throughput_'$(date +%s%N)'",{}]' 10 15
|
||||
echo ""
|
||||
|
||||
run_throughput_benchmark "Complex Filter Throughput" '["REQ","complex_'$(date +%s%N)'",{"kinds":[1,2,3],"#e":["test"],"limit":10}]' 10 15
|
||||
echo ""
|
||||
|
||||
run_throughput_benchmark "COUNT Message Throughput" '["COUNT","count_'$(date +%s%N)'",{}]' 10 15
|
||||
echo ""
|
||||
|
||||
run_throughput_benchmark "High Load Throughput" '["REQ","high_load_'$(date +%s%N)'",{}]' 25 20
|
||||
echo ""
|
||||
|
||||
# Memory usage benchmark
|
||||
benchmark_memory_usage
|
||||
echo ""
|
||||
|
||||
echo "=========================================="
|
||||
echo "Benchmarking Complete"
|
||||
echo "=========================================="
|
||||
echo "Performance benchmarks completed. Review results above for optimization opportunities."
|
||||
213
tests/rate_limiting_tests.sh
Executable file
213
tests/rate_limiting_tests.sh
Executable file
@@ -0,0 +1,213 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Rate Limiting Test Suite for C-Relay
|
||||
# Tests rate limiting and abuse prevention mechanisms
|
||||
|
||||
set -e
|
||||
|
||||
# Configuration
|
||||
RELAY_HOST="127.0.0.1"
|
||||
RELAY_PORT="8888"
|
||||
TEST_TIMEOUT=15
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Test counters
|
||||
TOTAL_TESTS=0
|
||||
PASSED_TESTS=0
|
||||
FAILED_TESTS=0
|
||||
|
||||
# Function to test rate limiting
|
||||
test_rate_limiting() {
|
||||
local description="$1"
|
||||
local message="$2"
|
||||
local burst_count="${3:-10}"
|
||||
local expected_limited="${4:-false}"
|
||||
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
|
||||
echo -n "Testing $description... "
|
||||
|
||||
local rate_limited=false
|
||||
local success_count=0
|
||||
local error_count=0
|
||||
|
||||
# Send burst of messages
|
||||
for i in $(seq 1 "$burst_count"); do
|
||||
local response
|
||||
response=$(timeout 2 bash -c "
|
||||
echo '$message' | websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null | head -1
|
||||
" 2>/dev/null || echo 'TIMEOUT')
|
||||
|
||||
if [[ "$response" == *"rate limit"* ]] || [[ "$response" == *"too many"* ]] || [[ "$response" == *"TOO_MANY"* ]]; then
|
||||
rate_limited=true
|
||||
elif [[ "$response" == *"EOSE"* ]] || [[ "$response" == *"EVENT"* ]] || [[ "$response" == *"OK"* ]]; then
|
||||
((success_count++))
|
||||
else
|
||||
((error_count++))
|
||||
fi
|
||||
|
||||
# Small delay between requests
|
||||
sleep 0.05
|
||||
done
|
||||
|
||||
if [[ "$expected_limited" == "true" ]]; then
|
||||
if [[ "$rate_limited" == "true" ]]; then
|
||||
echo -e "${GREEN}PASSED${NC} - Rate limiting triggered as expected"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${RED}FAILED${NC} - Rate limiting not triggered (expected)"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
fi
|
||||
else
|
||||
if [[ "$rate_limited" == "false" ]]; then
|
||||
echo -e "${GREEN}PASSED${NC} - No rate limiting for normal traffic"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${YELLOW}UNCERTAIN${NC} - Unexpected rate limiting"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1)) # Count as passed since it's conservative
|
||||
return 0
|
||||
fi
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to test sustained load
|
||||
test_sustained_load() {
|
||||
local description="$1"
|
||||
local message="$2"
|
||||
local duration="${3:-10}"
|
||||
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
|
||||
echo -n "Testing $description... "
|
||||
|
||||
local start_time
|
||||
start_time=$(date +%s)
|
||||
local rate_limited=false
|
||||
local total_requests=0
|
||||
local successful_requests=0
|
||||
|
||||
while [[ $(($(date +%s) - start_time)) -lt duration ]]; do
|
||||
((total_requests++))
|
||||
local response
|
||||
response=$(timeout 1 bash -c "
|
||||
echo '$message' | websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null | head -1
|
||||
" 2>/dev/null || echo 'TIMEOUT')
|
||||
|
||||
if [[ "$response" == *"rate limit"* ]] || [[ "$response" == *"too many"* ]] || [[ "$response" == *"TOO_MANY"* ]]; then
|
||||
rate_limited=true
|
||||
elif [[ "$response" == *"EOSE"* ]] || [[ "$response" == *"EVENT"* ]] || [[ "$response" == *"OK"* ]]; then
|
||||
((successful_requests++))
|
||||
fi
|
||||
|
||||
# Small delay to avoid overwhelming
|
||||
sleep 0.1
|
||||
done
|
||||
|
||||
local success_rate=0
|
||||
if [[ $total_requests -gt 0 ]]; then
|
||||
success_rate=$((successful_requests * 100 / total_requests))
|
||||
fi
|
||||
|
||||
if [[ "$rate_limited" == "true" ]]; then
|
||||
echo -e "${GREEN}PASSED${NC} - Rate limiting activated under sustained load (${success_rate}% success rate)"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${YELLOW}UNCERTAIN${NC} - No rate limiting detected (${success_rate}% success rate)"
|
||||
# This might be acceptable if rate limiting is very permissive
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
fi
|
||||
}
|
||||
|
||||
echo "=========================================="
|
||||
echo "C-Relay Rate Limiting Test Suite"
|
||||
echo "=========================================="
|
||||
echo "Testing rate limiting against relay at ws://$RELAY_HOST:$RELAY_PORT"
|
||||
echo ""
|
||||
|
||||
# Test basic connectivity first
|
||||
echo "=== Basic Connectivity Test ==="
|
||||
test_rate_limiting "Basic connectivity" '["REQ","rate_test",{}]' 1 false
|
||||
echo ""
|
||||
|
||||
echo "=== Burst Request Testing ==="
|
||||
# Test rapid succession of requests
|
||||
test_rate_limiting "Rapid REQ messages" '["REQ","burst_req_'$(date +%s%N)'",{}]' 20 true
|
||||
test_rate_limiting "Rapid COUNT messages" '["COUNT","burst_count_'$(date +%s%N)'",{}]' 20 true
|
||||
test_rate_limiting "Rapid CLOSE messages" '["CLOSE","burst_close"]' 20 true
|
||||
echo ""
|
||||
|
||||
echo "=== Malformed Message Rate Limiting ==="
|
||||
# Test if malformed messages trigger rate limiting faster
|
||||
test_rate_limiting "Malformed JSON burst" '["REQ","malformed"' 15 true
|
||||
test_rate_limiting "Invalid message type burst" '["INVALID","test",{}]' 15 true
|
||||
test_rate_limiting "Empty message burst" '[]' 15 true
|
||||
echo ""
|
||||
|
||||
echo "=== Sustained Load Testing ==="
|
||||
# Test sustained moderate load
|
||||
test_sustained_load "Sustained REQ load" '["REQ","sustained_'$(date +%s%N)'",{}]' 10
|
||||
test_sustained_load "Sustained COUNT load" '["COUNT","sustained_count_'$(date +%s%N)'",{}]' 10
|
||||
echo ""
|
||||
|
||||
echo "=== Filter Complexity Testing ==="
|
||||
# Test if complex filters trigger rate limiting
|
||||
test_rate_limiting "Complex filter burst" '["REQ","complex_'$(date +%s%N)'",{"authors":["a","b","c"],"kinds":[1,2,3],"#e":["x","y","z"],"#p":["m","n","o"],"since":1000000000,"until":2000000000,"limit":100}]' 10 true
|
||||
echo ""
|
||||
|
||||
echo "=== Subscription Management Testing ==="
|
||||
# Test subscription creation/deletion rate limiting
|
||||
echo -n "Testing subscription churn... "
|
||||
local churn_test_passed=true
|
||||
for i in $(seq 1 25); do
|
||||
# Create subscription
|
||||
timeout 1 bash -c "
|
||||
echo '[\"REQ\",\"churn_'${i}'_'$(date +%s%N)'\",{}]' | websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1
|
||||
" 2>/dev/null || true
|
||||
|
||||
# Close subscription
|
||||
timeout 1 bash -c "
|
||||
echo '[\"CLOSE\",\"churn_'${i}'_*\"]' | websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1
|
||||
" 2>/dev/null || true
|
||||
|
||||
sleep 0.05
|
||||
done
|
||||
|
||||
# Check if relay is still responsive
|
||||
if timeout 2 bash -c "
|
||||
echo 'ping' | websocat -n1 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1
|
||||
" 2>/dev/null; then
|
||||
echo -e "${GREEN}PASSED${NC} - Subscription churn handled"
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
else
|
||||
echo -e "${RED}FAILED${NC} - Relay unresponsive after subscription churn"
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
fi
|
||||
echo ""
|
||||
|
||||
echo "=== Test Results ==="
|
||||
echo "Total tests: $TOTAL_TESTS"
|
||||
echo -e "Passed: ${GREEN}$PASSED_TESTS${NC}"
|
||||
echo -e "Failed: ${RED}$FAILED_TESTS${NC}"
|
||||
|
||||
if [[ $FAILED_TESTS -eq 0 ]]; then
|
||||
echo -e "${GREEN}✓ All rate limiting tests passed!${NC}"
|
||||
echo "Rate limiting appears to be working correctly."
|
||||
exit 0
|
||||
else
|
||||
echo -e "${RED}✗ Some rate limiting tests failed!${NC}"
|
||||
echo "Rate limiting may not be properly configured."
|
||||
exit 1
|
||||
fi
|
||||
269
tests/resource_monitoring.sh
Executable file
269
tests/resource_monitoring.sh
Executable file
@@ -0,0 +1,269 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Resource Monitoring Suite for C-Relay
|
||||
# Monitors memory and CPU usage during testing
|
||||
|
||||
set -e
|
||||
|
||||
# Configuration
|
||||
RELAY_HOST="127.0.0.1"
|
||||
RELAY_PORT="8888"
|
||||
MONITOR_DURATION=60 # seconds
|
||||
SAMPLE_INTERVAL=2 # seconds
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Metrics storage
|
||||
CPU_SAMPLES=()
|
||||
MEM_SAMPLES=()
|
||||
CONNECTION_SAMPLES=()
|
||||
TIMESTAMP_SAMPLES=()
|
||||
|
||||
# Function to get relay process info
|
||||
get_relay_info() {
|
||||
local pid
|
||||
pid=$(pgrep -f "c_relay" | head -1)
|
||||
|
||||
if [[ -z "$pid" ]]; then
|
||||
echo "0:0:0:0"
|
||||
return
|
||||
fi
|
||||
|
||||
# Get CPU, memory, and other stats
|
||||
local ps_output
|
||||
ps_output=$(ps -p "$pid" -o pcpu,pmem,vsz,rss --no-headers 2>/dev/null || echo "0.0 0.0 0 0")
|
||||
|
||||
# Get connection count
|
||||
local connections
|
||||
connections=$(netstat -t 2>/dev/null | grep ":$RELAY_PORT" | wc -l 2>/dev/null || echo "0")
|
||||
|
||||
echo "$ps_output $connections"
|
||||
}
|
||||
|
||||
# Function to monitor resources
|
||||
monitor_resources() {
|
||||
local duration="$1"
|
||||
local interval="$2"
|
||||
|
||||
echo "=========================================="
|
||||
echo "Resource Monitoring Started"
|
||||
echo "=========================================="
|
||||
echo "Duration: ${duration}s, Interval: ${interval}s"
|
||||
echo ""
|
||||
|
||||
# Clear arrays
|
||||
CPU_SAMPLES=()
|
||||
MEM_SAMPLES=()
|
||||
CONNECTION_SAMPLES=()
|
||||
TIMESTAMP_SAMPLES=()
|
||||
|
||||
local start_time
|
||||
start_time=$(date +%s)
|
||||
local sample_count=0
|
||||
|
||||
echo "Time | CPU% | Mem% | VSZ(KB) | RSS(KB) | Connections"
|
||||
echo "-----+------+------+---------+---------+------------"
|
||||
|
||||
while [[ $(($(date +%s) - start_time)) -lt duration ]]; do
|
||||
local relay_info
|
||||
relay_info=$(get_relay_info)
|
||||
|
||||
if [[ "$relay_info" != "0:0:0:0" ]]; then
|
||||
local cpu mem vsz rss connections
|
||||
IFS=' ' read -r cpu mem vsz rss connections <<< "$relay_info"
|
||||
|
||||
# Store samples
|
||||
CPU_SAMPLES+=("$cpu")
|
||||
MEM_SAMPLES+=("$mem")
|
||||
CONNECTION_SAMPLES+=("$connections")
|
||||
TIMESTAMP_SAMPLES+=("$sample_count")
|
||||
|
||||
# Display current stats
|
||||
local elapsed
|
||||
elapsed=$(($(date +%s) - start_time))
|
||||
printf "%4ds | %4.1f | %4.1f | %7s | %7s | %10s\n" \
|
||||
"$elapsed" "$cpu" "$mem" "$vsz" "$rss" "$connections"
|
||||
else
|
||||
echo " -- | Relay process not found --"
|
||||
fi
|
||||
|
||||
((sample_count++))
|
||||
sleep "$interval"
|
||||
done
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Function to calculate statistics
|
||||
calculate_stats() {
|
||||
local array_name="$1"
|
||||
local -n array_ref="$array_name"
|
||||
|
||||
if [[ ${#array_ref[@]} -eq 0 ]]; then
|
||||
echo "0:0:0:0:0"
|
||||
return
|
||||
fi
|
||||
|
||||
local sum=0
|
||||
local min=${array_ref[0]}
|
||||
local max=${array_ref[0]}
|
||||
|
||||
for value in "${array_ref[@]}"; do
|
||||
# Use awk for floating point arithmetic
|
||||
sum=$(awk "BEGIN {print $sum + $value}")
|
||||
min=$(awk "BEGIN {print ($value < $min) ? $value : $min}")
|
||||
max=$(awk "BEGIN {print ($value > $max) ? $value : $max}")
|
||||
done
|
||||
|
||||
local avg
|
||||
avg=$(awk "BEGIN {print $sum / ${#array_ref[@]} }")
|
||||
|
||||
echo "$avg:$min:$max:$sum:${#array_ref[@]}"
|
||||
}
|
||||
|
||||
# Function to generate resource report
|
||||
generate_resource_report() {
|
||||
echo "=========================================="
|
||||
echo "Resource Monitoring Report"
|
||||
echo "=========================================="
|
||||
|
||||
if [[ ${#CPU_SAMPLES[@]} -eq 0 ]]; then
|
||||
echo "No resource samples collected. Is the relay running?"
|
||||
return
|
||||
fi
|
||||
|
||||
# Calculate statistics
|
||||
local cpu_stats mem_stats conn_stats
|
||||
cpu_stats=$(calculate_stats CPU_SAMPLES)
|
||||
mem_stats=$(calculate_stats MEM_SAMPLES)
|
||||
conn_stats=$(calculate_stats CONNECTION_SAMPLES)
|
||||
|
||||
# Parse statistics
|
||||
IFS=':' read -r cpu_avg cpu_min cpu_max cpu_sum cpu_count <<< "$cpu_stats"
|
||||
IFS=':' read -r mem_avg mem_min mem_max mem_sum mem_count <<< "$mem_stats"
|
||||
IFS=':' read -r conn_avg conn_min conn_max conn_sum conn_count <<< "$conn_stats"
|
||||
|
||||
echo "CPU Usage Statistics:"
|
||||
printf " Average: %.2f%%\n" "$cpu_avg"
|
||||
printf " Minimum: %.2f%%\n" "$cpu_min"
|
||||
printf " Maximum: %.2f%%\n" "$cpu_max"
|
||||
printf " Samples: %d\n" "$cpu_count"
|
||||
echo ""
|
||||
|
||||
echo "Memory Usage Statistics:"
|
||||
printf " Average: %.2f%%\n" "$mem_avg"
|
||||
printf " Minimum: %.2f%%\n" "$mem_min"
|
||||
printf " Maximum: %.2f%%\n" "$mem_max"
|
||||
printf " Samples: %d\n" "$mem_count"
|
||||
echo ""
|
||||
|
||||
echo "Connection Statistics:"
|
||||
printf " Average: %.1f connections\n" "$conn_avg"
|
||||
printf " Minimum: %.1f connections\n" "$conn_min"
|
||||
printf " Maximum: %.1f connections\n" "$conn_max"
|
||||
printf " Samples: %d\n" "$conn_count"
|
||||
echo ""
|
||||
|
||||
# Performance assessment
|
||||
echo "Performance Assessment:"
|
||||
if awk "BEGIN {exit !($cpu_avg < 50)}"; then
|
||||
echo -e " ${GREEN}✓ CPU usage is acceptable${NC}"
|
||||
else
|
||||
echo -e " ${RED}✗ CPU usage is high${NC}"
|
||||
fi
|
||||
|
||||
if awk "BEGIN {exit !($mem_avg < 80)}"; then
|
||||
echo -e " ${GREEN}✓ Memory usage is acceptable${NC}"
|
||||
else
|
||||
echo -e " ${RED}✗ Memory usage is high${NC}"
|
||||
fi
|
||||
|
||||
if [[ $(awk "BEGIN {print int($conn_max)}") -gt 0 ]]; then
|
||||
echo -e " ${GREEN}✓ Relay is handling connections${NC}"
|
||||
else
|
||||
echo -e " ${YELLOW}⚠ No active connections detected${NC}"
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to run load test with monitoring
|
||||
run_monitored_load_test() {
|
||||
local test_name="$1"
|
||||
local description="$2"
|
||||
|
||||
echo "=========================================="
|
||||
echo "Monitored Load Test: $test_name"
|
||||
echo "=========================================="
|
||||
echo "Description: $description"
|
||||
echo ""
|
||||
|
||||
# Start monitoring in background
|
||||
monitor_resources 30 2 &
|
||||
local monitor_pid=$!
|
||||
|
||||
# Wait a moment for monitoring to start
|
||||
sleep 2
|
||||
|
||||
# Run a simple load test (create multiple subscriptions)
|
||||
echo "Running load test..."
|
||||
for i in {1..20}; do
|
||||
timeout 3 bash -c "
|
||||
echo '[\"REQ\",\"monitor_test_'${i}'\",{}]' | websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1
|
||||
" 2>/dev/null &
|
||||
done
|
||||
|
||||
# Let the load run for a bit
|
||||
sleep 10
|
||||
|
||||
# Clean up subscriptions
|
||||
echo "Cleaning up test subscriptions..."
|
||||
for i in {1..20}; do
|
||||
timeout 3 bash -c "
|
||||
echo '[\"CLOSE\",\"monitor_test_'${i}'\"]' | websocat -B 1048576 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1
|
||||
" 2>/dev/null &
|
||||
done
|
||||
|
||||
# Wait for monitoring to complete
|
||||
sleep 5
|
||||
kill "$monitor_pid" 2>/dev/null || true
|
||||
wait "$monitor_pid" 2>/dev/null || true
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
echo "=========================================="
|
||||
echo "C-Relay Resource Monitoring Suite"
|
||||
echo "=========================================="
|
||||
echo "Monitoring relay at ws://$RELAY_HOST:$RELAY_PORT"
|
||||
echo ""
|
||||
|
||||
# Check if relay is running
|
||||
if ! pgrep -f "c_relay" >/dev/null 2>&1; then
|
||||
echo -e "${RED}Relay process not found. Please start the relay first.${NC}"
|
||||
echo "Use: ./make_and_restart_relay.sh"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo -e "${GREEN}✓ Relay process found${NC}"
|
||||
echo ""
|
||||
|
||||
# Run baseline monitoring
|
||||
echo "=== Baseline Resource Monitoring ==="
|
||||
monitor_resources 15 2
|
||||
generate_resource_report
|
||||
echo ""
|
||||
|
||||
# Run monitored load test
|
||||
run_monitored_load_test "Subscription Load Test" "Creating and closing multiple subscriptions while monitoring resources"
|
||||
generate_resource_report
|
||||
echo ""
|
||||
|
||||
echo "=========================================="
|
||||
echo "Resource Monitoring Complete"
|
||||
echo "=========================================="
|
||||
echo "Resource monitoring completed. Review the statistics above."
|
||||
echo "High CPU/memory usage may indicate performance issues."
|
||||
298
tests/run_all_tests.sh
Executable file
298
tests/run_all_tests.sh
Executable file
@@ -0,0 +1,298 @@
|
||||
#!/bin/bash
|
||||
|
||||
# C-Relay Comprehensive Test Suite Runner
|
||||
# This script runs all security and stability tests for the Nostr relay
|
||||
|
||||
set -e
|
||||
|
||||
# Configuration
|
||||
RELAY_HOST="127.0.0.1"
|
||||
RELAY_PORT="8888"
|
||||
RELAY_URL="ws://$RELAY_HOST:$RELAY_PORT"
|
||||
TEST_TIMEOUT=30
|
||||
LOG_FILE="test_results_$(date +%Y%m%d_%H%M%S).log"
|
||||
REPORT_FILE="test_report_$(date +%Y%m%d_%H%M%S).html"
|
||||
|
||||
# Test keys for authentication (from AGENTS.md)
|
||||
ADMIN_PRIVATE_KEY="6a04ab98d9e4774ad806e302dddeb63bea16b5cb5f223ee77478e861bb583eb3"
|
||||
RELAY_PUBKEY="4f355bdcb7cc0af728ef3cceb9615d90684bb5b2ca5f859ab0f0b704075871aa"
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Test results tracking
|
||||
TOTAL_SUITES=0
|
||||
PASSED_SUITES=0
|
||||
FAILED_SUITES=0
|
||||
SKIPPED_SUITES=0
|
||||
|
||||
SUITE_RESULTS=()
|
||||
|
||||
# Function to create authenticated WebSocket connection
|
||||
# Usage: authenticated_websocat <subscription_id> <filter_json>
|
||||
authenticated_websocat() {
|
||||
local sub_id="$1"
|
||||
local filter="$2"
|
||||
|
||||
# Create a temporary script for authenticated connection
|
||||
cat > /tmp/auth_ws_$$.sh << EOF
|
||||
#!/bin/bash
|
||||
# Authenticated WebSocket connection helper
|
||||
|
||||
# Connect and handle AUTH challenge
|
||||
exec websocat -B 1048576 --no-close ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null << 'INNER_EOF'
|
||||
["REQ","$sub_id",$filter]
|
||||
INNER_EOF
|
||||
EOF
|
||||
|
||||
chmod +x /tmp/auth_ws_$$.sh
|
||||
timeout $TEST_TIMEOUT bash /tmp/auth_ws_$$.sh
|
||||
rm -f /tmp/auth_ws_$$.sh
|
||||
}
|
||||
|
||||
# Function to log messages
|
||||
log() {
|
||||
echo "$(date '+%Y-%m-%d %H:%M:%S') - $*" | tee -a "$LOG_FILE"
|
||||
}
|
||||
|
||||
# Function to run a test suite
|
||||
run_test_suite() {
|
||||
local suite_name="$1"
|
||||
local suite_script="$2"
|
||||
local description="$3"
|
||||
|
||||
TOTAL_SUITES=$((TOTAL_SUITES + 1))
|
||||
|
||||
log "=========================================="
|
||||
log "Running Test Suite: $suite_name"
|
||||
log "Description: $description"
|
||||
log "=========================================="
|
||||
|
||||
if [[ ! -f "$suite_script" ]]; then
|
||||
log "${RED}ERROR: Test script $suite_script not found${NC}"
|
||||
FAILED_SUITES=$((FAILED_SUITES + 1))
|
||||
SUITE_RESULTS+=("$suite_name: FAILED (script not found)")
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Make script executable if not already
|
||||
chmod +x "$suite_script"
|
||||
|
||||
# Run the test suite and capture output
|
||||
local start_time=$(date +%s)
|
||||
if bash "$suite_script" >> "$LOG_FILE" 2>&1; then
|
||||
local end_time=$(date +%s)
|
||||
local duration=$((end_time - start_time))
|
||||
log "${GREEN}✓ $suite_name PASSED${NC} (Duration: ${duration}s)"
|
||||
PASSED_SUITES=$((PASSED_SUITES + 1))
|
||||
SUITE_RESULTS+=("$suite_name: PASSED (${duration}s)")
|
||||
return 0
|
||||
else
|
||||
local end_time=$(date +%s)
|
||||
local duration=$((end_time - start_time))
|
||||
log "${RED}✗ $suite_name FAILED${NC} (Duration: ${duration}s)"
|
||||
FAILED_SUITES=$((FAILED_SUITES + 1))
|
||||
SUITE_RESULTS+=("$suite_name: FAILED (${duration}s)")
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to check if relay is running
|
||||
check_relay_status() {
|
||||
log "Checking relay status at $RELAY_URL..."
|
||||
|
||||
# First check if HTTP endpoint is accessible
|
||||
if curl -s -H "Accept: application/nostr+json" "http://$RELAY_HOST:$RELAY_PORT" >/dev/null 2>&1; then
|
||||
log "${GREEN}✓ Relay HTTP endpoint is accessible${NC}"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Fallback: Try WebSocket connection
|
||||
if timeout 5 bash -c "
|
||||
echo '[\"REQ\",\"status_check\",{}]' | websocat -B 1048576 --no-close '$RELAY_URL' >/dev/null 2>&1
|
||||
" 2>/dev/null; then
|
||||
log "${GREEN}✓ Relay WebSocket endpoint is accessible${NC}"
|
||||
return 0
|
||||
else
|
||||
log "${RED}✗ Relay is not accessible at $RELAY_URL${NC}"
|
||||
log "Please start the relay first using: ./make_and_restart_relay.sh"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to generate HTML report
|
||||
generate_html_report() {
|
||||
local total_duration=$1
|
||||
|
||||
cat > "$REPORT_FILE" << EOF
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="UTF-8">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
<title>C-Relay Test Report - $(date)</title>
|
||||
<style>
|
||||
body { font-family: Arial, sans-serif; margin: 40px; background-color: #f5f5f5; }
|
||||
.header { background: linear-gradient(135deg, #667eea 0%, #764ba2 100%); color: white; padding: 20px; border-radius: 8px; margin-bottom: 30px; }
|
||||
.summary { background: white; padding: 20px; border-radius: 8px; margin-bottom: 30px; box-shadow: 0 2px 10px rgba(0,0,0,0.1); }
|
||||
.suite { background: white; margin-bottom: 10px; padding: 15px; border-radius: 5px; box-shadow: 0 1px 5px rgba(0,0,0,0.1); }
|
||||
.passed { border-left: 5px solid #28a745; }
|
||||
.failed { border-left: 5px solid #dc3545; }
|
||||
.skipped { border-left: 5px solid #ffc107; }
|
||||
.metric { display: inline-block; margin: 10px; padding: 10px; background: #e9ecef; border-radius: 5px; }
|
||||
.status-passed { color: #28a745; font-weight: bold; }
|
||||
.status-failed { color: #dc3545; font-weight: bold; }
|
||||
.status-skipped { color: #ffc107; font-weight: bold; }
|
||||
table { width: 100%; border-collapse: collapse; margin-top: 20px; }
|
||||
th, td { padding: 12px; text-align: left; border-bottom: 1px solid #ddd; }
|
||||
th { background-color: #f8f9fa; }
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<div class="header">
|
||||
<h1>C-Relay Comprehensive Test Report</h1>
|
||||
<p>Generated on: $(date)</p>
|
||||
<p>Test Environment: $RELAY_URL</p>
|
||||
</div>
|
||||
|
||||
<div class="summary">
|
||||
<h2>Test Summary</h2>
|
||||
<div class="metric">
|
||||
<strong>Total Suites:</strong> $TOTAL_SUITES
|
||||
</div>
|
||||
<div class="metric">
|
||||
<strong>Passed:</strong> <span class="status-passed">$PASSED_SUITES</span>
|
||||
</div>
|
||||
<div class="metric">
|
||||
<strong>Failed:</strong> <span class="status-failed">$FAILED_SUITES</span>
|
||||
</div>
|
||||
<div class="metric">
|
||||
<strong>Skipped:</strong> <span class="status-skipped">$SKIPPED_SUITES</span>
|
||||
</div>
|
||||
<div class="metric">
|
||||
<strong>Total Duration:</strong> ${total_duration}s
|
||||
</div>
|
||||
<div class="metric">
|
||||
<strong>Success Rate:</strong> $(( (PASSED_SUITES * 100) / TOTAL_SUITES ))%
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<h2>Test Suite Results</h2>
|
||||
EOF
|
||||
|
||||
for result in "${SUITE_RESULTS[@]}"; do
|
||||
local suite_name=$(echo "$result" | cut -d: -f1)
|
||||
local status=$(echo "$result" | cut -d: -f2 | cut -d' ' -f1)
|
||||
local duration=$(echo "$result" | cut -d: -f2 | cut -d'(' -f2 | cut -d')' -f1)
|
||||
|
||||
local css_class="passed"
|
||||
if [[ "$status" == "FAILED" ]]; then
|
||||
css_class="failed"
|
||||
elif [[ "$status" == "SKIPPED" ]]; then
|
||||
css_class="skipped"
|
||||
fi
|
||||
|
||||
cat >> "$REPORT_FILE" << EOF
|
||||
<div class="suite $css_class">
|
||||
<strong>$suite_name</strong> - <span class="status-$css_class">$status</span> ($duration)
|
||||
</div>
|
||||
EOF
|
||||
done
|
||||
|
||||
cat >> "$REPORT_FILE" << EOF
|
||||
</body>
|
||||
</html>
|
||||
EOF
|
||||
|
||||
log "HTML report generated: $REPORT_FILE"
|
||||
}
|
||||
|
||||
# Main execution
|
||||
log "=========================================="
|
||||
log "C-Relay Comprehensive Test Suite Runner"
|
||||
log "=========================================="
|
||||
log "Relay URL: $RELAY_URL"
|
||||
log "Log file: $LOG_FILE"
|
||||
log "Report file: $REPORT_FILE"
|
||||
log ""
|
||||
|
||||
# Check if relay is running
|
||||
if ! check_relay_status; then
|
||||
log "${RED}Cannot proceed without a running relay. Exiting.${NC}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
log ""
|
||||
log "Starting comprehensive test execution..."
|
||||
log ""
|
||||
|
||||
# Record start time
|
||||
OVERALL_START_TIME=$(date +%s)
|
||||
|
||||
# Run Security Test Suites
|
||||
log "${BLUE}=== SECURITY TEST SUITES ===${NC}"
|
||||
|
||||
run_test_suite "SQL Injection Tests" "tests/sql_injection_tests.sh" "Comprehensive SQL injection vulnerability testing"
|
||||
run_test_suite "Filter Validation Tests" "tests/filter_validation_test.sh" "Input validation for REQ and COUNT messages"
|
||||
run_test_suite "Subscription Validation Tests" "tests/subscription_validation.sh" "Subscription ID and message validation"
|
||||
run_test_suite "Memory Corruption Tests" "tests/memory_corruption_tests.sh" "Buffer overflow and memory safety testing"
|
||||
run_test_suite "Input Validation Tests" "tests/input_validation_tests.sh" "Comprehensive input boundary testing"
|
||||
|
||||
# Run Performance Test Suites
|
||||
log ""
|
||||
log "${BLUE}=== PERFORMANCE TEST SUITES ===${NC}"
|
||||
|
||||
run_test_suite "Subscription Limit Tests" "tests/subscription_limits.sh" "Subscription limit enforcement testing"
|
||||
run_test_suite "Load Testing" "tests/load_tests.sh" "High concurrent connection testing"
|
||||
run_test_suite "Stress Testing" "tests/stress_tests.sh" "Resource usage and stability testing"
|
||||
run_test_suite "Rate Limiting Tests" "tests/rate_limiting_tests.sh" "Rate limiting and abuse prevention"
|
||||
|
||||
# Run Integration Test Suites
|
||||
log ""
|
||||
log "${BLUE}=== INTEGRATION TEST SUITES ===${NC}"
|
||||
|
||||
run_test_suite "NIP Protocol Tests" "tests/run_nip_tests.sh" "All NIP protocol compliance tests"
|
||||
run_test_suite "Configuration Tests" "tests/config_tests.sh" "Configuration management and persistence"
|
||||
run_test_suite "Authentication Tests" "tests/auth_tests.sh" "NIP-42 authentication testing"
|
||||
|
||||
# Run Benchmarking Suites
|
||||
log ""
|
||||
log "${BLUE}=== BENCHMARKING SUITES ===${NC}"
|
||||
|
||||
run_test_suite "Performance Benchmarks" "tests/performance_benchmarks.sh" "Performance metrics and benchmarking"
|
||||
run_test_suite "Resource Monitoring" "tests/resource_monitoring.sh" "Memory and CPU usage monitoring"
|
||||
|
||||
# Calculate total duration
|
||||
OVERALL_END_TIME=$(date +%s)
|
||||
TOTAL_DURATION=$((OVERALL_END_TIME - OVERALL_START_TIME))
|
||||
|
||||
# Generate final report
|
||||
log ""
|
||||
log "=========================================="
|
||||
log "TEST EXECUTION COMPLETE"
|
||||
log "=========================================="
|
||||
log "Total test suites: $TOTAL_SUITES"
|
||||
log "Passed: $PASSED_SUITES"
|
||||
log "Failed: $FAILED_SUITES"
|
||||
log "Skipped: $SKIPPED_SUITES"
|
||||
log "Total duration: ${TOTAL_DURATION}s"
|
||||
log "Success rate: $(( (PASSED_SUITES * 100) / TOTAL_SUITES ))%"
|
||||
log ""
|
||||
log "Detailed log: $LOG_FILE"
|
||||
|
||||
# Generate HTML report
|
||||
generate_html_report "$TOTAL_DURATION"
|
||||
|
||||
# Exit with appropriate code
|
||||
if [[ $FAILED_SUITES -eq 0 ]]; then
|
||||
log "${GREEN}✓ ALL TESTS PASSED${NC}"
|
||||
exit 0
|
||||
else
|
||||
log "${RED}✗ SOME TESTS FAILED${NC}"
|
||||
log "Check $LOG_FILE for detailed error information"
|
||||
exit 1
|
||||
fi
|
||||
126
tests/run_nip_tests.sh
Executable file
126
tests/run_nip_tests.sh
Executable file
@@ -0,0 +1,126 @@
|
||||
#!/bin/bash
|
||||
|
||||
# NIP Protocol Test Runner for C-Relay
|
||||
# Runs all NIP compliance tests
|
||||
|
||||
set -e
|
||||
|
||||
# Configuration
|
||||
RELAY_HOST="127.0.0.1"
|
||||
RELAY_PORT="8888"
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Test counters
|
||||
TOTAL_SUITES=0
|
||||
PASSED_SUITES=0
|
||||
FAILED_SUITES=0
|
||||
|
||||
# Available NIP test files
|
||||
NIP_TESTS=(
|
||||
"1_nip_test.sh:NIP-01 Basic Protocol"
|
||||
"9_nip_delete_test.sh:NIP-09 Event Deletion"
|
||||
"11_nip_information.sh:NIP-11 Relay Information"
|
||||
"13_nip_test.sh:NIP-13 Proof of Work"
|
||||
"17_nip_test.sh:NIP-17 Private DMs"
|
||||
"40_nip_test.sh:NIP-40 Expiration Timestamp"
|
||||
"42_nip_test.sh:NIP-42 Authentication"
|
||||
"45_nip_test.sh:NIP-45 Event Counts"
|
||||
"50_nip_test.sh:NIP-50 Search Capability"
|
||||
"70_nip_test.sh:NIP-70 Protected Events"
|
||||
)
|
||||
|
||||
# Function to run a NIP test suite
|
||||
run_nip_test() {
|
||||
local test_file="$1"
|
||||
local test_name="$2"
|
||||
|
||||
TOTAL_SUITES=$((TOTAL_SUITES + 1))
|
||||
|
||||
echo "=========================================="
|
||||
echo "Running $test_name ($test_file)"
|
||||
echo "=========================================="
|
||||
|
||||
if [[ ! -f "$test_file" ]]; then
|
||||
echo -e "${RED}ERROR: Test file $test_file not found${NC}"
|
||||
FAILED_SUITES=$((FAILED_SUITES + 1))
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Make script executable if not already
|
||||
chmod +x "$test_file"
|
||||
|
||||
# Run the test
|
||||
if bash "$test_file"; then
|
||||
echo -e "${GREEN}✓ $test_name PASSED${NC}"
|
||||
PASSED_SUITES=$((PASSED_SUITES + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${RED}✗ $test_name FAILED${NC}"
|
||||
FAILED_SUITES=$((FAILED_SUITES + 1))
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to check relay connectivity
|
||||
check_relay() {
|
||||
echo "Checking relay connectivity at ws://$RELAY_HOST:$RELAY_PORT..."
|
||||
|
||||
if timeout 5 bash -c "
|
||||
echo 'ping' | websocat -n1 ws://$RELAY_HOST:$RELAY_PORT >/dev/null 2>&1
|
||||
" 2>/dev/null; then
|
||||
echo -e "${GREEN}✓ Relay is accessible${NC}"
|
||||
return 0
|
||||
else
|
||||
echo -e "${RED}✗ Cannot connect to relay${NC}"
|
||||
echo "Please start the relay first: ./make_and_restart_relay.sh"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
echo "=========================================="
|
||||
echo "C-Relay NIP Protocol Test Suite"
|
||||
echo "=========================================="
|
||||
echo "Testing NIP compliance against relay at ws://$RELAY_HOST:$RELAY_PORT"
|
||||
echo ""
|
||||
|
||||
# Check relay connectivity
|
||||
if ! check_relay; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo "Running NIP protocol tests..."
|
||||
echo ""
|
||||
|
||||
# Run all NIP tests
|
||||
for nip_test in "${NIP_TESTS[@]}"; do
|
||||
test_file="${nip_test%%:*}"
|
||||
test_name="${nip_test#*:}"
|
||||
|
||||
run_nip_test "$test_file" "$test_name"
|
||||
echo ""
|
||||
done
|
||||
|
||||
# Summary
|
||||
echo "=========================================="
|
||||
echo "NIP Test Summary"
|
||||
echo "=========================================="
|
||||
echo "Total NIP test suites: $TOTAL_SUITES"
|
||||
echo -e "Passed: ${GREEN}$PASSED_SUITES${NC}"
|
||||
echo -e "Failed: ${RED}$FAILED_SUITES${NC}"
|
||||
|
||||
if [[ $FAILED_SUITES -eq 0 ]]; then
|
||||
echo -e "${GREEN}✓ All NIP tests passed!${NC}"
|
||||
echo "The relay is fully NIP compliant."
|
||||
exit 0
|
||||
else
|
||||
echo -e "${RED}✗ Some NIP tests failed.${NC}"
|
||||
echo "The relay may have NIP compliance issues."
|
||||
exit 1
|
||||
fi
|
||||
242
tests/sql_injection_tests.sh
Executable file
242
tests/sql_injection_tests.sh
Executable file
@@ -0,0 +1,242 @@
|
||||
#!/bin/bash
|
||||
|
||||
# SQL Injection Test Suite for C-Relay
|
||||
# Comprehensive testing of SQL injection vulnerabilities across all filter types
|
||||
|
||||
set -e
|
||||
|
||||
# Configuration
|
||||
RELAY_HOST="127.0.0.1"
|
||||
RELAY_PORT="8888"
|
||||
TEST_TIMEOUT=10
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Test counters
|
||||
TOTAL_TESTS=0
|
||||
PASSED_TESTS=0
|
||||
FAILED_TESTS=0
|
||||
|
||||
# Function to send WebSocket message and check for SQL injection success
|
||||
test_sql_injection() {
|
||||
local description="$1"
|
||||
local message="$2"
|
||||
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
|
||||
echo -n "Testing $description... "
|
||||
|
||||
# Send message via websocat and capture response
|
||||
# For now, we'll test without authentication since the relay may not require it for basic queries
|
||||
local response
|
||||
response=$(timeout 5 bash -c "
|
||||
echo '$message' | websocat -B 1048576 --no-close ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null | head -3
|
||||
" 2>/dev/null || echo 'TIMEOUT')
|
||||
|
||||
# Check if the response indicates successful query execution (which would be bad)
|
||||
# Look for signs that SQL injection worked (like database errors or unexpected results)
|
||||
if [[ "$response" == *"SQL"* ]] || [[ "$response" == *"syntax"* ]] || [[ "$response" == *"error"* && ! "$response" == *"error: "* ]]; then
|
||||
echo -e "${RED}FAILED${NC} - Potential SQL injection vulnerability detected"
|
||||
echo " Response: $response"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
elif [[ "$response" == "TIMEOUT" ]]; then
|
||||
echo -e "${YELLOW}UNCERTAIN${NC} - Connection timeout (may indicate crash)"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
else
|
||||
echo -e "${GREEN}PASSED${NC} - SQL injection blocked"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to test valid message (should work normally)
|
||||
test_valid_query() {
|
||||
local description="$1"
|
||||
local message="$2"
|
||||
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
|
||||
echo -n "Testing $description... "
|
||||
|
||||
local response
|
||||
response=$(timeout 5 bash -c "
|
||||
echo '$message' | websocat -B 1048576 --no-close ws://$RELAY_HOST:$RELAY_PORT 2>/dev/null | head -3
|
||||
" 2>/dev/null || echo 'TIMEOUT')
|
||||
|
||||
if [[ "$response" == *"EOSE"* ]] || [[ "$response" == *"EVENT"* ]]; then
|
||||
echo -e "${GREEN}PASSED${NC} - Valid query works"
|
||||
PASSED_TESTS=$((PASSED_TESTS + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${RED}FAILED${NC} - Valid query failed: $response"
|
||||
FAILED_TESTS=$((FAILED_TESTS + 1))
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
echo "=========================================="
|
||||
echo "C-Relay SQL Injection Test Suite"
|
||||
echo "=========================================="
|
||||
echo "Testing against relay at ws://$RELAY_HOST:$RELAY_PORT"
|
||||
echo
|
||||
|
||||
# Test basic connectivity first
|
||||
echo "=== Basic Connectivity Test ==="
|
||||
test_valid_query "Basic connectivity" '["REQ","basic_test",{}]'
|
||||
echo
|
||||
|
||||
# SQL Injection Payloads
|
||||
SQL_PAYLOADS=(
|
||||
# Classic SQL injection
|
||||
"'; DROP TABLE events; --"
|
||||
"1' OR '1'='1"
|
||||
"1; SELECT * FROM sqlite_master; --"
|
||||
"' OR 1=1 --"
|
||||
"admin'--"
|
||||
"1' UNION SELECT password FROM users --"
|
||||
|
||||
# Time-based blind SQL injection
|
||||
"1' AND SLEEP(5) --"
|
||||
"1' AND 1=1 WAITFOR DELAY '0:0:5' --"
|
||||
|
||||
# Error-based SQL injection
|
||||
"1' AND 1=CAST((SELECT version()) AS INT) --"
|
||||
"1' AND 1=CONVERT(INT, (SELECT @@VERSION)) --"
|
||||
|
||||
# Union-based injection
|
||||
"' UNION SELECT NULL,NULL,NULL --"
|
||||
"' UNION SELECT 1,2,3 --"
|
||||
"' UNION ALL SELECT NULL,NULL,NULL --"
|
||||
|
||||
# Stacked queries
|
||||
"'; SELECT * FROM events; --"
|
||||
"'; DELETE FROM events; --"
|
||||
"'; UPDATE events SET content='hacked' WHERE 1=1; --"
|
||||
|
||||
# Comment injection
|
||||
"/*"
|
||||
"*/"
|
||||
"/**/"
|
||||
"--"
|
||||
"#"
|
||||
|
||||
# Hex encoded injection
|
||||
"0x53514C5F494E4A454354494F4E" # SQL_INJECTION in hex
|
||||
|
||||
# Base64 encoded injection
|
||||
"J1NSTCBJTkpFQ1RJT04gLS0=" # 'SQL INJECTION -- in base64
|
||||
|
||||
# Nested injection
|
||||
"'))); DROP TABLE events; --"
|
||||
"')) UNION SELECT NULL; --"
|
||||
|
||||
# Boolean-based blind injection
|
||||
"' AND 1=1 --"
|
||||
"' AND 1=2 --"
|
||||
"' AND (SELECT COUNT(*) FROM events) > 0 --"
|
||||
|
||||
# Out-of-band injection (if supported)
|
||||
"'; EXEC master..xp_cmdshell 'net user' --"
|
||||
"'; DECLARE @host varchar(1024); SELECT @host=(SELECT TOP 1 master..sys.fn_varbintohexstr(password_hash) FROM sys.sql_logins WHERE name='sa'); --"
|
||||
)
|
||||
|
||||
echo "=== Authors Filter SQL Injection Tests ==="
|
||||
for payload in "${SQL_PAYLOADS[@]}"; do
|
||||
test_sql_injection "Authors filter with payload: $payload" "[\"REQ\",\"sql_test_authors_$RANDOM\",{\"authors\":[\"$payload\"]}]"
|
||||
done
|
||||
echo
|
||||
|
||||
echo "=== IDs Filter SQL Injection Tests ==="
|
||||
for payload in "${SQL_PAYLOADS[@]}"; do
|
||||
test_sql_injection "IDs filter with payload: $payload" "[\"REQ\",\"sql_test_ids_$RANDOM\",{\"ids\":[\"$payload\"]}]"
|
||||
done
|
||||
echo
|
||||
|
||||
echo "=== Kinds Filter SQL Injection Tests ==="
|
||||
# Test numeric kinds with SQL injection
|
||||
test_sql_injection "Kinds filter with UNION injection" "[\"REQ\",\"sql_test_kinds_$RANDOM\",{\"kinds\":[0 UNION SELECT 1,2,3]}]"
|
||||
test_sql_injection "Kinds filter with stacked query" "[\"REQ\",\"sql_test_kinds_$RANDOM\",{\"kinds\":[0; DROP TABLE events; --]}]"
|
||||
echo
|
||||
|
||||
echo "=== Search Filter SQL Injection Tests ==="
|
||||
for payload in "${SQL_PAYLOADS[@]}"; do
|
||||
test_sql_injection "Search filter with payload: $payload" "[\"REQ\",\"sql_test_search_$RANDOM\",{\"search\":\"$payload\"}]"
|
||||
done
|
||||
echo
|
||||
|
||||
echo "=== Tag Filter SQL Injection Tests ==="
|
||||
TAG_PREFIXES=("#e" "#p" "#t" "#r" "#d")
|
||||
for prefix in "${TAG_PREFIXES[@]}"; do
|
||||
for payload in "${SQL_PAYLOADS[@]}"; do
|
||||
test_sql_injection "$prefix tag filter with payload: $payload" "[\"REQ\",\"sql_test_tag_$RANDOM\",{\"$prefix\":[\"$payload\"]}]"
|
||||
done
|
||||
done
|
||||
echo
|
||||
|
||||
echo "=== Timestamp Filter SQL Injection Tests ==="
|
||||
# Test since/until parameters
|
||||
test_sql_injection "Since parameter injection" "[\"REQ\",\"sql_test_since_$RANDOM\",{\"since\":\"1' OR '1'='1\"}]"
|
||||
test_sql_injection "Until parameter injection" "[\"REQ\",\"sql_test_until_$RANDOM\",{\"until\":\"1; DROP TABLE events; --\"}]"
|
||||
echo
|
||||
|
||||
echo "=== Limit Parameter SQL Injection Tests ==="
|
||||
test_sql_injection "Limit parameter injection" "[\"REQ\",\"sql_test_limit_$RANDOM\",{\"limit\":\"1' OR '1'='1\"}]"
|
||||
test_sql_injection "Limit with UNION" "[\"REQ\",\"sql_test_limit_$RANDOM\",{\"limit\":\"0 UNION SELECT password FROM users\"}]"
|
||||
echo
|
||||
|
||||
echo "=== Complex Multi-Filter SQL Injection Tests ==="
|
||||
# Test combinations that might bypass validation
|
||||
test_sql_injection "Multi-filter with authors injection" "[\"REQ\",\"sql_test_multi_$RANDOM\",{\"authors\":[\"admin'--\"],\"kinds\":[1],\"search\":\"anything\"}]"
|
||||
test_sql_injection "Multi-filter with search injection" "[\"REQ\",\"sql_test_multi_$RANDOM\",{\"authors\":[\"valid\"],\"search\":\"'; DROP TABLE events; --\"}]"
|
||||
test_sql_injection "Multi-filter with tag injection" "[\"REQ\",\"sql_test_multi_$RANDOM\",{\"#e\":[\"'; SELECT * FROM sqlite_master; --\"],\"limit\":10}]"
|
||||
echo
|
||||
|
||||
echo "=== COUNT Message SQL Injection Tests ==="
|
||||
# Test COUNT messages which might have different code paths
|
||||
for payload in "${SQL_PAYLOADS[@]}"; do
|
||||
test_sql_injection "COUNT with authors payload: $payload" "[\"COUNT\",\"sql_count_authors_$RANDOM\",{\"authors\":[\"$payload\"]}]"
|
||||
test_sql_injection "COUNT with search payload: $payload" "[\"COUNT\",\"sql_count_search_$RANDOM\",{\"search\":\"$payload\"}]"
|
||||
done
|
||||
echo
|
||||
|
||||
echo "=== Edge Case SQL Injection Tests ==="
|
||||
# Test edge cases that might bypass validation
|
||||
test_sql_injection "Empty string injection" "[\"REQ\",\"sql_edge_$RANDOM\",{\"authors\":[\"\"]}]"
|
||||
test_sql_injection "Null byte injection" "[\"REQ\",\"sql_edge_$RANDOM\",{\"authors\":[\"admin\\x00' OR '1'='1\"]}]"
|
||||
test_sql_injection "Unicode injection" "[\"REQ\",\"sql_edge_$RANDOM\",{\"authors\":[\"admin' OR '1'='1' -- 💣\"]}]"
|
||||
test_sql_injection "Very long injection payload" "[\"REQ\",\"sql_edge_$RANDOM\",{\"search\":\"$(printf 'a%.0s' {1..1000})' OR '1'='1\"}]"
|
||||
echo
|
||||
|
||||
echo "=== Subscription ID SQL Injection Tests ==="
|
||||
# Test if subscription IDs can be used for injection
|
||||
test_sql_injection "Subscription ID injection" "[\"REQ\",\"'; DROP TABLE subscriptions; --\",{}]"
|
||||
test_sql_injection "Subscription ID with quotes" "[\"REQ\",\"sub\"'; SELECT * FROM events; --\",{}]"
|
||||
echo
|
||||
|
||||
echo "=== CLOSE Message SQL Injection Tests ==="
|
||||
# Test CLOSE messages
|
||||
test_sql_injection "CLOSE with injection" "[\"CLOSE\",\"'; DROP TABLE subscriptions; --\"]"
|
||||
echo
|
||||
|
||||
echo "=== Test Results ==="
|
||||
echo "Total tests: $TOTAL_TESTS"
|
||||
echo -e "Passed: ${GREEN}$PASSED_TESTS${NC}"
|
||||
echo -e "Failed: ${RED}$FAILED_TESTS${NC}"
|
||||
|
||||
if [[ $FAILED_TESTS -eq 0 ]]; then
|
||||
echo -e "${GREEN}✓ All SQL injection tests passed!${NC}"
|
||||
echo "The relay appears to be protected against SQL injection attacks."
|
||||
exit 0
|
||||
else
|
||||
echo -e "${RED}✗ SQL injection vulnerabilities detected!${NC}"
|
||||
echo "The relay may be vulnerable to SQL injection attacks."
|
||||
echo "Failed tests: $FAILED_TESTS"
|
||||
exit 1
|
||||
fi
|
||||
34
tests/subscription_validation.sh
Executable file
34
tests/subscription_validation.sh
Executable file
@@ -0,0 +1,34 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Test script to validate subscription ID handling fixes
|
||||
# This tests the memory corruption fixes in subscription handling
|
||||
|
||||
echo "Testing subscription ID validation fixes..."
|
||||
|
||||
# Test malformed subscription IDs
|
||||
echo "Testing malformed subscription IDs..."
|
||||
|
||||
# Test 1: Empty subscription ID
|
||||
echo '["REQ","",{}]' | timeout 5 wscat -c ws://localhost:8888 2>/dev/null || echo "Empty ID test: Connection failed (expected)"
|
||||
|
||||
# Test 2: Very long subscription ID (over 64 chars)
|
||||
echo '["REQ","verylongsubscriptionidthatshouldexceedthemaximumlengthlimitof64characters",{}]' | timeout 5 wscat -c ws://localhost:8888 2>/dev/null || echo "Long ID test: Connection failed (expected)"
|
||||
|
||||
# Test 3: Subscription ID with invalid characters
|
||||
echo '["REQ","sub@123",{}]' | timeout 5 wscat -c ws://localhost:8888 2>/dev/null || echo "Invalid chars test: Connection failed (expected)"
|
||||
|
||||
# Test 4: NULL subscription ID (this should be caught by JSON parsing)
|
||||
echo '["REQ",null,{}]' | timeout 5 wscat -c ws://localhost:8888 2>/dev/null || echo "NULL ID test: Connection failed (expected)"
|
||||
|
||||
# Test 5: Valid subscription ID (should work)
|
||||
echo '["REQ","valid_sub_123",{}]' | timeout 5 wscat -c ws://localhost:8888 2>/dev/null && echo "Valid ID test: Success" || echo "Valid ID test: Failed"
|
||||
|
||||
echo "Testing CLOSE message validation..."
|
||||
|
||||
# Test 6: CLOSE with malformed subscription ID
|
||||
echo '["CLOSE",""]' | timeout 5 wscat -c ws://localhost:8888 2>/dev/null || echo "CLOSE empty ID test: Connection failed (expected)"
|
||||
|
||||
# Test 7: CLOSE with valid subscription ID
|
||||
echo '["CLOSE","valid_sub_123"]' | timeout 5 wscat -c ws://localhost:8888 2>/dev/null && echo "CLOSE valid ID test: Success" || echo "CLOSE valid ID test: Failed"
|
||||
|
||||
echo "Subscription validation tests completed."
|
||||
Reference in New Issue
Block a user