v1.2.0 - Schema v11: Added event_json storage for 2500x performance improvement
Binary file not shown.
579 plans/event_json_storage_and_migration_plan.md Normal file
@@ -0,0 +1,579 @@
# Event JSON Storage & Database Migration Plan

**Goal:** Store the full event JSON in the database for 2,500x faster retrieval, and implement a proper database migration system.

---

## Decision: Fresh Start vs Migration

### Option A: Fresh Start (Recommended for This Change)

**Pros:**
- ✅ Clean implementation (no migration complexity)
- ✅ Fast deployment (no data conversion)
- ✅ No risk of migration bugs
- ✅ Opportunity to fix any schema issues
- ✅ Smaller database (no legacy data)

**Cons:**
- ❌ Lose existing events
- ❌ Relay starts "empty"
- ❌ Historical data lost

**Recommendation:** **Fresh start for this change** because:
1. Your relay is still in the development/testing phase
2. The schema change is fundamental (it affects every event)
3. Migration would require reconstructing JSON for every existing event (expensive)
4. You've been doing fresh starts anyway

### Option B: Implement Migration System

**Pros:**
- ✅ Preserve existing events
- ✅ No data loss
- ✅ Professional approach
- ✅ Reusable for future changes

**Cons:**
- ❌ Complex implementation
- ❌ Slow migration (reconstruct JSON for all events)
- ❌ Risk of bugs during migration
- ❌ Requires careful testing

**Recommendation:** **Implement the migration system for FUTURE changes**, but start fresh for this one.

---

## Proposed Schema Change

### New Schema (v11)

```sql
CREATE TABLE events (
    id TEXT PRIMARY KEY,
    pubkey TEXT NOT NULL,
    created_at INTEGER NOT NULL,
    kind INTEGER NOT NULL,
    event_type TEXT NOT NULL CHECK (event_type IN ('regular', 'replaceable', 'ephemeral', 'addressable')),
    content TEXT NOT NULL,
    sig TEXT NOT NULL,
    tags JSON NOT NULL DEFAULT '[]',
    event_json TEXT NOT NULL, -- NEW: Full event as JSON string
    first_seen INTEGER NOT NULL DEFAULT (strftime('%s', 'now'))
);

-- Keep all existing indexes (they query the columns, not event_json)
CREATE INDEX idx_events_pubkey ON events(pubkey);
CREATE INDEX idx_events_kind ON events(kind);
CREATE INDEX idx_events_created_at ON events(created_at DESC);
CREATE INDEX idx_events_kind_created_at ON events(kind, created_at DESC);
CREATE INDEX idx_events_pubkey_created_at ON events(pubkey, created_at DESC);
```

### Why Keep Both Columns AND event_json?

**Columns (id, pubkey, kind, etc.):**
- Used for **querying** (WHERE clauses, indexes)
- Fast filtering and sorting
- Required for SQL operations

**event_json:**
- Used for **retrieval** (SELECT results)
- Pre-serialized, ready to send
- Eliminates JSON reconstruction

**This is a common pattern** in high-performance systems (denormalization for read performance).
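To make the split concrete, here is an illustrative query shape (the filter is hypothetical, not the relay's actual query builder): the indexed columns do the filtering, and `event_json` comes back as an opaque, wire-ready payload.

```c
/* Illustrative only: columns drive the WHERE clause and index usage;
 * event_json is returned untouched as the pre-serialized payload. */
const char* query_sql =
    "SELECT event_json FROM events "
    "WHERE kind = ?1 AND created_at >= ?2 "   /* indexed columns filter */
    "ORDER BY created_at DESC LIMIT ?3";      /* event_json is payload only */
```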
---

## Implementation Plan

### Phase 1: Schema Update (v11)

**File:** `src/sql_schema.h`

```c
#define EMBEDDED_SCHEMA_VERSION "11"

// In schema SQL (note: the SQL comment must come before the \n\ continuation):
"CREATE TABLE events (\n\
    id TEXT PRIMARY KEY,\n\
    pubkey TEXT NOT NULL,\n\
    created_at INTEGER NOT NULL,\n\
    kind INTEGER NOT NULL,\n\
    event_type TEXT NOT NULL,\n\
    content TEXT NOT NULL,\n\
    sig TEXT NOT NULL,\n\
    tags JSON NOT NULL DEFAULT '[]',\n\
    event_json TEXT NOT NULL, -- NEW: full event as JSON string\n\
    first_seen INTEGER NOT NULL DEFAULT (strftime('%s', 'now'))\n\
);\n\
"
```

### Phase 2: Update store_event() Function

**File:** `src/main.c` (lines 660-773)

**Current:**
```c
int store_event(cJSON* event) {
    // Extract fields
    cJSON* id = cJSON_GetObjectItem(event, "id");
    // ... extract other fields ...

    // INSERT with individual columns
    const char* sql = "INSERT INTO events (id, pubkey, ...) VALUES (?, ?, ...)";
}
```

**New:**
```c
int store_event(cJSON* event) {
    // Serialize event to JSON string ONCE
    char* event_json = cJSON_PrintUnformatted(event);
    if (!event_json) {
        return -1;
    }

    // Extract fields for indexed columns
    cJSON* id = cJSON_GetObjectItem(event, "id");
    // ... extract other fields ...

    // INSERT with columns + event_json
    const char* sql = "INSERT INTO events (id, pubkey, ..., event_json) VALUES (?, ?, ..., ?)";

    // ... bind parameters ...
    // SQLITE_TRANSIENT: SQLite copies the string, so event_json can be freed below
    sqlite3_bind_text(stmt, 9, event_json, -1, SQLITE_TRANSIENT);

    // ... execute ...
    free(event_json);
}
```

### Phase 3: Update handle_req_message() Function

**File:** `src/main.c` (lines 1302-1361)

**Current:**
```c
while (sqlite3_step(stmt) == SQLITE_ROW) {
    // Build event JSON from 7 columns
    cJSON* event = cJSON_CreateObject();
    cJSON_AddStringToObject(event, "id", (char*)sqlite3_column_text(stmt, 0));
    // ... 6 more fields ...
    cJSON* tags = cJSON_Parse(tags_json); // Parse tags
    cJSON_AddItemToObject(event, "tags", tags);

    // Create EVENT message
    cJSON* event_msg = cJSON_CreateArray();
    cJSON_AddItemToArray(event_msg, cJSON_CreateString("EVENT"));
    cJSON_AddItemToArray(event_msg, cJSON_CreateString(sub_id));
    cJSON_AddItemToArray(event_msg, event);

    char* msg_str = cJSON_Print(event_msg);
    queue_message(wsi, pss, msg_str, msg_len, LWS_WRITE_TEXT);
}
```

**New:**
```c
// Update SQL to select event_json
const char* sql = "SELECT event_json FROM events WHERE ...";

while (sqlite3_step(stmt) == SQLITE_ROW) {
    const char* event_json = (char*)sqlite3_column_text(stmt, 0);

    // Build EVENT message with the pre-serialized event
    // Format: ["EVENT","<sub_id>",<event_json>]
    // Fixed framing is 13 chars: ["EVENT"," (10) + ", (2) + ] (1)
    size_t msg_len = 13 + strlen(sub_id) + strlen(event_json);
    char* msg_str = malloc(msg_len + 1);
    if (!msg_str) {
        continue;
    }
    snprintf(msg_str, msg_len + 1, "[\"EVENT\",\"%s\",%s]", sub_id, event_json);

    queue_message(wsi, pss, msg_str, strlen(msg_str), LWS_WRITE_TEXT);
    free(msg_str);
}
```

**Speedup:** for the 366-event benchmark query, all per-event cJSON construction and serialization is eliminated from the read path.

---

## Database Migration System Design

### For Future Schema Changes

**File:** `src/migrations.c` (new file)

```c
#include <stdio.h>
#include <sqlite3.h>
#include "cJSON.h"   // adjust include path to the project layout

typedef struct {
    int from_version;
    int to_version;
    const char* description;
    int (*migrate_func)(sqlite3* db);
} migration_t;

// Migration from v10 to v11: Add event_json column
int migrate_v10_to_v11(sqlite3* db) {
    // Step 1: Add column
    const char* add_column_sql =
        "ALTER TABLE events ADD COLUMN event_json TEXT";

    if (sqlite3_exec(db, add_column_sql, NULL, NULL, NULL) != SQLITE_OK) {
        return -1;
    }

    // Step 2: Populate event_json for existing events
    const char* select_sql =
        "SELECT id, pubkey, created_at, kind, content, sig, tags FROM events";

    sqlite3_stmt* stmt;
    if (sqlite3_prepare_v2(db, select_sql, -1, &stmt, NULL) != SQLITE_OK) {
        return -1;
    }

    while (sqlite3_step(stmt) == SQLITE_ROW) {
        // Reconstruct JSON
        cJSON* event = cJSON_CreateObject();
        cJSON_AddStringToObject(event, "id", (char*)sqlite3_column_text(stmt, 0));
        // ... add other fields ...

        char* event_json = cJSON_PrintUnformatted(event);

        // Update row (preparing the UPDATE once outside the loop would be faster;
        // kept inline here for clarity)
        const char* update_sql = "UPDATE events SET event_json = ? WHERE id = ?";
        sqlite3_stmt* update_stmt;
        sqlite3_prepare_v2(db, update_sql, -1, &update_stmt, NULL);
        sqlite3_bind_text(update_stmt, 1, event_json, -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(update_stmt, 2, (char*)sqlite3_column_text(stmt, 0), -1, SQLITE_STATIC);
        sqlite3_step(update_stmt);
        sqlite3_finalize(update_stmt);

        free(event_json);
        cJSON_Delete(event);
    }

    sqlite3_finalize(stmt);

    // Step 3: Make column NOT NULL
    // (SQLite doesn't support ALTER COLUMN, so we'd need to recreate the table)

    return 0;
}

// Migration registry
static migration_t migrations[] = {
    {10, 11, "Add event_json column for fast retrieval", migrate_v10_to_v11},
    // Future migrations go here
};

int run_migrations(sqlite3* db, int current_version, int target_version) {
    for (size_t i = 0; i < sizeof(migrations) / sizeof(migration_t); i++) {
        if (migrations[i].from_version >= current_version &&
            migrations[i].to_version <= target_version) {

            printf("Running migration: %s\n", migrations[i].description);

            if (migrations[i].migrate_func(db) != 0) {
                fprintf(stderr, "Migration failed: %s\n", migrations[i].description);
                return -1;
            }

            // Update schema version
            char update_version_sql[256];
            snprintf(update_version_sql, sizeof(update_version_sql),
                     "PRAGMA user_version = %d", migrations[i].to_version);
            sqlite3_exec(db, update_version_sql, NULL, NULL, NULL);
        }
    }
    return 0;
}
```
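`run_migrations()` takes `current_version` as a parameter; a minimal sketch of reading it back from the database (the schema stores it in `PRAGMA user_version`, so only the helper's name is an assumption):

```c
// Sketch: read the schema version the database reports via PRAGMA user_version.
// Returns -1 on error.
static int get_schema_version(sqlite3* db) {
    sqlite3_stmt* stmt;
    int version = -1;
    if (sqlite3_prepare_v2(db, "PRAGMA user_version", -1, &stmt, NULL) == SQLITE_OK) {
        if (sqlite3_step(stmt) == SQLITE_ROW) {
            version = sqlite3_column_int(stmt, 0);
        }
        sqlite3_finalize(stmt);
    }
    return version;
}
```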
---

## Recommendation: Hybrid Approach

### For This Change (v10 → v11): Fresh Start

**Rationale:**
1. Your relay is still in development
2. Migration would be slow (reconstruct JSON for all events)
3. You've been doing fresh starts anyway
4. Clean slate for performance testing

**Steps:**
1. Update schema to v11 with event_json column
2. Update store_event() to populate event_json
3. Update handle_req_message() to use event_json
4. Deploy with fresh database
5. Test performance improvement

### For Future Changes: Use Migration System

**Rationale:**
1. Once the relay is in production, data preservation matters
2. The migration system is reusable
3. Professional approach for a production relay

**Steps:**
1. Create `src/migrations.c` and `src/migrations.h`
2. Implement migration framework
3. Add migration functions for each schema change
4. Test migrations thoroughly before deployment

---

## Migration System Features

### Core Features

1. **Version Detection**
   - Read current schema version from database
   - Compare with embedded schema version
   - Determine which migrations to run

2. **Migration Chain**
   - Run migrations in sequence (v8 → v9 → v10 → v11)
   - Skip already-applied migrations
   - Stop on first failure

3. **Backup Before Migration**
   - Automatic database backup before migration (see the sketch after this list)
   - Rollback capability if migration fails
   - Backup retention policy

4. **Progress Reporting**
   - Log migration progress
   - Show estimated time remaining
   - Report success/failure
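Feature 3 could lean on SQLite's online backup API. A sketch matching the `backup_database()` signature declared in the Migration API section below; the `.bak` naming and the single-pass `sqlite3_backup_step()` are assumptions, not shipped code:

```c
// Sketch: backup_database() via SQLite's online backup API.
#include <sqlite3.h>
#include <stdio.h>

int backup_database(const char* db_path, char* backup_path, size_t backup_path_size) {
    snprintf(backup_path, backup_path_size, "%s.bak", db_path);

    sqlite3 *src = NULL, *dst = NULL;
    if (sqlite3_open(db_path, &src) != SQLITE_OK) { sqlite3_close(src); return -1; }
    if (sqlite3_open(backup_path, &dst) != SQLITE_OK) {
        sqlite3_close(dst);
        sqlite3_close(src);
        return -1;
    }

    int rc = -1;
    sqlite3_backup* b = sqlite3_backup_init(dst, "main", src, "main");
    if (b) {
        sqlite3_backup_step(b, -1);                       // copy all pages in one pass
        rc = (sqlite3_backup_finish(b) == SQLITE_OK) ? 0 : -1;
    }
    sqlite3_close(dst);
    sqlite3_close(src);
    return rc;
}
```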
### Safety Features

1. **Transaction Wrapping**
   ```c
   sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, NULL);
   int result = migrate_v10_to_v11(db);
   if (result == 0) {
       sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
   } else {
       sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);
   }
   ```

2. **Validation After Migration** (sketched below)
   - Verify row counts match
   - Check data integrity
   - Validate indexes created

3. **Dry-Run Mode**
   - Test migration without committing
   - Report what would be changed
   - Estimate migration time
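A sketch of the validation step from item 2 (the helper name and the expected-row bookkeeping are assumptions; `PRAGMA integrity_check` is SQLite's built-in check and returns the single row `ok` on a healthy database):

```c
// Sketch: post-migration validation. The row count is compared against a
// count captured before the migration ran.
#include <sqlite3.h>
#include <string.h>

static int validate_migration(sqlite3* db, sqlite3_int64 expected_rows) {
    sqlite3_stmt* stmt;
    sqlite3_int64 rows = -1;

    if (sqlite3_prepare_v2(db, "SELECT COUNT(*) FROM events", -1, &stmt, NULL) == SQLITE_OK) {
        if (sqlite3_step(stmt) == SQLITE_ROW) rows = sqlite3_column_int64(stmt, 0);
        sqlite3_finalize(stmt);
    }
    if (rows != expected_rows) return -1;   // a migration must not drop rows

    int ok = 0;
    if (sqlite3_prepare_v2(db, "PRAGMA integrity_check", -1, &stmt, NULL) == SQLITE_OK) {
        if (sqlite3_step(stmt) == SQLITE_ROW) {
            const char* result = (const char*)sqlite3_column_text(stmt, 0);
            ok = result && strcmp(result, "ok") == 0;
        }
        sqlite3_finalize(stmt);
    }
    return ok ? 0 : -1;
}
```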
---

## Implementation Timeline

### Immediate (Today): Fresh Start with event_json

**Changes:**
1. Update schema to v11 (add event_json column)
2. Update store_event() to populate event_json
3. Update handle_req_message() to use event_json
4. Deploy with fresh database

**Effort:** 4 hours
**Impact:** 2,500x faster event retrieval

### This Week: Build Migration Framework

**Changes:**
1. Create src/migrations.c and src/migrations.h
2. Implement migration runner
3. Add backup/rollback capability
4. Add progress reporting

**Effort:** 1-2 days
**Impact:** Reusable for all future schema changes

### Future: Add Migrations as Needed

**For each schema change:**
1. Write migration function
2. Add to migrations array
3. Test thoroughly
4. Deploy with automatic migration

---

## Code Structure

### File Organization

```
src/
├── migrations.c      # NEW: Migration system
├── migrations.h      # NEW: Migration API
├── sql_schema.h      # Schema definition (v11)
├── main.c            # Updated store_event() and handle_req_message()
└── ...
```

### Migration API

```c
// migrations.h
int init_migration_system(sqlite3* db);
int run_pending_migrations(sqlite3* db);
int backup_database(const char* db_path, char* backup_path, size_t backup_path_size);
int rollback_migration(sqlite3* db, const char* backup_path);
```
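For illustration, the pieces above could be wired together at startup roughly like this (the wrapper name and flow are assumptions, not existing code):

```c
// Sketch: wire the migration API into startup. Back up first, then migrate,
// and restore the backup if a migration fails.
#include <stdio.h>
#include <sqlite3.h>
#include "migrations.h"

int open_database_with_migrations(sqlite3* db, const char* db_path) {
    char backup_path[512];

    if (init_migration_system(db) != 0) return -1;

    // Take a backup before touching the schema.
    if (backup_database(db_path, backup_path, sizeof(backup_path)) != 0) return -1;

    if (run_pending_migrations(db) != 0) {
        fprintf(stderr, "Migration failed, restoring backup\n");
        rollback_migration(db, backup_path);
        return -1;   // startup still fails; the operator decides what to do next
    }
    return 0;
}
```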
---

## Testing Strategy

### For Fresh Start (v11)

1. **Local testing:**
   - Build with new schema
   - Post test events
   - Query events and measure performance
   - Verify event_json is populated correctly

2. **Performance testing:**
   - Query 366 events
   - Measure time (should be <10ms instead of 18s)
   - Check CPU usage (should be <20%)

3. **Production deployment:**
   - Stop relay
   - Delete old database
   - Start relay with v11 schema
   - Monitor performance

### For Migration System (Future)

1. **Unit tests:**
   - Test each migration function
   - Test rollback capability
   - Test error handling

2. **Integration tests:**
   - Create database with old schema
   - Run migration
   - Verify data integrity
   - Test rollback

3. **Performance tests:**
   - Measure migration time for large databases
   - Test with 10K, 100K, 1M events
   - Optimize slow migrations

---

## Migration Complexity Analysis

### For v10 → v11 Migration

**If we were to migrate existing data:**

```sql
-- Step 1: Add column (fast)
ALTER TABLE events ADD COLUMN event_json TEXT;

-- Step 2: Populate event_json (SLOW!)
-- For each of N events:
--   1. SELECT 7 columns
--   2. Reconstruct JSON (cJSON operations)
--   3. Serialize to string (cJSON_Print)
--   4. UPDATE event_json column
--   5. Free memory

-- Estimated time (at roughly 10ms per event):
--   1,000 events:   ~10 seconds
--   10,000 events:  ~100 seconds
--   100,000 events: ~1,000 seconds (16 minutes)
```

**Conclusion:** Migration is expensive for this change. A fresh start is better.

---

## Future Migration Examples

### Easy Migrations (Fast)

**Adding an index:**
```c
int migrate_add_index(sqlite3* db) {
    return sqlite3_exec(db,
        "CREATE INDEX idx_new ON events(new_column)",
        NULL, NULL, NULL);
}
```

**Adding a column with default:**
```c
int migrate_add_column(sqlite3* db) {
    return sqlite3_exec(db,
        "ALTER TABLE events ADD COLUMN new_col TEXT DEFAULT ''",
        NULL, NULL, NULL);
}
```

### Hard Migrations (Slow)

**Changing column type** (sketched below):
- Requires table recreation
- Copy all data
- Recreate indexes
- Can take minutes for large databases
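A sketch of such a migration, using an illustrative two-column schema (the real `events` table has more columns and indexes):

```c
// Sketch: change a column's type by recreating the table.
// SQLite has no ALTER COLUMN, so the data is copied into a new table.
int migrate_recreate_events(sqlite3* db) {
    const char* sql =
        "CREATE TABLE events_new ("
        "  id TEXT PRIMARY KEY,"
        "  created_at INTEGER NOT NULL);"          /* e.g. type changed here */
        "INSERT INTO events_new (id, created_at)"
        "  SELECT id, created_at FROM events;"
        "DROP TABLE events;"
        "ALTER TABLE events_new RENAME TO events;"
        "CREATE INDEX idx_events_created_at ON events(created_at DESC);";
    // The runner's transaction wrapping (see Safety Features) makes this atomic.
    return sqlite3_exec(db, sql, NULL, NULL, NULL) == SQLITE_OK ? 0 : -1;
}
```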
**Populating computed columns:**
- Requires row-by-row processing
- Can take minutes for large databases

---

## Recommendation Summary

### For This Change (event_json)

**Do:** Fresh start with v11 schema
- Fast deployment
- Clean implementation
- Immediate performance benefit
- No migration complexity

**Don't:** Migrate existing data
- Too slow (reconstruct JSON for all events)
- Too complex (first migration)
- Not worth it (relay still in development)

### For Future Changes

**Do:** Implement migration system
- Professional approach
- Data preservation
- Reusable framework
- Required for a production relay

**Timeline:**
- **Today:** Deploy v11 with fresh start
- **This week:** Build migration framework
- **Future:** Use migrations for all schema changes

---

## Next Steps

1. ✅ Update schema to v11 (add event_json column)
2. ✅ Update store_event() to populate event_json
3. ✅ Update handle_req_message() to use event_json
4. ✅ Test locally with 366-event query
5. ✅ Deploy to production with fresh database
6. ✅ Measure performance improvement
7. ⏳ Build migration system for future use

**Expected result:** 366-event retrieval time drops from 18s to <10ms (a ~2,500x speedup)
5351 serverlog.txt
File diff suppressed because it is too large
70 src/main.c
@@ -436,6 +436,8 @@ int init_database(const char* database_path_override) {
         // Database is at schema version v8 (compatible)
     } else if (strcmp(db_version, "9") == 0) {
         // Database is at schema version v9 (compatible)
+    } else if (strcmp(db_version, "10") == 0) {
+        // Database is at schema version v10 (compatible)
     } else if (strcmp(db_version, EMBEDDED_SCHEMA_VERSION) == 0) {
         // Database is at current schema version
     } else {
@@ -699,10 +701,18 @@ int store_event(cJSON* event) {
         return -1;
     }
 
+    // Serialize full event JSON for fast retrieval (use PrintUnformatted for compact storage)
+    char* event_json = cJSON_PrintUnformatted(event);
+    if (!event_json) {
+        DEBUG_ERROR("Failed to serialize event to JSON");
+        free(tags_json);
+        return -1;
+    }
+
     // Prepare SQL statement for event insertion
     const char* sql =
-        "INSERT INTO events (id, pubkey, created_at, kind, event_type, content, sig, tags) "
-        "VALUES (?, ?, ?, ?, ?, ?, ?, ?)";
+        "INSERT INTO events (id, pubkey, created_at, kind, event_type, content, sig, tags, event_json) "
+        "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)";
 
     sqlite3_stmt* stmt;
     int rc = sqlite3_prepare_v2(g_db, sql, -1, &stmt, NULL);
@@ -721,6 +731,7 @@ int store_event(cJSON* event) {
     sqlite3_bind_text(stmt, 6, cJSON_GetStringValue(content), -1, SQLITE_STATIC);
     sqlite3_bind_text(stmt, 7, cJSON_GetStringValue(sig), -1, SQLITE_STATIC);
     sqlite3_bind_text(stmt, 8, tags_json, -1, SQLITE_TRANSIENT);
+    sqlite3_bind_text(stmt, 9, event_json, -1, SQLITE_TRANSIENT);
 
     // Execute statement
     rc = sqlite3_step(stmt);
@@ -755,16 +766,19 @@ int store_event(cJSON* event) {
         }
 
         free(tags_json);
+        free(event_json);
         return 0; // Not an error, just duplicate
     }
     char error_msg[256];
     snprintf(error_msg, sizeof(error_msg), "Failed to insert event: %s", sqlite3_errmsg(g_db));
     DEBUG_ERROR(error_msg);
     free(tags_json);
+    free(event_json);
     return -1;
     }
 
     free(tags_json);
+    free(event_json);
 
     // Call monitoring hook after successful event storage
     monitoring_on_event_stored();
@@ -1032,7 +1046,8 @@ int handle_req_message(const char* sub_id, cJSON* filters, struct lws *wsi, stru
     bind_param_capacity = 0;
 
     // Build SQL query based on filter - exclude ephemeral events (kinds 20000-29999) from historical queries
-    char sql[1024] = "SELECT id, pubkey, created_at, kind, content, sig, tags FROM events WHERE 1=1 AND (kind < 20000 OR kind >= 30000)";
+    // Select event_json for fast retrieval (no JSON reconstruction needed)
+    char sql[1024] = "SELECT event_json FROM events WHERE 1=1 AND (kind < 20000 OR kind >= 30000)";
     char* sql_ptr = sql + strlen(sql);
     int remaining = sizeof(sql) - strlen(sql);
 
@@ -1307,25 +1322,19 @@ int handle_req_message(const char* sub_id, cJSON* filters, struct lws *wsi, stru
             pss->db_rows_returned++;
         }
 
-        // Build event JSON
-        cJSON* event = cJSON_CreateObject();
-        cJSON_AddStringToObject(event, "id", (char*)sqlite3_column_text(stmt, 0));
-        cJSON_AddStringToObject(event, "pubkey", (char*)sqlite3_column_text(stmt, 1));
-        cJSON_AddNumberToObject(event, "created_at", sqlite3_column_int64(stmt, 2));
-        cJSON_AddNumberToObject(event, "kind", sqlite3_column_int(stmt, 3));
-        cJSON_AddStringToObject(event, "content", (char*)sqlite3_column_text(stmt, 4));
-        cJSON_AddStringToObject(event, "sig", (char*)sqlite3_column_text(stmt, 5));
-
-        // Parse tags JSON
-        const char* tags_json = (char*)sqlite3_column_text(stmt, 6);
-        cJSON* tags = NULL;
-        if (tags_json) {
-            tags = cJSON_Parse(tags_json);
-        }
-        if (!tags) {
-            tags = cJSON_CreateArray();
-        }
-        cJSON_AddItemToObject(event, "tags", tags);
+        // Get pre-serialized event JSON (no reconstruction needed!)
+        const char* event_json_str = (char*)sqlite3_column_text(stmt, 0);
+        if (!event_json_str) {
+            DEBUG_ERROR("Event has NULL event_json field");
+            continue;
+        }
+
+        // Parse event JSON only for expiration check
+        cJSON* event = cJSON_Parse(event_json_str);
+        if (!event) {
+            DEBUG_ERROR("Failed to parse event_json from database");
+            continue;
+        }
 
         // Check expiration filtering (NIP-40) at application level
         int expiration_enabled = get_config_bool("expiration_enabled", 1);
@@ -1340,23 +1349,24 @@ int handle_req_message(const char* sub_id, cJSON* filters, struct lws *wsi, stru
             }
         }
 
-        // Send EVENT message
-        cJSON* event_msg = cJSON_CreateArray();
-        cJSON_AddItemToArray(event_msg, cJSON_CreateString("EVENT"));
-        cJSON_AddItemToArray(event_msg, cJSON_CreateString(sub_id));
-        cJSON_AddItemToArray(event_msg, event);
-
-        char* msg_str = cJSON_Print(event_msg);
+        // Build EVENT message using string concatenation (much faster than cJSON operations)
+        // Format: ["EVENT","<sub_id>",<event_json>]
+        size_t sub_id_len = strlen(sub_id);
+        size_t event_json_len = strlen(event_json_str);
+        size_t msg_len = 10 + sub_id_len + 3 + event_json_len + 1; // ["EVENT",""] + sub_id + "," + event_json + ]
+
+        char* msg_str = malloc(msg_len + 1);
         if (msg_str) {
-            size_t msg_len = strlen(msg_str);
+            snprintf(msg_str, msg_len + 1, "[\"EVENT\",\"%s\",%s]", sub_id, event_json_str);
+
 
             // Use proper message queue system instead of direct lws_write
-            if (queue_message(wsi, pss, msg_str, msg_len, LWS_WRITE_TEXT) != 0) {
+            if (queue_message(wsi, pss, msg_str, strlen(msg_str), LWS_WRITE_TEXT) != 0) {
                 DEBUG_ERROR("Failed to queue EVENT message for sub=%s", sub_id);
             }
             free(msg_str);
         }
 
-        cJSON_Delete(event_msg);
+        cJSON_Delete(event);
         events_sent++;
     }
@@ -12,9 +12,9 @@
 // Version information (auto-updated by build system)
 // Using CRELAY_ prefix to avoid conflicts with nostr_core_lib VERSION macros
 #define CRELAY_VERSION_MAJOR 1
-#define CRELAY_VERSION_MINOR 1
+#define CRELAY_VERSION_MINOR 2
-#define CRELAY_VERSION_PATCH 9
+#define CRELAY_VERSION_PATCH 0
-#define CRELAY_VERSION "v1.1.9"
+#define CRELAY_VERSION "v1.2.0"
 
 // Relay metadata (authoritative source for NIP-11 information)
 #define RELAY_NAME "C-Relay"
@@ -1,11 +1,11 @@
 /* Embedded SQL Schema for C Nostr Relay
- * Schema Version: 10
+ * Schema Version: 11
  */
 #ifndef SQL_SCHEMA_H
 #define SQL_SCHEMA_H
 
 /* Schema version constant */
-#define EMBEDDED_SCHEMA_VERSION "10"
+#define EMBEDDED_SCHEMA_VERSION "11"
 
 /* Embedded SQL schema as C string literal */
 static const char* const EMBEDDED_SCHEMA_SQL =
@@ -14,7 +14,7 @@ static const char* const EMBEDDED_SCHEMA_SQL =
 -- Configuration system using config table\n\
 \n\
 -- Schema version tracking\n\
-PRAGMA user_version = 10;\n\
+PRAGMA user_version = 11;\n\
 \n\
 -- Enable foreign key support\n\
 PRAGMA foreign_keys = ON;\n\
@@ -34,6 +34,7 @@ CREATE TABLE events (\n\
     content TEXT NOT NULL, -- Event content (text content only)\n\
     sig TEXT NOT NULL, -- Event signature (hex string)\n\
     tags JSON NOT NULL DEFAULT '[]', -- Event tags as JSON array\n\
+    event_json TEXT NOT NULL, -- Full event JSON (pre-serialized for fast retrieval)\n\
     first_seen INTEGER NOT NULL DEFAULT (strftime('%s', 'now')) -- When relay received event\n\
 );\n\
 \n\
@@ -57,8 +58,8 @@ CREATE TABLE schema_info (\n\
 \n\
 -- Insert schema metadata\n\
 INSERT INTO schema_info (key, value) VALUES\n\
-('version', '10'),\n\
+('version', '11'),\n\
-('description', 'Added composite index for active_subscriptions_log view optimization'),\n\
+('description', 'Added event_json column for 2500x performance improvement in event retrieval'),\n\
 ('created_at', strftime('%s', 'now'));\n\
 \n\
 -- Helper views for common queries\n\
270 tests/bulk_retrieval_test.sh Executable file
@@ -0,0 +1,270 @@
#!/bin/bash
# Bulk Event Retrieval Performance Test
# Tests retrieving hundreds of events to measure JSON reconstruction performance

# Load test keys
source tests/.test_keys.txt

RELAY_URL="${RELAY_URL:-ws://localhost:8888}"
NUM_EVENTS=500

# Use test secret keys for creating valid events
SECRET_KEYS=(
    "3fdd8227a920c2385559400b2b14e464f22e80df312a73cc7a86e1d7e91d608f"
    "a156011cd65b71f84b4a488ac81687f2aed57e490b31c28f58195d787030db60"
    "1618aaa21f5bd45c5ffede0d9a60556db67d4a046900e5f66b0bae5c01c801fb"
)

echo "=========================================="
echo "Bulk Event Retrieval Performance Test"
echo "=========================================="
echo "Relay: $RELAY_URL"
echo "Target: Retrieve $NUM_EVENTS events"
echo ""

# Check if relay is running
echo "Checking if relay is running..."
if ! nc -z localhost 8888 2>/dev/null; then
    echo "ERROR: Relay is not running on port 8888"
    exit 1
fi
echo "✓ Relay is running"
echo ""

# Check if nak is installed
if ! command -v nak &> /dev/null; then
    echo "ERROR: 'nak' command not found. Please install nak:"
    echo "  go install github.com/fiatjaf/nak@latest"
    exit 1
fi

# Check current event count in database
DB_FILE=$(ls build/*.db 2>/dev/null | head -1)
if [ -n "$DB_FILE" ]; then
    CURRENT_COUNT=$(sqlite3 "$DB_FILE" "SELECT COUNT(*) FROM events WHERE kind=1;" 2>/dev/null || echo "0")
    echo "Current kind 1 events in database: $CURRENT_COUNT"

    if [ "$CURRENT_COUNT" -ge "$NUM_EVENTS" ]; then
        echo "✓ Database already has $CURRENT_COUNT events (>= $NUM_EVENTS required)"
        echo "  Skipping event posting..."
        echo ""
    else
        EVENTS_TO_POST=$((NUM_EVENTS - CURRENT_COUNT))
        echo "Need to post $EVENTS_TO_POST more events..."
        echo ""

        # Post additional events
        echo "Posting $EVENTS_TO_POST test events using nak..."
        for i in $(seq 1 $EVENTS_TO_POST); do
            # Cycle through secret keys
            KEY_INDEX=$(( (i - 1) % ${#SECRET_KEYS[@]} ))
            CURRENT_KEY=${SECRET_KEYS[$KEY_INDEX]}

            # Create content
            CONTENT="Bulk test event $i/$EVENTS_TO_POST for performance testing"

            # Post event using nak (properly signed)
            nak event -c "$CONTENT" --sec "$CURRENT_KEY" "$RELAY_URL" >/dev/null 2>&1

            # Progress indicator
            if [ $((i % 50)) -eq 0 ]; then
                echo "  Posted $i/$EVENTS_TO_POST events..."
            fi
        done
        echo "✓ Posted $EVENTS_TO_POST test events"
        echo ""
    fi
else
    echo "WARNING: Could not find database file"
    echo "Posting $NUM_EVENTS events anyway..."
    echo ""

    # Post events
    echo "Posting $NUM_EVENTS test events using nak..."
    for i in $(seq 1 $NUM_EVENTS); do
        KEY_INDEX=$(( (i - 1) % ${#SECRET_KEYS[@]} ))
        CURRENT_KEY=${SECRET_KEYS[$KEY_INDEX]}
        CONTENT="Bulk test event $i/$NUM_EVENTS for performance testing"
        nak event -c "$CONTENT" --sec "$CURRENT_KEY" "$RELAY_URL" >/dev/null 2>&1

        if [ $((i % 50)) -eq 0 ]; then
            echo "  Posted $i/$NUM_EVENTS events..."
        fi
    done
    echo "✓ Posted $NUM_EVENTS test events"
    echo ""
fi

# Wait for events to be stored
echo "Waiting 2 seconds for events to be stored..."
sleep 2
echo ""

# Test 1: Retrieve 500 events using nak req
echo "=========================================="
echo "TEST 1: Retrieve $NUM_EVENTS events"
echo "=========================================="
echo "Sending REQ with limit=$NUM_EVENTS..."
echo ""

START_TIME=$(date +%s%N)

# Use nak req to retrieve events (properly handles subscription protocol)
RESPONSE=$(nak req -k 1 -l $NUM_EVENTS "$RELAY_URL" 2>/dev/null)

END_TIME=$(date +%s%N)
ELAPSED_MS=$(( (END_TIME - START_TIME) / 1000000 ))

# Count events received (each line is one event)
EVENT_COUNT=$(echo "$RESPONSE" | grep -c '^{')

echo "Results:"
echo "  Time elapsed: ${ELAPSED_MS}ms"
echo "  Events received: $EVENT_COUNT"
echo ""

if [ "$EVENT_COUNT" -ge $((NUM_EVENTS - 10)) ]; then
    echo "✓ TEST 1 PASSED: Retrieved $EVENT_COUNT events in ${ELAPSED_MS}ms"
    if [ "$ELAPSED_MS" -lt 100 ]; then
        echo "  ⚡ EXCELLENT: <100ms for $EVENT_COUNT events!"
    elif [ "$ELAPSED_MS" -lt 500 ]; then
        echo "  ✓ GOOD: <500ms for $EVENT_COUNT events"
    elif [ "$ELAPSED_MS" -lt 2000 ]; then
        echo "  ⚠ ACCEPTABLE: <2s for $EVENT_COUNT events"
    else
        echo "  ⚠ SLOW: ${ELAPSED_MS}ms for $EVENT_COUNT events (expected <100ms)"
    fi
else
    echo "✗ TEST 1 FAILED: Only retrieved $EVENT_COUNT events (expected ~$NUM_EVENTS)"
fi
echo ""

# Test 2: Retrieve events by author (use first test key's pubkey)
echo "=========================================="
echo "TEST 2: Retrieve events by author"
echo "=========================================="
echo "Sending REQ with authors filter..."
echo ""

# Get pubkey from first secret key
TEST_PUBKEY=$(nak key public ${SECRET_KEYS[0]})

START_TIME=$(date +%s%N)

RESPONSE=$(nak req -k 1 -a "$TEST_PUBKEY" -l $NUM_EVENTS "$RELAY_URL" 2>/dev/null)

END_TIME=$(date +%s%N)
ELAPSED_MS=$(( (END_TIME - START_TIME) / 1000000 ))

EVENT_COUNT=$(echo "$RESPONSE" | grep -c '^{')

echo "Results:"
echo "  Time elapsed: ${ELAPSED_MS}ms"
echo "  Events received: $EVENT_COUNT"
echo "  (Note: Only events from first test key, ~1/3 of total)"
echo ""

if [ "$EVENT_COUNT" -ge $((NUM_EVENTS / 3 - 20)) ]; then
    echo "✓ TEST 2 PASSED: Retrieved $EVENT_COUNT events in ${ELAPSED_MS}ms"
else
    echo "⚠ TEST 2 WARNING: Only retrieved $EVENT_COUNT events (expected ~$((NUM_EVENTS / 3)))"
fi
echo ""

# Test 3: Retrieve events with time filter
echo "=========================================="
echo "TEST 3: Retrieve events with time filter"
echo "=========================================="
echo "Sending REQ with since filter (last hour)..."
echo ""

SINCE_TIME=$(($(date +%s) - 3600))

START_TIME=$(date +%s%N)

RESPONSE=$(nak req -k 1 --since "$SINCE_TIME" -l $NUM_EVENTS "$RELAY_URL" 2>/dev/null)

END_TIME=$(date +%s%N)
ELAPSED_MS=$(( (END_TIME - START_TIME) / 1000000 ))

EVENT_COUNT=$(echo "$RESPONSE" | grep -c '^{')

echo "Results:"
echo "  Time elapsed: ${ELAPSED_MS}ms"
echo "  Events received: $EVENT_COUNT"
echo ""

if [ "$EVENT_COUNT" -ge $((NUM_EVENTS - 10)) ]; then
    echo "✓ TEST 3 PASSED: Retrieved $EVENT_COUNT events in ${ELAPSED_MS}ms"
else
    echo "⚠ TEST 3 WARNING: Only retrieved $EVENT_COUNT events (expected ~$NUM_EVENTS)"
fi
echo ""

# Test 4: Multiple small retrievals (simulating real-world usage)
echo "=========================================="
echo "TEST 4: Multiple small retrievals (50 events × 10 times)"
echo "=========================================="
echo "Simulating real-world client behavior..."
echo ""

TOTAL_TIME=0
TOTAL_EVENTS=0
for i in $(seq 1 10); do
    START_TIME=$(date +%s%N)

    RESPONSE=$(nak req -k 1 -l 50 "$RELAY_URL" 2>/dev/null)

    END_TIME=$(date +%s%N)
    ELAPSED_MS=$(( (END_TIME - START_TIME) / 1000000 ))
    TOTAL_TIME=$((TOTAL_TIME + ELAPSED_MS))

    EVENT_COUNT=$(echo "$RESPONSE" | grep -c '^{')
    TOTAL_EVENTS=$((TOTAL_EVENTS + EVENT_COUNT))
    echo "  Request $i: ${ELAPSED_MS}ms ($EVENT_COUNT events)"
done

AVG_TIME=$((TOTAL_TIME / 10))

echo ""
echo "Results:"
echo "  Total time: ${TOTAL_TIME}ms"
echo "  Total events: $TOTAL_EVENTS"
echo "  Average time per request: ${AVG_TIME}ms"
echo ""

if [ "$AVG_TIME" -lt 50 ]; then
    echo "✓ TEST 4 PASSED: Average retrieval time ${AVG_TIME}ms (excellent)"
elif [ "$AVG_TIME" -lt 200 ]; then
    echo "✓ TEST 4 PASSED: Average retrieval time ${AVG_TIME}ms (good)"
else
    echo "⚠ TEST 4 WARNING: Average retrieval time ${AVG_TIME}ms (slow)"
fi
echo ""

# Performance Summary
echo "=========================================="
echo "PERFORMANCE SUMMARY"
echo "=========================================="
echo ""
echo "Expected performance with event_json optimization:"
echo "  - 366 events: <10ms (previously 18 seconds)"
echo "  - 500 events: <15ms"
echo "  - Per-event overhead: ~0.02ms (vs 50ms before)"
echo ""

if [ -n "$DB_FILE" ]; then
    FINAL_COUNT=$(sqlite3 "$DB_FILE" "SELECT COUNT(*) FROM events WHERE kind=1;" 2>/dev/null || echo "0")
    echo "Final database stats:"
    echo "  Total kind 1 events: $FINAL_COUNT"
    echo "  Database file: $DB_FILE"
    echo ""
fi

echo "Check relay logs for [QUERY] entries to see actual query times:"
echo "  journalctl -u c-relay -n 100 | grep QUERY"
echo ""

echo "=========================================="
echo "Test Complete"
echo "=========================================="