# Database
SQLite schema, key tables, migration overview, and cloud sync mechanism
Luma uses a single SQLite database (`luma.db`) for all application data. Portable creative data (patterns, tracks) and venue-specific hardware configuration (fixtures, groups) all live in the same file, linked by foreign keys. A separate lightweight `state.db` stores transient auth session data.
## Key Tables
| Table | Purpose |
|---|---|
| `venues` | Project containers; each venue is a row with name, description, and sync metadata |
| `fixtures` | Patched fixtures with DMX universe, address, 3D position (`pos_x/y/z`), rotation (`rot_x/y/z`), manufacturer, model, mode |
| `patterns` | Pattern definitions (name, description, category, publication status) |
| `implementations` | Node graph JSON for each pattern, with UID for cross-device sync |
| `scores` / `track_scores` | Pattern placements on track timelines (`start_time`, `end_time`, `z_index`, `blend_mode`, args) |
| `tracks` | Audio file references with metadata (title, artist, BPM, key, file path, hash) |
| `track_beats` | Beat positions and downbeats for tracks |
| `track_roots` | Chord root analysis results |
| `track_waveforms` | Binary waveform data (preview and full resolution) |
| `track_stems` | Stem separation file paths |
| `fixture_groups` / `fixture_group_members` | Spatial fixture grouping with axis assignments; group names are used in selection expressions |
| `settings` | Key-value app settings |
| `categories` | Pattern categories |
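As a concrete example, a row in `fixtures` maps naturally onto a struct like the following. This is a sketch derived from the column list above, not the app's actual model type; the `venue_uuid` foreign key is an assumption based on the venue-linked schema described earlier.

```rust
/// Illustrative shape of a `fixtures` row; the app's real model may differ.
struct Fixture {
    uuid: String,          // UUID primary key, shared with the cloud copy
    venue_uuid: String,    // assumed foreign key into `venues`
    universe: u16,         // DMX universe
    address: u16,          // DMX start address
    pos: (f32, f32, f32),  // pos_x / pos_y / pos_z
    rot: (f32, f32, f32),  // rot_x / rot_y / rot_z
    manufacturer: String,
    model: String,
    mode: String,
}

fn main() {
    let f = Fixture {
        uuid: "f-1".into(),
        venue_uuid: "v-1".into(),
        universe: 1,
        address: 97,
        pos: (0.0, 3.2, -1.5),
        rot: (0.0, 0.0, 0.0),
        manufacturer: "Generic".into(),
        model: "Moving Head".into(),
        mode: "16ch".into(),
    };
    println!("{} {} @ universe {}, address {}", f.manufacturer, f.model, f.universe, f.address);
}
```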
## Migrations
Location: `src-tauri/migrations/`
11 migrations total, applied in timestamp order. Each migration is a SQL file that creates or alters tables. `sqlx` tracks migration state and applies pending migrations on app startup.
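In practice `sqlx` handles all of this at startup; the "apply whatever hasn't run yet, in timestamp order" logic can be sketched in plain Rust (the file names below are hypothetical examples, not the app's actual migrations):

```rust
use std::collections::BTreeMap;

/// Return the migrations that still need to run, in timestamp order.
/// `available` maps "{timestamp}_{name}.sql" file names to their SQL text;
/// `applied` holds the file names already recorded as run.
/// A BTreeMap keeps keys sorted, so timestamp prefixes give us the order for free.
fn pending_migrations<'a>(
    available: &'a BTreeMap<String, String>,
    applied: &[String],
) -> Vec<&'a str> {
    available
        .keys()
        .filter(|name| !applied.iter().any(|a| a == *name))
        .map(String::as_str)
        .collect()
}

fn main() {
    let mut available = BTreeMap::new();
    available.insert(
        "20240101120000_create_venues.sql".to_string(),
        "CREATE TABLE venues (...);".to_string(),
    );
    available.insert(
        "20240215093000_add_fixtures.sql".to_string(),
        "CREATE TABLE fixtures (...);".to_string(),
    );
    // Only the first migration has been applied, so one is still pending.
    let applied = vec!["20240101120000_create_venues.sql".to_string()];
    println!("{:?}", pending_migrations(&available, &applied));
}
```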
## Cloud Sync
Files: `src-tauri/src/sync/`
Luma syncs to Supabase (PostgreSQL + PostgREST API). The architecture is local-first -- SQLite is the source of truth, and the cloud is a secondary copy for backup and sharing. Sync is automatic and bidirectional -- changes are pushed in the background and pulled on launch, with no manual sync button needed.
### UUID-Based Identity
All entities use UUIDs as their primary key. The same UUID is used both locally and in Supabase -- there is no separate `remote_id`. This eliminates dual-ID bookkeeping and simplifies conflict resolution.
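Because one UUID identifies a record everywhere, a push reduces to an upsert keyed on that UUID. A toy in-memory sketch (struct and field names are illustrative, not the app's real types):

```rust
use std::collections::HashMap;

#[derive(Clone, Debug)]
struct Pattern {
    uuid: String, // same value locally and in Supabase; no separate remote_id
    name: String,
}

/// Upsert by UUID: pushing the same record twice is idempotent,
/// so there is no dual-ID bookkeeping to reconcile.
fn push(remote: &mut HashMap<String, Pattern>, record: Pattern) {
    remote.insert(record.uuid.clone(), record);
}

fn main() {
    let mut remote = HashMap::new();
    let p = Pattern { uuid: "uuid-1".into(), name: "Strobe Wave".into() };
    push(&mut remote, p.clone());
    push(&mut remote, p); // second push overwrites, never duplicates
    println!("{}", remote.len()); // one record either way
}
```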
### Sync Engine
The sync system is built on a generic `Syncable` trait. Each syncable table implements this trait, and the engine handles push/pull/delete operations uniformly. Key components:

- **Orchestrator** (`sync/orchestrator.rs`) -- coordinates full sync cycles (pull, then push)
- **Push** (`sync/push.rs`) -- background loop that drains the pending operations queue
- **Pull** (`sync/pull.rs`) -- delta sync using per-user timestamps (only fetches records changed since the last pull)
- **Pending ops** (`sync/pending.rs`) -- mutations enqueue push operations; a background loop drains them automatically
- **File sync** (`sync/files.rs`) -- handles album art, waveforms, and other binary assets separately from record sync
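A minimal, std-only sketch of the shape such a trait can take. The method names here are assumptions for illustration; the real `Syncable` trait in `src-tauri/src/sync/` is richer. The query-string format follows PostgREST's `column=eq.value` filter convention:

```rust
/// Illustrative sync trait: anything that exposes a table name, a UUID,
/// and a change timestamp can be pushed and pulled uniformly.
trait Syncable {
    fn table(&self) -> &'static str; // remote table name (PostgREST endpoint)
    fn uuid(&self) -> &str;          // shared local/remote primary key
    fn updated_at(&self) -> u64;     // cursor field used by delta pull
}

struct Venue {
    uuid: String,
    name: String,
    updated_at: u64,
}

impl Syncable for Venue {
    fn table(&self) -> &'static str { "venues" }
    fn uuid(&self) -> &str { &self.uuid }
    fn updated_at(&self) -> u64 { self.updated_at }
}

/// The engine can now target any table without table-specific code,
/// e.g. when building the URL path for an upsert.
fn push_target(record: &dyn Syncable) -> String {
    format!("{}?uuid=eq.{}", record.table(), record.uuid())
}

fn main() {
    let v = Venue { uuid: "v-1".into(), name: "Main Hall".into(), updated_at: 42 };
    println!("pushing '{}' to {}", v.name, push_target(&v)); // venues?uuid=eq.v-1
}
```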
### What Syncs
| Data | Syncs | Notes |
|---|---|---|
| Venues, fixtures, groups | Yes | Full venue setup |
| Patterns, implementations | Yes | Enables community sharing |
| Tracks (metadata, beats, roots, preview waveform) | Yes | File paths stay local |
| Scores, track scores | Yes | Full annotation data |
| Album art | Yes | Synced via file sync |
| Audio files, stem files | No | Too large; regenerated locally |
| Settings, auth state | No | Device-specific |
### Per-User Sync State
Each user tracks their own sync cursor timestamps. This means multiple users collaborating on a venue each pull only the changes they haven't seen, without interfering with each other's sync state.
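With a per-user cursor, a delta pull reduces to a timestamp filter plus a cursor advance. An in-memory sketch (the real pull issues a remote query instead of filtering a slice; `Row` and its fields are illustrative):

```rust
#[derive(Clone, Debug)]
struct Row {
    uuid: String,
    updated_at: u64,
}

/// Pull only rows changed since this user's cursor, then advance the cursor.
/// Each collaborator keeps their own cursor, so one user's pull never
/// disturbs what another user has or hasn't seen.
fn delta_pull(remote: &[Row], cursor: &mut u64) -> Vec<Row> {
    let fresh: Vec<Row> = remote
        .iter()
        .filter(|r| r.updated_at > *cursor)
        .cloned()
        .collect();
    if let Some(max) = fresh.iter().map(|r| r.updated_at).max() {
        *cursor = max; // next pull starts from here
    }
    fresh
}

fn main() {
    let remote = vec![
        Row { uuid: "a".into(), updated_at: 10 },
        Row { uuid: "b".into(), updated_at: 25 },
    ];
    let mut cursor = 10; // this user last pulled at t=10
    let fresh = delta_pull(&remote, &mut cursor);
    println!("{} fresh, cursor now {}", fresh.len(), cursor); // 1 fresh, cursor now 25
}
```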
### Soft Deletes
Delete operations are enqueued as pending ops and synced to the cloud as tombstones. Pulling tombstones removes the corresponding local records. This ensures deletes propagate across devices.
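Applying pulled tombstones is the removal half of that cycle. A sketch under the assumption that a tombstone carries just the deleted record's UUID (the local store is modeled as a map here, standing in for SQLite):

```rust
use std::collections::HashMap;

/// A pulled tombstone: identifies a record that was deleted elsewhere.
struct Tombstone {
    uuid: String,
}

/// Remove the local rows matching each tombstone, so a delete on one
/// device propagates to every other device on its next pull.
fn apply_tombstones(local: &mut HashMap<String, String>, tombstones: &[Tombstone]) {
    for t in tombstones {
        local.remove(&t.uuid);
    }
}

fn main() {
    let mut local: HashMap<String, String> = HashMap::new();
    local.insert("p-1".into(), "Strobe Wave".into());
    local.insert("p-2".into(), "Color Chase".into());
    apply_tombstones(&mut local, &[Tombstone { uuid: "p-1".into() }]);
    println!("{} record(s) remain", local.len()); // p-1 is gone locally
}
```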
### Community Pattern Sharing
Patterns can be shared with the community via the `is_verified` flag. Users can search for community patterns using the remote search feature, and browse patterns in "verified," "mine," and "all" tabs.
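The three tabs amount to simple filters over pattern metadata. A sketch under assumed field names (`verified`, `owner`); the app's actual query logic is not shown here:

```rust
#[derive(Clone)]
struct PatternMeta {
    name: String,
    owner: String,  // assumed: user id of the pattern's author
    verified: bool, // assumed: mirrors the is_verified flag
}

#[derive(Clone, Copy)]
enum Tab {
    Verified,
    Mine,
    All,
}

/// Filter the community pattern list for one browse tab;
/// `me` is the signed-in user's id.
fn browse(patterns: &[PatternMeta], tab: Tab, me: &str) -> Vec<String> {
    patterns
        .iter()
        .filter(|p| match tab {
            Tab::Verified => p.verified,
            Tab::Mine => p.owner == me,
            Tab::All => true,
        })
        .map(|p| p.name.clone())
        .collect()
}

fn main() {
    let patterns = vec![
        PatternMeta { name: "Wave".into(), owner: "alice".into(), verified: true },
        PatternMeta { name: "Chase".into(), owner: "bob".into(), verified: false },
    ];
    println!("{:?}", browse(&patterns, Tab::Verified, "alice")); // ["Wave"]
}
```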
### Error Handling
The pending ops queue retries failed operations. Users can view pending errors via `get_pending_errors` and manually retry individual operations via `retry_pending_op`.
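The error surface can be sketched as two small operations over the queue. Only the function names mirror the commands described above; the `PendingOp` shape and the retry semantics (clearing the error so the background loop picks the op up again) are illustrative assumptions:

```rust
/// A queued push; `error` is set when its last attempt failed.
struct PendingOp {
    id: u32,
    error: Option<String>,
}

/// List the ids of operations whose last attempt failed.
fn get_pending_errors(queue: &[PendingOp]) -> Vec<u32> {
    queue.iter().filter(|op| op.error.is_some()).map(|op| op.id).collect()
}

/// Manually retry one operation: clear its error so the background
/// drain loop will attempt it again.
fn retry_pending_op(queue: &mut [PendingOp], id: u32) {
    if let Some(op) = queue.iter_mut().find(|op| op.id == id) {
        op.error = None;
    }
}

fn main() {
    let mut queue = vec![
        PendingOp { id: 1, error: Some("network timeout".into()) },
        PendingOp { id: 2, error: None },
    ];
    println!("{:?}", get_pending_errors(&queue)); // [1]
    retry_pending_op(&mut queue, 1);
    println!("{:?}", get_pending_errors(&queue)); // []
}
```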