Database

SQLite schema, key tables, migration overview, and cloud sync mechanism

Luma uses a single SQLite database (luma.db) for all application data. Portable creative data (patterns, tracks) and venue-specific hardware configuration (fixtures, groups) all live in the same file, linked by foreign keys. A separate lightweight state.db stores transient auth session data.
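To make the single-file layout concrete, here is a minimal sketch of how venue-scoped rows reference their container, using the columns named on this page. Field types and the `Pattern` shape are assumptions; the actual schema lives in src-tauri/migrations/.

```rust
// Sketch of the single-database layout: venue-specific hardware rows carry a
// foreign key to `venues`, while portable creative data does not, so patterns
// can move between venues and devices. Types are illustrative.

/// A row in `venues`: the project container.
pub struct Venue {
    pub id: String,          // UUID primary key, shared with the cloud copy
    pub name: String,
    pub description: String,
}

/// A row in `fixtures`, linked to its venue by foreign key.
pub struct Fixture {
    pub id: String,
    pub venue_id: String,    // FK -> venues.id
    pub universe: u16,       // DMX universe
    pub address: u16,        // DMX start address
    pub pos_x: f32, pub pos_y: f32, pub pos_z: f32,  // 3D position
    pub rot_x: f32, pub rot_y: f32, pub rot_z: f32,  // rotation
    pub manufacturer: String,
    pub model: String,
    pub mode: String,
}

/// Portable creative data in the same file: no venue FK.
pub struct Pattern {
    pub id: String,
    pub name: String,
}
```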

Key Tables

  • venues -- Project containers; each venue is a row with name, description, and sync metadata
  • fixtures -- Patched fixtures with DMX universe, address, 3D position (pos_x/y/z), rotation (rot_x/y/z), manufacturer, model, mode
  • patterns -- Pattern definitions (name, description, category, publication status)
  • implementations -- Node graph JSON for each pattern, with UID for cross-device sync
  • scores / track_scores -- Pattern placements on track timelines (start_time, end_time, z_index, blend_mode, args)
  • tracks -- Audio file references with metadata (title, artist, BPM, key, file path, hash)
  • track_beats -- Beat positions and downbeats for tracks
  • track_roots -- Chord root analysis results
  • track_waveforms -- Binary waveform data (preview and full resolution)
  • track_stems -- Stem separation file paths
  • fixture_groups / fixture_group_members -- Spatial fixture grouping with axis assignments; group names are used in selection expressions
  • settings -- Key-value app settings
  • categories -- Pattern categories

Migrations

Location: src-tauri/migrations/

11 migrations total, applied in order by timestamp. Each migration is a SQL file that creates or alters tables. sqlx manages migration state and applies pending migrations on app startup.
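The ordering rule can be sketched as a small helper: pending migrations are the files not yet applied, sorted by the timestamp prefix of their names. The file names below are hypothetical, and sqlx itself tracks applied versions in its own bookkeeping table.

```rust
// Sketch of timestamp-ordered migration selection, mirroring how sqlx applies
// the SQL files in src-tauri/migrations/ on app startup. File names here are
// invented for illustration.

/// Return the migrations that still need to run, in timestamp order.
pub fn pending_migrations(all: &[&str], applied: &[&str]) -> Vec<String> {
    let mut pending: Vec<String> = all
        .iter()
        .filter(|m| !applied.contains(*m))   // skip already-applied files
        .map(|m| m.to_string())
        .collect();
    // Lexicographic sort works because each file name starts with its timestamp.
    pending.sort();
    pending
}
```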

Cloud Sync

Files: src-tauri/src/sync/

Luma syncs to Supabase (PostgreSQL + PostgREST API). The architecture is local-first -- SQLite is the source of truth, and the cloud is a secondary copy for backup and sharing. Sync is automatic and bidirectional -- changes are pushed in the background and pulled on launch, with no manual sync button needed.

UUID-Based Identity

All entities use UUIDs as their primary key. The same UUID is used both locally and in Supabase -- there is no separate remote_id. This eliminates dual-ID bookkeeping and simplifies conflict resolution.
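Because the key is identical on both sides, a push is just an upsert keyed by `id`. A minimal sketch, with in-memory maps standing in for the SQLite table and the PostgREST endpoint:

```rust
use std::collections::HashMap;

// Sketch of UUID-as-shared-identity: the record's primary key is the same
// string locally and in Supabase, so pushing needs no local-to-remote id
// mapping. Record shape and ids are illustrative.

pub struct Record {
    pub id: String,      // UUID, identical on every device and in the cloud
    pub payload: String,
}

/// Upsert local records into the remote store under the same UUID key.
pub fn push_records(local: &[Record], remote: &mut HashMap<String, String>) {
    for rec in local {
        remote.insert(rec.id.clone(), rec.payload.clone());
    }
}
```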

Sync Engine

The sync system is built on a generic Syncable trait. Each syncable table implements this trait, and the engine handles push/pull/delete operations uniformly. Key components:

  • Orchestrator (sync/orchestrator.rs) -- coordinates full sync cycles (pull then push)
  • Push (sync/push.rs) -- background loop that drains the pending operations queue
  • Pull (sync/pull.rs) -- delta sync using per-user timestamps (only fetches records changed since last pull)
  • Pending ops (sync/pending.rs) -- mutations enqueue push operations here for the push loop to pick up
  • File sync (sync/files.rs) -- handles album art, waveforms, and other binary assets separately from record sync
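The shape of the engine can be sketched as a trait plus a driver. The trait and method names below are illustrative, not the exact signatures in src-tauri/src/sync/; the point is that the orchestrator iterates tables uniformly, pulling before pushing.

```rust
// Sketch of the generic Syncable idea: each table implements one trait and
// the engine drives push/pull uniformly. Names are assumptions.

pub trait Syncable {
    fn table(&self) -> &'static str;
    fn pull(&mut self) -> usize;  // apply remote changes; returns rows applied
    fn push(&mut self) -> usize;  // drain pending ops; returns ops sent
}

/// Orchestrator-style full cycle: pull first so pushes go out against the
/// latest remote state.
pub fn full_sync(tables: &mut [Box<dyn Syncable>]) -> (usize, usize) {
    let pulled: usize = tables.iter_mut().map(|t| t.pull()).sum();
    let pushed: usize = tables.iter_mut().map(|t| t.push()).sum();
    (pulled, pushed)
}

/// Minimal in-memory table used to exercise the engine.
pub struct MockTable { pub pending: usize, pub remote_changes: usize }

impl Syncable for MockTable {
    fn table(&self) -> &'static str { "mock" }
    fn pull(&mut self) -> usize { std::mem::take(&mut self.remote_changes) }
    fn push(&mut self) -> usize { std::mem::take(&mut self.pending) }
}
```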

What Syncs

  • Venues, fixtures, groups -- synced (full venue setup)
  • Patterns, implementations -- synced (enables community sharing)
  • Tracks (metadata, beats, roots, preview waveform) -- synced (file paths stay local)
  • Scores, track scores -- synced (full annotation data)
  • Album art -- synced via file sync
  • Audio files, stem files -- not synced (too large; regenerated locally)
  • Settings, auth state -- not synced (device-specific)

Per-User Sync State

Each user tracks their own sync cursor timestamps. This means multiple users collaborating on a venue each pull only the changes they haven't seen, without interfering with each other's sync state.
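A minimal sketch of a per-user delta pull, with plain integers standing in for timestamps: only rows newer than the user's cursor are fetched, and the cursor advances afterward.

```rust
// Sketch of per-user delta sync: each user keeps an independent cursor and
// fetches only records updated after it. Row shape is illustrative.

pub struct Row { pub id: String, pub updated_at: u64 }

/// Return ids changed since this user's cursor, then advance the cursor.
pub fn pull_since(rows: &[Row], cursor: &mut u64) -> Vec<String> {
    let mut changed: Vec<String> = rows
        .iter()
        .filter(|r| r.updated_at > *cursor)
        .map(|r| r.id.clone())
        .collect();
    changed.sort();
    if let Some(max) = rows.iter().map(|r| r.updated_at).max() {
        *cursor = (*cursor).max(max);
    }
    changed
}
```

Because each user owns their cursor, two collaborators pulling the same venue never disturb each other's sync state.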

Soft Deletes

Delete operations are enqueued as pending ops and synced to the cloud as tombstones. Pulling tombstones removes the corresponding local records. This ensures deletes propagate across devices.
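The pull side of this can be sketched in a few lines: a tombstone is the deleted row's UUID, and applying it removes the matching local record. The shape is illustrative.

```rust
use std::collections::HashMap;

// Sketch of tombstone application on pull: each tombstone names a deleted
// row by UUID; applying it deletes the local copy if one exists.

/// Remove local records named by tombstones; returns how many were deleted.
pub fn apply_tombstones(local: &mut HashMap<String, String>, tombstones: &[String]) -> usize {
    tombstones.iter().filter(|id| local.remove(*id).is_some()).count()
}
```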

Community Pattern Sharing

Patterns can be shared with the community via the is_verified flag. Users can search for community patterns using the remote search feature, and browse patterns in "verified," "mine," and "all" tabs.
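The three tabs amount to three filters over the same result set. A minimal sketch, assuming a row shape with `is_verified` and an owner id (names are illustrative):

```rust
// Sketch of the "verified" / "mine" / "all" browse tabs as filters over
// community patterns. Field names are assumptions.

pub struct PatternRow { pub id: String, pub owner_id: String, pub is_verified: bool }

pub enum Tab { Verified, Mine, All }

/// Filter search results according to the selected tab.
pub fn browse<'a>(rows: &'a [PatternRow], tab: Tab, user_id: &str) -> Vec<&'a PatternRow> {
    rows.iter()
        .filter(|p| match tab {
            Tab::Verified => p.is_verified,
            Tab::Mine => p.owner_id == user_id,
            Tab::All => true,
        })
        .collect()
}
```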

Error Handling

The pending ops queue retries failed operations. Users can view pending errors via get_pending_errors and manually retry individual operations via retry_pending_op.
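The error path can be sketched as follows: a failed op keeps its error string and stays in the queue, and a manual retry clears the error so the background loop picks it up again. Field names are illustrative, loosely mirroring get_pending_errors and retry_pending_op.

```rust
// Sketch of the pending-ops error path. Shapes and names are assumptions.

pub struct PendingOp { pub id: u64, pub table: String, pub error: Option<String> }

/// List ops that have failed at least once (cf. get_pending_errors).
pub fn pending_errors(queue: &[PendingOp]) -> Vec<u64> {
    queue.iter().filter(|op| op.error.is_some()).map(|op| op.id).collect()
}

/// Clear one op's error so it becomes eligible for retry (cf. retry_pending_op).
pub fn retry(queue: &mut [PendingOp], id: u64) -> bool {
    match queue.iter_mut().find(|op| op.id == id) {
        Some(op) => { op.error = None; true }
        None => false,
    }
}
```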
