Owner: Pierros Papadeas
Last updated: 2026-04-08
Current version: v0.5.3
This is the single source of truth for TALOS development priorities.
Items are organized by target release, then by priority within each
release. Change status to APPROVED to greenlight work, or move items
between versions as priorities shift.
2026-04-08 update: v0.6.0 has been re-themed around the always-on
SDR + bidirectional TC&C architectural shift (Phase A from
docs/research/08-always-on-sdr.md and docs/research/09-tcnc-integration.md).
The previous v0.6.0 contents have been distributed across v0.6.1, v0.6.2,
v0.6.3, and v0.7+.
Dropped 2026-04-05: TALOS uses a reactive ConOps — the Director's 2 Hz tick
loop tracks whatever is above the horizon, ordered by priority; there are no
time-windowed observation jobs to schedule. Items #33-35 solved a SatNOGS-style
scheduling problem that does not exist in TALOS. Replaced by Director-level
priority arbitration in v0.6.0 (#111, #112).
| # | Item | Priority | Effort | Status | Source |
| --- | --- | --- | --- | --- | --- |
| 33 | Assignment conflict detection on creation | P1 | 1 wk | DROPPED (reactive model — priority resolves conflicts at runtime) | tech-roadmap |
| 34 | Greedy single-campaign auto-scheduler (pass windows + station availability) | P1 | 1 wk | DROPPED (no time windows — assignments are permanent station↔campaign links) | tech-roadmap |
| 35 | OR-Tools CP-SAT multi-campaign optimizer | P2 | 3 wk | DROPPED (nothing to schedule — Director is reactive, not proactive) | exec-summary |
| 40 | Performance regression gates (load test baseline comparison) | | | | |
Streaming-first data pipeline from edge flowgraphs through TALOS to external
YAMCS instances. Replaces satnogs-client with TALOS agent as the ground station
client. Aligns with phasma-operations (gitlab.com/librespacefoundation/phasma/phasma-operations).
A campaign currently tracks exactly ONE satellite. This version removes that
limitation so a single campaign can target a constellation, a debris cluster,
or a group of related objects — each with an independent weight that controls
Director priority arbitration when multiple objects compete for the same
station hardware.
Design principle: a CampaignSatellite join table replaces the single
Campaign.norad_id column. Every downstream system (Director tick loop, TLE
management, streaming triggers, dashboard visualization, campaign wizard)
is updated to iterate over the satellite list instead of assuming one.
| # | Item | Priority | Effort | Status | Source |
| --- | --- | --- | --- | --- | --- |
| 113 | Create CampaignSatellite model (campaign_id, satnogs_id, norad_id, name, weight 1-10 default 5, created_at) with unique constraint on (campaign_id, norad_id) | P0 | 2 hr | PROPOSED | multi-sat |
| 114 | Alembic migration: add campaign_satellite table; migrate existing Campaign.norad_id / Campaign.satnogs_id rows into CampaignSatellite entries (weight=5); make Campaign.norad_id nullable (keep for backward compat during transition) | P0 | 2 hr | PROPOSED | multi-sat |
| 115 | Add satellites: list[CampaignSatellite] relationship on Campaign model; deprecate Campaign.norad_id / Campaign.satnogs_id fields (remove in v0.6.0) | P0 | 1 hr | PROPOSED | multi-sat |
| 116 | Update Transmitter model: add norad_id column so transmitters can be linked per-satellite within a campaign (currently only per-campaign via campaign_id) | | | | |
| 117 | Update CreateCampaignRequest to accept satellites: list[{satnogs_id, norad_id, weight}] (minimum 1); keep legacy single satnogs_id/norad_id fields for backward compat (auto-wrap into single-element list) | P0 | 2 hr | PROPOSED | multi-sat |
| 118 | Update POST /api/orgs/{slug}/campaigns: iterate satellite list, create CampaignSatellite rows, fetch transmitters per norad_id and link each with the correct norad_id | P0 | 4 hr | PROPOSED | multi-sat |
| 119 | Add PUT /api/orgs/{slug}/campaigns/{id}/satellites — add/remove/reweight satellites on an existing campaign; re-fetch transmitters for newly added satellites | P1 | 4 hr | PROPOSED | multi-sat |
| 120 | Update GET campaign endpoints to include satellites array in response with per-satellite weight, name, norad_id, satnogs_id | P0 | 1 hr | PROPOSED | multi-sat |
| 121 | Update campaign activation: validate that all satellites have at least one transmitter before allowing status→active | | | | |
| 122 | Refactor MultiTLEManager to key on (campaign_id, norad_id) tuple instead of campaign_id alone; get_or_create() accepts list of satellites, returns list of TLEManager objects | P0 | 4 hr | PROPOSED | multi-sat |
| 123 | Refactor Director inner tick loop: for each assignment, iterate all CampaignSatellite objects; compute az/el per satellite; select the highest-weight satellite currently above the horizon as the tracking target | P0 | 1 wk | PROPOSED | multi-sat |
| 124 | Update AOS/LOS logic: track state per (station_id, campaign_id, norad_id) triple; emit StreamCommand per satellite transition, not per campaign transition | P0 | 4 hr | PROPOSED | multi-sat |
| 125 | Update select_transmitter(): filter transmitters by the currently-tracked satellite's norad_id within the campaign | P1 | 2 hr | PROPOSED | multi-sat |
| 126 | Update visualization: publish per-satellite footprint/ground-track within campaign viz payload (array of SatFootprint instead of single); update MissionViz schema | P1 | 4 hr | PROPOSED | multi-sat |
| 127 | Update background predictor: compute passes per (station_id, campaign_id, norad_id) triple; merge pass lists into campaign-level schedule for station | | | | |
| 128 | Update campaign sidebar detail panel: show satellite list with name, NORAD ID, weight badge, and current tracking indicator (which satellite the Director is currently pointing at) | P0 | 4 hr | PROPOSED | multi-sat |
| 129 | Update map visualization: render footprint and ground track for ALL satellites in selected campaign (each with campaign color but varying opacity by weight); highlight the currently-tracked satellite | P1 | 4 hr | PROPOSED | multi-sat |
| 130 | Update campaign creation wizard Step 1: multi-select satellite search — selected satellites appear as removable chips with editable weight sliders (1-10); minimum 1 satellite required to proceed | P0 | 1 wk | PROPOSED | multi-sat |
| 131 | Update campaign creation wizard Step 4 (review): show satellite table with name, NORAD ID, weight, transmitter count per satellite | P1 | 2 hr | PROPOSED | multi-sat |
| 132 | Update MQTT campaign viz handler: parse array of satellite footprints; render each independently on the map layer | | | | |
| 133 | Migration test: verify single-satellite campaigns are correctly migrated to CampaignSatellite table; verify norad_id backward compat | P0 | 2 hr | PROPOSED | multi-sat |
| 134 | API tests: create campaign with 1, 3, 10 satellites; add/remove satellites from existing campaign; verify transmitter linkage per satellite; verify weight validation (1-10) | P0 | 4 hr | PROPOSED | multi-sat |
| 135 | Director unit tests: mock multi-satellite campaign; verify highest-weight satellite above horizon is selected; verify AOS/LOS state transitions per satellite; verify StreamCommand emission per satellite | P0 | 4 hr | PROPOSED | multi-sat |
| 136 | Import check: add CampaignSatellite to tests/test_imports.py module list | | | | |
The architectural shift: SDRs are always-on, raw IQ streams over ZeroMQ
to multiple parallel flowgraphs, doppler is applied as a phase-continuous
NCO inside the flowgraph (not via SDR retune), and Telemetry/Tracking &
Command (TC&C) is end-to-end through YAMCS in BOTH directions —
downlink frames AND uplink commands. gRPC controls the local Agent ↔
flowgraph hop; MQTT remains the Director ↔ Agent bus; rotctld stays for
rotators (Hamlib's strength); rigctl is dropped for the SDR.
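The crux of the in-flowgraph doppler approach is phase continuity: a frequency update may change only the phase *increment*, never the accumulated phase, or the carrier jumps and the decoder loses lock. A minimal numpy sketch of that contract (illustrative stand-in, not the actual GNU Radio block):

```python
import numpy as np


class PhaseContinuousNCO:
    """Doppler removal as a numerically controlled oscillator.

    update_doppler() (e.g. driven by 2 Hz DopplerUpdate messages) changes
    the phase increment only; the accumulated phase carries across blocks,
    so frequency updates never introduce a phase step.
    """

    def __init__(self, sample_rate: float):
        self.sample_rate = sample_rate
        self.phase = 0.0      # accumulated phase, radians
        self.freq_hz = 0.0    # current doppler offset to remove

    def update_doppler(self, freq_hz: float) -> None:
        # Takes effect from the next sample; self.phase is untouched.
        self.freq_hz = freq_hz

    def mix(self, iq: np.ndarray) -> np.ndarray:
        n = np.arange(len(iq))
        step = 2 * np.pi * self.freq_hz / self.sample_rate
        phases = self.phase + step * n
        out = iq * np.exp(-1j * phases)          # shift the doppler out
        self.phase = (phases[-1] + step) % (2 * np.pi)  # carry to next block
        return out
```

Feeding a constant tone at the NCO frequency yields a flat DC output across block boundaries, which is exactly what a phase-continuity test (item tests/test_doppler_psk.py) checks in spirit.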
Design: see plan in this session and docs/research/08-always-on-sdr.md +
docs/research/09-tcnc-integration.md. The legacy v0.5.x subprocess
flowgraph pipeline ships untouched, gated behind a TALOS_PIPELINE=legacy
feature flag (default for v0.6.0). The new modular pipeline ships behind
TALOS_PIPELINE=modular, exercised in CI as a parallel job. Default
flip happens in v0.6.2, legacy removed in v0.7.0.
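The flag gate might look like the following — the class names are stand-ins; only the TALOS_PIPELINE variable and its legacy/modular values come from the plan:

```python
import os


class LegacyFlowgraphManager:   # stand-in for the untouched v0.5.x path
    name = "legacy"


class ModularOrchestrator:      # stand-in for the new modular pipeline
    name = "modular"


def make_pipeline():
    """Select the pipeline from TALOS_PIPELINE ('legacy' default in v0.6.0).

    Rejecting unknown values loudly is deliberate: a typo in the flag
    should fail at boot, not silently fall back to legacy.
    """
    mode = os.environ.get("TALOS_PIPELINE", "legacy")
    try:
        cls = {"legacy": LegacyFlowgraphManager,
               "modular": ModularOrchestrator}[mode]
    except KeyError:
        raise ValueError(f"unknown TALOS_PIPELINE value: {mode!r}")
    return cls()
```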
Items #46-49, #52-56, #66, #112, #57 from the original v0.6.0 plan have
been moved into v0.6.1, v0.6.2, v0.6.3, or v0.7+ to make room. The
"Always-On SDR + Bidirectional TC&C" work is the primary v0.6.0 theme.
Target: 12 weeks. Critical invariant: Phase A must demonstrate end-to-end
TC&C through YAMCS (TM downlink + TC uplink + ack) on real hardware.
| # | Item | Priority | Effort | Status | Source |
| --- | --- | --- | --- | --- | --- |
| 151 | Create agent/flowgraph_orchestrator.py to replace FlowgraphManager on the modular path: manages long-lived RX and TX source flowgraphs plus N decoder + M modulator flowgraphs, each with its own gRPC stub; tracks dict[norad_id, DecoderHandle] and dict[norad_id, ModulatorHandle] | P0 | 1 wk | PROPOSED | always-on-sdr |
| 152 | Create agent/grpc_clients.py with thin gRPC client wrappers for the four services; reusable from orchestrator and tests | P0 | 4 hr | PROPOSED | always-on-sdr |
| 153 | Refactor agent/agent.py handle_message to route cmd/stream / cmd/doppler / cmd/tx_doppler / cmd/uplink through the orchestrator when TALOS_PIPELINE=modular; leave cmd/rig as a no-op on the modular path; preserve full legacy behavior on legacy | P0 | 1 wk | PROPOSED | always-on-sdr |
| 154 | Agent boot-time: when the modular flag is set, hot-start the RX source flowgraph; if station config declares TX-capable hardware, also hot-start the TX source flowgraph. PLLs ready before AOS. | P0 | 4 hr | PROPOSED | always-on-sdr |
| 155 | Agent uplink ack path: after the modulator's SendCommand completes, the agent publishes UplinkAck{uplink_id, status, ts} on the uplink/ack MQTT topic for the Core uplink router to fan back to YAMCS | | | | |
| 156 | Add calculate_tx_doppler() to director/physics.py: inverse range-rate sign vs calculate_doppler(); reuse existing skyfield topocentric vector | P0 | 2 hr | PROPOSED | always-on-sdr |
| 157 | Promote director/physics.calculate_doppler() from an int to a float return to preserve the sub-Hz precision the NCO can use (the current int(...) cast at physics.py:40 loses precision) | P0 | 1 hr | PROPOSED | always-on-sdr |
| 158 | director/director.py (modular flag): emit DopplerUpdate to cmd/doppler per active sat at 2 Hz; emit TxDopplerUpdate to cmd/tx_doppler per uplink-capable active sat at 2 Hz; suppress cmd/rig emission | P0 | 1 wk | PROPOSED | always-on-sdr |
| 159 | director/director.py: include flowgraph_id (from CampaignSatellite) and tx_flowgraph_id in StreamCommand on AOS; legacy fields preserved when modular flag unset | P0 | 4 hr | PROPOSED | always-on-sdr |
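The RX/TX doppler relationship in the table above is a sign inversion to first order: the downlink correction removes the shift the ground observes, while the uplink correction pre-compensates so the satellite receives the nominal carrier. A sketch with illustrative signatures (not the actual director/physics.py code):

```python
C = 299_792_458.0  # speed of light, m/s


def calculate_doppler(f_center_hz: float, range_rate_ms: float) -> float:
    """Downlink doppler offset, first order.

    Positive range rate (receding) shifts the received carrier down.
    Returns float, not int, to keep sub-Hz precision for the NCO.
    """
    return -f_center_hz * range_rate_ms / C


def calculate_tx_doppler(f_center_hz: float, range_rate_ms: float) -> float:
    """Uplink pre-compensation: offset the transmit carrier the opposite
    way so the satellite sees the nominal frequency (first order)."""
    return -calculate_doppler(f_center_hz, range_rate_ms)
```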
### Phase A.5: Core TC ingest (NEW: uplink path from YAMCS)
| # | Item | Priority | Effort | Status | Source |
| --- | --- | --- | --- | --- | --- |
| 161 | Create core/routes/uplinks.py: POST /api/orgs/{slug}/campaigns/{id}/uplink (accepts TC frame, mints uplink_id, persists, publishes on cmd/uplink); GET /api/orgs/{slug}/uplinks/{uplink_id} (status poll for YAMCS) | P0 | 1 wk | PROPOSED | tcnc |
| 162 | Create core/uplink_router.py: background MQTT subscriber on uplink/ack, correlates by uplink_id, updates Uplink row, fans ack back to YAMCS via HTTP callback or yamcs-mqtt topic | P0 | 1 wk | PROPOSED | tcnc |
| 163 | Wire routes.uplinks and uplink_router into core/app.py lifespan (CLAUDE.md invariant 1: imports come from core.config, never core.app) | P0 | 1 hr | PROPOSED | tcnc |
| 164 | Add yamcs_tc_link_type (HTTP \| MQTT) and yamcs_tc_endpoint fields to MissionLink model; alembic migration; update core/routes/missions.py API | P1 | 2 hr | PROPOSED | tcnc |
| 165 | Idempotency: duplicate submissions of the same uplink_id within 24h return the existing record without re-publishing; tests/test_uplink_idempotent.py proves it | P0 | 2 hr | PROPOSED | tcnc |
| 166 | Authorization: uplink endpoint requires require_role("operator") from core/deps.py; uplink-disabled campaigns reject with 403 | | | | |
| | tests/test_uplink_route.py: HTTP API for command release; auth/RBAC; validation; status polling | P0 | 4 hr | PROPOSED | tcnc |
| 175 | tests/test_tx_doppler_math.py: calculate_tx_doppler() against known reference; sign inversion vs calculate_doppler() | P0 | 2 hr | PROPOSED | always-on-sdr |
| 176 | tests/test_doppler_psk.py: phase-continuity test against time-varying doppler with synthetic PSK signal — verifies BOTH decoder NCO and modulator inverse NCO maintain carrier phase across UpdateDoppler/UpdateTxDoppler calls | P0 | 1 day | PROPOSED | always-on-sdr |
| 177 | tests/test_uplink_idempotent.py: duplicate uplink_id within window → single transmission, two acks correlated to same row | P0 | 2 hr | PROPOSED | tcnc |
| 178 | Add all new modules to tests/test_imports.py (CLAUDE.md invariant 1; v0.4.0 lesson) | | | | |
| | Create docs/research/08-always-on-sdr.md: full architecture analysis, ZMQ HWM gotchas, gnuradio #3877 phase-continuity, Estévez doppler references, Phase A scope | P0 | 4 hr | PROPOSED | always-on-sdr |
| 183 | Create docs/research/09-tcnc-integration.md: TC&C uplink path design, YAMCS link options (MqttTcDataLink vs HTTP), idempotency semantics, ack flow, authorization model | P0 | 4 hr | PROPOSED | tcnc |
| 184 | Update docs/research/06-phasma-integration.md to add a section about the modular pipeline as the new primary path; legacy subprocess pipeline becomes the fallback | P1 | 2 hr | PROPOSED | always-on-sdr |
| 185 | Update CLAUDE.md with new invariants: (11) source flowgraphs own the SDR; (12) doppler is phase-continuous NCO inside decoder/modulator; (13) uplink commands carry uplink_id end-to-end | P0 | 1 hr | PROPOSED | always-on-sdr |
| 186 | Update CLAUDE.md "Current State" + File Map with new modules and v0.6.0 entry once shipped | P1 | 30 min | PROPOSED | always-on-sdr |
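The idempotency rule above (a duplicate uplink_id inside the window returns the stored record and must not re-publish on cmd/uplink) can be sketched independently of FastAPI and the database. All names here are illustrative; the real path runs through core/routes/uplinks.py and a persisted Uplink row:

```python
from datetime import datetime, timedelta, timezone

IDEMPOTENCY_WINDOW = timedelta(hours=24)


class UplinkStore:
    """In-memory sketch of the uplink idempotency contract."""

    def __init__(self, publish):
        self._rows: dict[str, dict] = {}
        self._publish = publish   # e.g. an MQTT publish on cmd/uplink

    def submit(self, uplink_id: str, frame: bytes, now=None) -> dict:
        now = now or datetime.now(timezone.utc)
        row = self._rows.get(uplink_id)
        if row and now - row["ts"] < IDEMPOTENCY_WINDOW:
            # Duplicate inside the window: return the existing record,
            # do NOT publish again -- exactly one transmission per id.
            return row
        row = {"uplink_id": uplink_id, "frame": frame,
               "status": "PENDING", "ts": now}
        self._rows[uplink_id] = row
        self._publish(row)
        return row
```

Keying the dedup on uplink_id (rather than frame contents) is what lets YAMCS safely retry a POST after a network timeout without risking a double transmission.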
### Phase A.10: Hardware acceptance (blocking for v0.6.0 ship)

| # | Item | Priority | Effort | Status | Source |
| --- | --- | --- | --- | --- | --- |
| 187 | RX acceptance: full LEO pass decoded via the modular pipeline on Pi5 + transceiver SDR; frames reach a real YAMCS instance via the existing TM datalink; decoded count matches legacy on the same IQ recording | P0 | 2 day | PROPOSED | always-on-sdr |
| 188 | TX acceptance: during the same pass, a YAMCS operator releases a TC via the new HTTP endpoint; the modulator emits a doppler-pre-compensated signal on SDR TX; a loopback receiver sees the carrier; UplinkAck propagates to YAMCS within the deadline | P0 | 2 day | PROPOSED | tcnc |
| 189 | Phase-continuity acceptance: tests/test_doppler_psk.py passes for both the decoder NCO and the modulator inverse NCO with a realistic LEO doppler curve | P0 | 4 hr | PROPOSED | always-on-sdr |
### Phase A: Items remaining from prior v0.6.0 plan that fit alongside the SDR work
Small, P0/P1, independent items kept in v0.6.0. Larger items moved to v0.6.1+.
#### Director priority arbitration (kept — directly enables multi-decoder)
Replaces v0.5.1 scheduling (DROPPED). TALOS is reactive — the Director
decides at runtime which campaign gets a station's hardware. Item #111 is
the on-ramp for the modular pipeline's multi-decoder orchestration: it
tells the orchestrator which sat to spawn a decoder for first.
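A minimal sketch of that arbitration decision, with hypothetical inputs — the real Director works from assignments, skyfield elevations, and campaign state:

```python
def arbitrate(campaigns, elevation_deg):
    """Single-station priority preemption, reduced to its core decision.

    campaigns: list of (priority, norad_id) pairs assigned to this station
    elevation_deg: dict mapping norad_id -> current elevation in degrees

    The highest-priority campaign with a satellite above the horizon gets
    the hardware; everything else is parked until it sets or completes.
    Returns (winner_norad_id_or_None, parked_norad_ids).
    """
    contenders = [(prio, norad) for prio, norad in campaigns
                  if elevation_deg.get(norad, -90.0) > 0.0]
    if not contenders:
        return None, [norad for _, norad in campaigns]  # nothing up: park all
    winner = max(contenders)[1]   # highest priority among visible sats
    parked = [norad for _, norad in campaigns if norad != winner]
    return winner, parked
```

On the modular pipeline the same decision would also tell the orchestrator which decoder/modulator pair to keep hot.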
| # | Item | Priority | Effort | Status | Source |
| --- | --- | --- | --- | --- | --- |
| 111 | Single-station priority preemption: when multiple assigned satellites are above the horizon, the Director drives the rotator/SDR to the highest-priority campaign and parks lower-priority ones until the higher one sets or completes. On the modular pipeline, this also drives which decoder/modulator pairs the orchestrator keeps running. | P1 | 1 wk | PROPOSED | conops-review |
### Items moved out of v0.6.0 to make room for the SDR/TC&C work
The original v0.6.0 plan had ~30 items. To fit the always-on SDR + TC&C
work in 12 weeks, the following are deferred:
- #46 CesiumJS 3D globe (3 wk, P1) → v0.6.3
- #47 HTMX for non-real-time pages (2 wk, P2) → v0.6.3
- #48 CCSDS 502.0 (OMM) (2 wk, P1) → v0.6.2
- #49 CCSDS 503.0 (TDM) (2 wk, P2) → v0.6.2
- #51 Lightweight waterfall capture → v0.6.1 as part of aux_waterfall_v1 Phase B work
Phase B from the v0.6.0 plan: with TC&C already shipped in v0.6.0, this
release adds operational depth — auxiliary flowgraphs (waterfall, PFD,
ODT), per-station hardware profiles, dashboard observability, and the
expanded decoder/modulator catalog.
### Auxiliary flowgraphs (parallel consumers of the always-on RX IQ)
| # | Item | Priority | Effort | Status | Source |
| --- | --- | --- | --- | --- | --- |
| 190 | flowgraphs/src/aux_waterfall_v1.grc: ZMQ SUB IQ → FFT → waterfall PNG / web stream; aggregates the v0.6.0 #51 lightweight waterfall capture work | P1 | 1 wk | PROPOSED | always-on-sdr |
| 191 | flowgraphs/src/aux_pfd_v1.grc: power flux density meter; periodic Prometheus export | P2 | 1 wk | PROPOSED | always-on-sdr |
| 192 | flowgraphs/src/aux_odt_v1.grc: orbit determination tone collector; range-rate measurements published over MQTT for downstream OD | P2 | 1 wk | PROPOSED | always-on-sdr |
| 193 | Register aux_*_v1 IDs in shared/flowgraphs.py; agent spawns them based on per-station config, not per-pass commands | | | | |
| | Dashboard surface: live SDR status (RX center / sample rate / gain / temperature), decoder + modulator roster with per-flowgraph CPU and ZMQ queue depth from Prometheus | P1 | 1 wk | PROPOSED | always-on-sdr |
| 201 | core/metrics.py registers new gauges/counters from the orchestrator: talos_flowgraph_running{kind,id}, talos_zmq_queue_depth{consumer}, talos_zmq_dropped_samples_total{consumer}, talos_decoder_cpu_seconds_total{norad_id}, talos_modulator_cpu_seconds_total{norad_id}, talos_uplinks_total{status}, talos_uplink_latency_seconds | P1 | 4 hr | PROPOSED | always-on-sdr |
| 66 | Deploy Prometheus + Grafana monitoring stack (Grafana Cloud or self-hosted) | P2 | 1 day | PROPOSED | v0.4.0 deferred |
| 51 | Lightweight waterfall capture — folded into #190 aux_waterfall_v1 | | | | |
| | Cross-station deduplication: when satellite S is visible from stations A and B, and station A is already tracking S, the Director should free station B to track a lower-priority satellite that only B can see — maximizing total coverage across the network | | | | |
Phase C from the v0.6.0 plan: validate the high-rate (USRP, 10+ MSPS) tier,
add the cloud IQ network sink for off-station processing, and make the
modular pipeline the default. Legacy subprocess path stays behind the flag
for one more cycle, then is removed in v0.7.0.
Originally in v0.6.0, deferred to make room for SDR/TC&C. These are
visualization/UX upgrades that don't compete with the modular pipeline
work and can ship once the architecture has settled.
| # | Item | Priority | Effort | Status | Source |
| --- | --- | --- | --- | --- | --- |
| 46 | CesiumJS opt-in 3D globe alongside existing Leaflet 2D map | P1 | 3 wk | PROPOSED | exec-summary |
| 47 | HTMX for non-real-time pages (stations, campaigns, settings) | P2 | 2 wk | PROPOSED | tech-roadmap |
| 207 | Legacy subprocess flowgraph pipeline removal (after one full cycle of TALOS_PIPELINE=modular as default in v0.6.2) | | | | |