SGA Reporting System — Planning Hub

Strategic synthesis + 3-week sprint plan + research + audit + scaffolding for the Synergy / Headway / Daybreak / Lookout consolidation

Daybreak first · Sprint window: 2026-05-15 → 2026-06-05 · Workday API blocked · 15+ artifacts shipped · Owner: Scott · Built: 2026-05-15
  • Current phase: Phase 1 — Foundation, Week 1 of 3
  • Sprint days remaining: 21 (Brooke-approved heads-down)
  • Gate dimensions locked: 8 (5 with concrete thresholds)
  • Research: 12,700 words (80+ sources, 4 streams)
  • Practices in scope: 260 (SGA East + West + Gen4)

What this is

SGA is consolidating 20+ standalone analytics prototypes into one coherent Reporting System organized in four buckets. This hub indexes all the planning, research, and scaffolding done during the 3-week heads-down sprint approved by Brooke on 2026-05-14.

Why Daybreak first

| Criterion | Daybreak | Headway |
| --- | --- | --- |
| Data sources | ✅ PBI + DI + OSA + Neurality — all live | ❌ Workday API blocked on Rebecca/Jordan |
| External dependency risk | None | High — could stall sprint |
| Architectural value | Proves gate-flipping pattern both systems share | Inherits Daybreak's engine |
| Visible weekly wins | Yes — Brendan needs labor-style results Monday | Only when Workday lands |

If Workday access lands inside 2 weeks → pivot Headway to primary, Daybreak hardens in Week 3.

The IP — Signal Effectiveness Engine

Phase 2, own GSD project. Not a dashboard: a self-tuning learning system that captures every directive issued, infers execution from downstream signals (not self-report), measures causal effect on target metrics, and re-weights gate priorities so tomorrow's brief is smarter than today's. Lives in personal/SGA/signal-engine/; target start mid-June 2026.

Top 3 decisions surfaced this session

  1. Composite-AND is the universal pattern. Hospital ops + retail multi-site + clinical CDS all converged: single-metric alerts hit 90–96% override rates. SGA Daybreak fires PIT (Practice In Trouble Today) only on 2+ gates red OR 3-day persistence OR revenue+other composite. No single-metric paging at alert layer.
  2. Hybrid floor + cohort thresholds for every gate. Both absolute industry floor AND below-cohort-median must fail. TOC buffer-management semantics: Green / Yellow / Red zones. Cohort = size × specialty × PMS (Tier 1) → 12–24 cohorts of 8–25 practices each.
  3. Don't build a full contextual bandit at SGA scale. Research stream 3 found 260 practices × low fire counts isn't enough data for safe online learning. Borrow concepts (Thompson uncertainty, pessimistic priors, off-policy evaluation) without the live exploration loop.
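The PIT composite rule in decision 1 can be sketched in a few lines. This is a minimal illustration of the composite-AND logic stated above, not the production rules engine; the `GateState` shape and field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class GateState:
    """One gate's evaluation for one practice today (illustrative shape)."""
    name: str
    red_today: bool
    red_streak_days: int = 0  # consecutive days red, including today

def pit_fires(gates: list[GateState]) -> bool:
    """Practice In Trouble Today: composite-AND alert, never single-metric.

    Fires only on: 2+ gates red today, OR any gate red 3+ consecutive
    days, OR the revenue+other composite (listed separately in the spec,
    though it is also covered by the 2+ red rule).
    """
    red_now = [g for g in gates if g.red_today]
    if len(red_now) >= 2:
        return True
    if any(g.red_streak_days >= 3 for g in gates):
        return True
    revenue_red = any(g.name == "revenue" and g.red_today for g in gates)
    other_red = any(g.name != "revenue" and g.red_today for g in gates)
    return revenue_red and other_red
```

Note that a single red gate with no streak never pages, which is exactly the "no single-metric paging at alert layer" rule.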

Three triage decisions waiting for Scott

  • Headway path: Family A migration vs Family B + briefing bolt-on. Recommendation: Path 2 (Family B + briefing) to preserve People System cross-nav.
  • om-morning-brief: Keep or supersede by Daybreak v2?
  • om-dashboard: Keep or supersede by Daybreak v2?

The four buckets

Reshuffled from the original Headway/Daybreak/Lookout framework. Karen + Scott own the categorization session to assign every live deliverable to exactly one bucket.

| Bucket | Definition | Sample tools |
| --- | --- | --- |
| Synergy | Drives the $16M EBITDA synergy target. Executive-visible $ impact. | Labor Analysis, Headcount/RIS Scorecard, Fee Negotiations |
| Headway | People system — staffing, labor, scheduling, performance. Blocked on Workday. | Headway integrated GUI, Labor OM variant, RIS, HIS, Hiring Index, Replacement Index, Headcount |
| Daybreak | Daily ops brief. Gates flip → alerts. Same-day intervention. Unified schema, three views (OM/ROD/Leadership). | Daybreak v2 deck, OpenClaw OM Daily Report Engine, hygiene performance, OSA, Pacing |
| Lookout (future) | Ad-hoc analytics, prototypes, R&D. Not yet production. | Sentiment, IT AI Agents, Curodont, Procurement, Competitive Intel |

The 8 Daybreak gate dimensions

| Dimension | Threshold type | Data source |
| --- | --- | --- |
| Call volume | Peer logic | Call tracking — Dakota + Amy |
| Conversion (call→appt) | Peer logic | Call tracking + Neurality booking timestamps |
| Scheduling availability (OSA) | Absolute + peer | OSA nightly scrape |
| Neurality appt mix | Absolute | DuckDB at personal/data-inbox/neurality/ |
| Revenue / Production | Absolute (budget pace) | PBI bridge |
| Labor | Absolute + peer (Synergy crossover) | Labor Analysis pipeline |
| Doctor supply | Absolute (composite gates) | RIS Scorecard + Hiring Index |
| Hygiene reappointment | Absolute + peer | OM Daily Report engine |

Architecture inheritance — do NOT redesign

| Asset | Status |
| --- | --- |
| Daybreak v2 framework spec — 4-layer architecture | Exists |
| IPO Metric Tree — 8 OM-controllable leaves | Exists |
| OpenClaw OM Daily Report Engine — 273 practices @ 9:10 AM CT | Running on VPS — migrate to workspace |
| Metrics Registry — 10 measures + 4 dims | Needs expansion for 8 new gates |
| PBI Bridge — VPS :3050, static-token mode | Working |
| Master Data Service — SQLite + JSON snapshot + Python lookup | Auto-rebuild via watch.py |
| drillable-dashboard + action-briefing skills | Canonical standard |

3-Week Sprint

Window: 2026-05-15 → 2026-06-05 (heads-down, per Brooke's approval)

Week 1 — Foundation (In Progress)

Goal: Every block needed to build Daybreak fast in Week 2.

  • 1.1 Dashboard audit (Done): 4 families identified, kill/migrate/keep verdicts
  • 1.2 Master-data viewer — Browse + edit web UI on snapshot.json, deploy to sga-master-data-v2
  • 1.3 Skill modularization — Extract gate-evaluator, scoring-engine, narrative-block, chart-builder, practice-resolver, dax-runner
  • 1.4 Metrics Registry expansion + DAX validation triage — Cohort defs done; registry gap audit pending
  • 1.5 Deep research (Done): 4 streams complete, SUMMARY.md written

Week 2 — Daybreak end-to-end (Pending)

Goal: Live Daybreak with real gate-flipping on validated data.

  • 2.1 Gate catalog — 8 YAML files under daybreak/framework/gates/ with hybrid floor+cohort thresholds
  • 2.2 Rules engine + PIT composite — gate-evaluator runs nightly; PIT alert composite (2+ red OR 3-day persistence OR revenue+other); anti-pattern guards (hidden thresholds, mandatory ack, alert volume caps)
  • 2.3 Daybreak UI rebuild — 5 tabs: Action Briefing → Active Alerts → OM Drill → Gate Coverage → Loop Closure
  • 2.4 OpenClaw engine migration — powerbi-reports-engine moves off VPS into personal/SGA/daybreak/engine/
  • 2.5 Daily run + tiered routing — 0600 OM → 0700 ROD → 0800 Leadership; 60-min auto-escalation

Week 3 — Hardening + branch on Workday (Pending)

  • Branch A (Workday API landed): Headway labor-scoring + OM variant ship
  • Branch B (still blocked): Headway data contract + second Daybreak metric tier + threshold tuning from Week 2 false-positive data

Phase 4 — Signal Effectiveness Engine (Deferred)

Own GSD project, target start mid-June 2026, lives at personal/SGA/signal-engine/. Design pre-committed (see Signal Engine tab).

The 5 gates that lock first (Phase 2)

Locked thresholds, per research stream 4. The other three stay under review until benchmarks confirm.

| Gate | Green | Yellow | Red | Source |
| --- | --- | --- | --- | --- |
| Collections rate | ≥97% | 92–96.9% | <92% | ADA HPI consensus |
| Hygiene reappt (post-taxonomy lock) | ≥85% | 70–85% | <70% | Dentistry IQ + DSO benchmarks |
| Provider days worked / FTE / month | ≥20 | 17–19.9 | <17 | MGMA + SGA $300/$100 floor |
| Adjustment rate (PPO-cohort relative) | top 50% in cohort | 50–75% | bottom 25% | DSO investor decks |
| Failed-appointment rate (no-show + same-day cancel) | ≤8% | 8–15% | >15% | Stream 4 combined definition |
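A small sketch of how a gate value maps onto the Green/Yellow/Red zones above. The function and its parameter names are illustrative assumptions; sign negation handles "lower is better" metrics like failed-appointment rate with the same comparisons.

```python
def zone(value: float, green: float, red: float,
         higher_is_better: bool = True) -> str:
    """Map a metric value onto TOC buffer zones.

    For higher-is-better metrics (e.g. collections rate): green if
    value >= green, red if value < red, yellow in between. For
    lower-is-better metrics (e.g. failed-appointment rate) the
    thresholds flip, which negating all three numbers handles.
    """
    if not higher_is_better:
        value, green, red = -value, -green, -red
    if value >= green:
        return "green"
    if value < red:
        return "red"
    return "yellow"
```

For example, a 95% collections rate lands in yellow against the ≥97% / <92% thresholds, and a 7% failed-appointment rate lands in green against the ≤8% / >15% thresholds.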

Three-tier time-clock routing (research stream 1, adopted verbatim)

| Time | Tier | Content |
| --- | --- | --- |
| 0600 CT | OM | Their gates only |
| 0700 CT | ROD | Their region + escalation of OM gates not acknowledged by 0700 |
| 0800 CT | Leadership | Network + escalation of ROD gates not acknowledged by 0800 |

Each tier has 60 minutes to acknowledge before auto-escalation. Forces accountability.
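The 60-minute escalation ladder can be sketched as a pure function of send time, acknowledgements, and the clock. This is an illustration of the routing semantics above, not the production scheduler; the function and argument names are assumptions.

```python
from datetime import datetime, timedelta

TIERS = ["OM", "ROD", "Leadership"]  # 0600 / 0700 / 0800 CT delivery

def current_owner(sent_at: datetime, acks: dict, now: datetime) -> str:
    """Which tier currently owns an alert under 60-min auto-escalation.

    `acks` maps tier name -> acknowledgement datetime (absent = never
    acked). An in-window ack stops escalation at that tier; otherwise
    ownership rolls OM -> ROD -> Leadership, one hour per tier.
    """
    deadline = sent_at
    for tier in TIERS:
        deadline += timedelta(minutes=60)
        ack = acks.get(tier)
        if ack is not None and ack <= deadline:
            return tier          # acknowledged in time; escalation stops
        if now < deadline:
            return tier          # still inside this tier's 60-min window
    return TIERS[-1]             # never acked: parked with Leadership
```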

4 parallel research streams — 12,700 words, 80+ sources

Ran in background while planning + scaffolding proceeded. Output synthesized into research/SUMMARY.md.

Stream 1 — Real-time multi-site ops alerting

Hospital ops command centers, retail multi-site, clinical CDS.

Key findings

  • Composite-AND is universal — Hopkins, Domino's Pulse, Datadog, AHRQ all converged. Single-metric alerts: 90–96% override rates.
  • Three-tier time-clock routing — 0600 OM / 0700 ROD / 0800 Leadership pattern (adopted verbatim).
  • PIT composite formula — single daily alert per practice on 2+ red OR 3-day persistence OR revenue+other.
  • Hide threshold values from operators — prevents gaming. Show only Green/Yellow/Red.
  • Hopkins/M2C2 method — gold standard for causal attribution. Log every intervention → matched-peer DiD.
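The "log every intervention → matched-peer DiD" step reduces to a one-line estimator once the windows are fixed. A minimal sketch, assuming window means are already computed; inputs and naming are illustrative, not the Hopkins/M2C2 implementation.

```python
def diff_in_diff(treated_pre: float, treated_post: float,
                 peer_pre: float, peer_post: float) -> float:
    """Matched-peer difference-in-differences.

    Compare the treated practice's change in the target metric against
    the change its matched peers saw over the same window; the peer
    trend nets out whatever moved everyone (seasonality, payer mix).
    Inputs are per-window means of the target metric.
    """
    return (treated_post - treated_pre) - (peer_post - peer_pre)
```

For example, if a treated practice moves 80 → 92 while its matched peers move 80 → 85, the estimated intervention effect is 7, not the naive 12.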

Sources cited

Hopkins Capacity Command Center, NEJM Catalyst (Tiered Huddles, Michigan M2C2), Datadog Composite Monitors, AHRQ PSNet Alert Fatigue, Manhattan Associates, NASA → Healthcare Mission Control

Stream 2 — Workforce gating systems

TOC buffer management, stage-gate, dental precedents, retail labor models.

Key findings

  • WFM platforms refuse to publish gating logic — Legion, UKG, Workday Adaptive all produce continuous recommendations only. SGA's AND-gated approach is closer to the clinical decision support pattern.
  • TOC buffer management = Green/Yellow/Red zones. Yellow exists to absorb common-cause variation, prevents single-month false positives.
  • Composite-AND intentionally low-sensitivity, high-specificity. Right tradeoff for politically sensitive headcount decisions.
  • Hybrid floor + cohort threshold pattern is defensible.
  • Dental precedents: Design Ergonomics 80%-capacity hygienist hire rule, Dentistry IQ 90%-vs-60% reappt = $1M-vs-$500K differential, MB2 $1.25M+4-op hire trigger.

Sources cited

Stage-Gate Intl, Velocity Scheduling (TOC), Legion, UKG, Design Ergonomics, Dentistry IQ, Proactive Chart, Frontiers Public Health, Overjet, ZenOne

Stream 3 — Signal Effectiveness Engine

Causal inference, contextual bandits, counterfactual estimation, execution inference, self-tuning ops systems.

Key findings

  • Airbnb ACE method is the closest published analog. ~2% systematic bias, A/A bootstrap CIs.
  • Layered stack: CausalImpact (BSTS) for per-fire attribution, ACE for continuous-dose, synthetic control + Spotify sensitivity for exec claims, RDD as free cross-check at every threshold.
  • NO full contextual bandit at SGA scale. Borrow concepts only.
  • Hierarchical Bayesian pooling = keystone. James-Stein shrinkage toward region + grand-mean priors. Without this, per-practice estimates pure noise.
  • Execution inference = 3-tier signal fusion (PMS logs / behavioral telemetry / outcome proxy) wrapped in per-(OM,gate) HMM.
  • Discount first 30 days of any directive — Hawthorne effect: 55–61% compliance variance from observers being present.
  • Surrogate index (Netflix KDD 2024) bridges short ops signal to long $ outcome. Validated 95% decision consistency on 200 A/B tests.
  • Architecture: new signal_engine Postgres schema, 5 tables, hot/warm/cold path.

Sources cited (50+)

Brodersen (Google BSTS), Airbnb engineering, Netflix decisioning, Spotify, Discord, Stitch Fix, Booking.com, DoorDash, Microsoft EconML, Uber CausalML, Toyota TPS, Amazon Working Backwards, Hawthorne empirical studies, arXiv/KDD/NeurIPS applied tracks

Stream 4 — Dental ops benchmarks

Published thresholds for 20+ metrics. Source-quality tiering.

High-confidence consensus

  • Collections rate 95–97% median (multi-source)
  • Staff comp 25–30% of revenue total, 5–8% admin
  • Provider days/FTE 17–22/month
  • Adjustment rate 30–45% PPO / 10–15% FFS

Sources disagree (lock definitions first)

  • No-show rate: Henry Schein One says 4%, Curogram says 15–20% (definition divergence — recommend "failed appointment" = no-show + same-day cancel combined, target ≤8%)
  • Hygiene reappt: 60% avg vs 90% top decile (Scott's 62% disputed — SAPS taxonomy must lock first)
  • EBITDA margin: Heartland 30–40% practice-level vs PDS 13.5–14.5% consolidated

No benchmark exists

  • 3rd-next-available (medical primary-care construct, not dental)
  • Active patients per FO FTE (one vendor benchmark only)

Source bias picture

Most "industry benchmarks" are 🔴 vendor-published or 🟡 consultant-aspirational. 🟢 sources: ADA HPI, MGMA, audited DSO investor decks.

Cross-cutting conclusions (all 4 streams agree)

  1. Composite-AND is universal. Banned single-metric alerts at alert-tab layer.
  2. Hybrid floor + cohort thresholds. Both must fail to fire.
  3. Causal attribution is non-negotiable from day one. Layer 4 directive store ships in Week 2, even though Signal Engine deferred to Phase 4.
  4. Design against documented anti-patterns: alert fatigue, single-metric trap, gaming, causal theater, cadence mismatch, vendor bias.

Dashboard Audit — Four Families, Not Three

Answers the question from Meeting 2026-05-14: "labor analysis has one type, RIS has different type, DAX queries different type."

30+ live HTML dashboards fingerprinted by CSS variable scheme, font choice, Action Briefing presence, threat-bar presence, and drillable-dashboard markers.

Family A — Drillable-Dashboard canonical (WINNER for ops/alert dashboards)

CSS signature: --accent:#1e3a8a slate. Reference impl: .tmp/zoho-sentiment.preflight/index.html. Pairs with action-briefing skill.

| Project | Deploy | Briefing | Threat-bar |
| --- | --- | --- | --- |
| zoho-intel (sentiment) | sga-zoho-intel-v2 | reference | |
| daily-goal-barometer | sga-barometer-v2 | | |
| hygiene-performance/dashboard | sga-hygiene-sprint-v2 | | |
| ai-roi-analysis/dashboard | TBD | | |
| neurality-analysis | TBD | | |
| fee-negotiations/.deploy | sga-fee-negotiations | | |
| cancellation-noshow-dashboard | TBD | | |
| net-budget-dashboard | TBD | | |
| rod-dashboard | TBD | | |

Family B — People System scorecard (WINNER for people-tier scorecards)

CSS signature: --amber:#F59E0B; --amber-bg:#FFFBEB. In-house design system, not in .claude/skills/. Cross-nav across the 5 dashboards already works.

| Project | Deploy | Briefing | Notes |
| --- | --- | --- | --- |
| ris-dashboard | sga-ris-v2 | | RIS Scorecard v10, provider scoring from PBI |
| his-dashboard | TBD | | Hiring Index Scorecard, cross-linked to RIS |
| people-system | sga-people-v2 | | Integration shell HIS+RIS+Headcount |
| headcount-dashboard | sga-headcount-v2 | | 3-metric framework, plotly |
| headway | sga-headway-v2 | | Hybrid — Family B CSS but Family A briefing. Triage needed. |

Headway open question: Path 1 (migrate to Family A) loses People System cross-nav; Path 2 (stay Family B, propagate briefing to RIS/HIS/Headcount). Recommendation: Path 2.

Family C — Hand-coded power-analytics (MIGRATE)

Pre-skill era. Chart.js + annotation plugin, custom colors. Works fine; not urgent. Refactor before adding features.

| Project | Deploy | Verdict |
| --- | --- | --- |
| labor-analysis | sga-labor-v2 | Migrate to Family B (people-tier deliverable) |
| labor-analysis-om | sga-labor-om-v2 | Migrate to Family B alongside labor-analysis |
| kpi-metrics-dashboard | sga-kpi-v2 | Migrate to Family A (drill-down value) |
| bonus-analysis | sga-bonus-v2 | Migrate to Family A (scenario explorer) |

Family D — Inter-fonted standalone (TRIAGE)

Inter font, no skill signature; adjacent to the Quiet Ledger aesthetic.

| Project | Deploy | Verdict |
| --- | --- | --- |
| daybreak | sga-daybreak-v2 | Rebuild in Family A during Sprint Week 2 |
| weekly-output | sga-weekly-output-v2 | Keep as-is — single-purpose Friday report |
| om-morning-brief | TBD | Triage — keep or supersede by Daybreak v2? |
| om-dashboard | sga-om-dashboard-v2 | Triage — likely superseded by Daybreak v2 |
| procurement-analysis | TBD | Migrate to Family A (6MB file, drill-down value) |

Anti-pattern findings

  1. Inconsistent CSS vars. A=slate (#1e3a8a), B=amber (#F59E0B), C=navy (#1a1a2e). Ad-hoc picks create a fifth family. Lock var sets in skill files.
  2. kpi-metrics-dashboard is 1MB. Too much inline data. Should pull from snapshot.json.
  3. procurement-analysis is 6MB. Needs investigation — embedded images or massive inline data dump.

Peer-Cohort Definitions

Required for every gate using peer-logic threshold (Call Volume, Conversion, Labor, Doctor Supply, Hygiene Reappt). Closes Phase 1 §1.4 prereq.

Tier 1 cohort: size × specialty × PMS

Expected: 12–24 cohorts of 8–25 practices each at SGA scale.

| Dimension | Buckets | Master-data field |
| --- | --- | --- |
| Size (active patients) | <1k, 1k–2k, 2k–4k, 4k–8k, >8k | active_patients |
| Specialty | general, ortho, pedo, oral-surgery, perio, multi-specialty | specialty |
| PMS | Eaglesoft, Open Dental, Dentrix, Curve, Oryx, other | pms |

Cohort ID format: {size}-{specialty}-{pms} (kebab-case). Example: 2k4k-general-eaglesoft.

Empty / single-practice cohort fallback

If a cohort has fewer than 4 practices, drop the most-specific dimension in order: PMS → specialty → size. Log the fallback in the gate output so the brief discloses it.
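The cohort key plus fallback rule can be sketched as follows. This is illustrative only: the dict shapes, field names, and `assign_cohort` function are assumptions, not the master-data service API.

```python
def assign_cohort(practice: dict, cohort_counts: dict, min_size: int = 4):
    """Tier 1 cohort key ({size}-{specialty}-{pms}, kebab-case) with the
    documented fallback: drop the most-specific dimension in order
    PMS -> specialty -> size until the cohort holds at least `min_size`
    practices. Returns (cohort_key, dropped_dims) so the gate output can
    disclose any fallback, as the spec requires.

    `cohort_counts` maps candidate cohort key -> practice count.
    """
    size, spec, pms = practice["size"], practice["specialty"], practice["pms"]
    candidates = [
        (f"{size}-{spec}-{pms}", []),
        (f"{size}-{spec}", ["pms"]),
        (size, ["pms", "specialty"]),
    ]
    for key, dropped in candidates:
        if cohort_counts.get(key, 0) >= min_size:
            return key, dropped
    return candidates[-1]  # size-only, even if thin; caller should flag it
```

So a practice whose full `2k4k-general-eaglesoft` cohort has only 2 members falls back to `2k4k-general`, and the brief discloses that PMS was dropped.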

Per-gate cohort assignment

| Gate | Cohort dims | Why |
| --- | --- | --- |
| Call volume | size only | Call volume scales with active-patient base; specialty/PMS don't matter for raw call count |
| Conversion | size × specialty | Varies with specialty mix; PMS doesn't matter |
| OSA | size × specialty × chair-type (Tier 2) | Cosmetic practices legitimately run longer waits |
| Neurality appt mix | size × specialty | Appt mix varies with specialty |
| Revenue / Production | (absolute, no cohort) | Budget pace is absolute % vs own budget |
| Labor | size × specialty | FO benchmarks scale with size; specialty affects workload |
| Doctor supply | size × specialty | Hire triggers are size + specialty driven |
| Hygiene reappt | size × specialty (post-taxonomy lock) | Reappt varies with specialty |

Tier 2 dimensions (deferred until Tier 1 stable)

| Dimension | Buckets | When to add |
| --- | --- | --- |
| Region | SGA East, Brendan Pool, SGA West, Gen4, FL, MISDP, CAPCC, KS/MO/TX/UT | If region matters more than network for ROD coaching consistency |
| Insurance mix | >80% PPO, 50–80%, <50%, FFS-heavy | If adjustment-rate peer comparison misleads |
| Acquisition cohort | legacy-SGA, legacy-Gen4, 2023-acq, 2024-acq, 2025-acq | When Signal Engine needs DiD on acquisition transitions |
| Days/week open | 5-day, 4-day, 6-day | If schedule-fill distorts |
| Chair-type mix | cosmetic-heavy, restorative-mixed, general-heavy | For OSA gate (stream 4 flagged this) |

Validation requirements before Phase 2

  1. Cohort assignment dry-run — Verify no cohort has fewer than 4 members after fallback
  2. Cohort stability test — 4 consecutive days; flag any practice flipping cohorts ≥2× per week (likely master-data error)
  3. Peer-median sanity check — 30 days historical; flag cohorts whose median is worse than industry floor

GSD Scaffold for Daybreak

Project structure at personal/SGA/daybreak/.planning/. Five files written by gsd-roadmapper agent.

| File | Contents |
| --- | --- |
| INDEX.md | Pointer to all planning docs + framework + strategic context |
| ROADMAP.md | Phases 0 (done) → 4 (deferred) with dependency graph |
| REQUIREMENTS.md | 30 numbered requirements, 100% phase coverage with traceability |
| SUCCESS-CRITERIA.md | Per-phase observable acceptance criteria |
| RISKS.md | Risks grouped by cross-cutting / external / technical / threshold / scope-creep |

5-Phase Roadmap

| Phase | Window | Status | Goal |
| --- | --- | --- | --- |
| Phase 0 — Existing framework | Pre-sprint | Complete | v2-spec, metric-tree, brief-schema, sample brief deployed |
| Phase 1 — Foundation + Registry | 2026-05-15 → 22 | In Progress | Master-data viewer, skill modularization, Metrics Registry gap fill, DAX triage |
| Phase 2 — Daybreak end-to-end | 2026-05-23 → 29 | Pending | 8-dim gate catalog, rules engine, UI rebuild, OpenClaw engine migration |
| Phase 3 — Hardening + secondary tier | 2026-05-30 → 06-05 | Pending | Threshold tuning, second metrics tier, compliance loop scaffolding |
| Phase 4 — Signal Effectiveness Engine | Mid-June onward | Deferred | Own GSD project at personal/SGA/signal-engine/ |

Top 3 requirements surfaced

  1. REQ-028 (Phase 2): Rules engine with composite-AND + streak-day persistence. Architectural keystone — without composite-AND logic, labor headcount-reduction flag cannot fire correctly.
  2. REQ-018 + REQ-019 (Phase 1): Registry stubs for all 8 gate dimensions + DAX validation of ~7 OpenClaw queries. If not done in Week 1, Phase 2 stalls day 1.
  3. REQ-031 (Phase 2): Task Scheduler chain with PushNotification on failure. Direct mitigation of 6-month "collapse" risk.

Top 3 risks surfaced

  1. R-02 (high): 6-month collapse risk — entire reason sprint exists. Mitigation: Phase 2 automation + Phase 3 hardening.
  2. R-10 (high): Call tracking blocker (Dakota + Amy) — uniquely affects 2 of 8 gates. Mitigation: stub YAMLs status: pending-data-source, ship other 6 gates, chase in parallel.
  3. R-40 (medium): Signal Effectiveness Engine scope creep — most exciting piece is highest-risk distraction. Hard rule in ROADMAP: Phase 4 is own project, own folder, mid-June.

Signal Effectiveness Engine

The IP — Phase 2 / own GSD project. This is the self-improvement layer, NOT compliance-as-paperwork: a learning system that gets smarter every day.

The loop

Directive issued
  → Signal patterns observed
  → Execution inferred (not self-reported)
  → Target metric movement measured
  → Causal effect estimated
  → Gate priority re-weighted
  → Tomorrow's directives smarter than today's

Five jobs

  1. Did the action happen? Infer execution from punch data + confirmation timestamps + chart audit patterns. Not "did OM say yes" — "did signals shift."
  2. Did the practice improve? Target metric movement + externalities.
  3. Which signals cause improvement? Per directive type × cohort × context: effect size, confidence interval, causal inference (DiD, synthetic control, propensity matching).
  4. Re-weight gate priorities. Gates whose directives drive proven improvement get amplified. Weak causal signal gets demoted or retired.
  5. Surface hypotheses for human ratification. "Morning directives to Brendan's region close 2.3x more often." Review → bake into routing OR reject.

Methodology stack (pre-committed from research stream 3)

| Technique | Purpose | Source |
| --- | --- | --- |
| CausalImpact (BSTS, Brodersen 2015) | Per-fire-event attribution | Google |
| Airbnb ACE method | Continuous-dose effects (closest analog at SGA scale) | Airbnb engineering |
| Synthetic control + Spotify sensitivity | High-stakes exec-deck claims | Abadie + Spotify |
| Regression Discontinuity | Free cross-check at every numeric threshold | Econometrics standard |
| Hierarchical Bayesian pooling (James-Stein) | Per-practice shrinkage toward region + grand-mean priors | Keystone decision |
| 3-tier signal fusion + HMM | Execution inference (told-vs-done-vs-improved) | Multiple |
| Netflix surrogate index (KDD 2024) | Short ops signal → long $ outcome bridge | Netflix |

Critical: no full contextual bandit at SGA scale. Borrow bandit concepts (Thompson uncertainty, pessimistic priors, off-policy evaluation as a deploy gate); do NOT build the online learning loop. The exploration tax is politically toxic and the data scale doesn't support it.

Discount the first 30 days of any directive. Hawthorne effect literature: 55–61% of observed compliance variance disappears when observers leave.
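The pooling keystone has a very small core. A minimal empirical-Bayes sketch of the shrinkage idea named above (precision-weighted pull toward a region or grand-mean prior); the function name, data shape, and `prior_strength` pseudo-count are illustrative assumptions, not the planned implementation.

```python
def shrink_effects(raw: dict, prior_mean: float,
                   prior_strength: float = 10.0) -> dict:
    """Shrink noisy per-practice effect estimates toward a pooled prior.

    `raw` maps practice -> (raw_effect, n_fire_events). A practice with
    few fire events is pulled hard toward the prior; a practice with
    many fires mostly keeps its own estimate. Without something like
    this, per-practice estimates at SGA fire counts are pure noise.
    """
    shrunk = {}
    for practice, (effect, n) in raw.items():
        w = n / (n + prior_strength)          # more data -> less shrinkage
        shrunk[practice] = w * effect + (1 - w) * prior_mean
    return shrunk
```

With a prior mean of 0, a practice showing a raw effect of 1.0 on 10 fires is shrunk to 0.5, while the same raw effect on 90 fires keeps 0.9.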

Architecture

| Path | Frequency | Job |
| --- | --- | --- |
| Hot (~30 min nightly, BEFORE 06:00 CT Daybreak) | Daily | BSTS + hierarchical Bayes + HMM update |
| Warm | Weekly | Surrogate index + sensitivity analysis |
| Cold | Monthly | Causal forests + off-policy evaluation of candidate re-weightings |

Schema

New signal_engine Postgres schema, 5 tables: directives, outcomes, attributions, weights, hypotheses.

Human ratification gate (Stitch Fix pattern)

Any proposed gate-weight change greater than 30% week-over-week must be human-reviewed before going live.
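The guardrail itself is a one-line check. A sketch under stated assumptions: the function name and the treatment of zero/new weights are illustrative, not from the spec.

```python
def needs_ratification(old_w: float, new_w: float, cap: float = 0.30) -> bool:
    """Stitch Fix-pattern gate: hold any proposed gate-weight change of
    more than `cap` (30%) week-over-week for human review before it goes
    live. Zero or brand-new weights always go to review (assumption)."""
    if old_w == 0:
        return True
    return abs(new_w - old_w) / abs(old_w) > cap
```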

Open Coordination Items

| # | Item | Owner | Severity |
| --- | --- | --- | --- |
| 1 | Four-bucket taxonomy ratification + tool sorting session | Karen + Scott | Medium |
| 2 | Workday API access — Rebecca / Jordan path | Miles owns push | High |
| 3 | Call tracking data source — blocks call volume + conversion gates | Dakota + Amy | High |
| 4 | SGA East labor analysis inclusion (currently excluded) | Scott | Medium |
| 5 | OM bonus tie-in formula (Meeting action #19) | Brooke decision | Medium |
| 6 | Top-25 offices priority pull (Meeting action #17) | Scott | Low |
| 7 | Signal Effectiveness Engine scoping — own GSD project | Scott (mid-June) | Low |

Triage decisions waiting for Scott

  • Headway path: Family A migration vs Family B + briefing bolt-on. Recommendation: Path 2.
  • om-morning-brief: Keep or supersede by Daybreak v2?
  • om-dashboard: Keep or supersede by Daybreak v2?
  • kpi-metrics-dashboard 1MB issue: Migrate now or defer?
  • DAX Dictionary: Source folder unidentified; need to fingerprint when located

Open questions from research (6 deepest)

  1. Call-tracking schema from Dakota + Amy — what fields? Call IDs? Booking outcomes? Recording transcription?
  2. SAPS hygiene reappt taxonomy — Brittney + Incline review. Lock before §2 hygiene gate fires.
  3. Donor pool construction for synthetic control — which practices count as "comparable" for a treated practice? This is stream 3's #1 open Signal Engine question.
  4. Override capture mechanism — UI for OMs to mark "intervention attempted" so Signal Engine distinguishes "didn't try" from "tried, didn't work."
  5. Cosmetic-vs-restorative OSA split — 25-day OSA is bad-medical-good-cosmetic. Resolve via chair-type classification in master data.
  6. Acquisition-cohort tagging — when did each practice join SGA? Signal Engine needs this for causal models.

Source-material gaps (GSD scaffold flagged)

  1. Peer-cohort definition CLOSED — see Cohorts tab
  2. Action-verb owner free-text in v2 — accept for sprint, defer to PWA
  3. Treatment Acceptance + Hygiene Reappointment not in registry — TA deferred to Phase 3; Hyg Reappt uses Gen4 data in Phase 2
  4. Active-patient-base bucketing for call-volume CLOSED — 1k/2k/4k/8k boundaries (Cohorts tab)
  5. 3rd Next Available no registry entry — add stub in Phase 1 §1.4