CARLA Observability Toolkit (In Progress)
Simulation telemetry and metrics toolkit for CARLA with sprint-based delivery, dashboard planning, and experiment validation workflows.
Python · CARLA · Observability · Telemetry · Dashboards
Overview
This capstone focuses on building a practical observability layer for CARLA simulation workflows. The project emphasizes structured telemetry collection, measurable experiment evaluation, and clear reporting, so that model and scenario behavior can be analyzed with less guesswork. Delivery is phased into sprints, each with explicit acceptance criteria.
Highlights
- Defined a 5-phase sprint plan to ship telemetry, dashboards, and validation in increments.
- Mapped key simulation and experiment metrics for consistent observability.
- Established repository and project-board workflows to keep execution visible.
- Added validation checkpoints to assess metric quality and experiment outcomes.
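The metric-mapping and validation work above can be sketched as a small registry that forces every metric to be defined (name, unit, description) before any sample is accepted. This is an illustrative sketch: the metric names and fields below are invented for the example, not the project's actual schema.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class MetricDef:
    """A metric must be fully defined before collection starts."""
    name: str
    unit: str
    description: str


# Hypothetical registry of simulation and experiment metrics.
METRICS = {
    m.name: m
    for m in [
        MetricDef("sim_fps", "frames/s", "Simulation step rate"),
        MetricDef("collision_count", "events", "Collisions per episode"),
        MetricDef("route_completion", "%", "Fraction of planned route driven"),
    ]
}


def validate_sample(name: str, value: float) -> tuple[str, str, float]:
    """Reject samples for metrics that were never defined."""
    if name not in METRICS:
        raise KeyError(f"undefined metric: {name}")
    return (name, METRICS[name].unit, value)
```

Keeping the registry as the single source of truth means a dashboard can always resolve a sample's unit and meaning, and typos in metric names fail loudly at collection time rather than silently polluting the data.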
Architecture
- Telemetry collectors capture simulation events and performance signals.
- Metrics are normalized into analysis-ready structures for downstream dashboards.
- Dashboard and reporting layers present experiment behavior over time.
- Sprint execution is tracked through scoped stories and acceptance criteria.
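One way to read the collector-to-dashboard pipeline above: a collector buffers raw per-tick events, then normalizes their nested payloads into flat, analysis-ready rows for the dashboard layer. This is a minimal sketch under assumed event fields (`frame`, `ts`, `signals`), not the toolkit's actual code.

```python
from typing import Any


class TelemetryCollector:
    """Buffers raw simulation events and normalizes them into flat rows."""

    def __init__(self) -> None:
        self._events: list[dict[str, Any]] = []

    def record(self, event: dict[str, Any]) -> None:
        """Append one raw per-tick event as emitted by the simulation."""
        self._events.append(event)

    def to_rows(self) -> list[dict[str, Any]]:
        """Flatten nested signal payloads into dashboard-ready rows."""
        rows = []
        for ev in self._events:
            row = {"frame": ev["frame"], "ts": ev["ts"]}
            # Promote nested signal values to namespaced top-level columns.
            for key, value in ev.get("signals", {}).items():
                row[f"signal.{key}"] = value
            rows.append(row)
        return rows


# Usage: feed per-tick events during a run, then export rows for plotting.
collector = TelemetryCollector()
collector.record({"frame": 1, "ts": 0.05, "signals": {"speed": 4.2}})
collector.record({"frame": 2, "ts": 0.10, "signals": {"speed": 4.6}})
rows = collector.to_rows()
```

Separating buffering from normalization keeps the hot simulation loop cheap (just an append) and defers the flattening work to export time, when the dashboard actually needs the rows.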
Key Learnings
- Strong observability design is as important as model logic in simulation work.
- Incremental delivery reduces risk for multi-phase capstone projects.
- Useful metrics need clear definitions before collection starts.
- A visible sprint board improves accountability and planning decisions.
Outcomes
- Delivery plan: 5 phases and 90 total story points
- Current focus: Telemetry, dashboards, and experiment validation