Continuous Verification for Safety-Critical Software: Lessons from Vector's RocqStat Acquisition

behind
2026-01-31
9 min read

Integrate WCET timing analysis into embedded CI/CD with Vector's RocqStat—automated gates, artifacts, and certification traceability.

Fixing the invisible failure: why timing analysis must be part of your CI/CD now

Nothing wakes a development or operations team faster than a latent timing failure in a deployed embedded system: a missed interrupt, a jitter spike that trips a watchdog, or a control loop that overruns at a critical moment. In safety-critical domains—automotive ADAS, industrial controls, avionics—these failures are not just bugs; they are certification risks and potential safety incidents. Yet many embedded CI/CD pipelines still treat timing analysis and WCET (worst-case execution time) as one-off, manual tasks performed late in the project.

Vector Informatik’s January 2026 acquisition of StatInf’s RocqStat (announced alongside plans to integrate it with VectorCAST) marks a turning point: timing analysis is moving from siloed expert workflows into mainstream toolchains. For platform engineers and embedded teams, the question is practical: how do we fold static timing analysis and WCET estimation into automated CI/CD so timing becomes continuously verified, traceable, and auditable for certification?

The 2026 context: why timing verification is gaining urgency

By late 2025 and into 2026, three developments have pushed timing analysis into the spotlight:

  • Increased software complexity in safety-critical systems (more ECUs, more features, fused sensor stacks) raises exposure to timing violations.
  • Regulators and certifiers are demanding stronger evidence of timing safety: ISO 26262 and DO-178C projects frequently require explicit WCET evidence, and multicore/system-level timing is a growing audit focus.
  • Tool consolidation: acquisitions like Vector + RocqStat signal vendor roadmaps to build unified verification platforms that directly link unit/integration testing and timing analysis into a single workflow.

“Timing safety is becoming a critical …”
— Eric Barton, SVP Code Testing Tools, Vector (paraphrasing industry statements from January 2026)

RocqStat and WCET: what this integration enables

RocqStat is designed to compute safe WCET estimates using static analysis and hybrid techniques. Integrated with test toolchains like VectorCAST, it enables several practical improvements:

  • Unified evidence: link test coverage, execution traces, and WCET reports to the same artifacts and version tags.
  • Repeatability: deterministic WCET runs in CI give reproducible artifacts for auditor review.
  • Automation: gating rules can block merges/releases when WCET or timing regressions exceed thresholds.

Design patterns for folding WCET into embedded CI/CD

The goal is to treat timing analysis as a first-class verification stage in your pipeline, not an ad-hoc expert task. Below are repeatable design patterns that work across toolchains, compilers, and target hardware.

1) Shift-left static timing analysis

Run static WCET estimation early (on host builds) as soon as code compiles. Early static analysis catches pathological algorithms and API changes that expand execution paths before they reach hardware-in-the-loop (HIL) verification.

  • Integrate a static WCET tool (e.g., RocqStat) into the build stage; use the same compiler and optimization flags as the target to maintain fidelity.
  • Fail the pipeline on obvious timing path explosions (e.g., an O(n^2) to O(n^3) algorithmic regression) to avoid wasted downstream test cycles; a minimal check is sketched below.
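
As a concrete illustration, here is a minimal sketch of such a fail-fast check: a script that compares the current static WCET report against a stored baseline and exits non-zero when any function's bound grows past a threshold. The JSON schema shown is hypothetical; adapt it to whatever your WCET tool (RocqStat or otherwise) actually exports.

```python
#!/usr/bin/env python3
"""Fail-fast CI check for static-WCET regressions (illustrative schema)."""
import json
import sys

GROWTH_LIMIT = 1.25  # fail if any function's WCET bound grows by more than 25%

def load_functions(path):
    # Assumed report shape: {"functions": {"func_name": wcet_cycles, ...}}
    with open(path) as f:
        return json.load(f)["functions"]

def main(baseline_path, current_path):
    baseline = load_functions(baseline_path)
    current = load_functions(current_path)
    failures = []
    for func, wcet in current.items():
        old = baseline.get(func)
        if old and wcet > old * GROWTH_LIMIT:
            failures.append(f"  {func}: {old} -> {wcet} cycles")
    if failures:
        print("WCET path explosion detected:")
        print("\n".join(failures))
        return 1  # non-zero exit fails the CI stage
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1], sys.argv[2]))
```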

2) Deterministic builds and environment pinning

WCET accuracy depends on build reproducibility. CI must produce deterministic binaries:

  • Pin compiler versions, linker flags, and toolchain vendor binaries so every CI run produces byte-identical binaries.
  • Include reproducible build metadata in artifacts (git commit, build-id, toolchain hash), as sketched below.
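
A minimal sketch of that metadata step, assuming a GCC cross-toolchain at an illustrative path; the flags, paths, and filenames are placeholders for your own pinned configuration:

```python
#!/usr/bin/env python3
"""Emit reproducible-build metadata as a CI artifact (paths/flags are placeholders)."""
import hashlib
import json
import subprocess

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

metadata = {
    "git_commit": subprocess.check_output(
        ["git", "rev-parse", "HEAD"], text=True).strip(),
    # Hash the pinned cross-compiler binary so toolchain drift is detectable.
    "toolchain_sha256": sha256_of("/opt/toolchain/bin/arm-none-eabi-gcc"),
    "cflags": "-O2 -mcpu=cortex-r5 -ffunction-sections",
    "binary_sha256": sha256_of("build/ecu_app.elf"),
}

with open("build/build-metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```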

3) Multi-stage timing verification

Combine static analysis, measurement-based testing, and system-level simulation:

  1. Static WCET estimation (fast, conservative).
  2. On-target measurement runs (HIL or instrumented target) to validate static assumptions and collect execution traces.
  3. Hybrid refinement: feed trace information back into static analysis to close gaps while preserving safe upper bounds (a consistency check is sketched below).
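
To make step 3 concrete, the sketch below cross-checks measured maxima against static bounds: any observed time above the static bound means the analysis model missed a real path, and the run should fail rather than refine. Both report shapes are illustrative.

```python
"""Consistency check between on-target measurements and static WCET bounds.
Both report shapes below are illustrative."""
import json

def check_bounds(static_report_path, trace_report_path):
    with open(static_report_path) as f:
        bounds = json.load(f)["functions"]        # func -> static WCET bound
    with open(trace_report_path) as f:
        observed = json.load(f)["observed_max"]   # func -> max measured time
    violations = []
    for func, measured in observed.items():
        bound = bounds.get(func)
        if bound is not None and measured > bound:
            # A measurement above the static bound means the model
            # (cache, interrupts, input space) missed a real path.
            violations.append(f"{func}: observed {measured} > bound {bound}")
    return violations
```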

4) Automated verification gates and thresholds

Define clear, automated gates in CI/CD:

  • Soft gate: warn on regressions of X% in WCET but still allow merge with reviewer approval.
  • Hard gate: block merge if WCET exceeds requirement deadline minus certified margin (e.g., WCET >= deadline * 0.85 for an ASIL-D ECU).
  • Store the gate decision metadata (who overrode, why, and links to mitigations) as a required artifact for certification; a gate-evaluation sketch follows this list.
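
A minimal gate-evaluation sketch using 60%/85% thresholds consistent with the examples above; the task name, report values, and output path are illustrative, and real thresholds belong in your documented test plan:

```python
"""Gate evaluation sketch: soft warn at 60% of deadline, hard block at 85%."""
import json
import sys

SOFT_LIMIT = 0.60   # warn above this utilization; merge needs reviewer approval
HARD_LIMIT = 0.85   # block above this utilization, no override allowed

def evaluate(wcet_ns, deadline_ns, override):
    utilization = wcet_ns / deadline_ns
    if utilization >= HARD_LIMIT:
        return "BLOCK"
    if utilization >= SOFT_LIMIT:
        return "WARN_OVERRIDDEN" if override else "WARN"
    return "PASS"

# Task name and numbers are hypothetical; override would carry
# {"who": ..., "why": ..., "mitigation": ...} when a reviewer approves.
decision = {
    "task": "lane_keep_control",
    "wcet_ns": 720_000,
    "deadline_ns": 1_000_000,
    "override": None,
}
decision["result"] = evaluate(decision["wcet_ns"], decision["deadline_ns"],
                              decision["override"])

with open("gate-decision.json", "w") as f:
    json.dump(decision, f, indent=2)   # stored as a certification artifact

sys.exit(0 if decision["result"] in ("PASS", "WARN_OVERRIDDEN") else 1)
```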

Concrete CI/CD pipeline — an example flow

Below is a practical pipeline pattern you can implement in Jenkins, GitLab CI, or GitHub Actions. Keep each stage atomic and artifact-driven.

  1. Checkout — fetch code and submodules; tag build with commit and environment metadata.
  2. Build — cross-compile with pinned toolchain; produce instrumented and release binaries.
  3. Static analysis — run static code analysis and static WCET (RocqStat); generate WCET report (JSON, PDF).
  4. Unit & component tests — VectorCAST runs unit tests with coverage; attach trace files.
  5. On-target measurement — run tests on instrumented hardware / HIL; collect execution traces and timestamps.
  6. WCET refinement — merge trace coverage with static analysis to reduce pessimism while maintaining safe bounds.
  7. Gate evaluation — apply automated rules; publish pass/fail status and artifacts to artifact repository.
  8. Artifact storage — store signed artifacts: binaries, WCET reports, trace logs, coverage maps, SBOM/Provenance (SPDX, in-toto), and traceability matrix.

Each stage must publish machine-readable artifacts (JSON, JUnit XML) and a human-readable report. The WCET stage should also emit a signed provenance file that auditors can follow to reproduce the run.
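
As one way to emit that provenance file, the sketch below writes a record loosely following the in-toto Statement layout with a SLSA provenance predicate. The field values are placeholders, and a real pipeline would sign the result (for example with Sigstore or GPG):

```python
"""Provenance sketch for the WCET stage, loosely following the in-toto
Statement layout. Values are placeholders; sign the output in a real pipeline."""
import hashlib
import json

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [{
        "name": "wcet-report.json",
        "digest": {"sha256": sha256_of("wcet-report.json")},
    }],
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {
        "buildType": "wcet-analysis",                     # illustrative label
        "builder": {"id": "ci://pipeline/wcet-stage"},    # illustrative ID
        "materials": [{
            "uri": "git+https://example.com/ecu-firmware.git",
            "digest": {"sha256": "<commit-hash>"},        # fill from CI env
        }],
    },
}

with open("wcet-provenance.json", "w") as f:
    json.dump(statement, f, indent=2)
```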

What artifacts should you produce and store?

Certification relies on traceable, reproducible evidence. At minimum, generate:

  • WCET reports with configuration, assumptions, and measurement inputs (machine-readable and signed).
  • Execution traces (timestamped logs, instruction traces, path coverage).
  • Test results (unit/integration coverage, VectorCAST artifacts).
  • Traceability matrix mapping requirements ↔ tests ↔ code ↔ WCET results.
  • Build provenance (toolchain hashes, compile flags, environment), using standard SBOM/provenance formats (e.g., SPDX, in-toto).

Practical advice for artifact handling

  • Store artifacts in an immutable, versioned artifact repository with retention policies and checksum verification.
  • Attach human summaries for reviewers and machine-readable metadata for automation and audits.
  • Include a clear, machine-readable list of WCET assumptions: interrupt model, cache/pipeline modeling, RTOS scheduling, and core affinity (see the sketch below).
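
One lightweight way to capture those assumptions is a machine-readable artifact stored next to the WCET report, so auditors can check them against the deployed configuration. Every value below is illustrative:

```python
"""Record WCET analysis assumptions as a machine-readable artifact.
Every value below is illustrative."""
import json

assumptions = {
    "interrupt_model": "max 2 preemptions per job, ISR WCET 12 us",
    "cache_model": "L1 I/D analyzed; L2 treated as always-miss (conservative)",
    "rtos_scheduling": "fixed-priority preemptive, 1 ms tick",
    "core_affinity": {"lane_keep_control": 0, "diagnostics": 1},
    "analysis_inputs": "worst-case sensor frame, 64 tracked objects",
}

with open("wcet-assumptions.json", "w") as f:
    json.dump(assumptions, f, indent=2)
```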

Traceability and certification: mapping WCET into safety dossiers

Integrating WCET into CI/CD helps produce the documentation certifiers expect:

  • Link each requirement to test cases (VectorCAST) and to the WCET analysis of the code implementing that requirement.
  • Record the mitigation strategy for timing risk (e.g., increased scheduling priority, partitioning, watchdogs) and link to verification evidence.
  • Produce an auditable trail: requirement → code module (commit) → tests → WCET report → gate decision. A sketch of automating this join appears below.
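
A sketch of automating that join, assuming hypothetical JSON exports; a real integration would read VectorCAST result packages and your requirements-management tool's export instead:

```python
"""Traceability join sketch: requirements -> tests -> commits -> WCET evidence.
Input files are hypothetical stand-ins for ALM/test-tool exports."""
import csv
import json

with open("req-to-test.json") as f:
    requirements = json.load(f)        # req_id -> [test_id, ...]
with open("test-results.json") as f:
    test_results = json.load(f)        # test_id -> {"status": ..., "commit": ...}

with open("traceability.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["requirement", "test", "status", "commit", "wcet_evidence"])
    for req_id, test_ids in requirements.items():
        for test_id in test_ids:
            result = test_results.get(test_id, {})
            writer.writerow([req_id, test_id,
                             result.get("status", "MISSING"),
                             result.get("commit", ""),
                             "wcet-report.json"])  # link to the WCET artifact
```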

Automating this traceability reduces reviewer effort during ISO 26262 or DO-178C audits and shortens certification cycles. The joint VectorCAST + RocqStat approach is explicitly aimed at building these linked artifacts inside a single toolchain workflow.

Advanced concerns: multicore, caches, and compiler optimizations

Timing analysis faces complexity when moving to modern hardware:

  • Multicore interference: shared caches and buses make safe WCET estimation harder. Use temporal/spatial partitioning or multicore-aware WCET analyses.
  • Cache and pipeline modeling: static analyzers must model microarchitectural features. Ensure your WCET tool supports the target CPU model or provide validated measurement inputs.
  • Compiler optimizations and LTO: link-time optimization can change execution paths. Re-run WCET on the final linked binary and pin optimization levels for certified releases.

Practical workarounds include isolating timing-critical tasks on dedicated cores, using RTOS configurations for deterministic scheduling, and including microbenchmarking with controlled interference scenarios as CI experiments.

Operational practices: timing regression detection and triage

Continuous verification isn't just pass/fail. Make timing regressions actionable:

  • Run nightly baseline runs and publish trend dashboards that visualize WCET deltas per commit. Small cumulative increases often indicate creeping inefficiency.
  • Automate root-cause hints: if WCET increases, run a difference analysis to show which functions, paths, or modules contributed (a diff sketch follows this list).
  • Tag and quarantine commits that cause unreviewed timing increases—require design/peer review before merge.
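
As a sketch of that root-cause hint step, the function below ranks per-function WCET deltas between two reports so reviewers see the biggest contributors first; the report schema is again illustrative:

```python
"""Rank per-function WCET deltas between two reports (illustrative schema)."""
import json

def wcet_deltas(baseline_path, current_path, top_n=10):
    with open(baseline_path) as f:
        baseline = json.load(f)["functions"]
    with open(current_path) as f:
        current = json.load(f)["functions"]
    deltas = [(func, wcet - baseline.get(func, 0))
              for func, wcet in current.items()]
    deltas.sort(key=lambda item: item[1], reverse=True)
    return deltas[:top_n]

for func, delta in wcet_deltas("wcet-baseline.json", "wcet-current.json"):
    if delta > 0:
        print(f"{func}: +{delta} cycles")
```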

Example triage workflow

  1. WCET regression detected by CI gating stage.
  2. Create a CI incident ticket with artifacts and diff heatmap.
  3. Automatic suggestion: run fine-grained static analysis on impacted modules and schedule targeted on-target tests.
  4. Assign to an engineer for fix; require re-run of WCET stage and verification before merge.

Case study: an automotive ECU team (illustrative)

Team Alpha (fictional, but representative) supports an ADAS ECU. Before integrating WCET into CI, timing issues were discovered late during HIL runs, causing rework and certification delays. Post-2025, they piloted a VectorCAST + RocqStat pipeline built into GitLab CI:

  • They ran static WCET on every merge request and stored JSON WCET artifacts alongside VectorCAST coverage packages.
  • Automated gates blocked merges where WCET exceeded 80% of the scheduling window, with a soft-warn threshold at 60%.
  • Nightly baseline runs with trend charts caught a 15% WCET drift from a third-party library update; automated diff lists pointed to an algorithmic change, which the team reverted within hours.

Results: the team reported fewer late-stage timing defects, a smoother ISO 26262 evidence collection process, and a measurable reduction in certification review time—something certifiers attributed to clearer, automated traceability.

Checklist: what to implement in the next 90 days

  1. Pin toolchain versions and create reproducible build recipes for CI.
  2. Integrate a static WCET tool (pilot RocqStat if available) into the build pipeline; produce signed WCET artifacts.
  3. Define gate thresholds (soft + hard) aligned to your safety margins and document them in the test plan.
  4. Publish a traceability template that links requirements, tests, code commits, and WCET results and automate population.
  5. Set up nightly baseline runs and a regression dashboard with alerting for deltas.

Future predictions (2026–2028)

Expect several emergent trends over the coming years:

  • Timing verification as code: WCET configurations, assumptions, and gating policies will be stored as code and reviewed in PRs.
  • AI-assisted path pruning: machine learning will help suggest the most critical paths to analyze and provide confidence estimates to guide human reviewers.
  • Stronger regulatory focus on system-level timing: certifiers will ask for integrated evidence across multicore systems and end-to-end latencies in sensor-to-actuator chains.
  • Toolchain consolidation: more vendors will pair static WCET analysis with test tooling (as Vector is doing), making continuous timing verification mainstream.

Final takeaway: make timing verification continuous, visible, and auditable

Timing is not an optional verification step. As embedded systems become more software-defined and safety expectations rise, WCET and timing analysis must be integrated into CI/CD as automated, auditable stages. Vector’s acquisition of RocqStat signals that vendor ecosystems are aligning to support that transformation—but the practical work remains on engineering teams: design deterministic builds, automate WCET stages, publish artifacts, and enforce gates tied to certification margins.

Start small: add static WCET to your next merge request pipeline, store the report, and require a human review for any timing increase. Iterate from there toward measurement-based refinement and fully automated gating. The ROI is clear: fewer late-stage timing surprises, faster certification, and safer products in the field.

Call to action

If you manage embedded CI/CD or platform engineering for safety-critical systems, now is the time to act. Evaluate a pilot that integrates static WCET into your CI pipeline, collect signed artifacts for one release, and measure the certification overhead reduction. If you want a jumpstart, contact behind.cloud for a practical assessment—toolchain-agnostic, focused on traceability, and aligned to ISO 26262/DO-178C evidence best practices.


Related Topics

#embedded #CI/CD #verification

behind

Contributor

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.
