
Your build environment is your safety evidence


If your build environment cannot be reproduced reliably, neither can your compliance. Whether you are building a medical device, an automotive controller, or an industrial automation system, certification requirements do not bend to development schedules or budget pressures, but how much overhead that certification carries is something you can control.

For many embedded teams, the bottleneck is not the certification process itself. It is the instability that builds up underneath it: fragile development environments, inconsistent toolchains, and build results that cannot be reproduced reliably over time. These problems do not just slow development. In a safety-critical context, they directly threaten the validity of your safety evidence.

Containerized workflows address this at the root. Not by replacing compliance, but by making it structurally easier to achieve and maintain.

The hidden cost of environment fragility

Embedded projects tend to start in good shape. Early on, the team is small, the toolchain is fresh, and builds are predictable. As the project scales, the cracks appear.

Different engineers run slightly different compiler versions. SDK configurations drift between machines. A developer in one office cannot reproduce a build failure that occurs reliably for a colleague in another. New team members spend days configuring their environment before writing a single line of code.

In a general software project, these are productivity problems. In a safety-critical project, they are something more serious.

Certification bodies expect deterministic behavior. They want evidence that the same inputs produce the same outputs, reliably, across the full product lifecycle. If your build environment is not stable, your results are not deterministic. And if your results are not deterministic, your safety evidence becomes questionable.

Environmental drift is not a theoretical risk. It is one of the most common and least visible sources of compliance problems in embedded development.

What containerization actually changes

Containers solve the environment problem by eliminating it. Instead of configuring toolchains manually on each developer machine, you define the environment once and package it: the compiler, the build tools, the static analysis tools, the testing infrastructure. Everything required to build and verify the software is codified in a reproducible image.
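
As a minimal sketch, that codification can be as small as a Dockerfile checked into the repository and built once. Everything below is illustrative rather than an official IAR layout: the image names, versions, and installer path are placeholders.

    # Codify the build environment once; all names and versions below
    # are illustrative placeholders, not an official IAR layout.
    cat > Dockerfile <<'EOF'
    FROM ubuntu:22.04

    # Pin what you install; "latest" defeats reproducibility.
    RUN apt-get update && apt-get install -y --no-install-recommends \
            make ca-certificates \
        && rm -rf /var/lib/apt/lists/*

    # Install the toolchain from a fixed, versioned installer
    # (for example, the IAR Build Tools package your license provides).
    COPY tools/toolchain-installer.deb /tmp/
    RUN dpkg -i /tmp/toolchain-installer.deb && rm /tmp/toolchain-installer.deb

    WORKDIR /workspace
    EOF

    # Build and tag the image; the tag becomes part of the audit trail.
    docker build -t registry.example.com/embedded/toolchain:1.0.0 .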

Every developer, every CI pipeline, and every build machine uses exactly the same environment. There is no drift, no "works on my machine," and no ambiguity about which tool version produced which result.

This consistency is the foundation that functional safety workflows actually require.

When containers are combined with CI/CD pipelines, the benefits compound. Feedback cycles accelerate, integration issues surface earlier, and collaboration across distributed teams becomes more reliable. IAR Build Tools running in Linux-based containers compile up to 2x faster than equivalent local setups, and static analysis runs up to 3.5x faster, which means teams get more safety-critical checks completed in less time, not more.
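
As a concrete illustration, a CI job can reduce to a single container invocation. The image name, project file, and configuration below are placeholders, and the sketch assumes the image puts IAR's iarbuild command-line utility on the PATH.

    # One CI step: run the pinned toolchain image against the checked-out source.
    docker run --rm \
      -v "$PWD":/workspace -w /workspace \
      registry.example.com/embedded/toolchain:1.0.0 \
      iarbuild build/app.ewp -build Release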

Image: Modern embedded development workflow

But the most important property for safety teams is not speed. It is auditability.

Containers and functional safety: a natural fit

There is a common assumption that modern DevOps practices and functional safety requirements are in tension. The assumption is understandable. Safety standards like ISO 26262, IEC 61508, and IEC 62304 were developed long before containerization and CI/CD pipelines were part of the embedded vocabulary.

The reality is more nuanced. The underlying requirements have not changed. What has changed is that containerized workflows, implemented correctly, satisfy those requirements more reliably than traditional manual setups.

Consider what certification actually demands:

  • Deterministic builds. A container-based pipeline produces the same output from the same inputs, every time, regardless of who runs it or where.

  • Traceability. The development environment itself becomes version-controlled. The Dockerfile, the toolchain image, and the analysis configuration are all tracked in source control alongside the code they build. Every build is linked to a specific, auditable environment, as the sketch after this list illustrates.

  • Reproducibility over time. Safety does not end at the start of production. Systems in automotive, medical, and industrial applications routinely operate for 10 to 20 years. If a change is required five years after launch, you need to be able to reconstruct the exact environment that produced the original certified output.
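
A hypothetical sketch of the traceability point above: record the image's immutable digest, not just a mutable tag, in the repository, so that every commit names the exact environment that builds it. Names and paths are again placeholders.

    # Resolve the tag to its immutable content digest.
    # (RepoDigests is populated once the image exists in a registry.)
    DIGEST=$(docker inspect --format '{{index .RepoDigests 0}}' \
      registry.example.com/embedded/toolchain:1.0.0)

    # Track the digest alongside the code it builds.
    echo "$DIGEST" > toolchain.digest
    git add toolchain.digest Dockerfile
    git commit -m "Pin build environment for release 1.0.0"

    # CI builds against the digest, which cannot drift.
    docker run --rm -v "$PWD":/workspace -w /workspace \
      "$(cat toolchain.digest)" \
      iarbuild build/app.ewp -build Release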

With manually managed toolchains, long-term reproducibility is extremely difficult to guarantee. With containerized environments and IAR Long-Term Support Services (LTS Services), it becomes a structural property of the workflow.

Reproducible builds are not just a development convenience. They are auditable safety evidence.

Long-term stability: where most teams underinvest

The long-term dimension is where containerization provides value that is easy to underestimate at project start.

Without LTS-backed toolchains, every change to the development environment carries re-qualification risk. Update the compiler, and you may need to rerun validation. Change the static analysis configuration, and previously accepted evidence may no longer hold. These costs accumulate throughout the lifecycle, and they tend to be invisible until they become urgent.

With TÜV-qualified, LTS-backed toolchains deployed inside containers, the workflow remains stable across the entire product lifecycle. Updates happen deliberately, in a version-controlled fashion. CI/CD pipelines remain deterministic and auditable. Certification overhead is minimized because the stability that certification depends on is built into the process.

This is not an abstract benefit. Teams that standardize on containerized, LTS-backed environments consistently report fewer surprises during certification audits, and less rework when products require maintenance years after initial release.

Consider a maintenance update required five years after a product launched. Without a containerized environment, the original build machine is gone, the OS it ran on is no longer supported, and the compiler version may no longer be available. Reconstructing the certified environment from scratch can take weeks, if it is achievable at all. With a versioned container image stored alongside the source, any engineer can pull the environment and produce the exact binary that was originally certified. That is not a convenience. That is your audit trail.
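
Under the assumptions of the earlier sketches (a digest file in the repository and a checksum recorded at certification time), the reconstruction amounts to a few commands:

    # Pull the exact environment recorded at certification time.
    docker pull "$(cat toolchain.digest)"

    # Rebuild the firmware inside it.
    docker run --rm -v "$PWD":/workspace -w /workspace \
      "$(cat toolchain.digest)" \
      iarbuild build/app.ewp -build Release

    # Confirm the result matches the certified artifact byte for byte.
    sha256sum -c certified-release.sha256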

Making containers hardware-aware

One practical question that comes up quickly is hardware access. Embedded development does not happen in isolation. At some point, the software needs to run on a real target.

Containers can support this through USB and JTAG passthrough, or through TCP/IP-based debug servers for probes such as I-jet and J-Link, or via OpenOCD. In Windows Subsystem for Linux (WSL) environments, tools like usbipd-win can attach physical USB devices to the WSL instance over the USB/IP protocol, making them accessible inside a container.
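
As an illustration of the WSL path (bus IDs and device paths vary per setup, and the usbipd syntax shown is for recent usbipd-win releases):

    # On the Windows host: share the debug probe and attach it to WSL.
    usbipd list                      # find the probe's bus ID, e.g. 2-3
    usbipd bind --busid 2-3
    usbipd attach --wsl --busid 2-3

    # Inside WSL: the probe now appears under /dev/bus/usb and can be
    # passed through to the container.
    docker run --rm -it --device=/dev/bus/usb \
      registry.example.com/embedded/toolchain:1.0.0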


Image: Interacting with physical targets

Each approach has trade-offs, particularly in CI environments where multiple pipelines may compete for the same hardware. But the core point is that hardware access from containers is achievable, and when implemented, it means that even hardware-in-the-loop testing can be standardized, version-controlled, and made fully reproducible.

Getting started without disrupting what works

Transitioning to containerized workflows does not require a full-scale migration from day one.

The most effective approach is incremental. Start by containerizing the build environment for a single project or team. Validate that the outputs are identical to what the local setup produces. Integrate that container into your existing CI pipeline. Expand from there.
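
Validating equivalence can be as simple as building the same project both ways and comparing the outputs byte for byte. The paths below are hypothetical:

    # Build with the existing local setup.
    iarbuild build/app.ewp -build Release
    cp build/Release/Exe/app.out /tmp/app-local.out

    # Build the same project inside the container.
    docker run --rm -v "$PWD":/workspace -w /workspace \
      registry.example.com/embedded/toolchain:1.0.0 \
      iarbuild build/app.ewp -build Release

    # Identical inputs should yield identical outputs.
    cmp /tmp/app-local.out build/Release/Exe/app.out && echo "bit-identical"

If the comparison fails, depending on the toolchain you may need to disable embedded timestamps or absolute paths in the output before the builds become bit-identical.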

A well-structured container stack for embedded development typically follows a layered model: a base operating system image, a toolchain layer, and project-specific dependencies on top. This architecture means that the expensive base and toolchain layers are shared across projects, reducing storage requirements and pull times significantly.
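
In Dockerfile terms, that layering might look like the sketch below, which splits the single Dockerfile from earlier into three images, each building on the previous one. All names remain placeholders.

    # Layer 1: base OS, shared by every project.
    cat > Dockerfile.base <<'EOF'
    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y --no-install-recommends \
            make ca-certificates && rm -rf /var/lib/apt/lists/*
    EOF
    docker build -f Dockerfile.base \
      -t registry.example.com/embedded/base:1.0.0 .

    # Layer 2: toolchain, shared by every project on that toolchain.
    cat > Dockerfile.toolchain <<'EOF'
    FROM registry.example.com/embedded/base:1.0.0
    COPY tools/toolchain-installer.deb /tmp/
    RUN dpkg -i /tmp/toolchain-installer.deb && rm /tmp/toolchain-installer.deb
    EOF
    docker build -f Dockerfile.toolchain \
      -t registry.example.com/embedded/toolchain:1.0.0 .

    # Layer 3: thin project layer with project-specific dependencies only.
    cat > Dockerfile.project <<'EOF'
    FROM registry.example.com/embedded/toolchain:1.0.0
    COPY project-deps/ /opt/project-deps/
    EOF
    docker build -f Dockerfile.project \
      -t registry.example.com/embedded/project-x:1.0.0 .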

For teams working across multiple semiconductor vendors — Arm-based targets from NXP, ST, Renesas, Infineon, TI — vendor-specific IAR container images are available with device support built in, keeping image sizes manageable while covering the full architecture range.

The shift worth making

The embedded industry has spent years treating certification as a final gate: something you prepare for at the end of the development cycle. The overhead that approach generates — revalidation, evidence reconstruction, manual tool qualification — is one of the reasons certification is seen as a bottleneck.

Containerized workflows, combined with certified toolchains and structured CI/CD pipelines, shift certification from a gate to a continuous property of the development process. Compliance is embedded in the workflow rather than appended to it. Reproducibility is guaranteed structurally rather than managed manually.

The result is not just faster certification. It is a more reliable foundation for the entire product lifecycle, from first build to the last software update a decade after launch.

Want to try it yourself?

Explore containerized workflows for your embedded project at github.com/iarsystems/modern-workflow or find cloud-ready containers for multiple architectures at github.com/iarsystems/containers.

What’s next?

Ready to make compliance a structural property of your workflow rather than a final gate? See how IAR supports containerized embedded development in practice or book a demo.