Behaviour and Performance Comparison between Ada Ravenscar and Rust RTIC for Hard Real-Time Systems

Tharen Emmanuel Candi

19th of March, 2026

Introduction

Real-time systems are distinct from general-purpose computing in that their correctness depends not only on the logical results of computation but also on the time at which those results are produced (Engblom et al. 2001). The criticality of a real-time system determines how problematic a missed deadline is (Engblom et al. 2001). A primary architectural requirement for systems across this spectrum is predictability (Puschner and Burns 2000): the feasibility of soundly establishing temporal behaviour, ranging from full determinism to bounded uncertainty.
 
To provide these guarantees, system designers rely on schedulability analysis, a methodology that determines whether a given set of tasks can meet their deadlines when executed on a specific platform. This analysis requires precise upper bounds on the execution time of code, known as Worst-Case Execution Time (WCET) (Wilhelm et al. 2008). This study provides a comparative evaluation of the real-time capabilities of Ada Ravenscar and Rust RTIC. Using Generic Environment Executive (GEE) as a reference real-time application case study (Hu and Gobbo 2020), both implementations are evaluated on an STM32F429I-Discovery board (ARM Cortex-M4).
 
This paper introduces the Ada Ravenscar and Rust RTIC frameworks, describes the GEE application, examines the RTIC implementation with emphasis on runtime abstractions and timing implications, analyses task definitions and synchronisation primitives, and uses the Modeling and Analysis Suite for Real-Time Applications (MAST) (Harbour et al. 2001) to compare the differences in schedulability.

The Ada Ravenscar and Rust RTIC Frameworks

Ada and the Ravenscar Profile

The Ada programming language has long been used for high-integrity systems due to its deterministic constructs and built-in support for concurrency (Engblom et al. 2001; Burns, Dobbing, and Vardanega 2017). The full Ada tasking model includes non-deterministic features that render static timing analysis infeasible. To address this, the Ravenscar Profile was defined as a restricted subset of Ada tasking (Burns, Dobbing, and Vardanega 2017).
 
Ravenscar enforces a static existence model and a deterministic execution model with predefined, recurrent tasks and resources, prohibiting dynamic creation and abort statements to ease system analysability (Burns, Dobbing, and Vardanega 2017). The profile provides both exclusion synchronisation (mutual exclusion) and avoidance synchronisation (condition-based waiting) via protected entries and suspension objects. Concurrency control relies on Protected Objects (POs), which use the Immediate Priority Ceiling Protocol (IPCP) to ensure mutually exclusive access, prevent deadlocks, and bound priority inversion, enabling rigorous Response Time Analysis (RTA) (Burns, Dobbing, and Vardanega 2017).
 
The system employs preemptive fixed-priority scheduling (FPS), managed by a software kernel runtime that also handles context switching and time management. Temporal predictability is guaranteed by time-based semantics (such as delays and deadline monitoring) driven by a periodic SysTick interrupt handler. The SysTick handler runs at the highest hardware priority so that the scheduler bounds release jitter and accurately tracks time (Vardanega, Zamorano, and Puente 2005).

Rust and the RTIC Framework

Rust is increasingly discussed for embedded systems due to its compile-time memory safety and modern tooling. The Real-Time Interrupt-driven Concurrency (RTIC) framework, formerly RTFM, is a DSL built on top of Rust that maps tasks to hardware interrupt vectors on ARM Cortex-M microcontrollers. Concurrency control uses the Stack Resource Policy (SRP), a Stack-Based Immediate Priority Ceiling Protocol (SB-IPCP) variant, to guarantee deadlock-free execution (Baker 1991; Lindgren, Dzialo, and Lunnikivi 2023; Eriksson et al. 2013).

RTIC defines two task classes that safely share a single stack: hardware tasks, which bind directly to an interrupt vector and run to completion inside its handler, and software tasks, which are async functions dispatched through otherwise unused interrupt vectors and may suspend at await points.

To execute these tasks, scheduling is delegated to the hardware NVIC. Tail-chaining is a hardware optimisation implemented in Cortex-M processors such that when an exception enters a pending state while the processor is handling another exception of the same or higher priority, the pending exception is subsequently handled without un-stacking and re-stacking core registers (Yiu 2011). This reduces context-switch latency between successive interrupts (Yiu 2011). Because RTIC directly maps task priorities to hardware interrupt priorities, the NVIC automatically tail-chains into the next highest-priority pending task, natively enforcing the strict priority ordering required by the SRP (Lindgren, Dzialo, and Lunnikivi 2023).
 
Shared resources are protected by SRP critical sections. Each resource has a compile-time-computed ceiling, and locking elevates the task's effective dynamic priority via the BASEPRI register, which temporarily masks lower-priority interrupts. Furthermore, RTIC is deeply integrated with Rust's async/await model: tasks are viewed as the synchronous sections between await points, and no shared-resource lock can be held across a suspension point, a property enforced by the compiler (RTIC Community 2024). RTIC minimises memory usage by relying on a single stack, enforcing a strict LIFO (last-in, first-out) execution order in which higher-priority tasks always complete before lower-priority tasks resume.
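The structure is illustrated by the minimal RTIC v2 sketch below; the device crate, dispatcher, clock value, and task names are illustrative rather than taken from the GEE port.

#![no_main]
#![no_std]

use defmt_rtt as _;   // defmt transport (assumed dependency)
use panic_probe as _; // panic handler (assumed dependency)

#[rtic::app(device = stm32f4xx_hal::pac, dispatchers = [EXTI1])]
mod app {
    use rtic_monotonics::systick::prelude::*;
    systick_monotonic!(Mono, 1_000); // 1 kHz SysTick-backed monotonic

    #[shared]
    struct Shared {
        counter: u32, // shared between tasks of different priorities
    }

    #[local]
    struct Local {}

    #[init]
    fn init(cx: init::Context) -> (Shared, Local) {
        Mono::start(cx.core.SYST, 180_000_000); // core clock (assumed value)
        worker::spawn().ok();
        (Shared { counter: 0 }, Local {})
    }

    #[task(priority = 2, shared = [counter])]
    async fn worker(mut cx: worker::Context) {
        loop {
            // SRP critical section: BASEPRI is raised to the ceiling of
            // `counter`, masking only tasks at or below that ceiling.
            cx.shared.counter.lock(|c| *c += 1);
            // The borrow ends before the suspension point; holding it
            // across the .await would be rejected at compile time.
            Mono::delay(100.millis()).await;
        }
    }
}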

Case Study: GEE Application

Application Definition

The original GEE application definition from Hu and Gobbo (Hu and Gobbo 2020) was used as the baseline and modelled with the MAST Suite (Harbour et al. 2001).
 
A transaction, as defined by MAST, models a set of interrelated tasks triggered by a single external event, such as a periodic timer or a hardware interrupt (Harbour et al. 2001; Hu and Gobbo 2020). By grouping tasks into a transaction represented as a graph of event handlers, the model can accurately capture the causal flow of events, precedence constraints, relative time offsets, and global end-to-end deadlines throughout the system (Harbour et al. 2001; Hu and Gobbo 2020).
 
The modelled GEE application is composed of the following four transactions: the periodic Regular Producer, the sporadic On-Call Producer (released by the Regular Producer), the sporadic External Event Server (released by an external interrupt), and the sporadic Activation Log Reader (released by the Regular Producer).

Ada Ravenscar Implementation

The original application uses Protected Objects to manage all shared state and synchronisation. Each PO is assigned a priority ceiling (e.g., Request_Buffer with ceiling 9, activation_log with ceiling 13). A. Burns, B. Dobbing, and T. Vardanega offer a more detailed implementation description (Burns, Dobbing, and Vardanega 2017).

RTIC Implementation

We ported the GEE application to RTIC with the objective of preserving the original task set, activation patterns, and shared-resource interactions whilst expressing them using RTIC’s task model and synchronisation primitives. The implementation follows RTIC’s standard design pattern as defined in the official documentation (RTIC Community 2024), combining hardware tasks for interrupt-bound activities and asynchronous software tasks for periodic and event-driven behaviour.

Task Mapping

The exact static priority values from Hu and Gobbo were preserved (Hu and Gobbo 2020). These values were originally assigned according to Deadline Monotonic (DM) scheduling, an optimal fixed-priority algorithm where tasks with shorter relative deadlines receive higher priorities (Hu and Gobbo 2020).

Regular Producer

Implemented as a software task with priority 7, released by the monotonic timer via Mono::delay_until(next_time).await; it executes a fixed workload and conditionally triggers sporadic activity. Its shared resource is a read-only copy of the activation time.
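The release pattern is sketched below as a fragment of the #[rtic::app] module; the helper names whetstone_workload, due, and WORK_REQUEST, the request_tx sender, and the period value are illustrative rather than taken verbatim from the port.

#[task(priority = 7, shared = [activation_time], local = [request_tx])]
async fn regular_producer(mut cx: regular_producer::Context) {
    // Phase the first release to the common activation instant from init.
    let mut next_time = cx.shared.activation_time.lock(|t| *t);
    loop {
        Mono::delay_until(next_time).await;          // periodic release point
        whetstone_workload();                        // fixed computational load
        if due() {
            // Conditionally release the sporadic On-Call Producer by placing
            // a request on the rtic-sync channel it is waiting on.
            let _ = cx.local.request_tx.try_send(WORK_REQUEST);
        }
        next_time = next_time + 1000.millis();       // period (illustrative value)
    }
}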

On-Call Producer

A sporadic software task with priority 5 that remains suspended until data is available; activations are triggered by the Regular Producer. Its shared resource is a read-only copy of the activation time.

External Event Server

A sporadic software task with priority 11 that awaits a signal from the hardware task bound to EXTI0 (#[task(binds = EXTI0)]). Its shared resources include the read-only activation time, as well as the SRP-based activation log.
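This interaction is sketched below as a fragment of the app module; the ISR priority, the signal writer/reader names, and the record_event method are illustrative.

#[task(binds = EXTI0, priority = 12, local = [ext_writer])]
fn exti0_handler(cx: exti0_handler::Context) {
    // In real code the pending EXTI line is cleared first; then the
    // sporadic server is released through the rtic-sync signal.
    cx.local.ext_writer.write(());
}

#[task(priority = 11, shared = [activation_time, activation_log], local = [ext_waiter])]
async fn external_event_server(mut cx: external_event_server::Context) {
    loop {
        cx.local.ext_waiter.wait().await;                      // sporadic release
        // SRP critical section on the shared activation log.
        cx.shared.activation_log.lock(|log| log.record_event());
    }
}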

Activation Log Reader

A sporadic software task with priority 3 that awaits a signal from the Regular Producer. Its shared resources include the read-only activation time, as well as the SRP-based activation log.

Resource and Synchronisation Mapping

The Ravenscar Protected Objects and Suspension Objects were replaced with RTIC's synchronisation primitives, maintaining equivalent interaction patterns: the Request_Buffer Protected Object is replaced by an rtic-sync channel (try_send/recv), the Suspension Objects that release the External Event Server and Activation Log Reader are replaced by rtic-sync Signals, and the activation_log Protected Object becomes an SRP-protected shared resource accessed through RTIC lock sections.

Initial Release and Timing

All tasks are synchronised to a common activation time computed during initialisation. RTIC’s monotonic timer (SysTick-backed for consistency with the Ada Ravenscar runtime) is started in init, and the calculated start timestamp is distributed to all tasks as a shared resource. Each task performs an asynchronous delay (delay_until) to this common activation instant before entering its periodic or sporadic execution loop. This replicates the “common start” assumption used in the Ada version and ensures comparable phasing between implementations by establishing a common epoch, which also helps to enforce precedence relations across tasks (Burns, Dobbing, and Vardanega 2017).
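A sketch of this initialisation step follows; the clock frequency, the one-second start-up offset, and the omission of the remaining resources are illustrative simplifications.

#[init]
fn init(cx: init::Context) -> (Shared, Local) {
    // SysTick-backed monotonic, chosen for parity with the Ada runtime.
    Mono::start(cx.core.SYST, 180_000_000);

    // Common activation epoch: "now" plus a start-up offset, stored as a
    // read-only shared resource so every task can delay_until() it.
    let activation_time = Mono::now() + 1000.millis();

    regular_producer::spawn().ok();
    on_call_producer::spawn().ok();
    external_event_server::spawn().ok();
    activation_log_reader::spawn().ok();

    (Shared { activation_time }, Local {})
}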
 
A common activation instant produces the system's critical instant (Vardanega, Zamorano, and Puente 2005; Hu and Gobbo 2020). By forcing the periodic Regular_Producer and the simulated Force_Interrupt to release simultaneously, maximum initial contention is achieved, exposing lower-priority tasks to their theoretical maximum preemption interference (Vardanega, Zamorano, and Puente 2005) and thereby ensuring worst-case response times (Hu and Gobbo 2020).

Comparisons of Runtime Semantics

IPCP versus SRP

Ravenscar mandates IPCP via the GNAT runtime (System.BB), where explicit PO_Enter/Exit calls dynamically raise the task's active priority to the object's ceiling. Conversely, RTIC implements SRP directly through hardware manipulation of the Cortex-M BASEPRI register. The two are equivalent in preventing deadlock and bounding priority inversion, differing only in implementation. RTIC's hardware-backed approach reduces the overhead of managing shared resources to single instructions, effectively eliminating the software abstraction penalties incurred by the Ada runtime.
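Conceptually, each lock reduces to a pair of BASEPRI writes, roughly as sketched below; this is a simplified illustration of the mechanism on ARMv7-M, not RTIC's generated code, and it omits the logical-to-hardware priority mapping performed by the framework.

use cortex_m::register::basepri;

// Simplified SRP critical section: raise BASEPRI to the resource ceiling
// (in hardware priority encoding), run the closure, then restore it.
fn lock_with_ceiling<R>(ceiling_hw: u8, f: impl FnOnce() -> R) -> R {
    let old = basepri::read();
    unsafe { basepri::write(ceiling_hw) }; // masks tasks at or below the ceiling
    let r = f();
    unsafe { basepri::write(old) };        // restore the previous masking level
    r
}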

Resource Sharing and Blocking

Whilst Ada enforces strictly priority-limited blocking via IPCP, RTIC introduces a secondary, distinct blocking term: non-preemptive blocking, Bnp, which arises from the use of global interrupt masking (PRIMASK) for critical sections within the framework's source code. This mechanism is fundamental to the safety of RTIC's core abstractions, including channels, signals, timer queues (used by delay_until), and waker registrations. It enables safe, atomic access to shared state without mutexes; however, it inevitably creates short intervals during which all interrupts are disabled, introducing release jitter for all tasks, including the SysTick monotonic timer. Although the framework's source code describes these critical sections as negligibly small (RTIC Community, n.d.), their cumulative impact cannot be ignored in rigorous schedulability analysis.
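For illustration, such a critical section follows the pattern below (shown here with the critical-section crate; whether rtic-sync uses this crate or manipulates PRIMASK directly, the temporal effect is the same, and the surrounding function is hypothetical):

use critical_section as cs;

fn update_timer_queue(/* queue entry, waker, ... */) {
    cs::with(|_cs| {
        // While this closure runs, all maskable interrupts are disabled
        // (PRIMASK set on a single-core Cortex-M). Every task, including the
        // SysTick monotonic handler, can be delayed by this interval, which
        // is the source of the non-preemptive blocking term Bnp.
        // ... manipulate the timer queue / waker list here ...
    });
}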

System Timers and Scheduling Overhead

The RTIC implementation utilises a SysTick-backed monotonic timer. This configuration forces RTIC into a periodic interrupt model, negating the efficiency of its native tickless architecture (RTIC Community 2024) and representing a worst-case performance scenario. While the Ada runtime executes a constant-time O(1) check on the delay queue, the RTIC implementation employs a linked-list TimerQueue that requires traversal, resulting in O(N) complexity for N active timers. The documentation notes that, compared to the SysTick backend, the RP2040 monotonic is a "'proper' implementation with support for waiting for long periods without interrupts" (RTIC Community 2024).

Mitigation Strategy: Minimising Global Masking

The reliance of rtic-sync on global critical sections introduces non-preemptive blocking that complicates hard real-time analysis. One mitigation is to adopt an alternative architectural pattern that avoids the async/await paradigm in favour of explicit run-to-completion tasks.

In this model, the "wait" (avoidance synchronisation) provided by channels is replaced by a "spawn" mechanic. A producer task locks a shared buffer (protected via SRP/BASEPRI), enqueues data, and explicitly spawns the consumer task. The consumer then preempts, locks the buffer to retrieve the data, processes it, and exits.

This approach would significantly reduce non-preemptive blocking (Bnp), simplify modelling by limiting global masking strictly to the monotonic timer's operations, and align the system's behaviour more closely with the deterministic ideal of Ada Ravenscar.
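The pattern is sketched below as a fragment of the app module; the buffer type, priorities, and helper names are illustrative. Both tasks remain async fns, as RTIC v2 requires for software tasks, but contain no await points and therefore run to completion without touching any rtic-sync critical section.

#[task(priority = 3, shared = [request_buffer])]
async fn producer(mut cx: producer::Context) {
    // BASEPRI-masked SRP section only; no global interrupt masking.
    cx.shared.request_buffer.lock(|buf| buf.push_back(REQUEST).ok());
    let _ = consumer::spawn(); // explicit release replaces the channel wait
}

#[task(priority = 4, shared = [request_buffer])]
async fn consumer(mut cx: consumer::Context) {
    // Preempts the producer (higher priority), drains the buffer, and exits.
    if let Some(req) = cx.shared.request_buffer.lock(|buf| buf.pop_front()) {
        process(req);
    }
}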

Coupled vs. Decoupled Synchronisation Models

This case study highlights a fundamental divergence in synchronisation semantics, which is crucial to understand for the modelling of predictable real-time systems.
 
Ada Ravenscar employs Protected Objects as a unified abstraction for mutual exclusion, task synchronisation, and priority control (Burns, Dobbing, and Vardanega 2017). By coupling guard checks and blocking logic for not only shared resources but all message passing within a single construct, restricted to one entry with no queuing, the profile ensures clear, deterministic semantics for blocking times (Vardanega, Zamorano, and Puente 2005).
 
Conversely, RTIC treats mutual exclusion and synchronisation as orthogonal concerns. Shared resources are strictly protected by SRP-based critical sections, while suspension is handled via distinct asynchronous primitives (RTIC Community 2024). Decoupling synchronisation from data protection seems to be a by-product of the integration with Rust’s async model. Consequently, waiting becomes an explicit orchestration step distinct from resource access, rather than being an intrinsic, atomic property of the resource itself.

Measurement

The two ecosystems offer different capabilities and challenges for measuring performance and verifying correctness.

Execution Time

The Ada environment provides a built-in Execution Time clock (ETC) that measures CPU time only when a specific task is actively running. This is useful for accurate WCET analysis. RTIC lacks a direct equivalent.
 
The Data Watchpoint and Trace (DWT) cycle counter was used for execution‑time measurements in RTIC, including in‑application task_metrics.rs instrumentation and GDB breakpoint-based scripts to profile RTIC internals and critical sections. The GDB method is more error‑prone (breakpoint placement, runtime interference) and requires cautious interpretation (Arm Limited 2026).

Deadline Miss Detection

The Ada runtime includes a Timing Event facility that invokes a handler directly from the clock interrupt when a deadline is missed, providing built-in overrun detection. In the RTIC implementation, the release time is tracked to compute the absolute deadline, and at the end of each job the current time is compared against it, logging a miss via defmt (the constant name below is illustrative):

// Absolute deadline = release time + relative deadline (constant name illustrative).
let deadline = release_time + OCP_RELATIVE_DEADLINE;
if Mono::now() > deadline {
    defmt::warn!("On Call Producer missed its deadline!");
}

WCET Measurements

Synchronous Application Level

The task_metrics.rs module wraps all synchronous, non-preempted code in start_subtracking() and end_subtracking(), logging the start-to-end cycle count from the DWT cycle counter using defmt semi-hosting.
 
The system_overhead.rs module records each job's start-to-end execution time. System overhead is then extracted by subtracting the accumulated task_metrics measurements from the job's total execution time.
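A minimal sketch of this DWT-based instrumentation is given below; start_subtracking and end_subtracking mirror the names above, while the enable_cycle_counter helper and the defmt output format are illustrative assumptions. A single start slot suffices because the instrumented sections are non-preempted and non-nested.

use core::sync::atomic::{AtomicU32, Ordering};
use cortex_m::peripheral::DWT;

static SUB_START: AtomicU32 = AtomicU32::new(0);

// Called once during init: enable the trace unit and the DWT cycle counter.
fn enable_cycle_counter(core: &mut cortex_m::Peripherals) {
    core.DCB.enable_trace();
    core.DWT.enable_cycle_counter();
}

fn start_subtracking() {
    SUB_START.store(DWT::cycle_count(), Ordering::Relaxed);
}

fn end_subtracking(label: &str) {
    // Wrapping subtraction tolerates a counter roll-over within a measurement.
    let cycles = DWT::cycle_count().wrapping_sub(SUB_START.load(Ordering::Relaxed));
    defmt::info!("{=str}: {=u32} cycles", label, cycles);
}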

Asynchronous code and rtic-sync

In the context of RTIC and Rust's async/await model, WCET must be considered carefully. For asynchronous operations such as channel.recv().await or signal.wait().await, the WCET of interest is the total number of CPU cycles used to manage the state machine during the wait, not the duration of the wait itself.
 
We rely on measurement-based analysis; this can under-approximate the true WCET unless worst-case hardware conditions are exercised. The key synchronous phases to measure are submission/polling (registering a Waker and returning Poll::Pending) and resumption (the executor re-polling the future until it returns Poll::Ready).
 
The async executor operates on interrupts: each priority level has a dedicated dispatcher interrupt that polls all async tasks at that level. The executor maintains atomic flags for the running and pending states, together with storage for the task's future. The waker is a function pointer that sets the pending flag to true and pends the dispatcher interrupt.
 
The overheads that must be measured include waker.wake(), check_and_clear_pending(), and future.poll().
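Conceptually, the dispatch path covered by these measurements resembles the following hand-written illustration; it is not RTIC's generated executor code, and the dispatcher interrupt, flag layout, and function bodies are assumptions for exposition.

use core::sync::atomic::{AtomicBool, Ordering};
use cortex_m::peripheral::NVIC;
use stm32f4xx_hal::pac::Interrupt;

static TASK_PENDING: AtomicBool = AtomicBool::new(false);

// What a waker conceptually does: mark the task for re-polling and pend the
// dispatcher interrupt assigned to its priority level.
fn wake_task() {
    TASK_PENDING.store(true, Ordering::Release);
    NVIC::pend(Interrupt::EXTI1); // assumed dispatcher vector
}

// The dispatcher checks and clears the pending flag before re-polling.
fn check_and_clear_pending() -> bool {
    TASK_PENDING.swap(false, Ordering::AcqRel)
}

// Inside the dispatcher ISR (per priority level), roughly:
//   if check_and_clear_pending() { let _ = task_future.as_mut().poll(&mut ctx); }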

System Overheads and Latency

The Ada Ravenscar implementation was re‑evaluated using the same DWT‑based GDB profiling technique for the runtime traps specified in the original analysis.

For the Ada Ravenscar runtime, the following runtime traps were measured: the context-switch trigger, the PendSV handler, the SysTick handler, and the interrupt service routine in isolation (see Table 3).

There are no equivalent software routines in RTIC. Consequently, a new set of end-to-end latency tests was devised to benchmark RTIC system overhead. For consistency and direct comparison, the same end-to-end benchmarks were also applied to the Ada Ravenscar implementation.
 
The following end-to-end measurements were performed: interrupt entry overhead, and the dispatch latency of a sporadic context switch (signal-based and spawn-based for RTIC; interrupt handling and the sporadic task switch for Ada), as listed in Table 2.

MAST Schedulability Analysis

Both the Ada independent‑task and offset‑based MAST models were reconstructed from Hu and Gobbo (Hu and Gobbo 2020). While the Ada and RTIC models share the same processor and fixed-priority scheduling policy, they diverge significantly in their representation of runtime behaviour and shared resources.

All MAST models were implemented as template files populated by a JSON dictionary of measured WCET values. The analysis was automated using Python scripts to generate model instances, execute schedulability tests, and compile reports. Following the baseline analysis, four systematic experiments were conducted to examine specific system features.

Modelling the Frameworks

The Ada Ravenscar model maps directly to MAST primitives, using Shared_Resource objects to represent the request buffer, activation log, and event queue. Each resource is assigned a specific priority ceiling, enabling the analysis tool to apply the IPCP directly (Burns, Dobbing, and Vardanega 2017).
 
Conversely, the RTIC model requires a composite approach to abstract its asynchronous runtime. Async primitives (channels, signals, and waker registration) are modelled as sequences of operations. To account for the priority inversion caused by RTIC's reliance on global critical sections, operations invoking rtic-sync are explicitly modelled as locking a single global_critical_section resource with the highest system priority, reflecting that such sections can delay even the SysTick handler. Application-level shared state, such as the activation_log, retains its SRP/IPCP model. This constraint is necessary to capture the system-wide blocking inherent in the RTIC implementation.

Experimental Design

Experiment I: Workload Scaling

Objective: Determine the sensitivity of the system models to increasing computational load.

Method: The computational workloads of the Regular Producer, On-Call Producer, and Activation Log Reader are systematically increased by a scaling constant k, with a step size of 1, up to k = 30. Note that the empirical WCET values for the Whetstone benchmark differ between implementations because of how the libm operations (sine, cosine, etc.) are implemented. To ensure a fair comparison in this experiment, the same base WCET values (derived from the Ada measurements) are applied to both models, eliminating the discrepancy caused by arithmetic operations within the workload.

Experiment II: Maximum Sustainable Utilisation

Objective: Determine the maximum CPU utilisation achievable by each system before a deadline miss occurs.

Method: The procedure described in the reference study is applied to identify the Whetstone scaling factors that result in maximal CPU utilisation. In contrast to Experiment I, this experiment utilises the specific empirically recorded WCETs for each platform to evaluate the native performance limits of the Ada Ravenscar and Rust RTIC runtimes.

Experiment III: Scheduler and Timer Scalability

Objective: Evaluate the scalability of the scheduler and system timer under increasing concurrent task load.

Method: The task set is augmented by introducing additional synthetic periodic tasks that utilise the monotonic timer (delay_until) to wake and yield. The impact of the increased timer queue depth on the worst-case response time (WCRT) of the Regular Producer is measured to quantify the scheduling overhead.
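Each synthetic task follows the pattern sketched below (fragment of the app module; the priority and period are illustrative):

#[task(priority = 1)]
async fn synthetic_load(_cx: synthetic_load::Context) {
    let mut next = Mono::now();
    loop {
        next = next + 50.millis();      // illustrative period
        // Each waiting task adds an entry to the monotonic timer queue,
        // increasing the traversal cost that contributes to the measured WCRT.
        Mono::delay_until(next).await;
    }
}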

Results and Discussion

Measured Runtime Overheads

The measured worst-case execution times (WCET) for the runtime primitives serve as inputs to the MAST models. These values (Table 1) quantify the cost of RTIC's asynchronous primitives. Notably, the internal critical sections (CS) are short, with the longest continuous blocking section being only 2.46 μs (channel_recv).

Table 1: Rust RTIC asynchronous primitive WCETs and their global-critical-section (CS) components.
Primitive        | CS WCET (cycles) | CS WCET (s) | Total WCET (cycles) | Total WCET (s)
delay_until      | 0   | 0.00E+00 | 150  | 5.56E-08
signal_take      | 25  | 1.39E-07 | 25   | 1.39E-07
try_recv         | 361 | 2.01E-06 | 4479 | 2.45E-06
waker_register   | 82  | 4.56E-07 | 1526 | 6.56E-07
channel_recv     | 443 | 2.47E-06 | 6005 | 3.11E-06
signal_wait      | 107 | 5.95E-07 | 1551 | 7.95E-07
channel_try_send | 264 | 1.47E-06 | 6915 | 2.56E-06
signal_write     | 106 | 5.89E-07 | 2737 | 1.08E-06
systick          | –   | 5.66667E-07 | – | 1.60E-06
Table 2: End-to-end latency measurements for dispatch and interrupt overhead.
Event                        | WCET (cycles) | WCET (s)
RTIC context_switch (signal) | 437 | 2.42778E-06
RTIC context_switch (spawn)  | 105 | 5.83333E-07
RTIC interrupt_entry         | 48  | 2.66667E-07
Ada Interrupt_Handling       | 87  | 4.83333E-07
Ada Sporadic_Switch          | 641 | 3.56111E-06

Interrupt and Dispatch Latency

End-to-end latency tests (Table 2) reveal a significant performance advantage for the RTIC dispatcher. The RTIC interrupt_entry overhead (266 ns) is approximately 45% lower than the equivalent Ada Interrupt_Handling trap (483 ns). This validates the architectural benefit of RTIC's hardware-accelerated model. However, the performance gap narrows significantly when evaluating the sporadic context switch (437 cycles for RTIC's signal-based dispatch versus 641 cycles for Ada's Sporadic_Switch).
 

Ada Ravenscar Runtime Overheads

Table 3 lists the overheads for the Ada Ravenscar runtime. The ISR overhead recorded in this isolated manner is more directly comparable to RTIC's interrupt entry latency. The WCETs of the context-switch traps are also lower than those reported for RTIC. All recorded values are roughly an order of magnitude lower than those reported by Hu and Gobbo (Hu and Gobbo 2020), most likely due to the use of the DWT with GDB scripting, which reduces measurement overhead.

Table 3: Ada Ravenscar runtime overhead WCETs measured via DWT-based GDB profiling.
Routine / Trap                | WCET (cycles) | WCET (s)
context_switch_trigger        | 5  | 2.78E-08
pend_sv_handler               | 76 | 4.22E-07
sys_tick_handler              | 48 | 2.67E-07
ISR_Isolated                  | 43 | 2.39E-07
Total Context Switch Overhead | 81 | 4.50E-07

Table 4: Hierarchical operation structure for WCET attribution. Indentation indicates sub-operations.
Operation                   | Ada WCET (s) | RTIC WCET (s)
Regular Producer            | 1.76E-02 | 2.71E-02
  Overrun detection         | 1.56E-05 | 3.11E-07
  RP Operation              | 1.76E-02 | 2.71E-02
    Whetstone workload      | 1.75E-02 | 2.70E-02
    Due activation          | 2.66E-06 | 1.89E-07
    OCP Start / try send    | 5.23E-05 | 1.77E-06
    Check Due               | 2.83E-06 | 1.83E-07
    Activation Log signal   | 5.53E-05 | 1.18E-06
    Output logging          | 8.33E-06 | 6.64E-06
  Delay Until               | 2.18333E-06 | 5.56E-08
On-Call Producer            | 6.47E-03 | 9.97E-03
  RB Extract / recv         | 2.69E-06 | 3.11E-06
  Overrun detection         | 1.49E-05 | 3.11E-07
  OC Operation              | 6.45E-03 | 9.97E-03
    Whetstone workload      | 6.44E-03 | 9.94E-03
    Output logging          | 8.33E-06 | 6.51E-06
Activation Log Reader       | 3.30E-03 | 5.00E-03
  AL Wait                   | 1.76E-06 | 7.95E-07
  Overrun detection         | 1.42E-05 | 3.11E-07
  ALR Operation             | 3.28E-03 | 5.00E-03
    Whetstone workload      | 3.22E-03 | 4.97E-03
    Activation Log read     | 4.92E-05 | 3.67E-07
    Output logging          | 8.33E-06 | 6.49E-06
External Event Server       | 5.58E-05 | 2.84E-05
  Ext Wait                  | 1.76E-06 | 7.95E-07
  Overrun detection         | 1.49E-05 | 3.11E-07
  EES Operation             | 3.91E-05 | 2.73E-05
    Activation Log Write    | 3.91E-05 | 4.22E-07

Table 4 breaks down the WCET attribution for the hierarchical operations used in the experiments.

Synchronisation Overhead Comparison

A comparison of synchronisation primitives demonstrates that RTIC's decoupled architecture achieves performance broadly comparable to Ada's unified model. The RTIC asynchronous channel operation (channel_recv, 3.11 μs) exhibits only a marginal disadvantage (< 0.5 μs) compared to Ada's Protected Object operation (RB Extract, 2.69 μs).

Baseline MAST Analysis

Table 5 compares the baseline Rate Monotonic (RM) analysis between Ada and RTIC. Results show shorter worst-case blocking times for all transactions in RTIC, accompanied by increased jitter and response times. The RTIC implementation demonstrates significantly lower worst-case blocking times (2.01 μs) compared to Ada (49.2 μs). The difference in system slack and total utilisation is attributed to the implementation of the Whetstone workload.

Table 5: Comparative MAST analysis: Ada vs. RTIC baselines.
Transaction     | Rmax (s), Ada | Rmax (s), RTIC | Slack (%), Ada | Slack (%), RTIC | Blocking Time (s), Ada / RTIC
rp_transaction  | 0.01875 | 0.02815 | 2727.3  | 1739.8  | 4.92e-05 / 2.01e-06
ocp_transaction | 0.02422 | 0.03714 | 11979.3 | 7635.9  | 4.92e-05 / 4.56e-07
alr_transaction | 0.02747 | 0.04214 | 29342.2 | 19143.8 | 0 / 0
ees_transaction | 0.00011 | 0.00104 | > 10^5  | > 10^5  | 4.92e-05 / 2.01e-06

System Metrics    | Ada     | RTIC
System Slack      | 2711.3% | 1737.9%
Total Utilisation | 2.12%   | 3.58%

Therefore, the baseline was recomputed with a constant workload WCET across Ada and RTIC to focus on differences in modelling and runtime overheads (Table 6).

Table 6: Comparative MAST analysis: Ada vs. RTIC baselines (consistent workload WCET).
Transaction     | Rmax (s), Ada | Rmax (s), RTIC | Slack (%), Ada | Slack (%), RTIC | Blocking (s), Ada | Blocking (s), RTIC
rp_transaction  | 0.01875 | 0.01861 | 2727.3  | 2738.3  | 4.92e-05 | 2.01e-06
ocp_transaction | 0.02422 | 0.02409 | 11979.3 | 11980.1 | 4.92e-05 | 4.56e-07
alr_transaction | 0.02747 | 0.02733 | 29342.2 | 29975.4 | 0        | 0
ees_transaction | 0.00011 | 0.00104 | > 10^5  | > 10^5  | 4.92e-05 | 2.01e-06

System Metrics    | Ada     | RTIC
System Slack      | 2711.3% | 2733.6%
Total Utilisation | 2.12%   | 2.46%

With the normalised workload, RTIC demonstrates marginally shorter response times for periodic tasks and a persistent advantage in blocking duration. RTIC's lock-free synchronisation primitives, in particular signal and channel, have less of their execution time modelled as shared-resource access, which yields significantly lower blocking times.

Experiment I: Workload Scaling and Response Time

Figure 3 illustrates the scaling of maximum response times (Rmax) as the computational workload increases. Rmax grows linearly with the workload factor k. However, the absolute difference in Rmax between the two models grows with k: in the offset-based model, the largest discrepancy recorded was 0.187 at a workload factor of k = 29, while for the independent models it was 0.330 at k = 30. The independent models identify the system's schedulability threshold at k = 28, where the system slack is reduced to 1.56% before the system eventually fails feasibility tests at higher workload factors.

Figure 3: Experiment I - workload scaling, maximum response time comparison for the independent vs. offset transaction models. (a) Independent model comparison; (b) Offset model comparison.
Figure 4: Experiment I - workload scaling vs. system slack (%) (independent model).

As the workloads are scaled, increasing CPU utilisation and decreasing system slack (dominated by the Regular Producer transaction), the difference in slack between RTIC and Ada tends to zero. This suggests that for deadlines and workloads in the millisecond range, the differences between the frameworks are insignificant for schedulability.

Experiment II: Maximum Utilisation and Scaling with Classic Rate Monotonic Analysis

Following the methodology of Hu and Gobbo (Hu and Gobbo 2020), the maximum sustainable utilisation was discovered by iteratively applying scaling factors to the Whetstone workloads of the periodic transactions. The process involves identifying the transaction with the minimum system slack, scaling its workload by that factor, and repeating the analysis for the remaining tasks until the system slack approaches 0%.
 
Table 7 presents the results of this saturation test. Both frameworks successfully reach the theoretical limits of schedulability for this task set, achieving approximately 64% total CPU utilisation before failing feasibility tests.

Table 7: Maximum achievable utilisation with RM analysis: Ada Ravenscar vs. RTIC.
Transaction     | Rmax (s), Ada | Rmax (s), RTIC | Slack (%), Ada | Slack (%), RTIC | Blocking (s), Ada | Blocking (s), RTIC
rp_transaction  | 0.47387 | 0.46182 | 0.00   | 0.78    | 4.92e-05 | 2.01e-06
ocp_transaction | 0.79499 | 0.79011 | 0.00   | 1.17    | 4.92e-05 | 4.56e-07
alr_transaction | 0.99792 | 0.99468 | 0.39   | 1.95    | 0        | 0
ees_transaction | 0.00011 | 0.00104 | 1928.1 | 14137.1 | 4.92e-05 | 2.01e-06

System Metrics    | Ada    | RTIC
System Slack      | 0.00%  | 0.39%
Total Utilisation | 64.79% | 64.01%

To reach the same utilisation level, RTIC requires lower scaling factors (e.g., k = 17 for rp) than Ada (k = 27). This is due to the higher base WCET of RTIC’s software-based floating-point library. However, the identical system-wide saturation point ( ≈ 64%) confirms that RTIC’s asynchronous runtime does not introduce non-linear scaling overheads.
 
As previously observed, there is a significant divergence in worst-case blocking times: Ada's blocking term (49.2 μs) is an order of magnitude higher than RTIC's (2.01 μs).

Experiment III: N Tasks vs Release Jitter

Figure 5 demonstrates that the resulting release jitter in RTIC remains significantly lower than that of Ada Ravenscar, even when the system is scaled to 100 concurrent tasks and despite the O(N) complexity of its timer queue.
 
The Ada Ravenscar implementation encountered memory exhaustion after the introduction of 25 additional tasks. This failure occurs because each task is allocated an independent stack, which defaults to 20 KB; the cumulative memory footprint therefore rapidly exhausts the microcontroller's available RAM.
 
Conversely, RTIC reduces per-task memory overhead through its SRP-based shared-stack architecture (RTIC Community 2024). This makes it possible to generate hundreds of tasks without exhausting system memory. At high task counts, priority levels must be shared due to the finite number of hardware priority levels provided by the NVIC (Lindgren, Dzialo, and Lunnikivi 2023). The results suggest that for memory-constrained systems requiring high task counts, RTIC provides superior scalability.

Figure 5: Experiment III - release jitter of a periodic task vs. task count.

Conclusion

RTIC offers low-overhead execution, but its use of global interrupt masking complicates formal schedulability analysis. RTIC's ecosystem is still immature, lacking comprehensive modelling and benchmarking tools, a verified runtime with proven execution-time bounds, and thorough documentation. There is also no established best practice for hard real-time systems in RTIC; developers follow differing conventions, such as using async features for software-based preemption or relying on spawn-centric task invocation.

Recent efforts have concentrated on developing a useful ecosystem of tools, for example, the KLEE-based test bench for WCET analysis in RTIC (Lindner et al. 2018), and RTIC Scope, which provides real-time tracing capabilities (Sonesten 2022). Nonetheless, both of these tools are still immature, no longer maintained, and were incompatible with the latest RTIC version used in this project. In addition, the functionality that RTIC can offer is presently constrained by Rust's macro expansion mechanisms (Madaoui et al. 2024). The macro-based design enforces a monolithic codebase that limits modularity, the integration of extensions (e.g., execution timers), and further introduces debugging complexity that heavily hinders community contributions and maintainability (Tjader 2021; Madaoui et al. 2024).

The lock-free semantics of channels and signals are attractive to developers, but make system modelling more difficult by operating outside the conventional shared-resource model. Ada Ravenscar continues to be a stable, well-documented, and highly capable framework for building hard real-time systems, and delivers performance appropriate for embedded applications.

Acknowledgements

The author acknowledges the assistance of agentic AI tools in the development of the scripts used in this study. Specifically, AI assistance was utilised to quickly iterate the GDB scripts for DWT-based execution-time profiling and to develop the Python-based automation pipeline for the MAST experiments, including scripts for populating MAST model templates with WCET values, extracting results from output files, and generating data visualisations using Matplotlib.

Arm Limited. 2026. “Using DWT and Other Methods to Count Executed Instructions on Cortex-M.” 2026. https://developer.arm.com/documentation/ka001499/latest/.
Baker, T. P. 1991. “Stack-Based Scheduling of Realtime Processes.” Real-Time Systems 3 (1): 67–99.
Burns, A., B. Dobbing, and T. Vardanega. 2017. “Guide for the Use of the Ada Ravenscar Profile in High Integrity Systems.” YCS-2017-348. University of York.
Engblom, J., A. Ermedahl, M. Sjödin, J. Gustafsson, and H. Hansson. 2001. “Execution-Time Analysis for Embedded Real-Time Systems.” International Journal on Software Tools for Technology Transfer (STTT).
Eriksson, J., F. Häggström, S. Aittamaa, A. Kruglyak, and P. Lindgren. 2013. “Real-Time for the Masses, Step 1: Programming API and Static Priority SRP Kernel Primitives.” In 8th IEEE International Symposium on Industrial Embedded Systems (SIES), 110–13.
Harbour, M. González, J. J. Gutiérrez, J. C. Palencia, and J. M. Drake. 2001. “MAST: Modeling and Analysis Suite for Real-Time Applications.” In Proceedings of the 13th Euromicro Conference on Real-Time Systems, 125–34. Delft, The Netherlands.
Hu, G. J., and A. Gobbo. 2020. “MAST Analysis of a Ravenscar Precedence-Constrained Application with FPS and EDF Scheduling.” University of Padua, Italy.
Lindgren, P., P. Dzialo, and H. Lunnikivi. 2023. “Hardware Support for Static-Priority Stack Resource Policy Based Scheduling.” In 2023 IEEE 32nd International Symposium on Industrial Electronics (ISIE), 1–8. https://doi.org/10.1109/ISIE54958.2023.10228088.
Lindner, M., J. Aparicio, H. Tjader, P. Lindgren, and J. Eriksson. 2018. “Hardware-in-the-Loop Based WCET Analysis with KLEE.” In Proceedings of the IEEE Industrial Cyber-Physical Systems (ICPS).
Madaoui, Z., H. Lunnikivi, P. Dzialo, and P. Lindgren. 2024. “Towards Modularity of the Rust RTIC Real-Time Scheduling Framework.” In 2024 IEEE Nordic Circuits and Systems Conference (NorCAS). https://doi.org/10.1109/NorCAS64408.2024.10752441.
Puschner, P., and A. Burns. 2000. “A Review of Worst-Case Execution-Time Analysis.” Real-Time Systems 18: 115–28.
RTIC Community. 2024. “Real-Time Interrupt-Driven Concurrency V2.” 2024. https://rtic.rs/2/book/en/.
———. n.d. “RTIC Source Code Repository.” GitHub. https://github.com/rtic-rs/rtic.
Sonesten, V. V. 2022. “RTIC Scope: Real-Time Tracing Support for the RTIC RTOS Framework.” Master’s thesis, Luleå, Sweden: Luleå University of Technology.
Tjader, H. 2021. “RTIC: A Zero-Cost Abstraction for Memory Safe Concurrency.” Master’s thesis, Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering.
Vardanega, T., J. Zamorano, and J. A. de la Puente. 2005. “On the Dynamic Semantics and the Timing Behaviour of Ravenscar Kernels.” Real-Time Systems 29 (1): 59–89.
Wilhelm, R. et al. 2008. “The Worst-Case Execution-Time Problem, Overview of Methods and Survey of Tools.” ACM Transactions on Embedded Computing Systems (TECS) 7 (3): 1–53.
Yiu, J. 2011. The Definitive Guide to the ARM Cortex-M3. Newnes.