TL;DR - Our system executes imitation learning (IL) policies up to 3.2x faster than the demonstrations they were trained on.

🚀 Faster, 🌊 Smoother, ✨ Better

⛵ SAIL System Overview and Challenges

SAIL Overview

Speeding up visuomotor policies is a full-stack problem. SAIL provides a recipe for addressing the challenges that arise from sped-up execution:
(a) Policy Level: Starting with synchronized observations from robot state and camera inputs, the system generates (1) temporally consistent action predictions through error-adaptive guidance (EAG) and (2) a time-varying speedup factor.
(b) Controller Level: The predicted actions are scheduled for execution while accounting for sensing and inference delays, with outdated actions discarded. A high-fidelity tracking controller then executes the scheduled actions, tracking the trajectory at the specified time parametrization.
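
The loop below is a minimal runnable sketch of this (a)-(b) pipeline. All function names and numbers (control period, chunk length, speedup) are illustrative assumptions, not the actual SAIL implementation; each component is sketched in more detail under the solutions that follow.

import time

def get_synchronized_obs():
    # (a) synchronized robot state + camera observations (stubbed out here)
    return {"t": time.monotonic(), "state": 0.0}

def policy(obs, prev_chunk):
    # (a) an action chunk (via error-adaptive guidance) plus a
    # time-varying speedup factor for each step of the chunk
    return [obs["state"]] * 8, [2.0] * 8

def track(action, dt):
    # (b) stand-in for the high-fidelity tracking controller: execute one
    # action over dt seconds of the retimed trajectory
    time.sleep(dt)

prev_chunk = None
for _ in range(3):
    obs = get_synchronized_obs()
    chunk, speedup = policy(obs, prev_chunk)
    for action, c in zip(chunk, speedup):
        track(action, dt=0.02 / c)  # (b) compress nominal 20 ms steps by factor c
    prev_chunk = chunk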

Challenge 1. System and Inference Latencies

Figure showing how latencies are handled

System latencies can cause out-of-distribution (OOD) inputs, time-misaligned commands, or pauses during inference.

Solution 1. Action Scheduling Ensures Smoother Execution

SAIL uses action scheduling to handle system latencies. Observations are synchronized, and inference is performed asynchronously, enabling smooth execution without pausing for inference.
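
Below is a minimal sketch of this idea, assuming a 50 Hz control loop, a 16-step action chunk, and a simulated 150 ms inference latency (all illustrative values rather than SAIL's actual settings): inference runs in a background thread, each action is stamped with an absolute execution time derived from the observation timestamp, and the control loop drops any action whose scheduled time has already passed.

import threading
import time
from queue import Queue

CONTROL_DT = 0.02  # assumed 50 Hz control loop

def dummy_policy(obs):
    # stand-in for visuomotor policy inference with ~150 ms latency
    time.sleep(0.15)
    return [obs + i for i in range(16)]

def inference_worker(get_obs, chunk_queue):
    while True:
        t_obs = time.monotonic()  # timestamp of the synchronized observation
        chunk = dummy_policy(get_obs())
        # schedule actions at absolute times derived from the observation
        # timestamp, not from when inference happens to finish
        chunk_queue.put([(t_obs + (i + 1) * CONTROL_DT, a) for i, a in enumerate(chunk)])

def control_loop(chunk_queue, execute, steps=200):
    schedule = []
    for _ in range(steps):
        while not chunk_queue.empty():
            schedule = chunk_queue.get()  # newest chunk replaces the old plan
        now = time.monotonic()
        schedule = [(t, a) for t, a in schedule if t >= now]  # discard outdated actions
        if schedule:
            _, action = schedule.pop(0)
            execute(action)  # robot keeps moving while inference runs in the background
        time.sleep(CONTROL_DT)

q = Queue()
threading.Thread(target=inference_worker, args=(lambda: 0.0, q), daemon=True).start()
control_loop(q, execute=print)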

Challenge 2. Action Inconsistency with Asynchronous Inference

Asynchronous inference can lead to diverging predictions across consecutive action chunks, resulting in jerky robot motion.

Solution 2. Error Adaptive Guidance Enables Smoother Trajectories

SAIL conditions each new prediction on the previous one using Error Adaptive Guidance. This informs the policy of the previously planned future actions, allowing it to generate smooth, consistent trajectories.
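
The actual guidance term in SAIL operates inside the diffusion sampling loop; the sketch below only illustrates the core intuition, under assumed names and an assumed exponential weighting rule: bias the overlapping part of the new chunk toward the previous prediction, and trust the previous plan less as its tracking error grows.

import numpy as np

def eag_blend(new_chunk, prev_overlap, tracking_error, err_scale=0.05):
    """Blend the overlap between consecutive action chunks.

    new_chunk:      (H, D) actions predicted from the fresh observation
    prev_overlap:   (K, D) tail of the previous chunk covering the same
                    timesteps as new_chunk[:K]
    tracking_error: scalar execution error of the previous plan
    """
    K = prev_overlap.shape[0]
    # guidance weight shrinks as tracking error grows: stay consistent with
    # the old plan only while it still matches reality
    w = np.exp(-tracking_error / err_scale)
    guided = new_chunk.copy()
    guided[:K] = w * prev_overlap + (1.0 - w) * new_chunk[:K]
    return guided

prev = np.zeros((4, 7))
new = np.ones((8, 7))
print(eag_blend(new, prev, tracking_error=0.01)[0])  # stays close to prev when error is small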

Challenge 3. Deciding when to Speed Up and Slow Down

It is sometimes necessary to slow down policy execution to ensure success on tasks that require high precision, or to respect hardware limitations.

Solution 3. Adaptive Speed Modulation

SAIL uses Adaptive Speed Modulation (ASM) to adjust the execution speed of the policy based on the complexity of the action sequence.
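
As a rough illustration, the sketch below derives a per-step speedup factor from an assumed complexity signal (acceleration magnitude of the end-effector path, plus imminent gripper state changes); the actual signal and thresholds ASM uses may differ.

import numpy as np

def speedup_factor(actions, c_max=3.0, c_min=1.0):
    """Map an (H, D) action chunk to a per-step speedup factor.

    Assumes (for this sketch) that actions[:, :3] are end-effector xyz
    and actions[:, -1] is the gripper command.
    """
    vel = np.diff(actions[:, :3], axis=0)
    acc = np.diff(vel, axis=0)
    # rough complexity proxy: large accelerations ~ precise / contact-rich motion
    complexity = np.linalg.norm(acc, axis=1)
    complexity = np.pad(complexity, (1, 1), mode="edge")
    complexity /= complexity.max() + 1e-8
    c = c_max - (c_max - c_min) * complexity  # complex segments run slower
    # force a slowdown on steps where the gripper is about to change state
    grip_change = np.abs(np.diff(actions[:, -1], prepend=actions[0, -1])) > 0.1
    c[grip_change] = c_min
    return c

chunk = np.zeros((8, 8))
chunk[:, :3] = np.linspace(0.0, 1.0, 8)[:, None]  # straight-line motion
print(speedup_factor(chunk))                      # full speedup everywhere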

Challenge 4. Controller Dynamics Shift During Speedup

Speeding up teleoperated commands increases tracking error and alters the motion profile.

Solution 4. Tracking Reached Poses

We address this by tracking reached poses with a high-gain controller, which improves tracking performance at higher speeds.
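
A minimal sketch of the two pieces of this fix, with illustrative gains and a one-step relabeling rule (both assumptions): relabel demonstration actions with the poses the robot actually reached, then track those targets with a stiff PD law whose high proportional gain keeps tracking error small at speed.

import numpy as np

def relabel_with_reached_poses(commanded, reached):
    """Replace commanded demo targets with next-step reached poses.

    commanded: (T, D) teleoperated pose commands from the demo
    reached:   (T, D) poses the robot actually attained
    """
    labels = np.empty_like(commanded)
    labels[:-1] = reached[1:]   # action at step t := pose reached at t+1
    labels[-1] = reached[-1]
    return labels

def high_gain_pd(q, dq, q_target, kp=400.0, kd=40.0):
    """Stiff PD tracking law; a large kp keeps tracking error small at speed."""
    return kp * (q_target - q) - kd * dq

Relabeling makes the policy's training targets match what the high-gain controller can actually realize at deployment, instead of the commanded poses the original teleoperation controller never fully tracked.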

Simulation Rollouts

Pick up the can and put it in the box

Open the drawer, put the mug in it and close the drawer

Lift the block

Stack the red block on the green one

Evaluation

Quantitative Results


Real world: We evaluate SAIL on 7 real-world tasks, where it outperforms the most competitive baseline on most tasks and achieves up to a 3.2× speedup over demonstration speed.

SAIL real results table


Simulation: We evaluate on 5 simulated tasks from the RoboMimic and MimicGen task suites. SAIL achieves up to 3× the throughput of baselines without sacrificing task success rate.

SAIL sim results table

High Gain Controller and Reached Pose Prediction

Figure: demo replay success with commanded vs. reached poses across controller gains (Kp) and execution speeds.

We examine the effect of increasing controller gains and execution speed when replaying demos in simulation. Left: tracking commanded poses performs better when replaying at the original speed (c = 1), but tracking reached poses matches its performance at high gains. Right: at a higher execution speed, a high-gain controller tracking reached poses outperforms one tracking commanded poses.

What makes a good action condition for guidance?

Action conditioned on unconditional action distribution

Action conditioned on temporally perturbed action

Yellow: Action condition; Grey: Unconditional action distribution; Red: Conditional action distribution. Guidance works best when the action condition lies within the unconditional action distribution.

Rollout trajectory on the Can task with EAG.

Rollout trajectory on the Can task without EAG.

Other Qualitative Findings

We highlight some interesting phenomena observed during our experiments:

Reactiveness: SAIL reacts much faster to changes in the environment and recovers more quickly from failures.

Hardware limitations: the achievable speedup is bounded by gripper speed. Faster grippers could enable even higher speedups.

Controller impact: using the same controller as during demonstration collection leads to failure due to poor tracking. A high-gain controller with feedforward is necessary to achieve high speeds.

Interesting failures: when tuned to high speed, SAIL can fail in interesting ways, such as throwing objects out of the workspace. Future work could explore how to handle these dynamics shifts.

Limitations

While SAIL shows promising results in accelerating policy execution both in simulation and in real-world deployment, we do not explicitly tackle the dynamics shift of robot-object interaction. Future research could address this by incorporating explicit dynamics modeling into policies, either by learning speed-dependent dynamics models or by leveraging physics simulation during training. Adaptive Speed Modulation is also currently applied only in simulation, while a gripper heuristic for slowdown is used in real experiments: due to the noisiness of real data, ASM would not slow down consistently in the right segments. Future research could develop speed-modulation methods that are robust to this noise.

Acknowledgement

This work is supported by the State of Georgia and the Agricultural Technology Research Program at Georgia Tech, AI Manufacturing Pilot Facility project under Georgia Artificial Intelligence in Manufacturing (Georgia AIM), Award 04-79-07808, NSF CCF program, Award 2211815, and NSF Award 1937592. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsors.

BibTeX

@misc{ranawaka2025sail,
  title={SAIL: Faster-than-Demonstration Execution of Imitation Learning Policies},
  author={Nadun Ranawaka Arachchige and Zhenyang Chen and Wonsuhk Jung and Woo Chul Shin and Rohan Bansal and Pierre Barroso and Yu Hang He and Yingyang Celine Lin and Benjamin Joffe and Shreyas Kousik and Danfei Xu},
  year={2025},
  eprint={2506.11948},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2506.11948},
}