Lightning: Adaptive Illumination Control for Robot Perception

University at Buffalo, Buffalo, NY
Lightning Teaser

(a) Oracle intensity schedule over time. (b) CLID relighting on robot sequences. Raw images are captured at 50% intensity and relit to generate images at 0-100% intensities.

Abstract

Robot perception under low light or high dynamic range is usually addressed downstream, via more robust feature extraction, image enhancement, or closed-loop exposure control. However, all of these approaches are limited by the images captured under these conditions. An alternative is to use a programmable onboard light that augments ambient illumination and improves the captured images. However, predicting its impact on image formation is not straightforward: illumination interacts nonlinearly with depth, surface reflectance, and scene geometry, and it can both reveal structure and induce failure modes such as specular highlights and saturation. These challenges are further exacerbated by robot motion through a scene.

We introduce Lightning, a closed-loop illumination-control framework for visual SLAM that combines relighting, offline optimization, and imitation learning. This is performed in three stages. First, we train a Co-Located Illumination Decomposition (CLID) relighting model that decomposes a robot observation into an ambient component and a light-contribution field. CLID enables physically consistent synthesis of the same scene under alternative light intensities and thereby creates dense multi-intensity training data without requiring us to repeatedly re-run trajectories.
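The decomposition admits a simple linear image-formation view: if the co-located light's contribution scales linearly with commanded intensity, any intensity can be synthesized from the two components. The additive model below is our reading of CLID's outputs, not the paper's exact formulation, and all arrays are stand-ins; a minimal NumPy sketch:

```python
import numpy as np

def relight(ambient, light_field, alpha):
    """Synthesize an observation at light intensity alpha in [0, 1].

    ambient:     HxWx3 ambient-only component (what the camera sees at 0%)
    light_field: HxWx3 per-pixel contribution of the onboard light at 100%
    """
    return np.clip(ambient + alpha * light_field, 0.0, 1.0)

rng = np.random.default_rng(0)
raw_50 = rng.random((4, 4, 3))              # stand-in for a real 50% capture
light_field = rng.random((4, 4, 3)) * 0.5   # stand-in for CLID's prediction

# Under the linear model, the ambient component follows from the 50% capture:
ambient = np.clip(raw_50 - 0.5 * light_field, 0.0, 1.0)

# Dense multi-intensity training candidates without re-running the trajectory:
candidates = {a: relight(ambient, light_field, a)
              for a in (0.0, 0.25, 0.5, 0.75, 1.0)}
```

Because both components are non-negative, the synthesized images brighten monotonically with intensity until clipping, matching the 0-100% sweep shown in the teaser.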

Second, using these synthesized candidates, we formulate an offline Optimal Intensity Schedule (OIS) problem that selects illumination levels over a sequence, trading off SLAM-relevant image utility against power consumption and temporal smoothness.
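With a finite set of intensity levels and per-frame utilities scored on the relit candidates, such a schedule can be computed exactly by a Viterbi-style dynamic program. The sketch below uses placeholder utility, power, and smoothness terms; the paper's exact objective and weights may differ:

```python
import numpy as np

def optimal_intensity_schedule(utility, power_cost, smooth_cost):
    """Offline DP over discrete intensity levels.

    utility:     T x K array, image utility of level k at frame t
    power_cost:  length-K array, per-frame power penalty of each level
    smooth_cost: scalar penalty per unit change in level index
    Returns the maximizing schedule as a list of level indices.
    """
    T, K = utility.shape
    score = utility[0] - power_cost          # best total ending at each level
    back = np.zeros((T, K), dtype=int)
    levels = np.arange(K)
    for t in range(1, T):
        # trans[i, j]: best total if frame t-1 ends at i and frame t uses j
        trans = score[:, None] - smooth_cost * np.abs(levels[:, None] - levels[None, :])
        back[t] = trans.argmax(axis=0)
        score = trans.max(axis=0) + utility[t] - power_cost
    sched = [int(score.argmax())]            # backtrack from the best end state
    for t in range(T - 1, 0, -1):
        sched.append(int(back[t][sched[-1]]))
    return sched[::-1]
```

The smoothness term discourages frame-to-frame flicker, which both wastes power and destabilizes photometric consistency for SLAM.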

Third, we distill this ideal solution into a real-time controller through behavior cloning, producing an Illumination Control Policy (ILC) that generalizes beyond the initial training distribution and runs online on a mobile robot to command discrete light-intensity levels. Across our evaluation, Lightning substantially improves SLAM trajectory robustness while reducing unnecessary illumination power.
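Behavior cloning here is ordinary supervised classification: per-frame observations in, OIS-labelled intensity levels out. The sketch below trains softmax regression on toy feature vectors purely to illustrate the training loop; the features, labels, and architecture are hypothetical stand-ins (the real ILC presumably operates on images with a learned encoder):

```python
import numpy as np

K = 5                                       # discrete light-intensity levels
rng = np.random.default_rng(0)

# Toy stand-ins for per-frame features and the OIS labels they imitate.
X = rng.normal(size=(512, 8))
y = np.digitize(X[:, 0], [-1.0, 0.0, 1.0])  # hypothetical expert labels

# Behavior cloning = minimize cross-entropy against the expert's actions.
W = np.zeros((8, K))
for _ in range(300):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * X.T @ (p - np.eye(K)[y]) / len(X)

def ilc_policy(features):
    """Map one frame's features to a discrete intensity command."""
    return int((features @ W).argmax())
```

At deployment the cloned policy needs only a forward pass per frame, which is what makes real-time, onboard light control feasible.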

Experimental Results


Figure 5: ILC's intensity schedule versus fixed baselines. The imitation policy, deployed on a robot, outputs a per-frame light intensity (blue), and is compared against 0% (green-dashed) and 100% (red-dashed) fixed-intensity baselines. Insets show frames at the corresponding timestamps: (a) ILC increases illumination when entering a low-light region to preserve image utility; (b) ILC reduces illumination near a reflective whiteboard to mitigate specular saturation; (c) ILC chooses an intermediate illumination level to balance competing effects of low light and specular reflection.

We compare our method (Lightning) against fixed-intensity baselines (0% and 100%). Lightning achieves the highest trajectory completion ratio on every sequence and the lowest Weighted RMSE on two of the three, while commanding an average light intensity of roughly 50-60%.

Sequence     | 0% Baseline         | Lightning (Online)  | 100% Baseline       | Lightning Stats
             | Ratio (C)↑  WRMSE↓  | Ratio (C)↑  WRMSE↓  | Ratio (C)↑  WRMSE↓  | Light μ (%)  Power (W)
113dark      | 0.26        2.756   | 0.89        1.405   | 0.44        0.641   | 48.28        19.86
kitchenloop  | 0.48        0.744   | 0.91        0.393   | 0.88        0.425   | 60.42        22.45
start2corr   | 0.95        0.220   | 0.99        0.123   | 0.98        0.249   | 47.33        19.66

BibTeX

@inproceedings{lightning2026,
  title={Lightning: Adaptive Illumination Control for Robot Perception},
  author={Turkar, Yash and Sadeghi, Shekoufeh and Dantu, Karthik},
  year={2026}
}