An autonomous delivery ground drone struck a pedestrian in a crosswalk. Initial investigations point to a 3D segmentation failure in the perception system. To reconstruct the incident, the raw LiDAR point cloud captured milliseconds before impact was extracted, and the scene was recreated in Unreal Engine 5. The goal is to determine whether the victim's reflective clothing caused the algorithm to misclassify the pedestrian as a static object in the environment, such as a sign or a pole.
Technical workflow: Open3D, Foxglove Studio, and Unreal Engine 5 🛠️
The forensic process begins by extracting the raw point cloud with Python and Open3D, filtering out environmental noise and isolating the critical frame immediately before impact. The cleaned cloud is exported in PLY format for analysis. In Foxglove Studio, the LiDAR data is visualized in sync with the vehicle's telemetry, making it possible to trace the pedestrian's trajectory and the planning system's response. The scene is then imported into Unreal Engine 5, where the surrounding urban geometry is recreated and the point cloud is registered in place. Finally, a reflectivity filter is applied to the points to simulate the behavior of the pedestrian's textile material. The results show that the points corresponding to the reflective jacket exhibit anomalous intensity, comparable to that of road signs, which led the 3D segmentation model to group them into the static-object class and ignore their motion.
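The intensity-driven misclassification described above can be illustrated with a minimal sketch. This is not the vehicle's actual segmentation code: the normalized intensity field, the `SIGN_INTENSITY_THRESHOLD` value, and the synthetic frame below are all assumptions for illustration.

```python
import numpy as np

# Each point: x, y, z, intensity (assumed normalized to 0..1).
# Retroreflective surfaces (road signs, high-visibility fabric) return
# anomalously high intensity compared with asphalt or ordinary clothing.
SIGN_INTENSITY_THRESHOLD = 0.85  # assumed value, for illustration only


def flag_static_by_intensity(points: np.ndarray) -> np.ndarray:
    """Boolean mask of points a naive segmenter might label
    'static infrastructure' purely from return intensity."""
    return points[:, 3] >= SIGN_INTENSITY_THRESHOLD


# Synthetic pre-impact frame: asphalt (~0.1), a stop sign (~0.95),
# and a pedestrian whose reflective jacket returns like a sign (~0.9).
frame = np.array([
    [0.0, 5.0, 0.0, 0.10],   # asphalt
    [2.0, 8.0, 2.1, 0.95],   # stop sign
    [1.0, 6.0, 1.5, 0.90],   # reflective jacket
])
mask = flag_static_by_intensity(frame)
print(mask)  # the jacket point is flagged together with the sign
```

An intensity-only rule cannot distinguish the jacket from the sign, which is exactly the failure mode the reconstruction exposes.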
Lessons for autonomous perception safety ⚠️
This case demonstrates that material reflectivity affects not only sensor range but can also induce fatal errors in semantic classification. The Unreal Engine 5 reconstruction makes the vehicle's algorithmic blind spot visible. For future systems, it is recommended to cross-validate the point cloud against thermal or event-camera data, and to train models on datasets that include pedestrians wearing high-visibility clothing. The combination of Open3D for forensic analysis and Foxglove Studio for real-time debugging is becoming a standard toolchain for accident investigation in mobile robotics.
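One cheap cross-check that needs no extra sensor is to test an intensity-based "static" label against per-frame motion. The sketch below is a hypothetical helper, not part of any real stack: `dt`, `v_thresh`, and the centroid-tracking approach are illustrative assumptions.

```python
import numpy as np

def confirm_static(cluster_t0: np.ndarray, cluster_t1: np.ndarray,
                   dt: float = 0.1, v_thresh: float = 0.3) -> bool:
    """Return True only if the cluster's centroid speed stays below
    v_thresh m/s between two consecutive frames; otherwise the
    intensity-based 'static' label is suspect. (Illustrative values.)"""
    c0 = cluster_t0.mean(axis=0)
    c1 = cluster_t1.mean(axis=0)
    speed = float(np.linalg.norm(c1 - c0)) / dt
    return speed <= v_thresh


# A reflective-jacket cluster walking at ~1.4 m/s: moves 0.14 m in 0.1 s.
jacket_t0 = np.array([[1.0, 6.0, 1.5], [1.1, 6.0, 1.4], [0.9, 6.0, 1.6]])
jacket_t1 = jacket_t0 + np.array([0.14, 0.0, 0.0])

# A genuine stop-sign cluster: no displacement between frames.
sign_t0 = np.array([[2.0, 8.0, 2.1], [2.0, 8.1, 2.2]])
sign_t1 = sign_t0.copy()

pedestrian_is_static = confirm_static(jacket_t0, jacket_t1)  # False
sign_is_static = confirm_static(sign_t0, sign_t1)            # True
```

In this toy example the jacket cluster fails the motion check and would be reclassified as dynamic, while the sign keeps its static label. A production system would track clusters more robustly (e.g. with nearest-neighbor association), but the principle is the same.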
Is it possible to determine, through forensic 3D simulation alone, whether the reconstructed LiDAR failure matches the pedestrian's actual trajectory in the crosswalk, or is additional analysis of the pre-impact point cloud needed?
(PS: In scene analysis, every scale witness is a little anonymous hero.)