The Glide device from Glidance represents a significant advance in assisted mobility. Unlike a traditional white cane, this self-guided device uses LiDAR sensors and stereo cameras to build a real-time three-dimensional map of its surroundings. Its navigation algorithm steers around obstacles at waist and head level, providing gentle physical guidance without requiring prior training from the user.
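One way to picture the waist- and head-level obstacle detection described above is as a height-band filter over the point cloud. The band limits, sensing range, and coordinate convention below are illustrative assumptions, not Glide's actual parameters:

```python
# Hedged sketch: filter a 3D point cloud for obstacles in assumed
# waist-level and head-level height bands. x = metres ahead of the
# user, y = lateral offset, z = height above ground.

WAIST_BAND = (0.8, 1.2)  # metres above ground (assumed)
HEAD_BAND = (1.5, 2.0)   # metres above ground (assumed)

def obstacles_in_bands(points, max_range=3.0):
    """Return points falling inside the waist or head band,
    within max_range metres ahead of the user."""
    hits = {"waist": [], "head": []}
    for x, y, z in points:
        if x <= 0 or x > max_range:  # ignore points behind or too far ahead
            continue
        if WAIST_BAND[0] <= z <= WAIST_BAND[1]:
            hits["waist"].append((x, y, z))
        elif HEAD_BAND[0] <= z <= HEAD_BAND[1]:
            hits["head"].append((x, y, z))
    return hits

# A table edge at waist height, a low branch at head height,
# ground clutter, and a far-away point that is out of range:
cloud = [(1.5, 0.1, 1.0), (2.0, -0.2, 1.7), (1.0, 0.0, 0.1), (4.0, 0.0, 1.0)]
print(obstacles_in_bands(cloud))
```

In a real system the bands would be tuned per user height and the filter fused with the device's map, but the band test is the core idea.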
Volumetric Mapping and Edge Detection 🗺️
3D sensing is central to Glide's safe operation. The device runs a spatial mapping process that generates a point cloud of its surroundings, letting it detect not only solid objects but also level changes such as curbs or stairs, a classic challenge in mobile robotics. During the design phase, engineers used 3D simulations to model thousands of risk scenarios, including low-light conditions and uneven surfaces. Tuning braking and motor-response parameters virtually, before producing a single physical prototype, reduced costs and accelerated compliance with safety standards.
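The curb and stair detection mentioned above can be sketched as a discontinuity check over a forward ground-height profile sampled from the point cloud. The threshold and sample spacing are illustrative assumptions, not values from Glide:

```python
# Hedged sketch: detect curbs or stair edges as abrupt height changes
# in a 1-D forward ground profile. Threshold and spacing are assumed.

DROP_THRESHOLD = 0.08  # metres; height change treated as a level change (assumed)

def find_level_changes(profile, spacing=0.1):
    """profile: ground heights sampled every `spacing` metres ahead.
    Returns (distance_ahead, height_change) for each detected step."""
    changes = []
    for i in range(1, len(profile)):
        delta = profile[i] - profile[i - 1]
        if abs(delta) >= DROP_THRESHOLD:
            changes.append((round(i * spacing, 2), round(delta, 3)))
    return changes

# Flat ground, then a 12 cm curb drop half a metre ahead:
heights = [0.0, 0.0, 0.0, 0.0, 0.0, -0.12, -0.12, -0.12]
print(find_level_changes(heights))  # → [(0.5, -0.12)]
```

A negative height change flags a drop-off (curb, descending stair) while a positive one flags a step up; either would feed into the braking response the simulations were used to tune.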
Inclusive Design and Regulatory Compliance ♿
For a device like Glide to be viable, it must align with accessibility regulations such as EN 17161 (Design for All) in Europe or the ADA in the United States. Inclusive design here is not an embellishment but an engineering requirement: the robot must communicate its intentions non-visually, through haptic feedback and directional sounds. By protecting the autonomy of people with visual impairments, this kind of 3D technology shows that robotic innovation can directly serve vulnerable groups, safeguarding their right to safe and dignified mobility.
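The non-visual communication requirement can be sketched as a mapping from navigation intents to haptic and audio cues. The cue names and the fail-safe fallback below are illustrative assumptions about how such an interface could be structured, not Glide's actual design:

```python
# Hedged sketch: map navigation intents to non-visual cue pairs.
# Cue names and patterns are assumptions for illustration only.

CUES = {
    "slow_down":  {"haptic": "double_pulse",        "audio": None},
    "turn_left":  {"haptic": "left_grip_pulse",     "audio": "tone_left"},
    "turn_right": {"haptic": "right_grip_pulse",    "audio": "tone_right"},
    "stop":       {"haptic": "sustained_vibration", "audio": "low_alert"},
}

def signal_intent(intent):
    """Return the haptic/audio cue pair for a navigation intent.
    Unknown intents fall back to the stop cue as a safe default."""
    return CUES.get(intent, CUES["stop"])

print(signal_intent("turn_left"))
print(signal_intent("reverse"))  # unrecognised → safe stop cue
```

Defaulting unknown states to the most conservative cue is one way an accessibility requirement ("never leave the user uninformed") can be expressed directly in code.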
What technical and ethical challenges does the integration of 3D robotics in devices like Glide pose to ensure the safety and autonomy of blind people in complex urban environments?
(PS: verifying status is like leveling a printer bed: if you don't do it right, the first layer, and rights with it, fails.)