IIIT Hyderabad’s AI-Driven Car Revolutionizes Robotic Navigation
IIIT Hyderabad’s Self-Driving Car is an electric vehicle that performs point-to-point autonomous driving with collision avoidance over a wide area. Equipped with a 3D LIDAR, depth cameras, GPS, and an AHRS (Attitude and Heading Reference System, essentially a set of three-axis sensors that estimate the vehicle’s orientation in space), the car can also accept open-set natural language commands and follow them to reach a desired destination. SLAM-based point cloud mapping is used to map the campus environment, and LIDAR-guided real-time state estimation localizes the vehicle while driving.
Open Set Navigation
To better understand open-set navigation commands, first consider how humans often navigate: with minimal map usage, relying on contextual cues and verbal instructions. Many directions are exchanged by reference to specific environmental landmarks or features, for example, “Take a right next to the white building” or “Drop off near the entrance”. Autonomous driving agents, in contrast, typically need precise knowledge of their pose (localization) within the environment for effective navigation. This is usually obtained from high-resolution GPS or pre-built High-Definition (HD) maps such as point clouds, both of which are compute- and memory-intensive.
IIITH’s Efforts
The IIITH researchers address this by exploiting foundation models, whose generic semantic understanding of the world can be distilled for downstream localization and navigation tasks. They augment open-source topological maps (such as OSM) with language landmarks like “a bench”, “an underpass”, or “football field”, mirroring the cognitive localization process humans employ. This gives the system an “open-vocabulary” character: it can navigate to places the model was never explicitly trained on, i.e., zero-shot generalisation to new scenes.
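As a toy illustration of the idea (not the team’s actual pipeline), a topological map node can carry a free-form landmark phrase, and a command can be grounded by matching it against those phrases. The node data and the token-overlap score below are assumptions for the sketch; a real system would compare vision-language embeddings instead:

```python
# Hypothetical OSM-style topological map, augmented with language landmarks.
OSM_NODES = [
    {"id": 1, "xy": (120.0, 40.0), "landmark": "a bench under a tree"},
    {"id": 2, "xy": (310.0, 85.0), "landmark": "an underpass"},
    {"id": 3, "xy": (505.0, 60.0), "landmark": "football field entrance"},
]

def tokens(text):
    """Lowercased word set of a phrase."""
    return set(text.lower().replace(",", " ").split())

def ground_command(command, nodes=OSM_NODES):
    """Pick the node whose landmark phrase best overlaps the command
    (Jaccard similarity of token sets), as a stand-in for embedding match."""
    def score(node):
        a, b = tokens(command), tokens(node["landmark"])
        return len(a & b) / len(a | b)
    return max(nodes, key=score)

goal = ground_command("stop near the football field")
```

Because the match is over free-form text rather than a fixed label set, a command mentioning a landmark the system was never trained on can still be grounded, which is the open-vocabulary property described above.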
Differential Planning
Mapping, localization, and planning form the key components of the autonomous navigation stack. While modular pipelines and end-to-end architectures have been the traditional driving paradigms, integrating a language modality is fast becoming a de facto approach to enhancing the explainability of autonomous driving systems. A natural extension of such systems in the vision-language context is the ability to follow navigation instructions given in natural language, for example, “Take a right turn and stop near the food stall.” The primary objective remains reliable, collision-free planning.
USP: NLP+VLM
To achieve this capability, IIITH has developed a lightweight vision-language model that combines visual scene understanding with natural language processing. The model processes the vehicle’s perspective view alongside an encoded language command to predict a goal location in one shot. These predictions, however, can conflict with real-world constraints: when instructed to “park behind the red car,” the system might suggest a location in a non-road area or one overlapping with the red car itself. To overcome this, the researchers augment the perception module with a custom planner inside the neural network framework. The planner must therefore be differentiable, so that gradients can flow through the entire architecture during training, which improves both prediction accuracy and planning quality. This end-to-end training with a differentiable planner is the secret sauce of the work.
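A minimal sketch of the flavour of planning described here, under assumed names and a toy scene: a (hypothetical) VLM-predicted goal that overlaps the red car is refined by gradient descent on a loss penalizing collision and leaving the road band. In the actual system these gradients would also flow back into the network weights; the sketch shows only the planner side:

```python
import numpy as np

def planner_loss_grad(goal, obstacle, radius, road_y=(0.0, 6.0)):
    """Analytic gradient of a toy planning loss:
    a hinge pushing the goal out of the obstacle's radius, plus
    a hinge keeping the goal's y-coordinate inside the road band."""
    g = np.asarray(goal, dtype=float)
    grad = np.zeros(2)
    # Collision term: penalize (radius - d)^2 while inside the obstacle.
    d_vec = g - obstacle
    d = np.linalg.norm(d_vec)
    if 1e-9 < d < radius:
        grad += -2.0 * (radius - d) * (d_vec / d)
    # Road term: penalize squared distance outside [y_min, y_max].
    y_min, y_max = road_y
    if g[1] < y_min:
        grad[1] += 2.0 * (g[1] - y_min)
    elif g[1] > y_max:
        grad[1] += 2.0 * (g[1] - y_max)
    return grad

def refine_goal(goal, obstacle, radius, steps=200, lr=0.1):
    """Gradient-descend the predicted goal toward a feasible location."""
    g = np.asarray(goal, dtype=float)
    obstacle = np.asarray(obstacle, dtype=float)
    for _ in range(steps):
        g = g - lr * planner_loss_grad(g, obstacle, radius)
    return g

# Hypothetical scenario: the model suggests a goal inside the red car's footprint.
raw_goal = np.array([10.5, 3.2])   # predicted goal (overlaps the car)
red_car = np.array([10.0, 3.0])    # parked car centre, 2 m safety radius
g = refine_goal(raw_goal, red_car, radius=2.0)
```

Because every step of the refinement is a smooth function of the initial prediction, the whole correction can sit inside the training graph, which is what lets the planner be trained jointly with the perception module.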