IIT Bombay unveils new model to read satellite images using natural language prompts

Press Trust of India | September 3, 2025 | 07:37 PM IST

IIT Bombay's Adaptive Modality-guided Visual Grounding (AMVG) model bridges the gap between how humans prompt and how machines analyse satellite or remote sensing imagery.

IIT Bombay develops Adaptive Modality-guided Visual Grounding model. (Image: Wikimedia Commons)

MUMBAI: The Indian Institute of Technology Bombay (IIT Bombay) has developed a model, Adaptive Modality-guided Visual Grounding (AMVG), that enables machines to interpret satellite and remote sensing images from natural, often ambiguous, human language prompts. AMVG bridges the gap between how humans prompt and how machines analyse satellite or remote sensing imagery, according to an IIT Bombay study.

"Remote sensing images are rich in detail but extremely challenging to interpret automatically. While visual grounding has progressed significantly, current models fail to transfer effectively to remote sensing scenarios, especially when the commands are ambiguous or context-dependent," explained Shabnam Choudhury, the study's lead author and a PhD researcher at IIT Bombay.

The volume of remote sensing data grows exponentially with every passing year, and these images, captured from great distances above the Earth by satellites, drones and aircraft, are cluttered with tiny objects, atmospheric noise and scale variations. In such images, a building can resemble a runway, and a runway a river.


IIT Bombay study in ISPRS journal

The IIT Bombay study, published in the ISPRS Journal of Photogrammetry and Remote Sensing, shows how AMVG acts like a sophisticated translation system, interpreting prompts in everyday human language and identifying the referenced objects reliably. While most visual grounding models use a two-step method, first proposing candidate regions and then ranking them, AMVG relies on four key innovations: a Multi-modal Deformable Attention layer, a Multi-stage Tokenised Encoder (MTE), a Multi-modal Conditional Decoder, and an Attention Alignment Loss (AAL).
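The article does not spell out AMVG's internals, but a one-stage grounding pipeline of the general kind it describes can be sketched roughly as below. Everything here, the ToyGroundingModel class, the module choices and the tensor shapes, is an illustrative assumption rather than the authors' code; the actual implementation is in the team's GitHub release.

```python
# A minimal, illustrative sketch of a one-stage visual grounding pipeline in
# PyTorch. All names and shapes are assumptions for illustration only; the
# real AMVG architecture is described in the paper and its GitHub release.
import torch
import torch.nn as nn

class ToyGroundingModel(nn.Module):
    def __init__(self, dim=256, vocab=1000):
        super().__init__()
        # Visual backbone stand-in: projects 16x16 image patches to tokens.
        self.visual_proj = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        # Text encoder stand-in: embeds the natural-language prompt.
        self.text_embed = nn.Embedding(vocab, dim)
        # Cross-modal attention: text tokens attend to image tokens
        # (standing in for AMVG's multi-modal attention stages).
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        # Decoder head: regresses one bounding box (cx, cy, w, h) directly.
        self.box_head = nn.Linear(dim, 4)

    def forward(self, image, prompt_ids):
        v = self.visual_proj(image).flatten(2).transpose(1, 2)  # (B, N, dim)
        t = self.text_embed(prompt_ids)                         # (B, L, dim)
        fused, attn = self.cross_attn(t, v, v)                  # text attends to image
        box = self.box_head(fused.mean(dim=1)).sigmoid()        # normalised box
        return box, attn

model = ToyGroundingModel()
image = torch.randn(1, 3, 224, 224)
prompt = torch.randint(0, 1000, (1, 6))  # e.g. "the small boat near the pier"
box, attn = model(image, prompt)
print(box.shape, attn.shape)  # torch.Size([1, 4]), torch.Size([1, 6, 196])
```

The design point this illustrates is that the text attends directly to the image and a box is regressed in a single pass, rather than scoring pre-computed region proposals as two-step methods do.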

Think of AAL like a coach, Choudhury said: it teaches the model where to look, and if the model's "attention" drifts too far, it gently nudges it back. This is not just technological progress; its real-world implications range from disaster response and military surveillance to urban planning and agricultural productivity, she said. "One of the most exciting applications for us is disaster response," Choudhury stated.
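One common way to realise such a "coaching" signal is an auxiliary loss that penalises attention falling outside the ground-truth object region. The sketch below is a hypothetical illustration of that idea; the attention_alignment_loss function and its KL-divergence form are assumptions, and the paper's actual AAL formulation may differ.

```python
# Hypothetical sketch of an attention-alignment-style auxiliary loss: it
# penalises cross-modal attention that falls outside the ground-truth object
# region, "nudging" the model's focus back. Illustrative only; not the
# paper's exact AAL.
import torch
import torch.nn.functional as F

def attention_alignment_loss(attn, target_mask, eps=1e-8):
    # attn: (B, L, N) attention weights from L text tokens over N image patches.
    # target_mask: (B, N) binary mask, 1 where the referred object lies.
    attn_map = attn.mean(dim=1)  # average over text tokens -> (B, N)
    # Normalise the mask into a target distribution over patches.
    target = target_mask / (target_mask.sum(dim=1, keepdim=True) + eps)
    # KL divergence between the attention distribution and the target region.
    return F.kl_div((attn_map + eps).log(), target, reduction="batchmean")

attn = torch.softmax(torch.randn(1, 6, 196), dim=-1)
mask = torch.zeros(1, 196)
mask[0, 90:110] = 1.0  # object occupies patches 90-109
print(attention_alignment_loss(attn, mask))
```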


The researchers have open-sourced the model, making AMVG's complete implementation publicly available on GitHub. "Open-sourcing AMVG was a deliberate choice, and a deeply personal one too. We believe that real scientific impact happens when your work doesn't just sit behind a paywall. By publishing our framework end-to-end, we're hoping to encourage transparency, reproducibility, and rapid iteration in remote sensing visual grounding research," she said.

However, AMVG still depends on the availability of high-quality, annotated datasets. "Its performance may vary across sensors or regions it hasn't seen before. Although it's more efficient than previous models, deploying it in real-time or on edge devices needs further optimisation," Choudhury added.
