Department: Computer Science & Engineering
Faculty Advisor(s): Serge Belongie

Primary Student
Name: Mohammad Moghimi Najafabadi
Email: mmoghimi@ucsd.edu
Phone: 858-888-3337
Grad Year: 2016

Student Collaborators
Tsung-Yi Lin, tsl008@ucsd.edu

The problem of geographically localizing an image presents a wide array of challenges in computer vision and machine learning. The answer to the question "Where was this photo taken?" when GPS data is not available could depend on anything from the type of vegetation in the foreground to the shape of mountain peaks in the distance; the colors of a license plate to the pattern of bricks in a road; the height of a radio tower to the angle of a street crossing. In this respect, the problem encompasses the core challenges of visual object recognition at a global scale.

Accordingly, some progress has been made in this area by applying popular object recognition techniques at two ends of the spectrum: holistic image-as-texture matching (e.g., "the photo depicts an Iberian scene") and local interest-point-based near-duplicate retrieval for famous landmarks (e.g., "the photo depicts the Sagrada Família"). If one wishes to pinpoint the location of a photo of an ordinary outdoor scene, however, one has little recourse but to find a human expert in that geographical area. Thus, the prescription for precise geolocation from images can be summarized as follows: make sure the scene is famous and use near-duplicate matching (extraordinary scene, ordinary algorithm), or hope you can find someone who knows the area (ordinary scene, extraordinary human).

In this project, we propose to combine recent advances in geometric and semantic scene understanding with powerful human-in-the-loop methods to solve the general image-based geolocation problem: ordinary scene, ordinary human, extraordinary algorithms. The framework builds upon the "Visual 20 Questions" approach, whereby human and machine share knowledge and direct one another's attention to salient parts and attributes in such a way that the abilities of both improve with every interaction.
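
The "20 Questions" style of human-in-the-loop interaction is commonly driven by information gain: at each turn, the machine asks the yes/no question whose answer is expected to shrink the set of candidate locations the most. The sketch below illustrates that selection loop; the candidate locations, attribute names, and uniform-prior scoring are illustrative assumptions for this sketch, not the project's actual implementation.

```python
import math
from collections import Counter

# Illustrative candidates with binary visual attributes (made up for
# this sketch; a real system would learn these from image data).
CANDIDATES = [
    {"name": "coastal_town_A", "palm_trees": 1, "red_roofs": 1, "snow": 0},
    {"name": "alpine_village", "palm_trees": 0, "red_roofs": 0, "snow": 1},
    {"name": "coastal_town_B", "palm_trees": 1, "red_roofs": 0, "snow": 0},
    {"name": "desert_outpost", "palm_trees": 0, "red_roofs": 0, "snow": 0},
]
ATTRIBUTES = ["palm_trees", "red_roofs", "snow"]

def expected_entropy(candidates, attr):
    """Expected remaining entropy (bits) over a uniform prior on the
    candidates after a human answers a yes/no question about `attr`.
    Each answer with k consistent candidates leaves log2(k) bits and
    occurs with probability k/n."""
    counts = Counter(c[attr] for c in candidates)
    n = len(candidates)
    return sum((k / n) * math.log2(k) for k in counts.values())

def best_question(candidates, attributes):
    """Pick the attribute whose answer minimizes expected entropy,
    i.e. the most informative question to ask next."""
    return min(attributes, key=lambda a: expected_entropy(candidates, a))

def filter_candidates(candidates, attr, answer):
    """Keep only candidates consistent with the human's answer."""
    return [c for c in candidates if c[attr] == answer]
```

For the toy data above, asking about palm trees splits the four candidates evenly (two each way), so it is the most informative first question; answering it then halves the candidate set before the next question is chosen.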

Related Links:

  1. http://vfywproject.blogspot.com/
  2. http://invpanoramio.tumblr.com/
