Rebekah Valentine
Pokémon Go developer Niantic is hard at work building and training an AI to essentially be able to auto-complete real-world locations with only a limited amount of information. And it's using data collected by Pokémon Go players to do it.
In an official blog post spotted by Garbage Day and reported on by 404 Media, Niantic revealed that it's building something called a "Large Geospatial Model." You may already know what a "Large Language Model" is - it's the technology behind ChatGPT, an AI trained on enormous amounts of existing text so that it can then produce text of its own that sounds natural and, conceivably, like what a user might want to hear.
A Large Geospatial Model is essentially the same idea, but applied to the physical world. It's trained on what real-world places look like (a church, a park, a house, etc.), and it can then use that data to produce information on what actual places it hasn't seen yet might look like. Niantic claimed this will be useful for technologies such as AR glasses, robotics, content creation, and more.
Or as Niantic put it:
Imagine yourself standing behind a church. Let us assume the closest local model has seen only the front entrance of that church, and thus, it will not be able to tell you where you are. The model has never seen the back of that building. But on a global scale, we have seen a lot of churches, thousands of them, all captured by their respective local models at other places worldwide. No church is the same, but many share common characteristics. An LGM [Large Geospatial Model] is a way to access that distributed knowledge.
But to make this work, Niantic needs lots of data to train that AI on, and it can only do so much on its own. Google has been collecting location data for years via Google Maps and those funny cars it uses to gather Street View imagery, but that's not sufficient in this case. Cars can only drive on roads, and Niantic needs pedestrian information from places cars can't go. Fortunately, Niantic has thousands of people globally pointing their phones at things and sending that information back via its various projects and apps, Pokémon Go included.
Specifically, Niantic said in its post that it's been building something called a Visual Positioning System (VPS), a technology that uses a single image from a phone to determine that phone's position and orientation on a 3D map. The technology is supposed to allow users to position themselves in the world with "centimeter-level accuracy," which then lets them see digital content overlaid on the physical world "precisely and realistically." Again, from Niantic:
This content is persistent in that it stays in a location after you’ve left, and it’s then shareable with others. For example, we recently started rolling out an experimental feature in Pokémon GO, called Pokémon Playgrounds, where the user can place Pokémon at a specific location, and they will remain there for others to see and interact with.
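To make the concept concrete, here is a deliberately simplified sketch of what image-based localization involves: comparing features extracted from a query photo against a database of previously scanned places to find the best match. All names, feature vectors, and coordinates below are invented for illustration; Niantic has not published how its VPS actually works.

```python
# Toy illustration of the idea behind a Visual Positioning System (VPS):
# match a query image's feature vector against stored location scans to
# estimate where the camera is. The database, features, and coordinates
# are all hypothetical placeholders, not Niantic's real data or method.
import math

# Hypothetical scan database: place name -> (feature vector, lat/lon)
scans = {
    "church_front": ([0.9, 0.1, 0.3], (51.5007, -0.1246)),
    "park_fountain": ([0.2, 0.8, 0.5], (51.5033, -0.1195)),
    "cafe_corner": ([0.4, 0.4, 0.9], (51.5079, -0.0877)),
}

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def localize(query_features):
    """Return the stored place whose scan best matches the query image."""
    name, (features, position) = min(
        scans.items(), key=lambda kv: euclidean(query_features, kv[1][0])
    )
    return name, position

place, position = localize([0.85, 0.15, 0.35])
print(place, position)  # nearest stored scan and its coordinates
```

A real system would of course work with millions of scans, robust image descriptors, and a refinement step that recovers full camera orientation, but the core loop - query image in, matched position out - is the same.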
But all of this technology exists because users are constantly scanning the world with their phones while using Niantic's apps, including Pokémon Go, and have been for years now. Niantic said it currently has 10 million scanned locations around the world, a million of which are usable with its VPS service, and it receives a million new scans every week, each containing hundreds of discrete images. That's a lot of data.
For now, Niantic said it's using the data solely to develop its own technologies, which it then implements in its existing products. However, in recent years there have been numerous concerns over how companies collect data, use it to train AI, and what those AI models might eventually be used for. While today Niantic's LGM work may be limited to letting us drop cute Pokémon models in the world for other people to find, tomorrow its uses may grow increasingly complex.
IGN has reached out to Niantic for comment.
Rebekah Valentine is a senior reporter for IGN. You can find her posting on Bluesky @duckvalentine.bsky.social. Got a story tip? Send it to [email protected].