Creating Mobile AR Experiences Using ARCore

For a while now I have been watching the development of AR (Augmented Reality) technology. Until recently, AR seemed relatively immature compared to other technologies like VR: it was hard to get your hands on hardware, and the hardware that did exist was far from perfect.

With major players like Apple and Google entering the AR field, this has all started to change. In 2018 Google released ARCore, a powerful SDK for building AR applications on Android devices. ARCore uses the various sensors in a modern smartphone to track the 3D world around it, and the user can then use their phone to overlay digital information onto the real world.

To demonstrate the power of the ARCore SDK, I’ve been working on Cosmos AR, an educational application to help teach the wonders of the cosmos. I was originally going to write the AR experience as a native Android app, but I ended up deciding that Unity would save me from writing a lot of boilerplate code.

Cosmos AR running on Android 10

Cosmos AR uses ARCore’s ability to place “position anchors” that tie the 3D virtual world to the real world. The way this works is that ARCore is set up to recognise a series of unique Cosmos AR playing cards. When ARCore spots one of these cards in the real world, it creates an anchor point at the center of the card; as long as the card is being tracked, the anchor point follows it as you move the card around in the real world.

In my case, I just use this feature to display different objects found in our cosmos, but the same feature could also be used to, for example, make a board game or a book come alive.
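To give a feel for how this kind of image-anchoring is wired up, here is a minimal Kotlin sketch against ARCore’s Augmented Images API on Android. Cosmos AR itself is built in Unity, so this is purely illustrative, and `cardBitmaps` is a placeholder for however you load the card artwork.

```kotlin
import android.graphics.Bitmap
import com.google.ar.core.Anchor
import com.google.ar.core.AugmentedImage
import com.google.ar.core.AugmentedImageDatabase
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Register the playing cards ARCore should look for.
// `cardBitmaps` is a placeholder: a map of card name -> Bitmap of the card artwork.
fun configureAugmentedImages(session: Session, cardBitmaps: Map<String, Bitmap>) {
    val database = AugmentedImageDatabase(session)
    for ((name, bitmap) in cardBitmaps) {
        database.addImage(name, bitmap)
    }
    val config = Config(session).apply {
        augmentedImageDatabase = database
    }
    session.configure(config)
}

// Called once per frame: create an anchor at the center of any card
// ARCore is currently tracking, so virtual content follows the card.
fun anchorsForTrackedCards(frame: Frame): List<Pair<String, Anchor>> =
    frame.getUpdatedTrackables(AugmentedImage::class.java)
        .filter { it.trackingState == TrackingState.TRACKING }
        .map { image -> image.name to image.createAnchor(image.centerPose) }
```

Whatever you attach to those anchors (a planet model, a board-game piece) then stays glued to the card as it moves.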

How does ARCore work? Well, I could talk all day about the nuances of how computer vision and environmental tracking are used to create a realistic blend of the virtual and real worlds. The short of it is that ARCore uses a process called simultaneous localization and mapping (SLAM). As you move your phone through the environment, ARCore combines visual data from the camera with data from the inertial measurement unit (IMU) to map the room around you. It detects feature points to work out where your phone is in 3D space and then places a virtual camera in the same position as the real camera on your phone; this is what allows virtual objects to align with the real world.
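To make the virtual-camera idea concrete, here is a rough sketch of a per-frame loop against the ARCore Android SDK. The one-metre offset and the rendering step are placeholders; a real app would feed the camera’s view and projection matrices into its renderer.

```kotlin
import com.google.ar.core.Frame
import com.google.ar.core.Pose
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// One iteration of the render loop: ask ARCore where the phone's camera is,
// and (as an example) drop an anchor one metre in front of it.
fun onDrawFrame(session: Session) {
    val frame: Frame = session.update()   // latest camera image + tracking data
    val camera = frame.camera

    if (camera.trackingState != TrackingState.TRACKING) return  // SLAM hasn't converged yet

    // ARCore's estimate of the real camera's pose in world space.
    // The virtual camera is placed at exactly this pose, so rendered
    // objects line up with the camera image behind them.
    val cameraPose: Pose = camera.pose

    // A pose one metre in front of the camera, useful for placing a
    // virtual object roughly where the user is looking.
    val inFrontOfCamera = cameraPose.compose(Pose.makeTranslation(0f, 0f, -1f))
    val anchor = session.createAnchor(inFrontOfCamera)

    // Rendering is omitted here: you would attach your virtual object to
    // `anchor` and draw it using camera.getViewMatrix(...) and
    // camera.getProjectionMatrix(...).
}
```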

One more thing to note is that Google has spent a lot of time on efficient techniques for estimating depth using only a single RGB camera and the IMU data. The issue, in my opinion, is that the quality of that depth data is far from great. Luckily, modern phones now house dedicated sensors just for measuring depth, such as time-of-flight cameras. Because these sensors report depth directly, working out the depth of a scene becomes substantially simpler. At the moment, due to cost, only flagship phones ship with these sensors, but they should soon be standard on all phones.
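For completeness, this is roughly how you opt into ARCore’s Depth API on Android; ARCore can use a hardware depth sensor when one is present and otherwise falls back to its camera-and-IMU estimate. The exact acquisition method names have shifted slightly across ARCore versions, so treat this as a sketch.

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.exceptions.NotYetAvailableException

// Turn on depth only if the device supports it.
fun enableDepthIfSupported(session: Session) {
    val config = Config(session)
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        config.depthMode = Config.DepthMode.AUTOMATIC
    }
    session.configure(config)
}

// Per frame: grab the latest depth image.
fun readDepth(frame: Frame) {
    try {
        val depthImage = frame.acquireDepthImage16Bits()
        // depthImage is an android.media.Image: each pixel holds the
        // distance to that point in millimetres.
        depthImage.close()  // always release the image when done
    } catch (e: NotYetAvailableException) {
        // Depth isn't ready yet, e.g. during the first few frames after startup.
    }
}
```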
