Latest: 0.1.0
Homepage: https://github.com/Seanalair/GreenAR
License: MIT
Platforms: iOS 11.0

Purpose

Mobile augmented reality is a brand-new field with many exciting possibilities. However, the hardware that runs mobile applications creates significant limitations. For example, it is impossible to share details about an augmented reality scene across multiple sessions or devices. Also, the device’s interpretation of the world around it is often sparse and inconsistent. GreenAR sets out to mitigate these limitations and enable mobile developers to create persistent and shared augmented reality experiences.

The Problems

Mobile augmented reality is powered by Visual Inertial Odometry. In simplified terms, this means that the device tracks its initial position, its current position, and the position of the recognizable features in the scene. The data the device gathers is not rich enough to allow it to "relocalize" into an existing scene from scratch – it relies on continuous knowledge of its own position in the scene to make sense of the data it has about other features in the scene.

Another limitation which must be navigated is ARKit’s inconsistent scene interpretation. When you attempt to find a collision with the real world, the results are only reliable when you are relying on ARKit’s automatically detected planes. Since ARKit currently only detects horizontal planes, the interactions you can build on top of it are severely limited. Even if you only want to create single-session, single-user experiences which are initialized from scratch every time they are run, you probably don’t want to limit them to a single tabletop.
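For reference, the plane-based hit test being described looks like this in ARKit (a minimal sketch; point is assumed to be a CGPoint in screen space and sceneView an ARSCNView):

    // Only hits against planes ARKit has already detected (horizontal planes only
    // as of ARKit's initial release), which is the one case where results are dependable.
    let results = sceneView.hitTest(point, types: .existingPlaneUsingExtent)
    if let nearestPlaneHit = results.first {
        let position = nearestPlaneHit.worldTransform.columns.3
        print("Hit a detected plane at (\(position.x), \(position.y), \(position.z))")
    }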

The Solutions

In order to reinitialize an augmented reality scene from another session, or to initialize a copy of a scene received from another device, we need to be able to determine two things: the device’s current position in the scene and the device’s current orientation relative to the scene. In ARKit, orientation can be handled automatically by running the ARSession with an ARConfiguration whose worldAlignment property is set to gravityAndHeading. When an ARSession is initialized in this way, its coordinate space is oriented such that the positive y axis points up, the positive z axis points south, and the positive x axis points east. In order to determine position, we track the location of features relative to a reference point and use that reference point to initialize new sessions.
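Configuring a session this way takes only a couple of lines (a minimal sketch; sceneView is assumed to be an ARSCNView that is already part of your view hierarchy):

    import ARKit

    // Align the session's world coordinate space with gravity and compass heading,
    // so +y points up, +x points east, and +z points south in every session.
    let configuration = ARWorldTrackingConfiguration()
    configuration.worldAlignment = .gravityAndHeading
    sceneView.session.run(configuration)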

In order to achieve more consistent interaction with the real world, GreenAR provides straightforward methods for creating a simple representation of the real-world environment using only a few critical collision points. Attempting to achieve consistent collisions with the real world over the course of a session is very frustrating, and makes users feel as if the app is broken. One way to mitigate this is to confine the frustration to an isolated "calibration" phase, during which the application gathers enough structured data to create a persistent mapping of the room which can be used instead of the unreliable collision API.

By combining these two solutions, a persistent room mapping can be loaded from a previous session or from another device, and an app that was frustrating to use and limited to a single user in a single session can become a shared, persistent, performant experience.

Installation

GreenAR is available through CocoaPods. To install
it, simply add the following line to your Podfile:

    pod 'GreenAR'

Usage

To use GreenAR, import it into files where it will be used.

    import GreenAR

For Mapping

Create a RoomMapping object to represent your room.

For an example of how to have the user map the corners of the room they’re in, check out the example project. For now, we’ll just use some hard-coded values.

     // Creates a room mapping consisting of a 2 x 2 x 2 cube
     let roomMapping = RoomMapping(floorCorners: [SCNVector3(0, 0, 0),
                                                  SCNVector3(0, 0, 2),
                                                  SCNVector3(2, 0, 2),
                                                  SCNVector3(2, 0, 0)],
                                   wallHeight: 2)

Now we have a room that measures 2 meters on each side. If we want to use it to test for collisions with the walls of this room, we can ask it to create an SCNNode representation of itself.

     /*
        Creates a node containing planes for the walls, floor, and ceiling.
        By default, it is invisible and the edges of the planes touch.
        For collisions, this is best, but you can customize it for debugging.
        By making it visible and adding a corner inset, it's easier to see.
        See the method documentation for an explanation of other options.
     */
     let mappingNode = roomMapping.createMappingNode(cornerInsetDistance: 0.25, visible: true)
     sceneView.scene.rootNode.addChildNode(mappingNode)

Now we can add a tap gesture recognizer to the scene view and get information about collisions with the walls of the room (as represented by the mapping we created) at the location of the tap.

    override func viewDidLoad() {
        super.viewDidLoad()
        let tapRecognizer = UITapGestureRecognizer(target: self, action:#selector(self.sceneViewTapped(_:)))
        sceneView.addGestureRecognizer(tapRecognizer)
    }

    @objc func sceneViewTapped(_ tapGestureRecognizer: UITapGestureRecognizer) {
        let pointInView = tapGestureRecognizer.location(in: sceneView)
        let hits = sceneView.hitTest(pointInView, options: nil)
        if let tappedNode = hits.first?.node {
            tappedNode.removeFromParentNode()
        }
    }

When you tap, you should see a triangular section of one of the walls disappear (planes are represented by a pair of triangles).
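If you want the collision point itself rather than removing geometry, the same hit test exposes it directly. Here is a small variation on the handler above using SceneKit's SCNHitTestResult:

    @objc func sceneViewTapped(_ tapGestureRecognizer: UITapGestureRecognizer) {
        let pointInView = tapGestureRecognizer.location(in: sceneView)
        let hits = sceneView.hitTest(pointInView, options: nil)
        if let hit = hits.first {
            // World-space point where the tap ray intersects the mapping planes.
            print("Collision with the mapping at \(hit.worldCoordinates)")
        }
    }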

For Saving

If you want to save this mapping, it will need a referencePosition. For examples of a few different methods of getting one, check out the example project. Any SCNVector3 will do, but you will need to be able to get an SCNVector3 representing the same location in-world when you want to load this space.
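For illustration, one simple option is to reuse a physical corner of the room the user just mapped, since a corner is a location you can identify again in a later session (a sketch which assumes referencePosition is a settable property on RoomMapping):

    // Use the first floor corner as the reference point; a physical corner of the
    // room is something the user can point at again when reloading the space.
    roomMapping.referencePosition = SCNVector3(0, 0, 0)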

It is also possible to map other features of the room by creating a custom object which implements the Feature protocol and adding it to the room’s list of features. When the room is serialized or deserialized, its features will be as well. In this way, we can save and load spaces with arbitrarily complex contents.

Once you have that, you can either use the MappingManager helper class to save a JSON representation to the local file system:

    MappingManager.saveRoomMapping(roomMapping, fileName: fileName)

or get the JSON yourself to do with as you please:

    let jsonObject: [String: Any] = roomMapping.serialize()
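If you take the JSON route, you can persist the dictionary however you like. For example, with Foundation's JSONSerialization (a sketch; the file name is hypothetical and serialize() is assumed to return a JSON-compatible dictionary):

    do {
        // Write the serialized mapping to the app's Documents directory.
        let data = try JSONSerialization.data(withJSONObject: jsonObject, options: [.prettyPrinted])
        let fileURL = FileManager.default
            .urls(for: .documentDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("roomMapping.json")
        try data.write(to: fileURL)
    } catch {
        print("Failed to save the mapping JSON: \(error)")
    }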

For Loading

To load a room mapping, you need an SCNVector3 representing the location, in your current session, of its referencePosition, and either the name of the file where it is saved on the device:

    let roomMapping = MappingManager.loadRoomMapping(fileName: fileName, referencePosition: SCNVector3(1, 2, 3))

or the JSON that represents the room:

    let roomMapping = RoomMapping(jsonObject: jsonObject, newReferencePosition:SCNVector3(1, 2, 3))
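If you stored the JSON yourself, the counterpart is to read it back from disk before handing it to RoomMapping (again a sketch, using the same hypothetical file name as in the saving section):

    do {
        // Read the JSON back and rebuild the mapping at the new reference position.
        let fileURL = FileManager.default
            .urls(for: .documentDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("roomMapping.json")
        let data = try Data(contentsOf: fileURL)
        if let jsonObject = try JSONSerialization.jsonObject(with: data) as? [String: Any] {
            // roomMapping can now be added to the scene as shown below.
            let roomMapping = RoomMapping(jsonObject: jsonObject, newReferencePosition: SCNVector3(1, 2, 3))
        }
    } catch {
        print("Failed to load the mapping JSON: \(error)")
    }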

At this point, you can add the loaded room mapping to the scene just as you would a mapping your user had just created:

     /*
        Creates a node containing planes for the walls, floor, and ceiling.
        By default, it is invisible and the edges of the planes touch.
        For collisions, this is best, but you can customize it for debugging.
        By making it visible and adding a corner inset, it's easier to see.
        See the method documentation for an explanation of other options.
     */
     let mappingNode = roomMapping.createMappingNode(cornerInsetDistance: 0.25, visible: true)
     sceneView.scene.rootNode.addChildNode(mappingNode)

Requirements

Swift 4.0+, Xcode 9.0+, ARKit

Example Project

In the example project, you’ll find concrete examples of how to have a user map a room, how to get a reference location, and how to share a room between multiple simultaneous users. To run the example project, clone the repo and run pod install from the Example directory first.

Author

Daniel Grenier, [email protected]

License

GreenAR is available under the MIT license. See the LICENSE file for more info.

Latest podspec

{
    "name": "GreenAR",
    "version": "0.1.0",
    "summary": "A toolkit which enables the creation of persistent and shared mobile AR experiences with ARKit.",
    "description": "GreenAR provides the tools needed to transform an ARKit app from a single-session, single-user experience to one which can persist between sessions and be shared between users. Methods are provided for mapping real-world spaces, serializing such mappings, and relocalizing within them on a cold start using reference points. See the example application for ideas on how to use GreenAR.",
    "homepage": "https://github.com/Seanalair/GreenAR",
    "license": {
        "type": "MIT",
        "file": "LICENSE"
    },
    "authors": {
        "Daniel Grenier": "[email protected]"
    },
    "source": {
        "git": "https://github.com/Seanalair/GreenAR.git",
        "tag": "0.1.0"
    },
    "platforms": {
        "ios": "11.0"
    },
    "source_files": "GreenAR/Classes/**/*",
    "pushed_with_swift_version": "4.0"
}
