Target driven Augmented Reality is, conceptually at least, quite simple. You run an App on your mobile phone or pad, point it at a target and, once the target is recognized by the system, a digital overlay is applied. This overlay can be anything: pictures, videos, 3D models, text, buttons, etc. My two biggest complaints about AR were that pointing a phone or pad at something gets old quickly and that the idea of every AR experience having its own App doesn’t scale. For AR to take wing in the consumer sector, you will need affordable, durable and fashionable smart glasses, and browser based AR instead of App based.
I remember raising these concerns at a wearable technology conference some time back and being treated like the camp heretic as a result. No matter. It didn’t take long before I read in the press that my concerns were shared by the best and brightest, and work was progressing to address these specific shortcomings of AR. While the consumer quality smart glasses are still in development, AWE.Media was already addressing the browser based AR concerns that I had – as much as the standards community and platform developers would allow.
AWE has an elegant browser based AR solution that for the longest time appeared to have a market all to itself. They have a superbly crafted user interface and have been working diligently to strengthen their platform’s value. For those who have the time and the requisite technical skills, there are also open-source tools, such as A-Frame and AR.js, but I found no market based competitors in the space.
Until now, that is. Say hello to XR.+. XR.+ is a relatively young, target driven, browser based AR tool. As with AWE.Media, XR.+ creates AR experiences that take place in your browser, meaning you don’t have to download a new App for every different experience. Instead, you load a given URL into your browser and point your device at a target. Voila! The rest is AR magic.
Let’s take a closer look at XR.+ and see what we have to work with.
As with most platforms, XR.+ offers a variety of different plans. Please note that I won’t be calling out which plan a given function is available in during this review, so you’ll need to refer to the options listed below to see what you need.
XR.+ has the base concepts of Scenes and Collections. Each Scene comprises a target, referred to as an AR Pattern, display assets and a URL. A Collection is a group of Scenes. When you first create a Scene you are given a choice of importing a 3D model, an image or a video.
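That data model is easy to picture as a couple of plain records. Here is a minimal sketch of my own (the class and field names are illustrative, not XR.+’s actual API, and the URL is a placeholder):

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    # One AR experience: a target plus the assets overlaid on it.
    name: str
    ar_pattern: str   # the target image, called an "AR Pattern"
    asset: str        # a 3D model, image or video file
    url: str          # the URL the viewer loads in their browser

@dataclass
class Collection:
    # A Collection is simply a group of Scenes.
    name: str
    scenes: list[Scene] = field(default_factory=list)

demo = Collection("Demo", [
    Scene("Chicken", "pattern.png", "chicken.obj",
          "https://example.com/scene/1"),   # hypothetical URL
])
print(len(demo.scenes))  # 1
```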
The models can be in FBX, OBJ or DAE format, all of which are very common and widely supported. The model size is limited to 3 MB for the free account, 6 MB for the Plus account and 12 MB for the Pro account. If you are working with simple 3D models that have been “manually” created, then the size limitations shouldn’t be an issue. However, if you are using models that are generated via Photogrammetry, you’ll need to reduce them significantly before uploading.
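Those caps are easy to check before you even hit the upload dialog. A hypothetical pre-flight check (the plan names and helper are mine, not part of XR.+) might look like:

```python
import os

# Model upload caps per plan, as described above (megabytes).
PLAN_LIMITS_MB = {"free": 3, "plus": 6, "pro": 12}

def fits_plan(path: str, plan: str) -> bool:
    # Compare the model's on-disk size against the plan's cap.
    size_mb = os.path.getsize(path) / (1024 * 1024)
    return size_mb <= PLAN_LIMITS_MB[plan]
```

A raw photogrammetry scan of tens of megabytes will fail all three checks, which is why decimating before upload matters.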
The image formats supported are JPG and PNG. MP4 is the only video format supported. While the meagre selection of formats could be annoying, they are all extremely common and just about any other format can easily and quickly be converted to a supported one. The only problem that I see at this point is that 5 MB is relatively small for a video. As well, many people may not be happy that they have to store their video on the AR tool and would rather reference it on YouTube or Vimeo. This would significantly ease the capacity issue and also allow for additional video features, such as advertisements and localisation.
XR.+ allows you to use both static models and animations in the overlay. Let’s start our experimentation with a 3D model. When you select this option you are given a typical file selection dialog that is looking for the accepted file formats. I chose to use the scan of a grilled chicken that I had recently made.
When the 3D model is selected, you are asked whether or not you want to import the materials used in the creation of the object. Here is where you can start to get a feel for the technical nature of XR.+. Ideally, you would want to hide this complexity from the user, as you can expect that the majority of them probably won’t really know what it means to import materials. Heck, they got a model from the art folks, that’s all they know.
One can also see from the controls on the left, the focus is on functionality not aesthetics.
Once the 3D model is loaded, things get even more complex. My chicken may have looked like a chicken in my Photogrammetry tool, but it was all white after being uploaded. This is because you now have to apply the Color Map yourself. If you don’t know much about 3D modelling, you might ask: what is a Color Map? Again, the technical focus of the tool shows its face. Other tools that I have used (AWE and Sketchpad, for example) load a ZIP file that contains the OBJ file and Color Map, without requiring separate steps.
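The reason a ZIP can remove those steps is that an OBJ’s companion .mtl file already names its Color Map on a `map_Kd` line, so an importer can find the texture by itself. A small sketch of that lookup (my own code, not anything from XR.+):

```python
def color_maps(mtl_text: str) -> list[str]:
    # "map_Kd" lines in an OBJ's .mtl material file name the diffuse
    # texture, i.e. the Color Map a ZIP-based importer picks up
    # automatically instead of asking for a separate upload.
    return [line.split()[-1]
            for line in mtl_text.splitlines()
            if line.strip().startswith("map_Kd")]

mtl = "newmtl chicken\nKd 1.0 1.0 1.0\nmap_Kd chicken_diffuse.jpg\n"
print(color_maps(mtl))  # ['chicken_diffuse.jpg']
```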
Once the Color Map is uploaded, I just need to reposition the chicken and we’ve got a usable 3D model.
XR.+ has numerous settings for the 3D model that you can use to control how the model looks and sounds when displayed.
At this point, however, I am going to take the easy way out and say that explaining each of these settings is beyond the scope of this review. If you would like to know more, I suggest you sign up for a Free account and experiment. Suffice to say, the level of control allowed in XR.+ is a lot more than I have seen in other AR tools. Personally, I would prefer to hide the complexity from the average user and let the aficionados configure to their heart’s content under an “Advanced” tab.
If you would like to use an image as your asset, then you click on the “IMPORT AN IMAGE OR VIDEO” button and select the file that you would like to use.
Once the image is uploaded you have the same positioning controls that you had with the 3D model, with the exception of the Light Scaling.
Uploading a video works just the same as uploading an image. Unfortunately, there is a hidden requirement that isn’t mentioned alongside the format restrictions: the video should be no larger than 512 x 512 pixels. It would be a bit more user friendly to have this requirement visible in the same place the format limitations are shown.
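Getting an arbitrary clip under that cap is a one-line ffmpeg job once you know the target dimensions. Here is a small sketch that computes them; the helper (including the even-dimension rounding most codecs want) is my own, while ffmpeg’s `-vf scale` filter is the real tool you would hand the result to:

```python
def fit_within(width: int, height: int, limit: int = 512) -> tuple[int, int]:
    # Scale down (never up) to fit inside limit x limit, keeping the
    # aspect ratio, using integer math to avoid rounding surprises.
    if width <= limit and height <= limit:
        return width, height
    longest = max(width, height)
    w = width * limit // longest
    h = height * limit // longest
    # Most video codecs require even dimensions; round down to even.
    return w // 2 * 2, h // 2 * 2

w, h = fit_within(1920, 1080)
print(f"ffmpeg -i in.mp4 -vf scale={w}:{h} out.mp4")  # scale=512:288
```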
Once you have a video with the correct size, you see it in the same display as the 3D model and Image, with a few control options as well.
The next step in the process is to edit the Preview. Here you can select the backdrop to be used with the asset image. You can choose a stock photo, upload your own photo, or use what is called the “Skybox”, which is basically an empty grey background. You can also control the X, Y and Z positioning of the Asset relative to the background.
The purpose behind the preview is that it becomes the Thumbnail view that you see on the Scenes summary as well as the display when you use the Scene in a Collection with the Selector option.
The next step is to select your Branding, assuming that your account supports this feature. The Branding determines whose logo is shown when the user scans the target. As Pro is the only account type that supports this, and I can hardly see many professional implementations accepting vendor Branding, it looks like the push is on to “Go Pro”.
To use Branding, you first have to configure a custom Branding under the BRANDINGS tab. Here, you select not only the logo, but also the various colours for the display.
By using the “Select a branding” section, you can determine which branding will be applied to the top left of the screen. The coloured dots represent the various colours associated with the custom Branding; however, their usage is not intuitive. Clicking on the dots has no visible effect, yet makes the system think the configuration has been modified and should be saved. Another example of weak UI focus.
Under the Branding tab you are also able to create buttons. If you do create buttons, they will be displayed only if you tap on the logo once the URL has been loaded. Once they are displayed, they can be hidden again by tapping the “X” at the top right of the display. Unfortunately, there is no indicator to inform the user that the Scene has buttons available to it, so if they aren’t informed through a source external to XR.+, then they’ll never know. As well, there is no ability to control the positioning of the buttons, and the text is forced into uppercase. I think this feature would benefit from a rethink. The buttons are good to have, but there should be some indication that buttons are available, and one should have better control over their placement and appearance.
Additionally, you can replace the Help screen under the Branding tab.
The next step, if you have a Pro account, is to create an AR Pattern, typically referred to as a Target or Marker in the AR universe.
The AR Pattern will actually have been uploaded earlier. Although it is relatively trivial to go back and add an AR Pattern, it would add to the user friendliness if a new AR Pattern could be added directly from the AR Pattern tab in the workflow. XR.+ has a few guidelines on good pattern behaviours to help ensure a good user experience.
Now that you’ve done all the work, it is time to publish it. Here you have numerous configuration options, including the display of social share buttons (without the ability to configure which social share buttons are used, however). It is important to note that if you don’t make the Scene private, it will possibly show up on the XR.+ Scenes summary. When in production, most folks I know would not want their content displayed outside of the specific context it was designed for.
What I find really interesting is that you can take the AR experience and host it on your own system. While this has an obvious administrative overhead associated with it, it is a really nice touch to allow the user to White Label the AR experience.
Under the Publishing Tab there are a set of “creative tools” that can be enabled. With the creative tools you can allow the user to take snapshots of the display as well as customise the colouring of the various materials used in a 3D Model. If your scene is animated, you will also be allowed to enable the animations player, which lets the user choose which of the various animations to display.
Below, is an example of the color editor being enabled.
Below is an example of the animations player being enabled.
Now that everything has been done, how well does XR.+ work? It is important to note that if you are using an iPhone or iPad, Apple forces you to use Safari. Although there are open standards that Apple could support if they wanted to, they choose to use standards that are currently only supported on their platform and are not yet available on Chrome, Firefox or other browsers. AWE.Media has the same restriction, so this is not an issue with XR.+.
The AR Pattern is usually quickly scanned if you 1) use the correct URL and 2) have no occlusion. When I experimented with scanning my test Scenes, the AR Pattern was quickly recognized even when rotated or viewed at an angle. However, tracking was very quickly lost if even a slight portion of the pattern was not visible. This means that you either have to maintain an uncomfortable and unnatural distance from the AR Pattern, or you have to keep your hands very steady.
Additionally, there was no ability to rotate or zoom the 3D model once it was displayed. For the video, there are no controls to allow you to stop, start or rewind.
Viewing in VR
In addition to viewing a Scene on your mobile phone or pad, you can also view any Scene in a VR headset. VR is disabled by default, however can be enabled for any Scene by toggling the VR button at the top of the editing screen. Once VR is enabled, you will be able to set the location and rotation of the Overlay under the Edit Model tab.
I experimented using a DayDream headset and compatible phone and am not really seeing the value in this. It simply displays the Overlay in VR without allowing for any interaction. Maybe I missed something?
XR.+ offers the ability to create a Collection, which is a number of Scenes grouped together. A Collection can be driven by using a Selector or by tracking the markers of the Scenes contained in the collection.
If you use a Selector, upon scanning the AR Pattern you will be shown a series of Overlays as defined by their individual Previews. Simply tap on any given overlay to see it displayed.
If you choose to use Multiple Markers, once the URL is loaded you will get a selection of AR Patterns to scan. Personally, and to make XR.+ more competitive with existing AR CMS tools, I think it would be better if the determination of which marker to scan were invisible to the user by default. The system should be able to recognise that it has scanned an AR Pattern and automatically select which one from the Collection. This is pretty much the way the rest of the world works….
One of the more interesting and Enterprise focused features of XR.+ is the ability to create teams to work on projects. Very few AR tools currently support the concept of teams.
To use Teams, simply have your team members create accounts on XR.+ and give you their user ID. You then invite them to join the Team via the user ID.
What’s to like
- XR.+ is very fast and has the basic functionality that you might want from this kind of tool.
- The layout is very easy to understand and master
- It is great that it supports teams
- It has the unique feature of allowing you to host the experience on your own servers
- Although it could be overwhelming for the beginner, the level of control available for 3D models is very good
- Functionally, XR.+ appears to be very stable
What’s to not like
- The User Interface gives the impression of a focus on function, not on the user experience
- The Help facility is a bit thin
- No end user controls for 3D Models or Videos
- No ability to add text or buttons to an overlay
- No ability to add multiple assets to an overlay
- OBJ uploads color map separate from Model instead of allowing everything to be put in a single ZIP file
- Only available in English