Author: Edward Caulfield
Release Date: January 27, 2017

 

Introduction

The purpose of this review is to provide a functional view of the various CMS tools available for Augmented Reality that don’t require programming skills to obtain a polished finished product – referred to as Easy AR CMS in this document.  Participants in this review were selected by the following criteria:

  • All configuration must be browser based. With the exception of the AR Scanners themselves, no products requiring downloadable tools were reviewed.
  • They have a product that is currently available. No “dreamware”, no beta software.

Vendors Reviewed

The vendors reviewed in this document are Aurasma, Layar, PixLive, TARTT CMS, Wikitude Studio and ZapWorks.

No vendors were ambushed in the making of this Product Review.  All vendors created accounts for me with the knowledge that the information gathered would be published.  All vendors were given a two week period prior to publication to review the information in this document for technical accuracy and journalistic fairness.  Although almost everyone managed to respond in time for release, I am sure that changes will need to be made down the line and I will accept corrections and product updates for as long as this Product Review is posted.

Having been in Customer Service for over 30 years, I admit to a special spot in my heart for high-quality support. Even though everyone knew they were dealing with a Product Review, the quality of support when I reported bugs or had questions about the tools ran the gamut: from completely ignoring my emails, to not reading the email fully before responding, to suggesting that I go to their user forum to open a bug report, to simply providing technically incorrect responses. I will take this opportunity to award a five-star rating to Takondi (TARTT) for providing the best support and interaction by far.

Although there are over a dozen vendors positioning themselves in the Easy AR CMS market, only six responded to the opportunity to participate in this review. If your favorite tool is not part of this review, please let me know.  I will be updating the review with new information over time.

 

Test Configuration

All products were tested with Chrome version 55. OS X Sierra 10.12.1 and Windows 10 Pro were used interchangeably for the configuration work.  Scanning of the Markers/Triggers/Targets/Pages was done on an iPhone 5 and an iPad 2, both running iOS 10.2, and an HTC Desire 320 running Android version 4.4.2.  These platforms were not chosen for any specific technical reason – they are simply the tools that I had at my disposal for testing.

Object recognition was not included in this review.  Only printed and on-screen Triggers were tested.

While only the Easy AR CMS functionality was tested, all of the tools reviewed have extended programming capabilities, which were not covered.

Trigger scanning has its challenges.  Images which are used as Triggers should ideally have non-symmetrical structure and plenty of variation in color.  Wikitude has a very good summary of best practices for images.

The testing consisted of implementing the basic functionality of the tools, as far as my computing environment would allow.  The Triggers used were:

Cat, Dog, Pumpkin, City, Tree 1, Tree 2 and Fun.

Because of the overwhelming number of variations available, I did not thoroughly test every possible function.  With the exception of the most common functions, I assumed that if a function was present then it worked.  I also did not try to find the limits of each application.  It stands to reason that if you have 1,000 Triggers and many of them are very similar, your Trigger scanning times will increase and recognition accuracy may suffer.

The purpose of this review is not to find bugs but rather to understand each product’s functions and features, as well as its relative strengths and weaknesses.  Bugs are only mentioned in this review when they are glaring.

Overlays, Events, Actions

Most tools held true to the concept of Overlays that react to an Event with a given Action.  The challenge is that these metaphors are very often used inconsistently.  One vendor, for example, has an Overlay that is called a Button.  This Button can have text or a background image, respond to a Tap Event and launch an Action, such as “Open URL”.  Others have an Overlay called “Open Link” that allows for a background image or text and responds to Tap Events; the fact that it is a Button is implicit in its behavior.

Looked at that way, just about everything could potentially be considered a Button, and some vendors see the world exactly like that.  They differentiate their Buttons based upon data entry masks and edit checking controls.  Because of this, I am considering everything a Button with a related Data Type, which responds to Events with Actions.  For example, to play a video you would have a Button with a Video Data Type that responds to a Tap Event with the Action of Play Video.
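
To make that working model concrete, here is a minimal sketch in Python.  The class and the example values are purely illustrative and do not correspond to any vendor’s actual API.

    # Illustrative only: my working model of a Button with a Data Type
    # that responds to an Event by triggering an Action.
    from dataclasses import dataclass

    @dataclass
    class Button:
        data_type: str   # e.g. "Video", "Image", "Text"
        event: str       # e.g. "Tap"
        action: str      # e.g. "Play Video", "Open URL"

    # A video Overlay, in this model: a Button with a Video Data Type
    # that responds to a Tap Event with the Action "Play Video".
    play_video = Button(data_type="Video", event="Tap", action="Play Video")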

When evaluating Actions, some vendors indicated that they could perform almost any action with the help of an appropriately formatted URL, which is fair enough.  Because of this, I am only documenting Actions which have a specific dialog and data entry validation.  For example, “Send an Email” is only considered an Action when the dialog has edit checked fields that are relevant to sending emails.

 

Summary of Results

The Easy AR CMS market is in a very tumultuous state right now.  Although I reviewed six vendors, there were just as many getting ready to come to market.  The market is young with a relatively low investment hurdle, and gives the impression that everyone is struggling to find ways to make themselves unique and valuable, while at the same time staying true to the “zero learning curve” goal.

Every company reviewed has shown a firm grasp of the need for elegance and simplicity in their user interface.  However, many have subtle interface problems that show the pressure is on to deliver good scanning rates and other platform fundamentals more than to offer a well-refined user experience.  I expect that this will take care of itself in time.

While almost everybody offered the basic Overlays of Images, Videos and 3D models, some were very thin on Events and Actions.  About half allowed for replacing Trigger images, and many would convert the City and Tree Triggers from Portrait to Landscape, which I found very annoying.  While there is a technical explanation for this, it apparently boils down to the tool ignoring the JPG orientation tag.  The good folks from Wikitude pointed me to Gimp to correct the image rotation, and it worked perfectly for me.
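
For anyone who would rather script the fix than use Gimp, here is a minimal sketch using the Python Pillow library.  This is simply my own workaround, not something any of the vendors provide, and the file names are placeholders.  It bakes the JPG orientation tag into the pixel data so that tools which ignore the tag still show the Trigger the right way up.

    # Apply the EXIF orientation tag to the pixels and re-save the JPG,
    # so that tools ignoring the tag still display the Trigger correctly.
    from PIL import Image, ImageOps

    img = Image.open("city_trigger.jpg")              # placeholder file name
    img = ImageOps.exif_transpose(img)                # rotate pixels per the tag
    img.save("city_trigger_upright.jpg", quality=95)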

The ability to resize and position Overlays was common; however, the ability to rotate the Overlay or control its opacity was less common.  One can argue that this can be done outside the tool anyway, but I think it speeds things up to be able to do it in the tool.

When it comes to video support, many tools only supported the MP4 format.  There are numerous conversion tools available, so migrating your video to MP4 shouldn’t be a big deal, but I don’t see the necessity.  I would expect that tools would either provide common codec options or simply convert the video to MP4 once it was uploaded, as some tools actually do.
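
As an illustration of how little effort the conversion takes, here is a minimal sketch that drives the free ffmpeg tool from Python to produce an MP4 with widely supported codecs.  The file names are placeholders and this is my own approach, not anything the vendors offer.

    # Convert a source video to MP4 (H.264 video, AAC audio) with ffmpeg.
    import subprocess

    subprocess.run(
        ["ffmpeg", "-i", "overlay_source.mov",        # placeholder input file
         "-c:v", "libx264", "-c:a", "aac",            # widely supported codecs
         "overlay.mp4"],
        check=True,                                   # raise if ffmpeg fails
    )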

Working with tools that offered 3D modelling, I consistently had issues loading a very simple 3D model of my own creation.  Limited format choice and poor error handling were also very common.  TARTT and Wikitude only accept the WT3 format for models, which is not supported by numerous modelling applications.  To address this, Wikitude Studio provides a free-of-charge encoder that will translate FBX files to WT3 files.  If your modelling application won’t save to the FBX format, there is fortunately a free-of-charge FBX converter provided by Autodesk that will convert most common formats to FBX.

Unfortunately, when I attempted to create a very simple 3D model in Autodesk I had to first export it to DXF and then convert to FBX.  When I attempted to import this FBX file into the Wikitude 3D Encoder, I was told that the file was empty.  My frustration with 3D modelling is something that I am experiencing on all of the tools that I am reviewing.  I suspect that the primary flaw lies in my lack of expertise in 3D; however, I would really expect that this shouldn’t be so complex.

When working with uploaded media, some tools implemented “hard” size limits and others did not.  However, the real size limit is not defined by the tool but by the experience that you want your customers to have.  A 200MB video delivered over 3G makes no sense: at a typical 3G throughput of a few megabits per second it takes on the order of ten minutes just to cache, and it can make a serious dent in the user’s mobile data plan.

While every tool eventually allowed you to enter a URL for a button function, not one had a “Test” button to ensure that the URL is properly formatted and the link is valid, unless it was picking data up from the URL, such as with YouTube, Vimeo or SoundCloud links.  Such an easy thing to do, yet not done by anyone.  It would also be an interesting feature to do periodic link checking on the full Overlay database and provide administrative reports on dead links.

As well, not a single tool actually tested the URL upon scanning or clicking to ensure that it wasn’t dead.  This was always left to the target browser.
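
To show how little work such a check would be, here is a minimal sketch of the kind of dead-link test a CMS could run when a URL is entered, or periodically over the full Overlay database.  It is only an illustration of the idea, not any vendor’s feature.

    # Report whether a stored Overlay URL still resolves to a live page.
    import requests

    def link_is_alive(url, timeout=5):
        try:
            response = requests.head(url, allow_redirects=True, timeout=timeout)
            if response.status_code == 405:           # some servers reject HEAD
                response = requests.get(url, stream=True, timeout=timeout)
            return response.status_code < 400
        except requests.RequestException:
            return False

    print(link_is_alive("https://www.wikitude.com/"))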

Additionally, in most computing environments a button should give some indication that it has been clicked.  None of the tools reviewed did this.  Thus, if you click a button and it does nothing, you don’t know if the problem is that the click wasn’t registered or the action failed without an error message.

For Image Carousels, no tool allowed for an automatic timed transition between images. For Audio, it is interesting that no vendor supports the ability to loop.

 

Scanning

Scanners had their own unique set of issues.  Some scanners required that you point at the Target before you initiate the scan; otherwise they would time out within a few seconds.  I would expect that this could be controlled when a White Label scanner is created; however, I did not test this, so beware and test before you buy!  Ideally, you want your scanner to never time out.

A more subtle issue is that many scanners required that the scanner app either be restarted or refreshed after you change content in the AR experience.  In a test environment, this is more of an annoyance than a problem.  However, with a published campaign this can be an issue because it means that either you cannot change the campaign once it has been published or you run the risk of some users getting the “old” campaign and others the “new” campaign.  This is probably less of an issue on Android, as the application ends as a normal part of the application life cycle.  However, with iOS the application never ends unless it is manually killed by the user – so how is the user to know that they need to kill and restart the scanner app to get the current experience?  While not a critical issue, it just strikes me as a customer unfriendly approach.

 

White Label Scanner & SDK

All vendors offer the option of re-branding their scanner and tying it to a campaign or set of campaigns.  Everybody also offers the option of an SDK to embed the scanning functionality into a customer specific application.

 

Localization

At this point, just about everything done for AR marketing appears to be tied to a local market. While much AR content is graphical in nature, text and audio material is likely to be present if there is a “call to action”, as is common in marketing.  This means that if you are in Munich you can expect the AR content to display in German, which is okay if you are comfortable excluding your tourist market.

Ideally, any text or audio displayed as part of the AR experience would be tied to the language of the device being used to view the content.  That is, if my smart phone is configured for Italian, then the AR content is also in Italian regardless of where I am, unless I tell it to do otherwise.  An additional interesting feature would be the ability to tie content to a given date and time, so that a short-lived promotion could easily be placed on top of an existing, long-term display without going through the hassle and risk associated with duplicating the content.
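
As a sketch of what I have in mind, the selection logic itself is trivial.  The data layout and field names below are purely illustrative, not any vendor’s schema.

    # Illustrative only: pick the content variant that matches the device
    # language and, optionally, falls inside a dated promotion window.
    from datetime import datetime

    def pick_variant(variants, device_language, now=None):
        now = now or datetime.now()
        active = [
            v for v in variants
            if (v.get("starts") is None or v["starts"] <= now)
            and (v.get("ends") is None or now <= v["ends"])
        ]
        for v in active:
            if v["language"] == device_language:
                return v
        return active[0] if active else None   # fall back to the first active variant

    variants = [
        {"language": "de", "text": "Jetzt kaufen!"},
        {"language": "it", "text": "Compra ora!"},
    ]
    print(pick_variant(variants, "it")["text"])   # -> Compra ora!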

 

Robust Reporting

While some vendors weren’t comfortable giving me access to a fully enabled system so that I could review their reporting, others simply didn’t seem to have very much by way of reporting options available at all.  What would the customer want to know beyond the obvious histogram of “how many scans”?  How about:

  • Scan failure information and crash reporting
  • Overlay statistics (how many times specific buttons were pressed, etc.)
  • Average scan times
  • Time of Day, Day of Week, Regional Holiday detection
  • Delays for loading Overlays
  • Geographic location of scan
  • Network type and quality during scan
  • Scanner OS details
  • Language details
  • etc…

Additionally, I would think for Marketing applications it would be desirable to create a mechanism that allows you to determine how frequently a scan converts to revenue or another desired outcome.

 

Scoring

When I first started working on this review, my intention was to rate each product according to a set of weighted criteria and rank them based upon total points.  After working with the tools, however, I dropped that approach.  I think what is going to matter for most people is functionality.  Product quality is a given.  Documentation and support can typically be assumed to be satisfactory.  Functionality, however, is King.  If the tool cannot do what needs to be done, then it’s automatically out of consideration.

Which tool is best for you?  It depends upon what you need.  If your organization warms to big brand names, then Aurasma, a subsidiary of HP, is probably high on your list.  If you need to recognize numerous very similar Targets and can accept a small logo on the page, then ZapWorks is probably better for you.  If you want “Scratch & Win” or “360° Panoramas”, animation with 3D models, user manipulation of 3D models, or Beacons & GPS Points, then PixLive is a good tool to look at.  If you work in teams or want to localize your campaigns, then TARTT is where you should start (sorry, I couldn’t help myself…).

I can only emphasize that whatever you have by way of needs, the best platform for you may change in a month or a year.  Everybody is constantly improving their tools and working hard to be the best in their market.  To see a detailed analysis of each product, click on the product name in the table below.  A Feature Comparison can be found in this table.

Products and Unique Strengths

Aurasma
  • Backed by major technology company
  • Robust set of Actions
  • Very good on-line help
  • Unique editor features

Layar
  • Large selection of Overlay items

PixLive
  • Highly distinctive selection of Overlay items
  • Animated 3D models
  • Allows manipulation of 3D models in Scanner
  • Comprehensive AR CMS solution with Beacons and GPS Points
  • Good error handling and edit checking

TARTT CMS
  • The only “Team Ready” tool in this review; it recognizes and blocks simultaneous content editing
  • Good editor UI
  • Animated 3D models
  • Only tool that allows for in-tool content localization (English, German, French or Italian) based upon device language
  • Roll-Back and Audit Trail capabilities

Wikitude Studio
  • Very good Target recognition
  • Good on-line documentation
  • Only tool supporting Smart Glasses

ZapWorks
  • “Breaks the mold” with Zapcodes: image scanning quality becomes a non-issue and sharing AR Experiences digitally is very easy
  • Super fast and flawless scanning
  • Good error handling and edit checking
  • Most feature-rich scanner tested
  • Good reporting facilities

I would love to have your ideas on how this review could be better and welcome your comments.

 

Thoughts on AR/MR

While everyone I speak with indicates that the AR/MR market is doing well right now, there are two key issues which I believe are going to come home to roost in the future.  These issues, if not effectively addressed, will eventually slow market growth and opportunity for everyone.

The need for vendor-independent standards – the vendor-specific AR Scanner must go. Even though most marketing companies are currently adamant that they want to maintain control of the AR experience for their products, eventually the dam has to burst.  There will come a point in time where, in order to use AR based marketing applications, a customer will have to sort through dozens of different AR Scanner apps to find the one for the store they are in at the time.  While AR is still a novelty, customers will be willing to fish out that special application to show their friends this cool new technology or to enjoy the spectacle of 3D shoes floating on their Smart Phone.  Eventually, however, the gloss wears off and customers will demand that AR experiences run flawlessly under their everyday browser – be it Chrome, Firefox or whatever…

The IEEE has a web page with over 100 standards related to Augmented Reality; however, it appears to be a few years old, and I am hard pressed to find any standards that would be used for Recognition, Browser or Display Device (i.e. Smart Glass) APIs.

Even though a de facto standard exists by way of the Android Wear API, which is used to interface applications to Android based Smart Glasses, I cannot see Microsoft or Apple rapidly warming to this API.

The mobile world serves as a good indication of what to expect for AR/MR. iOS and Android capture the lion’s share of the app market. Then, after a very large gap in market share, come Microsoft devices.  After this is a long trail of wannabe platforms with microscopic market share.  There is little room or tolerance in the Smartphone market for more than two deployment platforms, and there is no logical reason for the world to treat AR Scanners and Smart Glasses any differently.  For the AR/MR application base to grow substantially, we need development tools which bind to common recognition and display standards.  We also need a way to bring these into our favorite web browser.

It looks like the Open Geospatial Consortium has a start on this with ARML 2.0. And while not exactly AR, WebVR is developing the very important browser interface technology.  The most focused organization that I have found for developing standards for AR is AR Community, which has “Open and Interoperable Augmented Reality Experience” as its mandate.

Smart Glass Support – I can hear the raspberries coming already. Yes, you are right – it is very early in the game to be supporting Smart Glasses.  They are too expensive, they are too ugly, they have too many problems, they are primarily for industry, bla, bla, bla.  This is all true for today.  However, as any student of technology knows, what is true for today isn’t necessarily true for tomorrow.

There is not a person I talk to who doesn’t agree that, long term, Smart Glasses will significantly challenge Smart Phone sales.  It is just a matter of when, not if, and Vuzix just fired the opening salvo with the introduction of its M3000. So the question is, when do you start preparing for the obvious future?  I certainly wouldn’t bet the whole farm on the project, but I think it would be wise for each AR CMS vendor to target at least two platforms – the Android Wear API and whoever else you want to bet on – and get production-quality interfaces rolling.  The future always comes sooner than anyone thinks.