It was only a few short months ago that Apple introduced ARKit at WWDC 2017. Although Apple's claim that ARKit turns iOS into “the largest AR platform in the world” is a pile of manure, it met with little critical backlash. Since the announcement, thousands of developers have downloaded the ARKit Beta release and started seeing what it could do. The results are pretty impressive, especially when you consider that this is a Beta release of 1.0 software.
If we roll back a few years to Google’s release of Tango, there’s a world of difference in the experiences:
- ARKit Beta software was almost immediately available for any iPhone 6s or later, along with some iPads, but Tango required a phone with a very specific configuration. At the Tango announcement, there was no phone; instead, there was a Tango development Pad that you could buy from Google. Three years later, there are only two phones on sale that support the Tango architecture, and the development Pad is no longer sold.
- When Tango was announced, there was no significant reaction from the technical community – certainly the unique hardware requirements had something to do with this, but maybe it is also because Tango was announced years before Pokémon Go and HoloLens arrived, both of which seem to have done a great deal to awaken people to the potential of Augmented Reality. Google’s seemingly relaxed approach to pushing Tango adoption doesn’t help either.
- Tango is a complete architectural specification for phone manufacturers. ARKit is a library of routines that runs on millions of already deployed devices.
- A lot of people have done interesting things with Tango, but their work was quickly dwarfed by the attention the Augmented Reality community gave to ARKit and by the stream of ARKit-based demos that continues to grow.
Add to this the widely held expectation that the iPhone 8 will have some AR specific capabilities and it is very easy to get the impression that ARKit & the coming iPhone 8 just ate Tango’s lunch.
But is that really the case? It could be argued that the technology behind ARKit, Visual Inertial Odometry (VIO), could just as easily be implemented on Android as on iOS. While this is theoretically correct, it is technically much more daunting. Apple has a very high performance architecture that ensures a high quality experience, and it comes at a relatively high economic price. Android serves a much broader market, with phones that start as low as fifty bucks and go up to $10,000 or more. For fifty bucks you’re not going to get the performance of an iPhone, and the technology behind ARKit requires that performance. So the first step would be to separate the 1,000+ Android phone models into compatible and non-compatible categories. Who’s going to do that? Vendors would be left to decide for themselves, and because truth is always relative and the free market is often a race to the bottom, you could expect a mess. Maybe Google could provide an objective benchmarking tool – if they do anything at all.
There is also the argument that mobile application developers usually try to target both Android and iOS. Because one of the holy grails of multi-platform development is application consistency, developers are careful about which technologies they support. Imagine a multi-platform app that provides different experiences on iPhone and Android, where the enhanced experience is then only available on some iOS devices. That is not a “good thing”: it will lead to misunderstandings, complaints and, eventually, some unhappy customers.
For software developers, and eventually consumers, platform consistency is very important. ARKit breaks this and forces a developer to decide if the feature difference is significant enough to be worth the bother. While iOS-only apps may race to support ARKit, I expect most multi-platform developers will take a while to become enthusiastic.
Finally, there is the question of “Where is the added value?” There are many great demo apps that show off ARKit capabilities, but a demo app isn’t a finished product. To go from a demo app to a finished product you have to show sufficient differentiation to make it worth your time to add ARKit specific capabilities and live with the associated headaches in development, testing and customer satisfaction. Beyond spatial measurement and plane detection (a derivative of spatial measurement), what can you do with ARKit that is better than other existing technologies?
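To make those two capabilities concrete, here is a minimal sketch of plane detection and spatial measurement with ARKit in Swift. It assumes the release-era `ARWorldTrackingConfiguration` class (the early beta used a slightly different name) and a bare `ARSCNView`; the controller and variable names are my own, not Apple's.

```swift
import ARKit

// A minimal sketch: run a world-tracking session with horizontal plane
// detection enabled, and log the size of each plane ARKit finds.
class PlaneDetectionController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal  // look for flat surfaces
        sceneView.delegate = self
        sceneView.session.run(configuration)
    }

    // Called by ARKit when a node is added for a newly detected anchor.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        // The anchor's extent is the spatial measurement, in metres.
        print("Detected plane: \(plane.extent.x) m × \(plane.extent.z) m")
    }
}
```

That is essentially the whole API surface for the two headline features: a configuration flag turns plane detection on, and the delegate callback hands you measured anchors as tracking refines them.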
With ARKit, Apple has certainly put themselves in the Augmented Reality catbird seat. But what good is being in the catbird seat if the market doesn’t care? If and when spatial measurement via Augmented Reality becomes a market requirement for Pads and Smartphones, Apple will have an undeniable advantage. However, while Augmented Reality is picking up incredible momentum in the Enterprise market, it is still just an interesting toy for most consumers, and consumers are where the big money is.
I imagine that just the buzz from ARKit and iPhone 8 will give Apple a nice kick in revenue and market share. But the big differentiator will be in the apps that uniquely leverage ARKit functionality and aside from selling furniture and jewellery, I am not seeing much. Will consumers buy a new iPhone because Pokémon Go plays better on it? I imagine some will, as gaming is a huge industry with lots of expensive toys, but not many. What compelling consumer apps do you see that would uniquely leverage ARKit capabilities?