Audi: Augmented Reality
Best practice interaction design
In some cases, all it takes is a few best-practice heuristic rules to improve a product for users.
Audi Vision is an augmented reality mobile phone app. The idea was that the app could use the phone's camera to recognise Audi assets – such as photos within brochures – and use the recognised image as a trigger to download additional content.
The existing app was functional but not elegant. I organised discussions with the technology provider to understand the capabilities and limitations of their solution – and then reconsidered the existing user experience within the app.
I found opportunities to significantly streamline the user journey and interaction design. Instead of requiring users to select and control their journey through interface elements, the new solution used the capabilities of the image recognition technology to detect active augmented reality content automatically.
The existing Audi Vision app allowed users to point their device camera at Audi brochures; when an image trigger within a brochure was recognised, the app displayed supplementary content.
Audi UK wanted to extend the app so that it could also recognise a wide variety of periodically changing augmented reality content within other ‘real world’ assets, such as newspapers, billboards and so on.
At the same time, I identified opportunities for improving the user experience and overall usability of the product – and these were taken into account within the scope of the work.
- Consultant user experience architect: determining technical product limitations
- Interaction designer / wireframer
The project process
I began by reviewing the new requirements and assessing the user journeys and experiences present within the existing product.
After assessing the scope of work, I had a number of in-depth conversations with the supplier of the mobile augmented reality image recognition solution, digging in to understand what the capabilities and limitations of the technology were. The most basic technical issue was the wide variation in file size across the set of images which comprised the ‘triggers’ for augmented reality content. Related to this, two basic user issues diminished the overall smoothness of the experience:
- users were required to browse a list and manually select the ‘real world’ product which they were viewing so that the trigger image library could be loaded into application memory
- image libraries could be quite large, and were pre-loaded into storage upon start-up of the app – regardless of whether the user was interested in, or had access to, all of the assets enhanced with augmented reality content
I produced new journeys which eliminated both of these problems. In the revised app, the local library of trigger images would be diff-checked against a web database on launch of the app; if new assets were available, the user would be informed, asked whether they wanted to download them now, and given an estimated download time. The trigger images were then essentially used as content identifiers: when users viewed an asset containing an enabled image, they were either taken directly to a download (i.e. where a billboard carried only one active image trigger), or asked whether they wanted to download all of the triggers and content items present within that brochure.
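The launch-time diff-check described above can be sketched as follows. This is a minimal illustrative model only – the function names, manifest format and bandwidth figure are my own assumptions, not the actual Audi Vision implementation, which assumed a server-side manifest mapping each trigger ID to a version tag and file size:

```python
# Hypothetical sketch of the revised trigger-library sync.
# A "manifest" here is a dict of trigger ID -> version tag.

def diff_check(local_manifest, remote_manifest):
    """Return trigger IDs that are new or updated on the server."""
    return [
        trigger_id
        for trigger_id, version in remote_manifest.items()
        if local_manifest.get(trigger_id) != version
    ]

def estimate_download_seconds(trigger_ids, sizes_bytes, bandwidth_bps):
    """Rough download-time estimate to show the user before fetching."""
    total = sum(sizes_bytes[t] for t in trigger_ids)
    return total / bandwidth_bps

# Example: the local copy has one stale trigger and is missing one.
local = {"brochure_a4": "v1", "billboard_q3": "v2"}
remote = {"brochure_a4": "v2", "billboard_q3": "v2", "newspaper_ad": "v1"}

stale = diff_check(local, remote)
# stale -> ["brochure_a4", "newspaper_ad"]

sizes = {"brochure_a4": 2_000_000, "newspaper_ad": 1_000_000}
seconds = estimate_download_seconds(stale, sizes, bandwidth_bps=1_000_000)
# seconds -> 3.0
```

The point of the sketch is the shape of the interaction: only the delta is fetched, and only after the user has seen the cost and agreed – rather than pre-loading every library at start-up.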
Outcomes / results
Overall, quick guerrilla testing of the prototypes indicated that the experience was greatly improved.
A good example of how heuristically led rules – accompanied by a deeper dive into technical capabilities – can help streamline experiences for users.