
Creating “Extended Reality” Experiences for the Web Just got Easier with Google’s WebXR API Release

Google has released an upgraded WebXR application programming interface (API) for its Chrome browser, giving developers the ability to create a wide range of “Extended Reality” (XR) experiences on mobile devices and desktops.

What is WebXR?

WebXR is an API that allows developers to create XR experiences. “XR” is a catch-all term spanning augmented reality (AR), virtual reality (VR) and newer immersive technologies.

The initial version, WebVR, was announced in 2016 with the aim of bringing VR content to the web through a wide range of headsets. The standard evolved for roughly a year before the Chrome team said the API was being reworked.

The updated version was announced in February as “WebXR” and supports XR, not just VR. Developers can make web apps that take advantage of ARCore on Android and Apple’s ARKit on iOS. An upgraded version of the API is available for Android on the Chrome 67 beta, which also comes with numerous other upgrades.


WebXR is now available as an “origin trial” – a mechanism that lets developers test an experimental Chrome feature with real users on their own sites – meaning it is moving closer to wide roll-out.
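For sites taking part in the trial, the first step is simply detecting whether the browser exposes the API at all. A minimal sketch, assuming the current WebXR Device API shape (`navigator.xr`), which may differ from the origin-trial version described here:

```javascript
// Minimal sketch of feature-detecting WebXR before requesting a session.
// `navigator.xr` is only defined in browsers shipping the WebXR Device API.
function xrSupportMessage(nav) {
  // Pure helper so the check can be exercised outside a browser: accepts
  // any object that may or may not expose an `xr` member.
  return nav && 'xr' in nav ? 'WebXR available' : 'WebXR not available';
}

if (typeof navigator !== 'undefined') {
  console.log(xrSupportMessage(navigator));
  // A browser would then confirm the session mode asynchronously, e.g.:
  // navigator.xr.isSessionSupported('immersive-ar').then(ok => { /* ... */ });
}
```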


For developers, this means the API’s pre-built interfaces can be used to add XR features to web applications without building device-specific plumbing from scratch.

Demand for XR continues to climb, with momentous growth predicted over the next few years, as reported by IDC. The data specialists anticipate worldwide spending on AR and VR to reach $17.8 billion in 2018 and to grow 98.8 percent annually through 2021.

WebXR brings this growing market segment to the big stage of the web, with support from both Chrome and Mozilla. Mozilla has already integrated WebVR into its widely used browser, Firefox.

What Exactly can the API Do?

Firstly, it is important to note that whilst support for WebXR is increasing, it is still early days. The official specification remains very much a living document, which states: “The version of the WebXR Device API represented in this document is incomplete and may change at any time”.

Despite this, the current version of the API possesses some powerful features, including:

* Canvas rendering context

* Coordinate system

* XR device interface, representing an XR hardware component

Using these, developers can consolidate layers and coordinate spaces into meaningful XR experiences rendered onto the canvas, making use of a device’s inherent hardware capabilities.
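A hedged sketch of how those pieces fit together, assuming the current WebXR Device API names (`requestSession`, `XRWebGLLayer`, `requestReferenceSpace`), which have changed since the origin-trial version; browser-only, so the function is defined but not invoked here:

```javascript
// Sketch: an XR session, a WebGL-backed layer wrapping the canvas rendering
// context, and a reference space supplying the coordinate system.
async function startImmersiveSession(canvas) {
  const session = await navigator.xr.requestSession('immersive-vr');

  // Canvas rendering context: make the WebGL context XR-compatible and
  // hand it to the session as its base layer.
  const gl = canvas.getContext('webgl', { xrCompatible: true });
  session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });

  // Coordinate system: viewer poses are reported relative to this space.
  const refSpace = await session.requestReferenceSpace('local');

  // XR device interface: each frame exposes the viewer's pose, with one
  // view per eye in VR.
  session.requestAnimationFrame(function onFrame(time, frame) {
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      // Render each entry of pose.views from its transform here.
    }
    session.requestAnimationFrame(onFrame);
  });
  return session;
}
```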

As mentioned, given its recent debut, the API is probably not yet fit for use in production applications at this point in time.

However, with the rate of growth in this area, and the use cases developers can build for businesses increasingly hungry to apply them, real-world applications may not be far away.

(IDC anticipates that the largest of the commercial sectors using XR in 2018 will be distribution and services at $4.1 billion, led by the retail, transportation, and professional services industries. The second largest sector will be manufacturing and resources at $3.2 billion, with balanced spending across the process manufacturing, construction, and discrete manufacturing industries.)

Where can I Access the API?

Concrete examples of WebXR in action can be seen here: https://immersive-web.github.io/webxr-samples and the API specification here: https://immersive-web.github.io/webxr/

What else is New?

Sensor data is used in many native applications to enable experiences like immersive gaming, fitness tracking, and XR.

This data is now also available to web applications via the Generic Sensor API, which consists of a base sensor interface with a set of concrete sensor classes built on top.
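The shared shape of those classes can be sketched as follows, using the real `Accelerometer` constructor; the magnitude helper is an illustrative addition, and the browser-only parts are guarded so the sketch degrades gracefully elsewhere:

```javascript
// Every concrete sensor class (Accelerometer, Gyroscope, ...) follows the
// same base-Sensor pattern: construct with options, listen for 'reading'
// and 'error' events, then call start().
function readingMagnitude(reading) {
  // Pure helper: overall magnitude of an { x, y, z } sensor reading.
  return Math.sqrt(reading.x ** 2 + reading.y ** 2 + reading.z ** 2);
}

if (typeof Accelerometer !== 'undefined') {
  const sensor = new Accelerometer({ frequency: 60 }); // 60 samples/second
  sensor.addEventListener('reading', () => {
    // Accelerometer exposes its latest sample as .x/.y/.z properties.
    console.log('acceleration (m/s^2):', readingMagnitude(sensor));
  });
  sensor.addEventListener('error', (e) => console.error(e.error.name));
  sensor.start();
}
```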

Google provided links to the sensor specs and examples of how they might be used on its Chromium blog, with the details as follows.

Accelerometer: Use the motion of the device to move around in a 3D video.

Gyroscope: Use the orientation of the device to implement a table-top maze.

Orientation Sensor: This is what’s called a fusion sensor, meaning it combines readings from two or more sensors, in this case the accelerometer and the gyroscope. Whereas a maze implemented using only the gyroscope might only move the location marker in two dimensions, one implemented with the orientation sensor could require the user to physically turn the device to turn a corner.

Motion Sensors: This is a fusion sensor that includes a magnetometer as well as the accelerometer and the gyroscope. The most obvious use case for this is a virtual compass.
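As an illustration of the compass idea, a heading can be derived from the `[x, y, z, w]` quaternion that `AbsoluteOrientationSensor` exposes. The yaw formula below assumes one common axis convention; device coordinate frames vary, so treat it as a sketch rather than a portable implementation:

```javascript
// Hedged sketch: derive a compass-style heading (0-360 degrees) from an
// orientation quaternion.
function headingDegrees([x, y, z, w]) {
  // Yaw about the z axis; this axis convention is an assumption, and real
  // code should verify it against the platform's coordinate frame.
  const yaw = Math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z));
  // Negate so the heading increases clockwise, then normalise to 0-360.
  return ((-yaw * 180) / Math.PI + 360) % 360;
}

if (typeof AbsoluteOrientationSensor !== 'undefined') {
  const sensor = new AbsoluteOrientationSensor({ frequency: 10 });
  sensor.addEventListener('reading', () => {
    console.log('heading:', headingDegrees(sensor.quaternion).toFixed(0));
  });
  sensor.start();
}
```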

 
This article is from the CBROnline archive: some formatting and images may not be present.

CBR Staff Writer

CBR Online legacy content.