VR display technique saves the stomach by exploiting the eye's limits

Foveated rendering as analysed by Mr Roth and colleagues: The centre of the image is sharp, and detail reduces between the inner radius and outer radius. Credit: Journal of Eye Movement Research

An investigation into a way of providing a virtual reality experience that is both visually sharp and responsive has uncovered interesting findings, bringing the holy grail of non-queasy VR a step closer.

Thorsten Roth and Dr Yongmin Li of Brunel University London's Department of Computing, together with Martin Weier and collaborators in Germany (at the Bonn-Rhein-Sieg University of Applied Sciences and Saarland University), carried out a user study of their novel image rendering technique. They found a sweet spot of image quality beyond which any additional detail wasn't noticed as an improvement by participants – and in some cases seemed to make things worse.

Virtual reality can make us feel sick because of a lag – termed latency – between our eye movement and when the visual display changes. For computers and consoles, displaying very high-resolution graphics in an attempt to mimic reality can be a drain on resources and create an even worse lag.

Making this latency smaller reduces nausea and allows video games and other experiences to feel more real.

Blurring over the details

Mr Roth and colleagues' technique builds on foveated rendering, which takes advantage of one of the main limitations of the human eye. For people who have no physical or neural damage, the centre of the field of vision (the fovea) is the sharpest, with visual acuity reducing towards the field's outer regions. This is why we turn our heads to follow a point of interest, rather than trying to use the periphery of our vision.

"We use a method where, in the VR image, detail reduces from the user's point of regard to the visual periphery," explained Mr Roth, "and then our algorithm – whose main contributor is Mr Weier – then incorporates a process called reprojection.

"This keeps a small proportion of the original pixels in the less detailed areas and uses a low-resolution version of the original image to 'fill in' the remaining areas."

A video demonstrating the researchers' foveated rendering technique analysed in this study. Credit: Institute of Visual Computing

Each study participant wore an Oculus Rift DK2 VR headset, adapted to include an eye tracker that closely mapped the movement of each eye. They inspected 96 VR videos, each 8 seconds long, with different combinations of subject matter, eye movement (fixed, steadily moving or free movement) and degree of foveated rendering – a small, medium or large area of sharp detail at the centre of the field of vision, or the whole field in sharp detail.

Perception misconception

After viewing each video, users were asked whether what they saw was free of visual artefacts: blurriness and flickering edges that are tell-tale signs of low-quality moving images.

Interestingly, the sweet spot for the foveated rendering was the medium-sized area: an inner radius of 10° and an outer radius of 20° around the centre of vision. With any more detail in the periphery of the vision, participants didn't notice an improvement, and sometimes felt it instead resulted in a lower-quality image.

Mr Roth commented: "We showed that it's not possible for users to make a reliable differentiation between our optimised rendering approach and full ray tracing, as long as the foveal region is at least medium-sized."

The study also unearthed a visual tunnelling effect when users were following a moving target: the mental load of the tracking task means that visual artefacts are effectively filtered out by human perception, making them largely imperceptible.

Summing up, Mr Roth said: "Our method can be used to generate visually pleasant VR results at high update rates. This paves the way to delivering a real-seeming VR experience while reducing the likelihood you'll feel queasy."


More information: 'A Quality-Centered Analysis of Eye Tracking Data in Foveated Rendering', by Thorsten Roth, Martin Weier, André Hinkenjann, Yongmin Li and Philipp Slusallek, is published in the Journal of Eye Movement Research: dx.doi.org/10.16910/jemr.10.5.2

The foveated rendering technique analysed in this paper was originally described in a previous paper by the authors: Weier et al. (2016) Foveated Real-Time Ray Tracing for Head-Mounted Displays, Computer Graphics Forum, 35(7): doi.org/10.1111/cgf.13026

Provided by Brunel University
Citation: VR display technique saves the stomach by exploiting the eye's limits (2017, October 30) retrieved 20 October 2019 from https://phys.org/news/2017-10-vr-technique-stomach-exploiting-eye.html

User comments

Oct 30, 2017
This applies to raytracing, but all the hardware actually used by video games and computers is built and designed for rasterization, which is a different approach to rendering 3D images.

https://en.wikipe...risation

Essentially, raytracing allows for varying the resolution across the field of view by shooting a varying density of rays, which hit the 3D scenery and thus "sample" the image. As you shoot fewer of them you calculate less, which makes it faster.

Rasterization has no such benefit because it takes the 3D scenery and transforms all its geometry into polygons from the viewer's point of view, as if someone were to make a flat mosaic. It has to load up all the relevant models and textures, transform them and squash them flat.

With rasterization, in order to reduce detail around the foveal area, you actually have to do more computation to apply a blur algorithm after all the geometry and textures have been transformed and drawn. That makes it slower.
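
To make the commenter's point concrete, here is a minimal sketch (NumPy; the linear density falloff, the min_density floor and all names are illustrative assumptions, not taken from the paper) of how a ray tracer can simply decide, per pixel, whether to shoot a primary ray at all:

```python
import numpy as np

rng = np.random.default_rng(0)

def primary_ray_mask(h, w, gaze_xy, px_per_degree,
                     inner_deg=10.0, outer_deg=20.0, min_density=0.1):
    """Boolean mask of which pixels get a primary ray this frame.

    Every pixel inside the foveal inner radius is traced; beyond it, the
    probability of tracing falls off linearly to min_density at the outer
    radius, so fewer rays are shot into the periphery.
    """
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    ecc = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) / px_per_degree
    t = np.clip((ecc - inner_deg) / (outer_deg - inner_deg), 0.0, 1.0)
    density = 1.0 - t * (1.0 - min_density)
    return rng.random((h, w)) < density  # untraced pixels get filled in cheaply
```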

Oct 30, 2017
Well, with all previous and subsequent comments in mind, I would like to point out that vehicle simulation (and, from my experience, flight simulation) has used a form of this technique for a long time, driven mainly by the fact that we didn't have anything like today's processing and storage capabilities back then.

Nature tends to reduce detail over distance, and with our limited ocular capabilities we have adapted to accommodate that. It is our expectation. So if you design something that violates the nature creatures are used to, well, I'd say you're not done with your design.

Latency between control input and the visual scene is the main reason people get queasy.

If you want them to really puke, put them on a motion system that is out of phase with the other two.

Anyway, congrats, welcome to the late eighties.

KBK
Oct 30, 2017
This means you can have a 4k panel per eye, with the ability to do full 4k per eye if that ever becomes possible – but, for now, get the visual appearance of 4k per eye while feeding it a 2k-per-eye data package. That makes it realistically doable now at 90-120 Hz per eye.

Then, when horsepower finally permits, you do the same trick at a higher rate, like 8k per eye fed a 4k data package.
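
As a back-of-envelope check on the saving (assuming "4k" means 3840×2160 and "2k" means 1920×1080 per eye, which the comment doesn't pin down):

```python
# Rough pixel throughput per second: width * height * eyes * refresh rate.
def pixels_per_second(w, h, eyes=2, hz=90):
    return w * h * eyes * hz

native_4k = pixels_per_second(3840, 2160)  # ~1.5e9 px/s to shade natively
feed_2k = pixels_per_second(1920, 1080)    # ~3.7e8 px/s for the 2k package
print(f"4k native: {native_4k:.2e} px/s, 2k feed: {feed_2k:.2e} px/s "
      f"({native_4k / feed_2k:.0f}x fewer pixels to render)")
```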

One device must 'go first' to make this viable for the market, and that will be headsets with increased resolution.

We already have a 2.5k-per-eye OLED personal headset for widescreen IMAX video viewing coming out in the ~$500 US range, so it is eminently doable. I'm super pumped for that one...

https://www.kicks...-headset

Cinera's "pixels per degree," which is the number of pixels dedicated to each degree of human vision, is 39 on the Cinera, compared to 9.8 on the Vive and 11.5 on the oculus rift.
