Eye-tracking technology knows when you’re bored or interested in VR

CIOL Writers

Instead of watching another person on a screen or a stage, VR (virtual reality) can bring people together inside a shared digital encounter, where each user is free to look in any direction. That freedom, however, is also one of the medium's biggest hitches for storytelling: how does a narrator retain control when the viewer decides where to look?


The answer, it seems, lies in the eyes: a user can communicate with a virtual environment through sight alone.


Eyefluence, a company working at the intersection of optics, AI, machine learning and mechanical engineering, has built an interface designed to convert looking into action. The software lets users do with their eyes anything they could do with a finger on a smartphone. With a display in front of you, you can navigate a menu, launch applications, pan, zoom, scroll, and even enter information simply by looking. No more typing, clicking, swiping or even talking.
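Eyefluence has not published how its selection mechanism works, and the company has suggested it avoids simple timing tricks. Still, the standard baseline for gaze-driven interfaces is dwell-time selection: an on-screen element activates once the gaze has rested on it long enough. The sketch below illustrates that baseline only; the `Button` layout, sample format, and 0.5-second threshold are all assumptions, not Eyefluence's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Button:
    """Hypothetical rectangular UI element in normalized screen coordinates."""
    name: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, gx: float, gy: float) -> bool:
        return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h

def dwell_select(samples, buttons, dwell_s=0.5):
    """samples: list of (timestamp_s, gaze_x, gaze_y) tuples in time order.
    Returns the name of the first button the gaze rests on for >= dwell_s,
    or None if no selection occurs."""
    current, start = None, None
    for t, gx, gy in samples:
        hit = next((b for b in buttons if b.contains(gx, gy)), None)
        if hit is not current:
            # Gaze moved to a different target (or off all targets): restart timer.
            current, start = hit, t
        elif hit is not None and t - start >= dwell_s:
            return hit.name
    return None
```

A real system would first smooth the raw gaze signal and separate fixations from saccades before applying any selection logic; this sketch skips that step for brevity.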


In an immersive storytelling application, this eye-machine interaction can reveal what a viewer is feeling in the moment, and whether their eyes are distracted or focused. The software uses those cues to tell when you are interested in a scene, captivated by a character, or bored.
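The article does not say how Eyefluence maps eye movements to engagement, but a common heuristic in gaze research is that long, stable fixations suggest focus while frequent large gaze jumps (saccades) suggest scanning or distraction. The toy scorer below uses that heuristic; the 2-degree jump threshold and the score formula are illustrative assumptions, not the product's method.

```python
import math

def engagement_score(gaze, saccade_deg=2.0):
    """gaze: list of (timestamp_s, x_deg, y_deg) points in visual degrees.
    Counts large gaze jumps as a crude saccade proxy; fewer jumps per
    second means longer fixations, which this maps to a higher
    'focus' score in (0, 1]."""
    if len(gaze) < 2:
        return 1.0
    jumps = 0
    for (t0, x0, y0), (t1, x1, y1) in zip(gaze, gaze[1:]):
        if math.hypot(x1 - x0, y1 - y0) >= saccade_deg:
            jumps += 1
    duration = gaze[-1][0] - gaze[0][0]
    rate = jumps / duration if duration > 0 else 0.0
    # 0 jumps/s -> score 1.0; more scanning -> score approaches 0.
    return 1.0 / (1.0 + rate)
```

A production system would also weigh pupil dilation and blink rate, both of which correlate with arousal and fatigue; this sketch uses gaze stability alone.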

In a short film called The Language of Looking, Eyefluence CEO and founder Jim Marggraff notes that the eyes are the fastest-moving organs in the body, and explains the difficulty of immersive storytelling and how sight can drive an interface that pushes a virtual-reality narrative forward. The film is part of the annual #FutureofStorytelling summit, which brings innovators together to discuss the challenges of storytelling in a digital world.

To solve problems storytellers face in VR, such as teleportation, locomotion, nausea, and maintaining a sense of narrative flow, the software watches a participant's behavior to decide when to move them along and when to deliver key story beats.


Speaking to Engadget, Marggraff said, "Typically, as a storyteller, you want complete control; you guide the viewers' eyes, their moods, so they're sensitive to the beats of the story as it unfolds. Essentially, every scene directs them and manages their emotions throughout. But now, by reclassifying the user as a participant, when you let them have consequence in the story, you give them a degree of autonomy."

He further added, "We give you the means to not just see a function, like a message or a browser, but activate it and move into a new space. For instance, you can search for photographs and find them more rapidly than before. It's a mixture of purposeful and non-purposeful motions, to be able to search through a list of 1,000 names and find the one you're looking for with your eyes only, without scrolling, flicking or tapping. It's quicker than even speaking to get things done."

The company has also collaborated with Rival Theory, a VR content studio that creates characters for virtual-reality storylines. Rival Theory's characters can appear in live-action film or be fully rendered; a rendered character can have a memory and a personality that evolve over time in relation to you as a participant.

For example, if the character is a child who loses a best friend in an accident and you console them, the child forms a connection with you based on that eye interaction. The AI stores a memory of the moment, and that memory can resurface at any time.

Is this the future of storytelling, or the future of VR itself?