
Considering Human Perception in User Experience
We welcome you to join us for this eWEAR Seminar on Thursday 6/10 from 9:00 am to 10:00 am PDT
Registration: Please click here to register
Speakers:
Brooke Krajancich
9:00 am to 9:30 am
“Understanding the human visual system for more immersive/perceptually realistic virtual and augmented reality displays”
David Sirkin
9:30 am to 10:00 am
“Prototyping interactions with cars and robots”

Brooke Krajancich
Ph.D. Candidate in Electrical Engineering, Stanford University
Abstract
Virtual and augmented reality (VR/AR) wearable displays strive to provide perceptually realistic user experiences while constrained by the limited compute budgets, hardware, and transmission bandwidths of wearable computing systems. This presentation describes two ways in which a greater understanding of the human visual system may help achieve this goal. The first looks at how studying the anatomy of the eye reveals inaccuracies in how we currently render disparity depth cues, which make objects appear closer than intended or, in the case of AR, poorly aligned with target objects in the physical world. These errors can be corrected with gaze-contingent stereo rendering, enabled by eye-tracking. The second derives a spatio-temporal model of the visual system that describes the gamut of visible signals for a given eccentricity and display luminance. This model could enable future foveated graphics techniques with more than 7x the bandwidth savings of today's approaches.

David Sirkin
Executive Director, Interaction Design, CDR, Stanford University
Abstract
Most of today’s technologies, including cars and robots, function fine, but they struggle to interact well with humans. Yet it’s not difficult to prototype interactions. Doing so allows you to (a) quickly learn how people respond to your ideas, early in the development process, and (b) change and retest those ideas before spending too much time and effort designing them into products.
In this talk, we'll discuss ways to explore and design interactions with future cars, and maybe with a robot or two, using Wizard of Oz techniques. Topics include on-road autonomous simulators, crowdsourced studies of interfaces, and the challenges of understanding drivers', pedestrians', and everyday users' behaviors and states.