Considering Human Perception in User Experience

We welcome you to join us for this eWEAR Seminar on Thursday, 6/10, from 9:00 am to 10:00 am PDT.

Registration: Please click here to register

Speakers:
Brooke Krajancich
9:00 am to 9:30 am
“Understanding the human visual system for more immersive/perceptually realistic virtual and augmented reality displays”

David Sirkin
9:30 am to 10:00 am
“Prototyping interactions with cars and robots”

Brooke Krajancich

Ph.D. Candidate in Electrical Engineering, Stanford University

Bio

Brooke Krajancich is a Ph.D. candidate in the Electrical Engineering Department at Stanford University. She is advised by Professor Gordon Wetzstein as part of the Stanford Computational Imaging Lab and is a member of the inaugural class of Knight-Hennessy Scholars. Her research focuses on developing computational techniques that leverage the co-design of optical elements, image-processing algorithms, and intimate knowledge of the human visual system to improve current-generation virtual and augmented reality displays. Brooke moved to California for graduate school after receiving her Bachelor’s degree (with first-class honors) in electrical engineering and mathematics at the University of Western Australia.

Abstract

Virtual and augmented reality (VR/AR) wearable displays strive to provide perceptually realistic user experiences while constrained by the limited compute budgets, hardware, and transmission bandwidths of wearable computing systems. This presentation describes two ways in which a deeper understanding of the human visual system may help achieve this goal. The first shows how studying the anatomy of the eye reveals inaccuracies in how disparity depth cues are currently rendered, causing objects to appear closer than intended or, in the case of AR, poorly aligned with target objects in the physical world. This can be corrected with gaze-contingent stereo rendering, enabled by eye tracking. The second derives a spatio-temporal model of the visual system that describes the gamut of visible signals for a given eccentricity and display luminance. This model could enable future foveated graphics techniques with more than 7x the bandwidth savings of today’s approaches.


David Sirkin

Executive Director, Interaction Design, Center for Design Research, Stanford University

Bio

David Sirkin is Executive Director for Interaction Design at Stanford’s Center for Design Research. His research focuses on the design of physical and social interactions between humans and robots, and on autonomous vehicles and their interfaces. He and his research group regularly collaborate with local Silicon Valley and global technology companies, and their work has been covered by the Associated Press, The Economist, New Scientist, the San Francisco Chronicle, and The Washington Post. He is also a Lecturer in Mechanical Engineering, where he teaches interactive device design and design research methodology. David grew up in Florida, near the Everglades, and in Maine, near the lobsters.

Abstract

Most of today’s technologies, including cars and robots, function fine, but they struggle to interact well with humans. Yet it’s not difficult to prototype interactions. Doing so allows you to (a) quickly learn, early in the development process, how people respond to your ideas, and (b) change and retest those ideas before investing too much time and effort designing them into products.

In this talk, we’ll discuss ways to explore and design interactions with future cars, and maybe with a robot or two, using Wizard of Oz techniques. Topics include on-road autonomous simulators, crowdsourced studies of interfaces, and challenges to understanding drivers’, pedestrians’, and everyday users’ behaviors and states.