Innovations for touch in VR
We welcome you to join us in-person and on Zoom for our January eWEAR Seminar.
Date: Monday, January 23rd from 12:30 pm to 1:30 pm PST
Location: Stanford University (Y2E2 Building, Room 299) & on Zoom
Lunch will be provided at 12:00 pm for in-person attendees, along with a chance to talk with the speakers after the seminar.
Registration: Please click here to register
Safety Protocol: Visitors coming to campus should review the Stanford University Covid-19 Policies. Face coverings are strongly recommended for everyone attending.
Speakers:
Hojung Choi
12:30 pm to 1:00 pm
“Deep learning classification of touch gestures using distributed normal and shear force”
Kyun Kyu (Richard) Kim
1:00 pm to 1:30 pm
“AI-enhanced electronic skin that rapidly reads hand tasks with limited data”
Hojung Choi
Ph.D. Candidate in Mechanical Engineering, Stanford University
Bio: Hojung Choi is a PhD candidate in the Biomimetics and Dexterous Manipulation Laboratory (BDML) at Stanford University, advised by Professor Mark Cutkosky. His research focuses on bringing the sense of touch to robots and to humans interacting in virtual environments, through the development of tactile sensors for both prehensile and non-prehensile areas. He has been leading a multidisciplinary research team that includes corporate partners such as Meta. Hojung is a recipient of the Kwanjeong Fellowship from South Korea.
When humans socially interact with another agent (e.g., human, pet, or robot) through touch, they do so by applying varying amounts of force with different directions, locations, contact areas, and durations. While previous work on touch gesture recognition has focused on the spatio-temporal distribution of normal forces, we hypothesize that the addition of shear forces will permit more reliable classification. We present a soft, flexible skin with an array of tri-axial tactile sensors for the arm of a person or robot. We use it to collect data on 13 touch gesture classes through user studies and train a Convolutional Neural Network (CNN) to learn spatio-temporal features from the recorded data. The network achieved a recognition accuracy of 74% with normal and shear data, compared to 66% using only normal force data. Adding distributed shear data improved classification accuracy for 11 out of 13 touch gesture classes.
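To make the approach concrete, below is a minimal, hypothetical sketch (not the authors' code) of how a CNN could classify spatio-temporal tactile recordings that include both normal and shear channels. The taxel grid size (8x8), sequence length (50 frames), and the three-channel force layout are illustrative assumptions.

```python
# Hypothetical sketch of a spatio-temporal CNN for touch gesture classification.
# Grid size, sequence length, and channel layout are assumptions for illustration.
import torch
import torch.nn as nn

NUM_CLASSES = 13                      # touch gesture classes, as in the abstract
FORCE_CHANNELS = 3                    # normal + two shear components (assumed layout)
SEQ_LEN, GRID_H, GRID_W = 50, 8, 8    # assumed recording length and taxel grid

class TouchGestureCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # 3D convolutions learn joint spatio-temporal features from the
        # force "video" of shape (channels, time, height, width).
        self.features = nn.Sequential(
            nn.Conv3d(FORCE_CHANNELS, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d((2, 2, 2)),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling over time and space
        )
        self.classifier = nn.Linear(32, NUM_CLASSES)

    def forward(self, x):
        # x: (batch, FORCE_CHANNELS, SEQ_LEN, GRID_H, GRID_W)
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = TouchGestureCNN()
    dummy = torch.randn(4, FORCE_CHANNELS, SEQ_LEN, GRID_H, GRID_W)
    print(model(dummy).shape)  # -> torch.Size([4, 13])
```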
Kyun Kyu (Richard) Kim
Postdoctoral Scholar, Chemical Engineering, Stanford University
Bio: Dr. Kyun Kyu (Richard) Kim is currently a postdoctoral fellow at Stanford University in the Zhenan Bao research group. He received his Ph.D. and M.S. from Seoul National University in 2021 and 2016, respectively, and his B.S. from Korea University in 2014, all in Mechanical Engineering. He has developed a series of soft, human-skin-like electronic devices enhanced by AI algorithms that combine hardware and algorithmic efficiency. These devices comprise soft skin sensors that adhere conformally to the user's skin, replacing conventional devices that are bulky and complex. When combined with AI algorithms, they enable a single sensory component to generate highly informative signals that would otherwise require numerous sensory units.
Technologies for human-machine interfaces play a key role in human augmentation, prosthetics, robot learning, and virtual reality. In particular, devices that track the hand enable a variety of interactive and virtual tasks such as object recognition, manipulation, and even communication. However, there remains a large gap in capabilities compared to humans in terms of precision, fast learning, and low power consumption.
In this talk, I will present a newly developed, fast-learning electronic skin device that enables user-independent, data-efficient recognition of different hand tasks. This work is the first practical approach that is both lean enough in form and adaptable enough to work for essentially any user with limited data. It consists of a directly printable electrical nanomesh coupled with an unsupervised meta-learning framework. The developed system rapidly adapts to various users and tasks, including command recognition, keyboard typing, and object recognition in virtual space.
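As a rough illustration of data-efficient adaptation, the sketch below shows a simplified few-shot scheme in the spirit of the talk (not the authors' unsupervised meta-learning framework): a pretrained encoder maps nanomesh signals to embeddings, a new user provides only a few labeled examples per task, and queries are classified by the nearest class prototype. The signal window length and embedding size are assumptions.

```python
# Hypothetical few-shot adaptation sketch: nearest-prototype classification on
# top of a pretrained signal encoder. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

SIGNAL_LEN, EMBED_DIM = 256, 32  # assumed 1-D sensor window and embedding size

class SignalEncoder(nn.Module):
    """Maps a raw sensor window to a fixed-length embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),
            nn.Linear(16 * 8, EMBED_DIM),
        )

    def forward(self, x):  # x: (batch, 1, SIGNAL_LEN)
        return self.net(x)

def build_prototypes(encoder, support_x, support_y, num_classes):
    """Average the embeddings of the few labeled examples per class."""
    with torch.no_grad():
        emb = encoder(support_x)
    return torch.stack([emb[support_y == c].mean(0) for c in range(num_classes)])

def classify(encoder, prototypes, query_x):
    """Assign each query to the nearest prototype (Euclidean distance)."""
    with torch.no_grad():
        emb = encoder(query_x)
    return torch.cdist(emb, prototypes).argmin(dim=1)

if __name__ == "__main__":
    torch.manual_seed(0)
    enc = SignalEncoder()              # stands in for a meta-trained encoder
    num_classes, shots = 5, 3          # e.g. 5 hand gestures, 3 examples each
    support_x = torch.randn(num_classes * shots, 1, SIGNAL_LEN)
    support_y = torch.arange(num_classes).repeat_interleave(shots)
    protos = build_prototypes(enc, support_x, support_y, num_classes)
    queries = torch.randn(8, 1, SIGNAL_LEN)
    print(classify(enc, protos, queries))  # predicted gesture indices
```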