Seminar on Eye Tracking in Surgery and Natural Environments, 13.12.2016
13.12.2016, 9:45–11:30, Joensuu Science Park, TD106B
Eye tracking research and its applications have a long tradition at UEF.
The Interactive Technologies research group invites you to take part in a seminar with two excellent speakers: Prof. Jeff Pelz (Rochester Institute of Technology, USA) and Prof. Bin Zheng (Surgical Simulation Research Lab, University of Alberta, Canada). The talks cover recent advances in eye tracking in medical applications and natural environments. The seminar is a follow-up to the public dissertation defense of MSc Shahram Eivazi, titled "Eye gaze patterns in micro-neurosurgery", which takes place on Monday 12.12.2016 at 12:00 in Joensuu Science Park: http://www.uef.fi/en/-/kokeneen-mikroneurokirurgin-katseen-seuraaminen-auttaa-alan-koulutuksen-uudistamisessa
9:45 Introduction, Roman Bednarik
9:50 Bin Zheng: Human Factors in Surgery
Abstract: In this talk, I will start by introducing the human information processing model applied to tele-operation. Recent research at SSRL will be highlighted, with a focus on the visual, haptic, and cognitive problems of surgeons. Eye-hand coordination evidence collected through eye tracking and motion tracking will be presented to reveal patterns of surgical expertise. Implications for human-computer interaction will be discussed.
10:30 Jeff Pelz: Measuring complex behavior: New tools for gaze analysis
Abstract: Mature data-analysis tools are available to researchers using existing eye trackers in restricted laboratory conditions, but new wearable eye trackers are creating huge data sets that are not compatible with existing tools. The new systems can monitor complex behaviors in natural environments that were inaccessible to previous eye tracking systems because of their inherent constraints on environment, movement, and behavior.
I will describe approaches to measuring observer behavior in these unconstrained environments: methods from machine vision (e.g., multi-view geometry and SLAM) that code gaze targets spatially, and methods based on semantic labeling that do not depend on fixed 3D spatial locations. The latter approach allows coding of dynamic scenes without the need for explicit object tracking and is more flexible and extensible than object-based coding schemes.
More information: Roman Bednarik, firstname.lastname@example.org, 0414 306116