What type of research are you doing?
We are interested in a wide variety of aspects of visual cognition, particularly visual attention and visual memory during scene perception. The lab’s core research areas therefore include top-down guidance in scene search, the neural representation and development of scene knowledge, as well as action-perception interactions in real-world scenarios.
We use a variety of methodologies in the lab, including psychophysics, stationary and mobile eye-tracking, as well as EEG recordings.
How and why are you currently using eye tracking glasses?
An integral part of natural behavior is engaging and interacting with our environment. When we search for a pen, for example, we usually not only move our eyes, but also move our body around in space and eventually grasp the sought-after object.
Many of these natural interactions are not well captured by computer-based experiments, in which our participants usually need to sit still in front of a screen. The SMI Eye Tracking Glasses allow us to shift gears by testing our hypotheses in more ecologically valid, real-world experimental paradigms. We use such gaze information not only for greater experimental control, but also as a dependent variable.
In a recently published article (featured by the Psychonomic Society), we investigated how physically engaging with objects as part of a search task influences identity and position memory for relevant and irrelevant objects. Object handling facilitated the prioritization of relevant location information, but this might have come at the cost of deprioritizing irrelevant information.
Which questions would you like to investigate using the HTC Vive? Why? Which advantages do you expect by moving into VR?
Real-world studies are definitely more challenging and time-consuming, both in their execution and their analyses. In addition, the number of available environments is limited (except for when my grad student Dejan Draschkow invites participants to search for objects in his own apartment!), experimental control is difficult, and one cannot change the experimental environment while the participant is doing the task. These shortcomings can be largely avoided by using VR environments.
A peculiarity of the Scene Grammar Lab is that we like to play with objects in scenes, e.g. testing how observers react to floating beer bottles. This kind of manipulation is sadly not possible in real-world paradigms (at least not to our knowledge), so the latest improvements in VR offer a whole new playground for me and my lab.
In general, VR promises to be the best way to convert many of our desktop-based paradigms into something closer to reality without losing experimental control and flexibility. This of course won’t be a perfect substitute for real-world studies, but it will be an important step toward studying the grammar we use to guide attention and action in our world.