
Schedule in May 2021


The 52nd Perceptual Frontier Seminar: Non-invasive exploration of the brain with visual, tactile, and auditory stimuli

Date and time: Monday, 10 May 2021, 16:40-18:10
Venue: Talks are online. After the talks, we will get together in Kazuo's Office, Room 709 on the 7th Floor of Building 3, Ohashi Campus, Kyushu University, Fukuoka, Japan <http://www.design.kyushu-u.ac.jp/kyushu-u/english/access>
Language: English
Organizer: Kazuo UEDA (Kyushu Univ./ReCAPS/Research and Development Center for Five-Sense Devices)

Program

16:40-17:10 Modality specificity of amodal completion
Hiroshige TAKEICHI*, Keito TANIGUCHI**, Hiroaki SHIGEMASU**
*RIKEN and ReCAPS, Kyushu University, **Kochi University of Technology

To elucidate the concept of amodal completion, which refers, somewhat paradoxically, to the visual perception of occluded and therefore invisible parts of objects, a virtual-reality experiment was conducted to examine the role of haptic information in amodal completion. Fifteen participants performed a visual word recognition task on partially missing five-letter words that were or were not accompanied by a visually and/or haptically presented occluder. Word recognition was facilitated by a visual cue to depth separation, both forwards and backwards, provided by binocular stereopsis, but not by a haptic cue. This suggests that amodal completion can be based on retinotopic segmentation alone, rather than on three-dimensional visual or haptic depth perception, and that it is modality-specific rather than amodal processing. (This presentation is based on the FY2020 graduation thesis of the second author.)

17:10-17:30 A short introduction to gaze metrics as early detection biomarkers in Schizophrenia
Alexandra WOLF*, Kazuo UEDA**, and Yoji HIRANO***
*JSPS International Research Fellow (ReCAPS, Kyushu University)
**Dept. Human Science/ReCAPS/Research and Development Center for Five-Sense Devices, Kyushu University
***Department of Neuropsychiatry, Kyushu University

Eye-trackers noninvasively measure one's unconscious eye behavior towards presented stimuli. With modern eye-tracking instrumentation, scientists aim to infer the viewer's attention and consequently disclose a part of the viewer's thought processes. Recorded and analyzed gaze parameters help answer questions about which elements attract one's eyes, in what order, and how often. Despite an increasing focus on eye behavior within cognitive neuroscience and an active import of eye-tracking paradigms into other fields, vision science lags behind in clinical research. However, a promising trend has started to emerge: the study of visual information processing by re-examining paradigms with modern eye-tracking devices.

17:30-18:00 Checkerboard speech
Kazuo UEDA*, Riina KAWAKAMI**, Hiroshige TAKEICHI***
*Dept. Human Science/ReCAPS/Research and Development Center for Five-Sense Devices, Kyushu University
**Dept. Acoustic Design, Kyushu University
***RIKEN and ReCAPS, Kyushu University

"Checkerboard speech" is a kind of degraded speech discarding 50% of original speech, to study spectrotemporal characteristics of speech perception. Here we show that 20-band checkerboard speech maintained nearly 100% intelligibility irrespective of segment duration in the range from 20 to 320 ms, whereas 2- and 4-band checkerboard speech showed a trough of 35% to 40% intelligibility between the segment durations of 80 and 160 ms (n = 2 and n = 20), and that mosaicked checkerboard speech stimuli showed less than 10% intelligibility except for the stimuli with the finest resolution (20 frequency bands and 20-ms segment duration). The results suggest close connections with the modulation power spectrums of the stimuli, a spectrotemporal interaction in speech perception, and perceptual cue integration based on temporal fine structure.

After the talks, we will get together in Kazuo's office, Room 709 on the 7th Floor of Building 3.

