Schedule in December 2014
The 19th Perceptual Frontier Seminar: Temporal Aspects of Perception and Communication
Date and time: Tuesday, 2 December 2014, from 15:15 to 17:30
1. Measurements as a metaphor for understanding sensory processing
Measurements are the basis of science. I am interested in the basic process by which scientists carry out measurements, and in using this process as a way to understand how sensory information is processed.
2. Theory and modelling of auditory gap detection
The detection of gaps provides insight into the temporal properties of the auditory system. A model of gap detection based on the auditory peripheral response is presented. The model offers an explanation of why across-frequency gap detection is more difficult than gap detection within a single frequency band.
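The general idea behind envelope-based gap detection can be illustrated with a minimal sketch. This is an illustrative stand-in, not the talk's model: the time constant, criterion, and flat-envelope stimulus are all assumptions made for the example.

```python
# Generic envelope-based gap detection: smooth the stimulus envelope with a
# leaky integrator and report a gap when the smoothed envelope dips below a
# criterion. All parameter values here are illustrative assumptions.
import numpy as np

fs = 1000                              # samples per second (illustrative)
env = np.ones(1000)                    # flat envelope of a 1-s noise band
env[450:470] = 0.0                     # insert a 20-ms gap

tau = 0.005                            # 5-ms integration time constant (assumed)
alpha = np.exp(-1.0 / (fs * tau))
smoothed = np.empty_like(env)
acc = env[0]
for i, x in enumerate(env):
    acc = alpha * acc + (1 - alpha) * x  # leaky integration limits resolution
    smoothed[i] = acc

# A gap is "detected" if the smoothed envelope dips below the criterion.
detected = bool(smoothed.min() < 0.5)
print(detected)  # prints True for this 20-ms gap
```

Shortening the gap or lengthening the time constant makes the dip shallower, which is the basic trade-off such models use to account for gap-detection thresholds.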
3. Dynamics, pre- and retroactive cognition in visual perceptual mechanisms
Using a well-established oscillatory priming paradigm and varying temporal parameters (presentation frequency and phase), I show that cognition arising before an event, although identical to cognition arising after it, is a function of a very particular interaction in process timing.
4. Influence of temporal factors on the efficiency of public speaking in English by Japanese EFL learners
The purpose of this study was to explore the speech rate and pauses of English public speaking by Japanese EFL learners. The participants were 9 first-year students who took an EFL writing course at a national university in Japan. The students delivered their speeches at a slow speech rate (114 words, or 169 syllables, per minute) and paused at a moderate rate (27 times per minute). Individual differences were found in the average duration of pauses. These results are expected to be useful for improving the teaching and learning of English public speaking by Japanese EFL learners.
5. Perceptual roles of power-fluctuation factors in speech perception: A new method of factor analysis
A special method of factor analysis was developed to analyze and resynthesize the power fluctuations of speech signals divided by a bank of 20 critical-band filters. Spoken sentences of British English, Japanese, and Mandarin Chinese were analyzed. The effect of the number of factors was examined by varying the number from 1 to 9. Three or four factors turned out to represent a common pattern across these languages. Power fluctuations were resynthesized from the obtained factor loadings and factor scores of Japanese speech in order to generate noise-vocoded speech for a listening test. The noise-vocoded speech stimuli driven by four factors were fairly intelligible: a total of 9 listeners identified 87% of the morae correctly.
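The analysis-resynthesis pipeline described above can be sketched in miniature. This is not the authors' special factor-analysis method: it substitutes a plain low-rank SVD decomposition, and the band-power matrix is a random placeholder rather than filtered speech.

```python
# Minimal sketch of factor-based envelope resynthesis: reduce 20 band-power
# trajectories to a few factors, then reconstruct the envelopes from the
# loadings and scores. SVD stands in for the study's factor-analysis method.
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_frames, n_factors = 20, 500, 3

# Placeholder band-power fluctuations (rows: critical bands, cols: time
# frames); in the real study these come from critical-band filtered speech.
power = np.abs(rng.standard_normal((n_bands, n_frames))).cumsum(axis=1)
power -= power.mean(axis=1, keepdims=True)

# Low-rank decomposition: loadings (band x factor) and scores (factor x time).
U, s, Vt = np.linalg.svd(power, full_matrices=False)
loadings = U[:, :n_factors] * s[:n_factors]
scores = Vt[:n_factors]

# Resynthesize the band envelopes from loadings and scores; each reconstructed
# envelope would then amplitude-modulate noise in its critical band
# (noise vocoding) to build the listening-test stimuli.
recon = loadings @ scores
err = np.linalg.norm(power - recon) / np.linalg.norm(power)
print(f"relative reconstruction error with {n_factors} factors: {err:.2f}")
```

The point of the reduction is that a handful of factors captures most of the across-band envelope covariation, which is what makes few-factor noise-vocoded speech intelligible.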
6. Speech intelligibility mapped onto a time-frequency resolution plane
The present investigation focuses on the intelligibility of mosaic speech, in which a speech spectrogram is segmented like a checkerboard. Experiment 1 was preliminary; in Experiment 2, mosaic speech was synthesized from 108 Japanese sentences uttered by a female speaker. Six segment durations (10, 20, 40, 80, 160, and 320 ms) and seven steps of frequency resolution (1, 2, 4, 5, 10, and 20 critical bandwidths, plus the resolution derived from the factor analyses) were combined to yield 42 conditions. Three different sentences were assigned to each condition. The stimuli were presented in random order to two participants; each stimulus was presented three times in succession within a trial. The participants were instructed to write down what they heard without guessing. The percentages of intelligibility (mora accuracy) were plotted on a time-frequency resolution plane. Both temporal and frequency resolution affected intelligibility. A contour line corresponding to 50% mora accuracy, excluding the factor-analysis condition, was estimated on the plane. The line passed close to the points (80, 2) and (40, 5), where each pair gives a segment duration in milliseconds and a number of critical bandwidths, in that order. This kind of representation should open a way to find, for example, an optimal solution for reducing speech information.
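The checkerboard segmentation behind mosaic speech can be sketched as averaging the power within rectangular time-frequency cells, much like pixelating an image. The cell sizes and the placeholder spectrogram below are illustrative assumptions, not the study's stimuli.

```python
# Hedged sketch of "mosaicking" a spectrogram: replace every value inside a
# rectangular time-frequency cell with the cell's mean power, discarding all
# finer structure within the cell.
import numpy as np

rng = np.random.default_rng(1)
spec = rng.random((20, 120))          # bands x time frames (placeholder)
f_cell, t_cell = 4, 8                 # cell size: 4 bands x 8 frames (assumed)

mosaic = spec.copy()
for f0 in range(0, spec.shape[0], f_cell):
    for t0 in range(0, spec.shape[1], t_cell):
        cell = spec[f0:f0 + f_cell, t0:t0 + t_cell]
        mosaic[f0:f0 + f_cell, t0:t0 + t_cell] = cell.mean()

# Coarser cells (longer segments, wider bands) discard more detail, which is
# the knob the 42 experimental conditions sweep.
print(mosaic.shape)
```

Sweeping the two cell dimensions and measuring intelligibility at each setting is what produces the time-frequency resolution plane described in the abstract.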
Auditory Research Meeting
Date and time: From 13:30, 20 December 2014 to 11:40, 21 December 2014
20 December 2014
English Session: Language and Speech
21 December 2014
(Four Japanese presentations are scheduled in the morning.)
Contact: Kazuo UEDA, Kyushu University/ReCAPS, E-mail: ueda [at] design.kyushu-u.ac.jp, Phone & Fax: 092-553-9460; Nao HODOSHIMA, Tokai University, E-mail: hodoshima [at] tokai-u.jp, Phone: 03-3441-1171, Fax: 03-5475-5502