
Schedule in July 2014


The 17th Perceptual Frontier Seminar: Preview Talks for ICMPC 13-APSCOM 5

Date and time: Thursday, 31 July 2014, 13:00-14:20
Venue: Room 709 (Kazuo UEDA's office), 7th Floor, Building 3, Ohashi Campus, Kyushu University

Speakers: Yoshitaka NAKAJIMA, Gerard B. REMIJN, Emi HASUO, Zhimin BAO, Yuko YAMASHITA, and Satoshi MORIMOTO.

Note: ICMPC stands for the International Conference on Music Perception and Cognition; APSCOM stands for the Conference of the Asia-Pacific Society for the Cognitive Sciences of Music.


Special lecture: "Bioacoustics: Acoustical environment analysis and innovative measurement techniques applied to auditory research," supported by ReCAPS

Date and time: Tuesday, 17 July 2014, 13:00-16:20
Venue: Room 512, 1st Floor, Building 5 (13:00-14:30); Room 322, 2nd Floor, Building 3 (14:50-16:20), Ohashi Campus, Kyushu University
Lecturer: Hiroshi RIQUIMAROUX (Doshisha University)
Language: Japanese

Photos


The 16th Perceptual Frontier Seminar: Finding Intimate Terms with Speech Production and Frog Taste

Date and time: Monday, 14 July 2014, 15:00-18:00
Venue: Room 601, 6th Floor, Building 3, Ohashi Campus, Kyushu University

How to get to Ohashi Campus: <http://www.design.kyushu-u.ac.jp/kyushu-u/english/access>
Location of Building 3 on the campus: <http://www.design.kyushu-u.ac.jp/kyushu-u/english/about/campusmap>
Organizer: Yoshitaka NAKAJIMA (Kyushu University/ReCAPS)

Program

1. Perceptual roles of power fluctuation factors in speech
Takuya KISHIDA*, Yoshitaka NAKAJIMA**, Kazuo UEDA**, and Gerard B. REMIJN**
*Graduate School of Design, Kyushu University, Japan, **Dept. of Human Science/Research Center for Applied Perceptual Science, Kyushu University, Japan

The purpose of this study was to improve a method of factor analysis of power fluctuations obtained from critical-band filtered spoken sentences, and to specify the number of factors necessary to synthesize intelligible speech. Four factors, which accounted for most of the power fluctuations of speech sounds in British English, Japanese, and Mandarin Chinese, were successfully obtained with the improved method of analysis. A perceptual experiment showed that, with the power fluctuations conveyed by as few as four factors, 85% of the morae of the synthesized speech were correctly identified by three listeners.
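For readers unfamiliar with this kind of analysis, the following Python sketch illustrates the general idea only; the band edges, frame length, log scaling, and the use of scikit-learn's FactorAnalysis are illustrative assumptions, not the authors' actual procedure.

import numpy as np
from scipy.signal import butter, sosfilt
from sklearn.decomposition import FactorAnalysis

def band_power_fluctuations(x, fs, band_edges, frame_len=0.02):
    """Frame-wise power in each band; returns a (frames x bands) matrix."""
    hop = int(frame_len * fs)
    powers = []
    for lo, hi in band_edges:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        y = sosfilt(sos, x)
        n = len(y) // hop
        powers.append((y[:n * hop].reshape(n, hop) ** 2).mean(axis=1))
    return np.column_stack(powers)

# Illustrative band edges (Hz); actual critical bands would be used in practice.
edges = [(100, 300), (300, 700), (700, 1500), (1500, 3000), (3000, 6000)]
fs = 16000
x = np.random.randn(10 * fs)                  # stand-in for a spoken sentence
P = band_power_fluctuations(x, fs, edges)
fa = FactorAnalysis(n_components=4)           # extract four factors
scores = fa.fit_transform(np.log(P + 1e-12))  # factor scores over time
print(scores.shape, fa.components_.shape)     # (frames, 4), (4, bands)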

2. Infant speech development with Chinese-, Japanese-, and English-learning infants
Zhimin BAO*, Yuko YAMASHITA*, Kazuo UEDA*, and Yoshitaka NAKAJIMA*
*Kyushu University

The present investigation aimed at acoustically comparing developmental changes observed in infants raised in the three different linguistic environments. Infant speech was recorded for the analysis from three age groups (15, 20, and 24 months of age), each including 3-5 infants. The results suggested that the vocal tract structures of the infants acquired an adult-like configuration between 15 and 24 months of age, regardless of language environment.

3. Avoiding predation with odorous and non-fatal skin secretion by the wrinkled frog, Rana rugosa
Yuri YOSHIMURA* and Eiiti KASUYA*
*Kyushu University

The adult wrinkled frog, Rana rugosa, has warty skin with an odorous mucus secretion that is not fatal to the snake Elaphe quadrivirgata. Rana rugosa and Fejervarya kawamurai, which resembles R. rugosa in appearance and also has a mucus secretion, were fed to the snakes to observe how the snakes behaved differently toward the two species. Compared with F. kawamurai, R. rugosa was less frequently bitten or swallowed by the snakes.

4. On the relationship between acoustics, kinematics and transmitted information of human speech
Willy WONG*, Rohan BALI*, and Pascal VAN LIESHOUT**
*Dept. of Electrical and Computer Engineering, University of Toronto, **Dept. of Speech-Language Pathology, University of Toronto

We investigated the relationship between vocal tract function and speech articulators during continuous speech using a 3D electromagnetic articulograph. We found that the kinematics of the articulators were sufficient to predict the acoustic speech output with high accuracy. Our interest in this area is primarily fundamental: is there a simple principle governing the relationship between speech kinematics and speech acoustics?
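A minimal sketch of this kind of kinematics-to-acoustics prediction is given below, using simulated data and a ridge regression as a stand-in for whatever model the authors actually used; the feature dimensions are hypothetical.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_frames = 2000
# Simulated sensor trajectories, e.g., x/y coordinates of articulograph coils.
kinematics = rng.standard_normal((n_frames, 12))
# Simulated frame-wise acoustic features (e.g., formant frequencies).
acoustics = kinematics @ rng.standard_normal((12, 3)) + 0.1 * rng.standard_normal((n_frames, 3))

X_tr, X_te, y_tr, y_te = train_test_split(kinematics, acoustics, test_size=0.25, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))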

5. Variation of shimmer along the tonotopic axis of the ear
Hilmi DAJANI*
*University of Ottawa

Shimmer is used to objectively assess phonatory dysfunction, but this measure does not take into account auditory processing. In separate studies, we investigated the relationship between shimmer around the first four formants (F1–F4) and in the broadband unfiltered speech waveform, and the correlation between shimmer in speech-evoked brainstem responses and shimmer around F1–F4. The results indicate that there is variation in shimmer along the tonotopic axis of the ear, and that shimmer information around F3 and F4 is not well captured in standard shimmer measurements based on the broadband unfiltered waveform.
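As background, shimmer quantifies cycle-to-cycle amplitude perturbation of voiced speech. The Python sketch below shows the standard local (percent) shimmer computation, not the authors' analysis pipeline; band-pass filtering the waveform around a formant before extracting cycle amplitudes would give the formant-specific variants discussed above.

import numpy as np

def local_shimmer_percent(cycle_amplitudes):
    """Mean absolute difference between consecutive cycle peak amplitudes,
    divided by the mean amplitude, expressed in percent."""
    a = np.asarray(cycle_amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(a))) / np.mean(a)

# Example: 50 glottal cycles with roughly 3% random amplitude perturbation.
rng = np.random.default_rng(1)
amps = 1.0 + 0.03 * rng.standard_normal(50)
print(f"local shimmer = {local_shimmer_percent(amps):.2f} %")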

Photos


The 15th Perceptual Frontier Seminar: Preview Talks

Date and time: Friday, 11 July 2014, 16:00-18:00
Venue: Room 709 (Kazuo UEDA's office), 7th Floor, Building 3, Ohashi Campus, Kyushu University
How to get to Ohashi Campus: <http://www.design.kyushu-u.ac.jp/kyushu-u/english/access>
Location of Building 3 on the campus: <http://www.design.kyushu-u.ac.jp/kyushu-u/english/about/campusmap>

Program

1. Measurements of subjective length of filled duration determined by dynamic random dots
Erika TOMIMATSU*, Yoshitaka NAKAJIMA*, and Hiroyuki ITO*
*Kyushu University

The purpose of the investigation was to reveal whether a duration filled with dynamic random dots is perceived to be longer than a duration filled with static random dots. Participants adjusted an empty duration delimited by two random-dot flashes to make it subjectively equal to the duration filled with the dynamic or the static random dots. The results showed that the durations filled with dynamic random dots were perceived to be longer than the durations filled with static random dots, even though the dynamic stimulus pattern occupied the same spatial area throughout the presentation.

2. Event-related potential study on intra- and inter-modal duration discrimination: Effects of performance level
Emi HASUO*, Emilie GONTIER**, Takako MITSUDO*, Yoshitaka NAKAJIMA*, Shozo TOBIMATSU*, and Simon GRONDIN**
*Kyushu University, **Laval University

The present study examined brain activities related to the difficulty of duration discrimination with intra-modal and inter-modal intervals. Event-related potentials were recorded while participants discriminated the durations of time intervals marked either by two auditory signals (AA; intra-modal interval) or by an auditory and a visual signal (AV; inter-modal interval), at two levels of discrimination difficulty (easy and difficult). A negative component (contingent negative variation, CNV), which appeared between the two markers at fronto-central sites, was larger for AA than for AV, but was not influenced by discrimination difficulty. A principal component analysis seemed to separate brain activities related to modality and difficulty differences from those related to time perception in general, regardless of modality.

3. Computational model-based analysis of context effects on chord processing
Satoshi MORIMOTO*, Gerard B. REMIJN*, and Yoshitaka NAKAJIMA*
*Kyushu University

The purpose of this research was to clarify the computational process by which musical expectancy is constructed from a preceding chord context. In a behavioral experiment, participants listened to chord sequences and evaluated how well the last chord of each sequence perceptually belonged to the preceding sequence. The results suggested that participants updated their internal tonal assumptions from the observed sequences, and that these assumptions dominated the musical expectancies for the subsequent chords.

4. Auditory Grammar in music
Yoshitaka NAKAJIMA*, Takayuki SASAKI**, Kazuo UEDA*, and Gerard B. REMIJN*
*Kyushu University, **Miyagi Gakuin Women's University

Auditory events and auditory streams are often considered basic units of auditory organization, and we have been interested in how they are organized perceptually. We thus postulated that auditory streams are made up of four types of elements (onsets, offsets, fillings, and silences), which follow a simple grammar (Nakajima et al., 2014, Auditory Grammar, Tokyo: Corona Publishing, in Japanese). How this grammar works in relation to the perception of musical materials will be demonstrated.

Photos

