Title: An fMRI study of how a supra-modal decoding network examines interpretation
Other Titles: 經由fMRI檢測口譯測驗時腦部活動之研究
Authors: Wang, Yu-Hui (王有慧)
Issue Date: 2015-05-04 09:16:14 (UTC+8)
Abstract: This dissertation examined the brain activation of native Chinese-speaking students while they took English interpretation test items (listening and reading): functional magnetic resonance imaging (fMRI) was used to explore brain activation during the decoding of second-language auditory and visual signals, as well as the shared brain network for decoding both types of signals. The experimental materials were taken from the China Accreditation Test for Translators and Interpreters (CATTI), adjusted for consistent length and difficulty, and then presented on the screen or played through headphones in the fMRI scanner. The experiment lasted 2078 seconds in total, with 1014 seconds for the listening experiment and 1064 seconds for the reading experiment. All subjects were native speakers of Chinese and right-handed.
3. The bilateral activation observed in the listening task differed from the classical Wernicke-Geschwind Model; the activation of non-classical language areas and of the right hemisphere may be related to language fluency (Leonard et al., 2011).
4. Previous models (the Wernicke-Geschwind model, Friederici and Kotz, and Leonard, Torres et al.) show a reliance on the temporal lobe, whereas the subjects in this experiment relied on the occipital lobe when reading.
5. When decoding both the listening and the reading items, activation in the left hemisphere exceeded that in the right. The activation of the occipital and prefrontal lobes within the shared network indicates that the subjects relied on vision and cognitive control to comprehend language.
This dissertation investigated the involvement of a supra-modal network when native Chinese subjects processed English interpretation exams (listening and reading). The materials in the present research were derived from the CATTI, programmed in E-Prime, and then delivered through headphones (listening) or displayed on a screen (reading) inside an fMRI scanner. Each modality comprised two runs, for a combined total of 2078 seconds: 1014 seconds of listening and 1064 seconds of reading.
The collected data were analyzed with SPM8, and the results are reported as follows:
1. Regions of interest (ROIs) for the listening tasks included the superior frontal gyrus (SFG), middle frontal gyrus (MFG), superior temporal gyrus (STG), superior occipital gyrus (SOG), middle occipital gyrus (MOG), and the cerebellum in the left hemisphere (LH); and the SFG, MFG, STG, superior parietal gyrus (SPG), cerebellum, and caudate in the right hemisphere (RH).
2. ROIs for the reading tasks were found in the MFG, SOG, MOG, and hippocampus in the LH, and in the MOG in the RH.
3. The supra-modal network involved the MFG, SOG, and MOG in the left hemisphere (p < .001).
This study yielded several findings, listed below:
1. Listening tasks evoked stronger and broader activation than reading tasks, suggesting that, even at the same level of difficulty, auditory materials were more challenging than visual materials for these bilingual participants to process.
2. ROIs of the listening tasks displayed a bilateral distribution, while the activation associated with the reading tasks remained left-lateralized.
3. The bilateral activation associated with the listening task differed from the classical Wernicke-Geschwind Model. The activation of non-classical areas of the right hemisphere may also be associated with the difficulty of the auditory modality.
4. For these Chinese subjects, reading activation tended to be focused on the occipital lobe, whereas in previous models (Wernicke-Geschwind; Friederici and Kotz; Leonard, Torres et al.), the subjects (native or near-native speakers of the tested languages) relied heavily on the temporal lobe.
5. The supra-modal network suggested that the recruited subjects capitalized on the occipital and frontal lobes to achieve semantic processing regardless of the input modality.
6. The supra-modal network differed from the cognitive components of the Wernicke-Geschwind Model, which may suggest that these subjects adopted a different network from the subjects in previous studies.
Most existing models of language processing were developed with subjects who had acquired the second language to a native or near-native level, which casts doubt on their generalizability to learners of a new language as an L2 or a foreign language. The present research showed that the recruited late bilingual subjects adopted a network for processing language different from those in the existing literature. The findings further revealed that listening was comparatively more difficult than reading for these subjects, and that the occipital lobe remained crucial for the recruited Chinese bilinguals when processing highly difficult materials. Visualization should therefore be emphasized in future language instruction, especially for advanced materials and for learners with language backgrounds similar to those of the subjects in the present research.
Appears in Collections: [Department and Graduate Institute of English] Theses and Dissertations