An Analysis of the Content of English Reading Comprehension Tests and Examinees' Reading Ability: The Cases of the Scholastic Achievement English Test and the Department Required English Test
Date
2009
Authors
Journal Title
Journal ISSN
Volume Title
Publisher
Abstract
This study adopted a revised version of the linguist Nuttall's taxonomy of reading skills to investigate which reading skills were measured by the English reading comprehension items of the Scholastic Achievement English Test (SAET) and the Department Required English Test (DRET) administered over the six years from 2002 to 2007, and to examine how test takers (all examinees, high achievers, and low achievers) performed on the different item types.
Both qualitative and quantitative analyses were used. For the qualitative analysis, a content analysis was conducted in which each of the 167 reading comprehension items was categorized according to the revised version of Nuttall's taxonomy. For the quantitative analysis, the SPSS 13.0 statistical package was used to examine the frequency and distribution of the item types in the two tests. Two-way ANOVAs were run separately for the SAET and the DRET to test whether the passing rates of the different item types differed significantly and whether any such differences were consistent across the years; the discrimination indexes were likewise analyzed to see how high and low achievers differed in answering each type of reading question each year.
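The thesis ran its statistics in SPSS 13.0; purely as an illustration of the analysis just described, the minimal Python sketch below counts item types and runs an item-type-by-year two-way ANOVA on passing rates. The skill labels, item counts, and passing rates are synthetic placeholders, not the study's data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)

# Synthetic item-level records: each row is one reading item with its skill
# category, exam year, and passing rate (proportion of examinees answering it
# correctly). The real study hand-coded 167 SAET/DRET items and used SPSS.
skills = ["word_inference", "cohesive_devices", "details", "functional_value",
          "text_organization", "presupposition", "inference", "main_idea"]
years = range(2002, 2008)
df = pd.DataFrame([
    {"skill": s, "year": y, "passing_rate": rng.uniform(0.30, 0.90)}
    for y in years for s in skills for _ in range(3)  # 3 items per cell (made up)
])

# Frequency distribution of item types (which skills are tested most often).
print(df["skill"].value_counts())

# Two-way ANOVA: item type x year effects on passing rate, mirroring the
# design described above (here on balanced synthetic data).
model = ols("passing_rate ~ C(skill) + C(year) + C(skill):C(year)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))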
The main findings of the study are as follows:
(1) In both tests, the items measured eight types of reading skills: "Word Inference from Context," "Recognizing Cohesive Devices," "Recognizing and Interpreting Details," "Recognizing Functional Value," "Recognizing Text Organization," "Recognizing Presuppositions Underlying the Text," "Recognizing Implications and Making Inferences," and "Recognizing and Understanding the Main Idea."
(2) Items on "Recognizing and Interpreting Details," a bottom-up (local) skill, were the most frequently tested in both the SAET and the DRET, suggesting that both tests favor bottom-up skills, whereas "Recognizing Text Organization" was the least frequently tested. The main differences between the SAET and the DRET lay in the frequency, place of occurrence, and distribution of the item types: two item types appeared every year in the SAET, namely the local items on "Word Inference from Context" and "Recognizing and Interpreting Details," whereas only "Recognizing and Interpreting Details" appeared every year in the DRET.
(3) The two-way ANOVAs showed no significant effect of item type on the examinees' average passing rates in either test; in other words, the ranking of the passing rates of the different item types was not consistent across the six years.
(4) In both tests, items on lower-level (local) skills best discriminated between high and low achievers. In the SAET, the discrimination indexes of all item types reached the minimum acceptable standard, whereas in the DRET two item types fell below it: "Recognizing Implications and Making Inferences" and "Recognizing Presuppositions Underlying the Text." This suggests that these two item types were probably too difficult for most examinees and therefore failed to distinguish high from low achievers appropriately (a sketch of the discrimination-index calculation follows this abstract).
Based on these results, the study concludes with some pedagogical implications for reading instruction and testing in senior high schools.
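To make the discrimination analysis in finding (4) concrete, the minimal sketch below computes a discrimination index as the high group's passing rate minus the low group's. The response matrix is synthetic, and the top/bottom 27% grouping and the 0.30 cut-off are common rules of thumb (Kelley's split; Ebel's guideline) used here as assumptions; the thesis's own grouping rule and criterion are not stated in this abstract.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic 0/1 response matrix: 500 examinees x 10 items (1 = answered correctly).
n_examinees, n_items = 500, 10
difficulty = rng.uniform(0.30, 0.90, size=n_items)  # per-item passing rates (made up)
responses = pd.DataFrame(
    rng.binomial(1, difficulty, size=(n_examinees, n_items)),
    columns=[f"item_{i + 1}" for i in range(n_items)],
)

# Split examinees into high and low groups by total score (top and bottom 27%
# here, a common convention; not necessarily the grouping used in the thesis).
total = responses.sum(axis=1)
high = responses[total >= total.quantile(0.73)]
low = responses[total <= total.quantile(0.27)]

# Discrimination index D = passing rate in the high group minus that in the low group.
discrimination = high.mean() - low.mean()
print(discrimination.round(2))

# Flag items below a commonly cited minimum, e.g. Ebel's D >= 0.30 rule of thumb
# (an illustrative assumption, not the study's own criterion).
print(discrimination[discrimination < 0.30])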
Description
Keywords
reading skills, reading comprehension tests, item analysis, Scholastic Achievement English Test (SAET), Department Required English Test (DRET)