
Journal of Applied Sciences

Year: 2014 | Volume: 14 | Issue: 8 | Page No.: 828-832
DOI: 10.3923/jas.2014.828.832
Test-unwiseness Strategies: What Are They?
Abdullah Al Fraidan

Abstract: Previous literature has described how students use certain tricks (test-wiseness strategies) to maximize their scores without possessing the required knowledge of the construct being tested. This study introduces a new category of test-taking strategies, namely test-unwiseness strategies, whereby students lose deserved marks by using them. Fifteen male and 15 female English Language major university students took three vocabulary tests (a classical multiple-choice test, a cloze test and a gap-filling test with a pool of alternatives). Think-aloud protocols and retrospective interviews were used and qualitatively analysed. The results showed that students used five unwise strategies: changing a correct answer into a wrong one, selecting unknown answers without attempting to process the question for meaning, not reading the test instructions carefully, not reading the question in full or not reading after the gap and bad time management. The analysis revealed some reasons behind the use of these strategies and suggested ways to overcome them, besides proposing a newly adapted model of test-taking strategies.


How to cite this article
Al Fraidan, A., 2014. Test-unwiseness strategies: What are they? Journal of Applied Sciences, 14: 828-832.

Keywords: test-unwiseness, vocabulary tests, test validity, test-taking strategies

INTRODUCTION

Test-taking strategies have rapidly grown into an independent field of numerous studies concerned with assessing the validity of tests, and construct validity in particular (Cohen, 2011). English proficiency is a highly desired trait in many non-English-speaking countries and performance on language tests can often determine occupational opportunities and, prior to that, educational opportunities that could truly determine the course of an individual’s life (Cheng, 2008; Pour-Mohammadi and Abidin, 2012a). Language test validity has thus become a topic of intense scrutiny in research and in practice, as determining the ability of these tests to truly measure language proficiency is a question not only of extreme practical importance, given the employment demands of the modern world, but also of extreme ethical importance, given the tests’ impact on people’s lives. Test-taking strategies present some barriers to language test validity and thus must also be examined to derive truly valid and meaningful results from such testing.

Defining test-taking strategies can be more difficult than it might initially seem, given the number of parameters involved in these strategies. However, Cohen (2006) notes that a consensus among researchers in the field on what a strategy is has started to grow. Different theoretical constructs have been applied to the identification and definition of test-taking strategies by different researchers and from different perspectives. While these different models are not necessarily mutually exclusive, they do present radically different means of assessing and analyzing test-taking strategies (Cohen, 2006; Amer, 2007; Pour-Mohammadi and Abidin, 2012b).

These different approaches can make the implications of test-taking strategies for the validity of language tests quite varied and difficult to measure. The pressures to achieve, as noted, are quite high and instructors can also contribute to students’ knowledge and use of test-taking strategies, which affects not only the rate of strategy use but also its effectiveness and the degree to which it can tamper with language test validity (Amer, 2007; Cheng, 2008; Lee, 2011). Some general test-taking strategies, such as skipping over more difficult answers and completing easier answers first, or taking the time to review answers to ensure they are correct, can actually be seen in some ways as increasing test validity, in that they lead to more accurate assessments of the actual knowledge held by the test taker (Amer, 2007; Pour-Mohammadi and Abidin, 2012a). The use of such facilitative strategies can be seen as “legitimate” because they are either part of the construct being assessed or do no harm to it. Other types of test-taking strategies, however, undermine test validity and ultimately test the student’s ability to strategize and manipulate the design and circumstances of the test, rather than comprehensively and accurately measuring language proficiency (Cohen, 2006; Lee, 2011; Pour-Mohammadi and Abidin, 2012b). These unfavourable strategies are called test-wiseness strategies: students maximize their scores not through their knowledge of the construct or subject matter being assessed but rather through, for example, exploiting biases in the test. Thus the scores can be deceptive to testers, as they do not show the real knowledge, ability or proficiency of students.

The present study introduces a third category, called “test-unwiseness” strategies, and examines their place in and impact on language tests. Unlike test-wiseness, students do not maximize their scores; rather, they lose marks by applying these strategies, for example by consuming the time allocated for a reading comprehension test on L1 translation of the text, leaving no time to answer the questions (Cohen, 2006).

The motivation behind this study was the researcher’s wish to see what students actually do while taking language tests and the reasons why some students lose marks. The researcher had spotted signs of students using strategies that hinder them from getting the right answer.

TEST-UNWISENESS STRATEGIES: WHAT ARE THEY?

In a simple definition, they are strategies which are applied to language tests and cause students not to manifest their actual language ability, competence or knowledge of the construct being assessed. Therefore, they are a real hindrance to language test validity.

Figure 1 is a model presented by Al Fraidan (2011), adapted from the Wen and Johnson (1997) view of the PPP model.

The original PPP model is a one-way model, meaning that the presage, seen as the first stage, affects the second stage, the process, and the process in turn affects the product. However, Al Fraidan (2011) proposed that there might be a two-way interaction between the process and the product, meaning that reaching a product might feed back into the process, causing the product to be modified, confirmed or changed. When students propose an answer, the answer itself can prompt them to use strategies like checking, reviewing, or changing the answer if it does not fit. It is during this two-way interaction that test-unwiseness can arise. One obvious example of a test-unwiseness strategy is changing a right answer to a wrong one for the wrong reason (Al Fraidan, 2011).

Al-Hamly and Coombe (2005) investigated whether the practice of answer-changing on multiple-choice questions is beneficial to Gulf Arab students’ performance. They found that 44% of answer changes were from wrong to right; second on the list were wrong-to-wrong changes, at 37%; the lowest percentage was the right-to-wrong category, at 19%. They also found that the lower the score, the more wrong-to-wrong changes were made, which could be due to students guessing or not taking the test seriously. Although students are often influenced by traditional perceptions like “go with your first response”, the findings suggested encouraging students to change answers judiciously after scrutinizing their original answers for more plausible alternatives.
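The bookkeeping behind such answer-change figures is straightforward: each change is classified against the answer key as wrong-to-right, wrong-to-wrong or right-to-wrong, and the counts are converted to percentages. A minimal sketch, using entirely hypothetical data rather than Al-Hamly and Coombe’s:

```python
from collections import Counter

def classify_changes(changes, key):
    """Label each change as R-W, W-R or W-W against the answer key."""
    counts = Counter()
    for item, (original, final) in changes.items():
        before = "R" if original == key[item] else "W"
        after = "R" if final == key[item] else "W"
        counts[f"{before}-{after}"] += 1
    return counts

def as_percentages(counts):
    """Convert category counts to percentages of all changes."""
    total = sum(counts.values())
    return {cat: 100 * n / total for cat, n in counts.items()}

# Hypothetical data: item -> (original answer, changed answer)
key = {1: "a", 2: "b", 3: "c", 4: "d"}
changes = {1: ("b", "a"),   # wrong -> right
           2: ("c", "d"),   # wrong -> wrong
           3: ("c", "a"),   # right -> wrong
           4: ("a", "d")}   # wrong -> right

print(as_percentages(classify_changes(changes, key)))
# -> {'W-R': 50.0, 'W-W': 25.0, 'R-W': 25.0}
```

The right-to-wrong category is the one the present study treats as a test-unwiseness strategy.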

Fig. 1: Al Fraidan’s (2011) proposed underlying model of TTS

STUDY MODEL

The present study formulated a newly adapted model of how students answer some test items. This model shows the two-way interaction in the PPP model and the existence of test-unwiseness strategies.

The present model shows direct attempts to answer a test item (a1, a2) leading either to a correct or an incorrect answer. This happens when students automatically process the answer without using observable strategies. Lines (b1, b2) show that students use some strategies before they attempt their answers, while lines (c1, c2) show the two-way interaction between the process and the product. This means that students can go back and use some strategies after assigning an answer to a test item, which can lead to confirming an existing answer (be it right or wrong) or changing it to a right or a wrong answer. The latter can be seen as the place where test-unwiseness can occur.

METHODS

Fifteen male and 15 female English major students from the Department of English Language at King Faisal University in Saudi Arabia received intensive training in think-aloud protocols before the actual data were collected. Participants were given three vocabulary question formats in one test (a classical multiple-choice test, a cloze test and a sentence-based test in which participants chose answers for all items from a pool of alternatives); each format consisted of 16 items. The tests were teacher-made, as the researcher sought to simulate real-life situations and elicit the “natural” strategies that students usually employ, avoiding artificiality of strategy use. The three formats were selected because they have been found to be the most frequently used vocabulary tests in Saudi Arabia (Al Fraidan, 2011). The test lasted one complete hour. Retrospective interviews followed the think-aloud sessions for verification and to elicit further information.

RESULTS AND DISCUSSION

The protocols were transcribed, translated, transliterated and then painstakingly analysed. Two coders segmented and coded all the strategies. The agreement between the two coders was fairly high (90%); Green (1995) recommends an agreement of 85% as a minimum. The analysis revealed the following list of test-unwiseness strategies:

Changing a correct answer into a wrong one
Selecting unknown answers without attempting to process the question for meaning
Not reading the test instructions carefully
Not reading the question in full or not reading after the gap
Bad time management
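The inter-coder agreement figure reported above is simple percent agreement: the share of protocol segments to which both coders assigned the same code. A minimal sketch, with hypothetical segment codes (the labels below are illustrative, not the study’s actual coding scheme):

```python
def percent_agreement(coder_a, coder_b):
    """Simple percent agreement: share of segments given the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Coders must label the same segments")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

# Hypothetical codes assigned by two coders to ten protocol segments
coder_a = ["change", "guess", "skip", "guess", "change",
           "skip", "guess", "change", "skip", "guess"]
coder_b = ["change", "guess", "skip", "guess", "change",
           "skip", "skip", "change", "skip", "guess"]

print(percent_agreement(coder_a, coder_b))  # -> 90.0
```

Percent agreement does not correct for chance agreement; chance-corrected measures such as Cohen’s kappa are a common stricter alternative.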

Changing a correct answer into a wrong one: Some participants reported that they knew the correct answer, wrote it on the test paper and then changed it to a wrong one. They reported doing this because they did not fully know the meaning of the alternative. However, from his analysis of the think-aloud protocols, the researcher spotted signs of partial knowledge of some of the words the participants reported as unknown. For example, one participant verbalised “the word sack means something bad like an end, oh I do not know that word”. This led the student to change the answer from correct to wrong, although he showed some sign of knowing part of the word’s meaning. Since the kind of achievement tests used here, and in the researcher’s context, allow no margin for partial knowledge, students’ scores cannot show the real knowledge these students possess and can be misleading when interpreted, especially when used for high-stakes decisions like admission or level placement.

Another reason spotted by the researcher for changing an answer was a flaw in the test which distracted or confused candidates away from the correct answer. In one test item, the stem was missing the article “an” right before the gap. Some students circled around the correct answer but eventually chose a wrong one.

Selecting unknown answers without attempting to process the question for meaning: This amounts to blind guessing and was mostly due to carelessness and laziness. Students did not want to spend time thinking about the item, either because they felt unmotivated to answer or because they had some test apprehension, as they had not studied well for the test and consequently felt unsure of themselves. Another reason reported by the students was the difficulty of the item.

In addition, poor students might avoid choosing an answer because it contains an unknown word, even when they know the other choices are probably wrong. Addamegh (2003) showed that Saudi students in a four-alternative vocabulary MCQ test tended to choose the words that looked odd. They were sometimes successful in doing so, though.

Not reading the test instructions carefully: One of the most common causes of poor performance in test-taking is failure to follow directions, no matter how clear and good they are (Rawl, 1984). Students sometimes do not read questions thoroughly and end up answering a question different from the one asked. For example, in reading tests, many students are much more likely to turn to their own memories or experiences than to the text for their answers.

This mostly happens when students rush into answering the items without reading the test instructions. An example of this is when some students answered the cloze test by inserting words from the text itself, thinking this was how to do it. Another contributing factor is unclear test instructions. The behaviour arises because students rely on their intuition and their familiarity with the test format, judging by the format alone without reading the instructions.

Not reading the question in full or reading after the gap: Related to the above and due to hastiness (even though testing time was more than enough), students attempted to answer the test items quickly without reading the question in full or reading what comes after the gap. An example of this is when students answered with the word “bridge” for the following item without reading the rest of the sentence:

In order to cross the river, we had to build a …………. bridge.

As cited in Cohen (1998), a study by Homburg and Spaan (1981) found that poor performers in cloze tests were those who, among other things, did not use “forward reading” (i.e., utilizing the context following the blank) to find clues for supplying the missing word. They were doing surface matching between information in the text, in the item stem and in the multiple-choice alternatives, without processing any of these stimuli for meaning. Respondents were performing tasks by analogy with previous tasks, without noticing what may be slight changes in the response procedures.

Bad time management: Poor use of time results in many students failing to complete the test, or rushing through it and jumping at the first answer for each question. Students should work as rapidly as possible without sacrificing accuracy.

The painstaking analysis of the data showed that some students used bad time management skills in the following forms:

Spending a lot of time on difficult items when they could skip them and answer items they knew. The researcher discovered that some students knew the answers to some items but could not reach them because the test time ended while they were struggling with the first few difficult items
Likewise, some students spent a lot of time translating the cloze test into Arabic, which wasted their time

From the above results, one surely needs to stress the importance of assessment literacy among teachers and of raising the strategic competence of students.

As for teachers, it is obvious that they need to write their tests carefully, as any flaw can affect the validity of scores. They need to be aware of the importance of having test specifications, and to pay attention to the clarity of their test instructions and of the test as a whole. They also need to follow recommendations for writing and designing tests, one of which is to start the test with the easy items: the above results show that starting with difficult items caused some students to lose marks.

As for students, educators should focus on teaching them not to use these test-unwiseness strategies and show them, through real and authentic examples, how students lost marks for such behaviours.

The difference in approach between test-wise and test-naive students could be due to differences in cognitive monitoring: test-wise students experience metacognitive success, while test-naive students experience cognitive failure. Highly successful test-takers are more likely to use metacognitive strategies than moderately successful ones, who in turn use these strategies more than unsuccessful test-takers (Phakiti, 2003).

The proposed model (Fig. 1) shows that students can sometimes revise their answers through a two-way interaction, and this study showed that students changed their answers from correct to wrong. Also, in the presage stage, institutional factors (represented here by the teacher-made tests) and students’ lexical proficiency affected students’ strategies and choice of answers.

Research on this topic should continue by trying to discover more test-unwiseness strategies in such tests, in other test formats and across different language skills. This would help to empirically find solutions to some of the test-taking problems students face and give more insight into how test-unwiseness strategies can affect test validity.

CONCLUSION

Simply put, as test-taking strategies grow more sophisticated, the validity of language tests becomes increasingly threatened and new test designs are met with new strategizing. A lack of cohesion in the approaches used to study test-taking strategies and to practically apply knowledge obtained through academic research is also cited as problematic (Cohen, 2006). Without an appropriate way to measure and assess test-taking strategies and their impact on test validity, more valid language testing methodologies cannot be developed.

It has been shown here that students exhibited a distinct set of strategies, namely test-unwiseness strategies. These strategies led students to lose marks that they could otherwise have secured. More awareness of this phenomenon by teachers and students is needed. Moreover, more research on this topic should follow to give more insight into the relationship between this new type of strategy and test validity.

REFERENCES

  • Addamegh, K.A., 2003. EFL multiple-choice vocabulary test-taking strategies and construct validity. Ph.D. Thesis, Department of Language and Linguistics, The University of Essex, Colchester, UK.

  • Al Fraidan, A., 2011. Test-Taking Strategies of EFL Learners on Two Vocabulary Tests. LAP Lambert Academic Publishing, USA, ISBN: 9783845470306, Pages: 472.

  • Al-Hamly, M. and C. Coombe, 2005. To change or not to change: Investigating the value of MCQ answer changing for Gulf Arab students. Language Testing, 22: 509-531.

  • Amer, A.A., 2007. EFL/ESL test-wiseness and test-taking strategies. Ph.D. Thesis, Sultan Qaboos University.

  • Cheng, L., 2008. The key to success: English language testing in China. Language Testing, 25: 15-37.

  • Cohen, A.D., 1998. Strategies and Processes in Test-Taking and SLA. In: Interfaces Between Second Language Acquisition and Language Testing Research, Bachman, L.F. and A.D. Cohen (Eds.). Cambridge University Press, USA, ISBN: 9780521649636, pp: 90-111.

  • Cohen, A.D., 2006. The coming of age of research on test-taking strategies. Language Assessment Quarterly, 3: 307-331.

  • Cohen, A.D., 2011. L2 Learner Strategies. In: Handbook of Research in Second Language Teaching and Learning: Vol. II, Hinkel, E. (Ed.). Abingdon Press, England, pp: 681-698.

  • Green, A., 1995. Verbal protocol analysis. Psychologist, 8: 126-129.

  • Homburg, T.J. and M.C. Spaan, 1981. ESL reading proficiency assessment: Testing strategies. Proceedings of the 15th Annual Conference of Teachers of English to Speakers of Other Languages, March 3-8, 1981, Detroit, MI, USA, pp: 25-33.

  • Lee, J., 2011. Second language reading topic familiarity and test score: Test-taking strategies for multiple-choice comprehension questions. Ph.D. Thesis, University of Iowa.

  • Phakiti, A., 2003. A closer look at the relationship of cognitive and metacognitive strategy use to EFL reading achievement test performance. Language Testing, 20: 26-56.

  • Pour-Mohammadi, M. and M.J.Z. Abidin, 2012a. Does instructing test-taking strategies significantly enhance reading comprehension test performance? The case of Iranian EFL learners. Int. J. Linguist., 4: 293-311.

  • Pour-Mohammadi, M. and M.J.Z. Abidin, 2012b. Test-taking strategies, schema theory and reading comprehension test performance. Int. J. Humanities Social Sci., 1: 237-243.

  • Rawl, E.H., 1984. Test-taking strategies can be the key to improving test scores. NASSP Bull., 68: 108-112.

  • Wen, Q. and R.K. Johnson, 1997. L2 learner variables and English achievement: A study of tertiary-level English majors in China. Applied Linguistics, 18: 27-48.