 

Types of Tests Used in English Language Teaching: Bachelor Paper


According to Bynom (Forum, 2001), validity deals with what is tested and with the degree to which a test measures what it is supposed to measure (Longman Dictionary, LTAL). For example, if we test the students' writing skills by giving them a composition test on Ways of Cooking, we cannot call such a test valid, for it can be argued that it tests not the ability to write but knowledge of cooking as a skill. It is certainly very difficult to design a proper test with good validity; therefore, the author of the paper believes that it is essential for the teacher to know and understand what validity really is.

According to Weir (1990:22), there are five types of validity:

·        Construct validity;

·        Content validity;

·        Face validity;

·        Washback validity;

·        Criterion-related validity.


Weir (ibid.) states that construct validity is a theoretical concept that involves the other types of validity. Further, quoting Cronbach (1971), Weir writes that to construct or plan a test one should research the testee's behaviour and mental organisation. It is the ground on which the test is based; it is the starting point for constructing test tasks. In addition, Weir presents Kelly's idea (1978) that test design requires some theory, even if the exposure to it is indirect. Moreover, if we are able to define the theoretical construct at the beginning of the test design, we will be able to use it when dealing with the results of the test. The author of the paper assumes that a test appropriately constructed at the beginning will not cause any difficulties in its administration and scoring later.

Another type of validity is content validity. Weir (ibid.) suggests that content validity and construct validity are closely bound and sometimes even overlap. Speaking about content validity, we should emphasise that it is an indispensable element of a good test. What is meant is that the duration of classes or test time is usually rather limited, and if we teach a rather broad topic such as "computers", we cannot design a test that would cover all the aspects of this topic. Therefore, to check the students' knowledge we have to choose from what was taught, whether it was specific vocabulary or various texts connected with the topic, for it is impossible to test all the material. The teacher should not pick tricky items that either were mentioned only once or were not discussed in the classroom at all, even if they belong to the topic. S/he should not forget that the test is not a punishment or an opportunity for the teacher to show the students that they are less clever. Hence, we can state that content validity is closely connected with the definite items that were taught and are supposed to be tested.

Face validity, according to Weir (ibid.), concerns neither theory nor sampling design. It is how the examinees and the administrative staff see the test: whether it appears construct and content valid or not. This will certainly include debates and discussions about a test; it will involve the teachers' cooperation and the exchange of their ideas and experience.

Another type of validity to be discussed is washback validity, or backwash. According to Hughes (1989:1), backwash is the effect of testing on the teaching and learning process. It can be both negative and positive. Hughes believes that if the test is considered a significant element, then preparation for it will occupy most of the time and other teaching and learning activities will be neglected. As far as the author of the paper is concerned, this is already a habitual situation in the schools of our country, for our teachers are faced with centralised exams and all they have to do is prepare their students for them. Thus, the teacher starts concentrating purely on the material that could be encountered in the exam papers, alluding to examples taken from past exams. Therefore, numerous interesting activities are left behind; the teachers are concerned only with the result and forget about the different techniques that could be introduced and later used by their students to make dealing with the exam tasks easier, such as guessing from the context, applying schemata, etc.

The problem arises when the objectives of the course taught during the study year differ from the objectives of the test. As a result we will have negative backwash: e.g. the students were taught to write a review of a film, but during the test they are asked to write a letter of complaint, which the teacher, unfortunately, has neither planned nor taught.

Often negative backwash may be caused by inappropriate test design. Further in his book Hughes speaks about multiple-choice activities that are designed to check the students' writing skills. The author of the paper is very puzzled by that, for it is hard to imagine how essay writing could be tested with the help of multiple choice. When testing an essay, the teacher is first of all interested in the students' ability to express their ideas in writing: how it has been done, what language has been used, whether the ideas are supported and discussed, etc. For this purpose the multiple-choice technique is highly inappropriate.

Notwithstanding, according to Hughes, apart from the negative side of backwash there is a positive side as well. It could be the creation of an entirely new course designed especially to help the students pass their final exams. A test given in the form of final exams obliges the teacher to reorganise the course and choose appropriate books and activities to achieve the set goal: passing the exam. Further, he emphasises the importance of partnership between teaching and testing. Teaching should meet the needs of testing; in other words, teaching should correspond to the demands of the test. However, this is rather complicated work, for to the author's knowledge the teachers in our schools are not supplied with specially designed materials that could assist them in preparing the students for the exams. The teachers are given only vague instructions and are left to act on their own.

The last type to be discussed is criterion-related validity. Weir (1990:22) assumes that it concerns the link between test scores and two different performances of the same candidates: either an older established test or a future criterion performance. The author of the paper considers that this type of validity is closely connected with the criteria and evaluation the teacher uses to assess the test. It means that the teacher has to work out a definite evaluation system and, moreover, should explain what s/he finds important and worth evaluating and why. Usually teachers design their own system; often these are points that the students can obtain by fulfilling a certain task. Later the points are gathered and counted for the mark to be awarded. Furthermore, the teacher can keep a special table relating points to marks. To our knowledge, the language teachers decide on the criteria together during a special meeting devoted to that topic and later keep to them for the whole study year. Moreover, the teachers are supposed to acquaint their students with the evaluation system so that the students are aware of what they are expected to do.
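The kind of points-to-marks table described above can be sketched in a few lines of code. The percentage bands and the 10-point mark scale below are purely hypothetical, invented for illustration; a real marking team would agree on its own thresholds at the meeting mentioned above:

```python
# A hypothetical points-to-marks conversion table: each entry maps a
# minimum percentage of the maximum score to a mark on a 10-point scale.
# The thresholds are illustrative only, not taken from any real system.
CONVERSION = [(90, 10), (80, 9), (70, 8), (60, 7), (50, 6), (40, 5), (0, 4)]

def points_to_mark(points, max_points):
    """Convert raw test points into a mark using the agreed table."""
    percentage = 100 * points / max_points
    for threshold, mark in CONVERSION:
        if percentage >= threshold:
            return mark
    return CONVERSION[-1][1]

# Example: 17 points out of 20 is 85%, which falls into the 80-89 band.
print(points_to_mark(17, 20))  # 9
```

Publishing such a table in advance serves exactly the purpose the paragraph describes: the students know beforehand how their points will translate into a mark.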


2.3  Reliability

 

According to Bynom (Forum, 2001), reliability means that the test's results will be similar and will not change if one and the same test is given on different days. The author of the paper agrees with Bynom and considers reliability to be one of the key elements of a good test in general. For, as has already been discussed, the essence of reliability is that the students' scores for one and the same test, even given at different times and with a rather extended interval, will be approximately the same. This will not only show that the test is well organised, but will also indicate that the students have acquired the new material well.

A reliable test, according to Bynom, will contain well-formulated tasks rather than vague questions; the student will know what exactly should be done. The test will always present ready examples at the beginning of each task to clarify what should be done, so that the students are not frustrated and know exactly what they are asked to perform. However, judging from personal experience, the author of the paper has to admit that even such hints may confuse the students; they may fail to understand the requirements and, consequently, fail to complete the task correctly. This could be explained by the fact that the students are very often inattentive, lack patience and try to finish the test quickly without bothering to double-check it.

Further, according to Heaton (1990:13), who states that a test could be unreliable if two different markers mark it, we can add that this factor should be taken into account as well. For example, one member of the marking team could be rather lenient and have particular demands and requirements, while the other could turn out to be too strict and pay attention to every detail. Thus we come to another important factor influencing reliability: the markers' comparison of examinees' answers. Moreover, we have to admit a rather sad but not exceptional fact: the marker's personal attitude towards the testee can affect his/her evaluation. Nor can we exclude various home or health problems the marker may be encountering at that moment.
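The inter-marker problem described here can also be checked mechanically: if two markers' scores for the same scripts diverge beyond an agreed tolerance, those scripts can be flagged for a third opinion. The marks below are invented for illustration:

```python
# Hypothetical marks awarded by two markers to the same five scripts.
marker_a = [8, 7, 9, 5, 6]
marker_b = [7, 7, 6, 5, 8]

def disagreements(a, b, tolerance=1):
    """Return indices of scripts where the two markers differ by more
    than the agreed tolerance, signalling a need for re-marking."""
    return [i for i, (x, y) in enumerate(zip(a, b)) if abs(x - y) > tolerance]

print(disagreements(marker_a, marker_b))  # [2, 4]
```

Here scripts 2 and 4 show a gap of more than one mark between the lenient and the strict marker, so they would be sent for re-marking.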

To summarise, we can say that possessing validity and reliability is not enough for a good test. The test should also be practical, or in other words efficient. It should be easily understood by the examinee, easily scored and administered, and, certainly, rather cheap. It should not last for an eternity, for both examiner and examinee would become tired during a five-hour non-stop testing process. Moreover, when testing the students the teachers should be aware of the fact that, together with checking their knowledge, the test can influence the students negatively. Therefore, the teachers ought to design a test that encourages the students rather than makes them doubt their own abilities. The test should be a friend, not an enemy. Thus, the issue of validity and reliability is very essential in creating a good test. The test should measure what it is supposed to measure, not knowledge beyond the students' abilities. Moreover, the test will be a true indicator of whether the learning process and the teacher's work are effective.


Chapter 3

Types of tests

 

Different scholars (Alderson, 1996; Heaton, 1990; Underhill, 1991) ask a similar question in their research: why test, do the teachers really need tests, and for what purpose? Further, they all agree that a test is not the teacher's attempt to catch the students unprepared with material they are not acquainted with; nor is it merely a motivating factor for the students to study. In fact, the test is a request for information and an opportunity to learn what the teachers did not know about their students before. We can add here that the test is important for the students too, though they may be unaware of that. The test is supposed to display not only the students' weak points but also their strong sides. It can act as an indicator of the progress the student is gradually making in learning the language. Moreover, we can cite Hughes (1989:5), who emphasises that we can check the progress and the general or specific knowledge of the students, etc. This claim leads us directly to the statement that for each of these purposes there is a special type of testing. According to some scholars (Thompson, 2001; Hughes, 1989; Alderson, 1996; Heaton, 1990; Underhill, 1991), there are four traditional categories or types of tests: proficiency tests, achievement tests, diagnostic tests, and placement tests. The author of the paper, having once been a teacher, can claim that she is acquainted with three of them and has frequently used them in her teaching practice.

In the following sub-chapters we intend to discuss different types of tests and, where possible, to apply our own experience of using them.


3.1. Diagnostic tests

 

It is wise to start our discussion with this type of testing, for it is typically the first step each teacher, even a non-language teacher, takes at the beginning of a new school year. In the establishment where the author of the paper worked, one of the main rules was to start a new study year by giving the students a diagnostic test. Every year the administration of the school drew up a special plan in which every teacher was supposed to write when and how they were going to test their students. Moreover, the teachers were supposed to analyse the diagnostic tests, complete special documents and provide diagrams with the results of each class, or of each group if a class was divided. Then, at the end of the study year, the teachers were required to compare these results with those of the final, achievement test (see Appendix 1). The author of the paper has used this type of test several times, but had never gone into the details of how it is constructed, why and for what purpose. Therefore, the facts listed below were of great value to her.

According to the Longman Dictionary of LTAL (106), a diagnostic test is a test that is meant to display what the student knows and what s/he does not know. The dictionary gives the example of testing the learners' pronunciation of English sounds. Moreover, the test can check the students' knowledge before they start a particular course. Hughes (1989:6) adds that diagnostic tests are supposed to spot the students' weak and strong points. Heaton (1990:13) compares this type of test with the diagnosis of a patient, and the teacher with a doctor who states the diagnosis. Underhill (1991:14) adds that a diagnostic test presents the student with a variety of language elements, which will help the teacher determine what the student knows or does not know. We believe that the teacher will intentionally include material that either is supposed to have been taught according to the syllabus or could be a starting point for a course, without knowledge of which further work is not possible. Thus, we fully agree with Heaton's comparison of the test with a patient's diagnosis. The diagnostic test shows the teacher the current state of the students' knowledge. This is very essential, especially when the students return from their summer holidays (which produces a rather substantial gap in their knowledge) or when the students start a new course and the teacher is completely unfamiliar with the level of the group. Hence, the teacher has to think carefully about the items s/he intends to teach. This consideration reflects Heaton's proposal (ibid.) that the teachers should be systematic in designing tasks that are supposed to illustrate the students' abilities, and that they should know what exactly they are testing. Moreover, Underhill (ibid.) points out that, apart from the above, the most essential feature of the diagnostic test is that the students should not feel depressed when the test is completed.
Therefore, teachers very often do not give any marks for the diagnostic test and sometimes do not even show the test to the learners unless the students ask for it to be returned. Nevertheless, in our own experience, the learners, especially the young ones, are eager to know their results and even demand marks for their work. Notwithstanding, it is up to the teacher whether to inform his/her students of the results or not; in any case, the test provides valuable information mostly for the teacher and his/her plans for designing a syllabus.

Returning to Hughes (ibid.), we can emphasise his belief that this type of test is very useful for checking individual items. It means that the test can be used to check a definite item; it need not cover broader areas of the language. However, Hughes further notes that such a test is rather difficult to design and its size can even make it impractical. For instance, if the teacher wants to check the students' knowledge of the Present Simple, s/he will require a great number of examples for the students to choose from. Composing such a test will demand tiresome work from the teacher and may even confuse the learners.

At this point we can draw on our experience of giving a diagnostic test in Form 5. It was a class the teacher had worked with before, and she knew the students and their level rather well. However, new learners had joined the class, and the teacher had not the slightest idea of their abilities. It was obvious that the students were worried about how they would do in the test and what marks they would receive. The teacher assured them that the test would not be evaluated with marks; it was needed so that she could plan her future work. That was done to release the tension in the class and free the students of the stress that might have been crucial for the results. The students immediately felt at ease and set to work. Later, when analysing and summarising the results, the teacher realised that the students' knowledge was fairly good. Certainly, there were areas where the students required more practice; therefore, during the next class the students were offered remedial activities on the points where they had encountered difficulties. Incidentally, that was a case in which the students were particularly interested in their marks.

To conclude, in interpreting the results of diagnostic tests the teachers will not only be able to suggest why a student has done the exercises in one way and not another, but will also receive significant information about the group they are going to work with, which they can later use as a basis for forming the syllabus.


3.2 Placement tests

 

Another type of test we intend to discuss is the placement test. Consulting the Longman Dictionary of LTAL again (279-280), we can see that a placement test is a test that places the students at an appropriate level in a programme or a course. The term does not refer to the structure and construction of the test, but to the purpose for which it is used. According to Hughes (1989:7), this type of test is also used to decide which group or class the learner should join. This statement is fully supported by another scholar, Alderson (1996:216), who declares that this type of test is meant to show the teacher the student's level of language ability. It helps to place the student in exactly the group that corresponds to his/her true abilities.

Heaton (ibid.) holds that this type of testing should be general and should cover a broad range of language areas rather than just a specific one. Therefore, the placement test is typically presented in the form of dictations, interviews, grammar tests, etc.

Moreover, according to Heaton (ibid.), the placement test should deal exactly with the language skills relevant to those that will be taught during the particular course. If our course includes the development of writing skills required for politics, it is not appropriate to test writing required for medical purposes. Thus, Heaton (ibid.) holds that it is fairly important to analyse and study the syllabus beforehand, for the placement test is entirely tied to the future course programme. Furthermore, Hughes (ibid.) stresses that each institution will have its own placement tests meeting its needs; a test suitable for one institution will not suit the needs of another. Likewise, the matter of scoring is particularly significant in the case of placement tests, for the scores gathered serve as the basis for putting the students into the different groups appropriate to their level.

At this point we can attempt to compare a placement test and a diagnostic one. At first sight these two types of tests may look similar: both are given at the beginning of the study year and both are meant to establish the students' current level of knowledge. However, if we consider the facts described in sub-chapter 3.1, we will see how they differ. A diagnostic test is meant to give a picture of the students' general knowledge at the beginning of the study year so that the teacher can plan further work and design an appropriate syllabus for his/her students, whereas a placement test is designed and given in order to use the information about the students' knowledge for putting the students into groups according to their level of the language. Although both are used for the teacher's planning of the course, their functions differ. A colleague of mine, who works at a school, has informed me that they used a placement test at the beginning of the year and it proved relevant and efficient for her and her colleagues' future teaching. The students were divided according to their English language abilities: the students with better knowledge were put together, whereas the weaker students formed their own group. This does not mean discrimination between the students. The teachers explained to the students the reason for such actions and why it was necessary: they wanted to provide appropriate teaching for each student, taking his/her abilities into account. The teachers altered their syllabus to meet the needs of the students, and the result proved satisfying. The students with better knowledge progressed, as no one held them back, while the weaker students gradually improved their knowledge, for they received more attention than they would have in a mixed group.
