Assessment
Direct testing: testing a particular skill by getting the student to perform that skill, e.g. testing whether someone can write a discursive essay by asking them to write one. The argument is that this kind of test is more valid because it tests the outcome itself, not just the individual skills and knowledge that the test-taker needs to deploy.
Indirect testing: trying to test the abilities which underlie the skills we are interested in, e.g. testing whether someone can write a discursive essay by testing their ability to use contrastive markers, modality, hedging etc. Although this kind of test tells us less about whether the individual skills can be combined, it is easier to mark objectively.
Discrete-point testing: a test format with many items requiring short answers, each targeting a defined area. Placement tests are usually of this sort, with multiple-choice items focused on vocabulary, grammar, functional language etc. Tests of this kind can be marked very objectively and need no judgement on the part of the marker.
Integrative testing: combining many language elements to do the task. Public examinations contain a good deal of this sort of testing, with marks awarded for various elements: accuracy, range, communicative success etc. Although the task is integrative, the marking scheme is designed to make the marking non-judgemental by breaking the assessment down into discrete parts.
Subjective marking: the marks awarded depend on someone's opinion or judgement, e.g. marking an essay on the basis of how well you think it achieved the task. Subjective marking has the great disadvantage of requiring markers to be very carefully monitored and standardised to ensure that they all apply the same strictness of judgement consistently.
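A minimal sketch (in Python) of one way such monitoring could work, assuming every marker marks the same shared set of scripts: compare each marker's average award with the group average and flag outliers. The marker names, scores, and the 0.5-band tolerance below are all hypothetical.

```python
# Sketch of a standardisation check for subjective marking: flag markers
# whose average award on a shared script set drifts from the group mean.
# Marker names, scores, and the 0.5-band tolerance are hypothetical.

marks = {
    "marker_A": [6.0, 7.0, 5.5, 6.5],  # the same four scripts, marked by each
    "marker_B": [5.0, 6.0, 4.5, 5.5],
    "marker_C": [6.5, 7.0, 6.0, 6.5],
}

overall_mean = sum(sum(m) for m in marks.values()) / sum(len(m) for m in marks.values())

for marker, awarded in marks.items():
    drift = sum(awarded) / len(awarded) - overall_mean
    if abs(drift) > 0.5:  # tolerance chosen purely for illustration
        print(f"{marker}: drift {drift:+.2f} bands - consider re-standardising")
```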
Objective marking: marking where only one answer is possible, right or wrong, e.g. machine-marking a multiple-choice test completed by filling in a machine-readable mark sheet. This obviously makes the marking very reliable, but it is not always easy to break language knowledge and skills down into discrete, right-or-wrong elements.
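Because every item has exactly one right answer, objective marking amounts to a simple lookup against a key. A minimal Python sketch, with an invented answer key and candidate:

```python
# Sketch of objective marking: one correct answer per item, so scoring
# needs no judgement at all. The answer key and responses are invented.

ANSWER_KEY = {1: "b", 2: "d", 3: "a", 4: "c"}  # item number -> correct option

def mark_objectively(responses):
    """Count items where the candidate's answer matches the key exactly."""
    return sum(responses.get(item) == correct
               for item, correct in ANSWER_KEY.items())

candidate = {1: "b", 2: "a", 3: "a", 4: "c"}
print(mark_objectively(candidate))  # 3 - identical whoever (or whatever) marks it
```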
Analytic marking: the separate marking of the constituent parts that make up the overall performance, i.e. breaking a task down into parts and marking each bit separately (see integrative testing, above). This is very similar to integrative testing, but care has to be taken to ensure that the breakdown is really into equivalent and usefully targeted areas.
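The arithmetic behind analytic marking is just a weighted combination of separately awarded part-scores. A small sketch; the criteria, weights, and band scores are hypothetical, not a standard scheme:

```python
# Sketch of analytic marking: mark each constituent part separately,
# then combine. Criteria, weights, and band scores are hypothetical.

WEIGHTS = {"accuracy": 0.4, "range": 0.3, "task_achievement": 0.3}

def analytic_score(band_scores):
    """Combine per-criterion band scores (0-10) into one weighted total."""
    return sum(band_scores[criterion] * weight
               for criterion, weight in WEIGHTS.items())

essay = {"accuracy": 6.0, "range": 7.0, "task_achievement": 8.0}
print(round(analytic_score(essay), 2))  # 6.9 - each part is targeted and visible
```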
Holistic marking: a single overall mark is awarded for the performance as a whole, with the different activities it involves described on one overall scale, e.g. marking an essay on the basis of how well it achieves its aims (see subjective marking, above). The term holistic refers to seeing the whole picture, and such marking has the same drawbacks as subjective marking, requiring monitoring and standardisation of markers.
Criterion-referenced tests measure the result against a fixed standard or scale (e.g. grades from A to E, or a score out of 100). The object is to judge how well someone performed against a set of objective criteria, independently of how other candidates did. A good example is a driving test.
Norm-referencing is a way of measuring students against each other. For example, if 10% of a class are going to enter the next class up, a norm-referenced test will not judge how well they achieved a task but how well they did against the other students in the group. Some universities apply norm-referenced tests to select undergraduates.
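The difference is easiest to see on the same set of raw scores. A brief Python sketch, with invented names, scores and pass mark, using the 10% quota from the example above:

```python
# Sketch contrasting the two interpretations of the same raw scores.
# The names, scores, and pass mark are invented for illustration.

scores = {"Ana": 72, "Ben": 55, "Chao": 91, "Dee": 64, "Eve": 88,
          "Fay": 70, "Gus": 49, "Hal": 77, "Ida": 83, "Jon": 60}

# Criterion-referenced: pass everyone who meets a fixed standard,
# however the rest of the group performed (like a driving test).
PASS_MARK = 70
criterion_pass = [name for name, score in scores.items() if score >= PASS_MARK]

# Norm-referenced: promote the top 10% of the group, whatever their
# absolute scores happen to be.
quota = max(1, round(len(scores) * 0.10))
norm_pass = sorted(scores, key=scores.get, reverse=True)[:quota]

print(criterion_pass)  # ['Ana', 'Chao', 'Eve', 'Fay', 'Hal', 'Ida']
print(norm_pass)       # ['Chao'] - only the top 10% go up
```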
Direct Testing - students perform precisely the skills which the test is intended to assess
- Integrative approach: normally open-ended answers, using various components of the language system (underlying knowledge)
- Performance-based tests: speaking or writing assessed using descriptors (criteria)
Indirect Testing - tests the abilities which underlie the skills of interest (see indirect testing, above)
Summative Tests - 'Assessment of Learning'
- End-of-course tests
- Evaluation of student progress
Formative Assessment - 'Assessment for Learning'
- Helps students to review their progress
- Takes place regularly during a course
- Oral/written feedback, self- and peer-assessment, sharing of learning objectives and intentions with students
Proficiency Tests
Assess a candidate's language ability irrespective of any previous instruction, and are based on what test-takers will be able to do with 'real' language in the future (e.g. KET, PET, CPE).
Achievement Tests
- Related directly to the learning process on an individual language course.
Diagnostic Tests
Identify students' strengths and weaknesses to help design future course content.
Face Validity
Does the test appear to test what candidates expect to be tested on? (Impression)
Content Validity
Does the test cover what has been taught? It should measure the particular skills or behaviour it is intended to test.
Construct Validity
Does the test actually measure the underlying ability (construct) it claims to measure?
Practicality
How easy is the test to administer and mark?