Evaluation of Personalised/Adapted resources
AIMS
Evaluate whether a resource is as accessible as it is claimed to be.
Understand potential difficulties so that these challenges can be conveyed accurately to users.
Important to know the type of evaluation, what has been evaluated, and how it has been evaluated.
To avoid:
Unless one has a clear notion of the property that has to be tested, the testing process is rarely successful. And when the property depends on human cognitive processes, then also the testing method may reduce the effectiveness of testing. This is indeed the case for web accessibility. On the one hand, there are several (more or less practical) definitions of accessibility. Sometimes accessibility is defined in terms of effectiveness; now and then it is defined in terms of usability; but unfortunately there are too often claims that a web site is accessible simply because an automatic testing tool yielded no error.
(Brajnik, 2006, p. 150)
IMPORTANT POINTS
Accessibility of both the tools used to present the material and the material itself.
FACTORS TO CONSIDER
Format depends on learning objectives, availability of resources, time constraints, and expertise.
Assessment is difficult: there are various methods and stakeholders to consider.
Has to be as easy as possible for lecturers to use.
Can't ethically use real situations, as a student using one method may do better and therefore be said to have an advantage over the other students.
Creating material interesting enough for people to take part in the evaluation, while still representative of an actual course.
CONTENT PERSONALISATION AND ADAPTION
Resources can be made adaptable and thereby personalised (see the sketch below).
e.g. the EU4ALL and TILE projects.
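To make the idea concrete, a minimal sketch in Python of preference-based variant selection. The profile fields and resource metadata below are invented for illustration; they are not the actual EU4ALL or TILE schemas.

    # Minimal sketch: match resource variants to a user's declared needs.
    # Metadata fields here are hypothetical, not a real EU4ALL/TILE schema.
    RESOURCE_VARIANTS = [
        {"id": "lecture-video", "modality": "visual", "has_captions": False},
        {"id": "lecture-video-captioned", "modality": "visual", "has_captions": True},
        {"id": "lecture-transcript", "modality": "text", "has_captions": True},
    ]

    def select_variant(variants, profile):
        """Return the first variant compatible with the user's stated needs."""
        for v in variants:
            if profile.get("needs_captions") and not v["has_captions"]:
                continue
            if v["modality"] in profile.get("avoid_modalities", []):
                continue
            return v
        return None  # no suitable adaptation exists; flag for manual review

    print(select_variant(RESOURCE_VARIANTS, {"needs_captions": True}))
    # -> the captioned video variant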
Software inspection
A nuts-and-bolts examination of the code, in which errors can be found and corrected.
Automated Checking
e.g. the Imergo Web Compliance Manager checks that a resource conforms to well-known guidelines and gives an indication of whether something has been missed, but it does not take into account the needs and preferences of a particular user.
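For flavour, a minimal sketch of one such automated check, flagging images without alt text (one small part of WCAG 1.1.1), using Python's standard html.parser. This is a toy check, not how Imergo works, and it cannot tell whether existing alt text is actually meaningful to a given user.

    from html.parser import HTMLParser

    class MissingAltChecker(HTMLParser):
        """Toy automated check: flag <img> tags with no alt attribute."""
        def __init__(self):
            super().__init__()
            self.violations = []

        def handle_starttag(self, tag, attrs):
            if tag == "img" and "alt" not in dict(attrs):
                self.violations.append(self.getpos())  # (line, column)

    checker = MissingAltChecker()
    checker.feed('<p>Intro</p><img src="graph.png"><img src="logo.png" alt="Logo">')
    print(checker.violations)  # [(1, 12)]: the first image lacks alt text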
Heuristic Evaluation
From human-computer interaction; used to identify usability issues with interactive devices.
Good to assess the interfaces different stakeholders use to interact with the resources that they then personalise.
Relies on experts, who may miss problems or report problems that do not exist.
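One common mitigation is to pool severity ratings (e.g. Nielsen's 0-4 scale) from several evaluators and treat issues reported by a single expert with caution. A minimal sketch, with invented issue names and ratings:

    # Pool severity ratings (Nielsen's 0-4 scale) from several experts.
    # Issues flagged by one evaluator alone may be false positives;
    # issues flagged independently by several are more likely real.
    ratings = {
        "low-contrast labels": {"expert_a": 3, "expert_b": 4, "expert_c": 3},
        "ambiguous icon": {"expert_b": 2},
    }

    for issue, scores in ratings.items():
        mean = sum(scores.values()) / len(scores)
        status = "corroborated" if len(scores) > 1 else "single report - verify"
        print(f"{issue}: mean severity {mean:.1f} ({status})")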
Predictive Evaluation
GOMS (Goals, Operators, Methods, Selection rules) is used to predict performance or the result of a design change, and to compare different situations to see which is best.
Does not take end-user differences into consideration.
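As an illustration, the Keystroke-Level Model (the simplest member of the GOMS family) predicts task time by summing standard operator estimates from Card, Moran and Newell. A minimal sketch comparing two hypothetical interface designs for the same task:

    # KLM sketch: predict task time from standard operator estimates.
    OPERATOR_SECONDS = {
        "K": 0.28,  # keystroke (average typist)
        "P": 1.10,  # point with mouse
        "H": 0.40,  # move hands between keyboard and mouse
        "M": 1.35,  # mental preparation
    }

    def predict_seconds(sequence):
        return sum(OPERATOR_SECONDS[op] for op in sequence)

    # Hypothetical designs: a menu route vs. a keyboard shortcut.
    menu_design = "MHPMHKKKK"   # think, reach for mouse, point, think, type
    shortcut_design = "MKKKK"   # think, then type the shortcut
    print(predict_seconds(menu_design))      # ~5.7 s
    print(predict_seconds(shortcut_design))  # ~2.5 s

The absolute numbers matter less than the comparison, which is exactly the "compare different situations" use named above; the model assumes an expert, error-free user, which is why end-user differences are invisible to it.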
End-User Evaluation
Costly and time-consuming, so usually done after a series of heuristic evaluations.
Gives an in-depth understanding of the real challenges faced by real users.
Field Evaluations
Unlike end-user evaluations, these are carried out where the resource will actually be accessed, e.g. at home.
Identify issues with the effect of assistive technology (AT) or system preferences.
Pedagogic Evaluation
Does the system facilitate learning?
Assessed through a number of pre- and post-test knowledge indicators.
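One standard way to turn pre/post scores into such an indicator is the normalised gain (Hake), g = (post - pre) / (max - pre), i.e. the fraction of the possible improvement actually achieved. A minimal sketch with invented scores:

    # Normalised learning gain from pre/post test scores.
    def normalised_gain(pre, post, max_score=100):
        if pre >= max_score:
            return 0.0  # no headroom left to improve
        return (post - pre) / (max_score - pre)

    # Invented scores: (pre, post) per student.
    students = [(40, 70), (55, 85), (80, 90)]
    gains = [normalised_gain(pre, post) for pre, post in students]
    print(f"mean normalised gain: {sum(gains) / len(gains):.2f}")  # 0.56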
Economic Evaluation
Is it time- and cost-efficient?
Perception Evaluation
How is it perceived by different stakeholders and what can be done to meet any challenges?
UNIVERSAL DESIGN
Resources can be used by everyone.
But are they optimal for everyone?