Themes 7 & 8
Formative Assessment
Evaluation conducted during the learning process.
Techniques and Strategies
Class discussions:
can be used to assess students' ability to engage with course material, think critically, and express themselves clearly.
Peer review:
can be used to assess students' writing or other work, and to provide feedback from their peers.
Quizzes:
can be used to assess students' understanding of specific concepts or skills.
Self-assessment:
Students can be asked to reflect on their own learning and progress
and identify areas where they need to improve.
Impact on students' learning
Can also encourage student engagement and motivation
as it gives students the opportunity to see their progress and receive feedback on their work.
Can help students identify their strengths and weaknesses
and target their efforts to areas where they need improvement.
Formative assessment can have a positive impact on students' learning
by providing ongoing feedback and support during the learning process.
Principles
Feedback:
They should provide students with timely and specific feedback on their learning and performance.
Collaboration:
They should involve collaboration between students and instructors
with emphasis on student engagement and ownership of learning.
Fairness:
The assessment should not be biased against any particular group of students.
Reliability:
The assessment should be consistent in its results over time.
Inclusivity:
Assessments should be accessible to all students
regardless of their abilities or backgrounds.
Provides feedback to students and informs instruction.
Its goal is to help students improve their learning and performance.
Summative Assessment
Evaluation that is conducted after a learning period.
Techniques and Strategies
Projects:
can be used to assess students' ability to apply their knowledge and skills in a real-world context.
Papers:
can be used to assess students' writing and research skills.
Exams:
can be used to assess students' knowledge and understanding of course material.
Portfolios:
can be used to assess students' overall progress and development over time.
Impact on students' learning
Can help students understand what they have learned and what they need to work on
and it can also provide motivation for students to perform well.
This is generally less effective at promoting ongoing learning and improvement
as it typically occurs after the learning has taken place.
Summative assessment can also have a positive impact on students' learning
by providing a clear and objective measure of their progress and achievement.
Principles
Standardization:
assessments should be based on clear and standardized criteria
to ensure consistency and fairness in grading.
Objectivity:
assessments should be objective
to minimize the influence of subjective factors on grading.
Validity:
The assessment should measure what it is intended to measure.
Alignment:
assessments should be aligned with the learning objectives of the course or program
in order to accurately measure student learning.
Used to measure how well students have learned the material.
Can take many forms, including exams, projects, and papers.
Reliability
Refers to the consistency of a measure.
Test administration reliability
refers to the consistency and dependability of the process of administering a test or assessment.
Ensuring that administration conditions are controlled and consistent can help to increase test administration reliability.
important to consider test administration reliability when designing and administering tests and assessments
critical to ensure that the results of the test accurately reflect the abilities or characteristics being measured.
Student-related reliability
refers to the consistency and dependability of a test or assessment when it is used to evaluate student performance.
factors that can affect student-related reliability:
content and format of the test
test-taking conditions
skill and expertise of the test administrator
A test or assessment with high student-related reliability will produce:
similar results when administered to the same student on different occasions
Test reliability
refers to the consistency and dependability of a test or assessment. It is a measure of the accuracy and consistency of the test results
types of reliability that can be considered when evaluating the reliability of a test:
Test-retest reliability:
refers to the consistency of the results when the same test is administered to the same group on two different occasions.
Inter-rater reliability:
refers to the consistency of the results when the test is administered by different individuals.
Internal consistency reliability:
refers to the consistency of the results within a single test or assessment.
Parallel-form reliability:
refers to the consistency of the results when two different versions of the same test are administered to the same group of individuals.
It is often used to evaluate the quality of a test or assessment (see the short sketch below).
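As an illustration of the first two types listed above, the sketch below (Python with NumPy, using made-up scores rather than real data) computes a test-retest correlation and Cronbach's alpha, a common internal-consistency index; the exact numbers are for demonstration only.

```python
# Minimal sketch (made-up scores, Python/NumPy assumed) of two reliability types.
import numpy as np

# Test-retest reliability: correlate the same students' scores from
# two administrations of the same test on different occasions.
scores_time1 = np.array([78, 85, 62, 90, 71, 88])
scores_time2 = np.array([80, 83, 65, 92, 70, 85])
test_retest_r = np.corrcoef(scores_time1, scores_time2)[0, 1]

# Internal consistency (Cronbach's alpha): rows are students, columns are items;
# alpha is high when the items vary together.
item_scores = np.array([
    [4, 5, 4, 3],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])
k = item_scores.shape[1]
sum_item_vars = item_scores.var(axis=0, ddof=1).sum()
total_var = item_scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - sum_item_vars / total_var)

print(f"test-retest r = {test_retest_r:.2f}, Cronbach's alpha = {alpha:.2f}")
```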
Rater reliability
refers to the consistency and dependability of ratings or evaluations made by individuals.
concepts:
It is a measure of the accuracy and consistency of the ratings given by the individuals
It is often used in situations where ratings are used to make decisions about individuals or groups.
factors that can affect rater reliability
clarity of the criteria used
training and experience of the raters
amount of time and effort put into the ratings
a measure is considered reliable if it produces consistent results over time.
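For the rater reliability described above, a common index is Cohen's kappa, which corrects raw agreement for chance. The sketch below is a minimal illustration with two hypothetical raters scoring the same eight pieces of work on a made-up 1-4 rubric; it is not tied to any particular assessment.

```python
# Minimal sketch (hypothetical 1-4 rubric ratings) of rater reliability
# via raw agreement and Cohen's kappa, which corrects agreement for chance.
import numpy as np

rater_a = np.array([3, 4, 2, 4, 1, 3, 2, 4])
rater_b = np.array([3, 4, 2, 3, 1, 3, 2, 4])
categories = np.union1d(rater_a, rater_b)

observed = np.mean(rater_a == rater_b)          # proportion of exact agreement
# Chance agreement: probability both raters independently pick each category.
expected = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)
kappa = (observed - expected) / (1 - expected)

print(f"raw agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```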
Validity
Refers to the extent to which a measure accurately assesses what it is intended to assess.
Criterion-related evidence
Refers to the evidence that supports the criterion-related validity of a measure.
types of criterion-related evidence:
Concurrent criterion-related evidence:
Refers to the evidence that is collected at the same time as the measure is administered.
used to assess the relationship between the measure and the criterion or outcome at the same time.
Predictive criterion-related evidence:
Refers to the evidence that is collected after the measure is administered.
used to assess the ability of the measure to predict future performance on the criterion or outcome.
It is the evidence that demonstrates the relationship between the measure and a criterion or outcome of interest.
ways to collect criterion-related evidence:
Cross-validation studies:
involves administering the measure to one group and collecting data on the criterion or outcome,
then repeating the process with a different group to check whether the measure-criterion relationship holds across groups.
Validity generalization studies:
involves collecting data on the measure and the criterion or outcome from multiple studies
and analyzing the overall relationship between them.
Correlation studies:
involves collecting data on both the measure and the criterion or outcome and analyzing the relationship between them.
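A minimal sketch of such a correlation study, used here as predictive criterion-related evidence: hypothetical placement-test scores (the measure) are correlated with later course grades (the criterion). The data and variable names are invented for illustration only.

```python
# Minimal sketch (invented data) of a correlation study used as
# predictive criterion-related evidence: placement-test scores (the measure)
# are correlated with later course grades (the criterion).
import numpy as np

placement_scores = np.array([55, 72, 64, 80, 47, 91, 68, 59])
final_course_grades = np.array([60, 75, 70, 85, 52, 88, 65, 63])

# A strong positive correlation supports the claim that the measure
# predicts later performance on the criterion.
validity_coefficient = np.corrcoef(placement_scores, final_course_grades)[0, 1]
print(f"predictive validity coefficient r = {validity_coefficient:.2f}")
```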
Construct-related evidence
refers to the evidence that supports the construct validity of a measure.
ways to collect construct-related evidence
Factor analysis:
is a statistical method used to identify the underlying dimensions or factors that contribute to the measure (see the short sketch below).
Comparison with established measures:
involves comparing the results of the measure in question to the results of established measures of related concepts
Correlation studies:
involves collecting data on both the measure in question and other measures of related concepts and analyzing the relationship between them.
Validity generalization studies:
involves collecting data on the measure and other measures of related concepts from multiple studies
and analyzing the overall relationship between them.
It is evidence that demonstrates the relationship between the measure and other measures of related concepts.
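The factor analysis described above can be sketched as follows, assuming scikit-learn is available; the item responses are simulated from two made-up latent constructs purely to show how factor loadings reveal which items group together.

```python
# Minimal sketch (simulated item responses, scikit-learn assumed) of exploratory
# factor analysis: loadings show which items group onto the same underlying factor.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Simulate 200 students answering 6 items driven by 2 latent constructs:
# items 0-2 load on the first construct, items 3-5 on the second.
latent = rng.normal(size=(200, 2))
loadings = np.array([[1.0, 0.0], [0.9, 0.1], [0.8, 0.0],
                     [0.0, 1.0], [0.1, 0.9], [0.0, 0.8]])
items = latent @ loadings.T + rng.normal(scale=0.3, size=(200, 6))

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(items)
# Each row of components_ is a factor; large absolute loadings indicate
# which items measure the same underlying dimension.
print(np.round(fa.components_, 2))
```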
Content-related Evidence
Refers to the evidence that supports the content validity of a measure.
types of content-related evidence
Item analysis:
Involves analyzing the items on the measure to determine whether they are representative of the full range of the concept being measured (see the short sketch below).
Correlation with other measures:
Involves comparing the results of the measure to the results of other measures that assess similar concepts to determine whether they are consistent.
Expert review:
Involves having experts in the field review the measure and determine whether it adequately covers the full range of the concept being measured.
Data from the target population:
Involves collecting data from the population that the measure is intended to assess and using it to evaluate the content validity of the measure.
It is evidence that demonstrates that the measure adequately covers all aspects of the concept it is intended to assess.
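The item analysis mentioned above can be sketched roughly as follows, using invented 0/1 responses: item difficulty is the proportion of students answering correctly, and the item-total correlation is a rough discrimination index. Judging whether the items truly cover the full concept still requires expert review; these statistics only flag items that behave unusually.

```python
# Minimal sketch (invented 0/1 responses) of a basic item analysis:
# item difficulty (proportion correct) and a rough item-total discrimination index.
import numpy as np

responses = np.array([   # rows = students, columns = items (1 = correct)
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [1, 0, 1, 0, 0],
])
difficulty = responses.mean(axis=0)      # near 1.0 = easy item, near 0.0 = hard item
total = responses.sum(axis=1)            # each student's total score
# Item-total correlation: items answered correctly by stronger students score higher.
item_total_r = np.array([np.corrcoef(responses[:, j], total)[0, 1]
                         for j in range(responses.shape[1])])

print("difficulty:", np.round(difficulty, 2))
print("item-total r:", np.round(item_total_r, 2))
```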
Content validity
A measure has good content validity if it includes items or questions that adequately represent the full range of the concept being measured.
Consequential validity
refers to the impact that a test or assessment has on an individual or group.
characteristics
includes both the intended and unintended consequences of testing.
important to consider it when designing and administering tests and assessments
For example
a test used to determine whether a student is ready to graduate from high school has consequential validity,
as the outcome of the test (pass or fail) will have significant consequences for the student's future.
concerned with the consequences of using a particular test or assessment
and whether those consequences are positive or negative.
consequences of testing can have a significant impact on individuals and groups.
a measure is considered valid if it accurately reflects the concept it is supposed to be measuring.