RM: Reliability ("it's not great unless it's +.80")
- ways of improving reliability
- questionnaires
- the reliability of questionnaires over time should be measured using the test-retest method.
- comparing the 2 sets of data should produce a correlation exceeding r = +.80
- questionnaires producing low test-retest reliability may require some of the items to be 'deselected' or rewritten
e.g. if some questions are complex or ambiguous, they may be interpreted differently by the same person on different occasions.
- a solution may be to replace some of the open questions with closed, fixed-choice alternatives which may be less ambiguous
- interviews
- the best way of ensuring reliability is to use the same interviewer each time
- if that's not possible/practical, all interviewers must be properly trained.
- observations
- reliability can be improved by making sure behavioural categories have been properly operationalised & that they're measurable & self-evident (e.g. the category 'pushing' is much less open to interpretation than 'aggression')
- categories shouldn't overlap (e.g. 'hugging' and 'cuddling') & all possible behaviours should be covered on the checklist (a checklist sketch is given below).
- if categories aren't operationalised well/are overlapping/absent, different observers have to make their own judgements about what to record where & may end up with differing & inconsistent records.
- if reliability is low, observers may need further training in using the behavioural categories and/or may wish to discuss their decisions with each other so they can apply the categories more consistently.
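- a minimal sketch (in Python, with hypothetical category names & definitions) of what an operationalised, non-overlapping checklist might look like - the categories & wording are illustrative assumptions, not a prescribed set:

```python
# Hypothetical behavioural checklist for an observation of playground aggression.
# Each category is defined in observable, measurable terms so that different
# observers record the same behaviours in the same way.
BEHAVIOURAL_CATEGORIES = {
    "pushing":  "uses hands to shove another child off balance",
    "hitting":  "strikes another child with an open or closed hand",
    "kicking":  "strikes another child with the foot",
    "shouting": "raises voice above conversational level at another child",
}

# each observer keeps an independent tally against the same checklist
tally = {category: 0 for category in BEHAVIOURAL_CATEGORIES}
tally["pushing"] += 1   # record one observed instance of pushing
print(tally)
```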
- experiments
- in experiments it's the procedures which are the focus of reliability.
- to compare the performance of different ppts, the procedures must be consistent every time.
- therefore, in terms of reliability, an experimenter is concerned with standardised procedures.
- ways of testing reliability
- test-retest
- the most straightforward way = the test-retest method.
- involves applying the same test/questionnaire to the same person/people on different occasions.
- if the test/questionnaire is reliable then the results obtained should be the same, or at least very similar, each time the test is administered.
- in the case of questionnaires/tests, the 2 sets of scores would be correlated to make sure they are similar (see the sketch below).
- if the correlation is significant (and positive) then the reliability of the measuring instrument is assumed to be good.
- most commonly used with questionnaires and psychological tests (e.g. IQ tests) but can also be applied to interviews.
- there must be sufficient time between test & retest to ensure ppts/respondents cannot recall their answers to the questions on the survey, but not so long that their attitudes, opinions or abilities may have changed.
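- a minimal sketch (in Python, with hypothetical scores) of the test-retest check described above - the numbers are made up purely for illustration, and the r > +.80 cut-off is the one given in these notes:

```python
from statistics import correlation  # Pearson's r (Python 3.10+)

# the same questionnaire given to the same 8 ppts on two occasions (hypothetical data)
test_scores   = [12, 18, 25, 30, 22, 15, 28, 20]  # first administration
retest_scores = [14, 17, 26, 29, 21, 16, 27, 22]  # same ppts, a few weeks later

r = correlation(test_scores, retest_scores)       # correlate the 2 sets of scores
print(f"test-retest r = {r:+.2f}")
print("reliable" if r > 0.80 else "not reliable - deselect/rewrite items")
```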
- inter-observer reliability
- in observational research, one observer's interpretation of events may differ widely from someone else's - introducing subjectivity, bias and unreliability into the data collection process.
- it's recommended that would-be observers conduct their research in teams of at least 2; however, inter-observer reliability must be established
- this may involve a pilot study of the observation in order to check that observers are applying behavioural categories in the same way, or a comparison may be reported at the end of a study
- observers must watch the same event/sequence of events, but record their data independently.
- as with the test-retest method, the data collected by the 2 observers should be correlated to assess its reliability (see the sketch below).
- reliability is measured using a correlation analysis.
- in both test-retest & inter-observer reliability, the 2 sets of scores are correlated; the correlation coefficient must exceed r = +.80 for the measure to be considered reliable.
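- a minimal sketch (in Python, with hypothetical tallies) of an inter-observer reliability check - the categories & counts are made up for illustration, and the same r > +.80 cut-off is applied:

```python
from statistics import correlation  # Pearson's r (Python 3.10+)

# two observers watch the same event & tally the same behavioural categories independently
categories = ["pushing", "hitting", "kicking", "shouting", "hugging"]
observer_a = [7, 3, 2, 9, 5]   # observer A's tally per category (hypothetical)
observer_b = [6, 4, 2, 8, 5]   # observer B's tally per category (hypothetical)

for cat, a, b in zip(categories, observer_a, observer_b):
    print(f"{cat:<10} A={a} B={b}")

r = correlation(observer_a, observer_b)   # correlate the 2 observers' records
print(f"inter-observer r = {r:+.2f}")
print("reliable" if r > 0.80 else "retrain observers / refine the categories")
```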
- reliability = a measure of consistency
- in simpler terms, if a specific measurement is made twice and produces the same result both times then that measurement is described as being reliable = dependable
e.g. a ruler should find the same measurement for a particular object (e.g. a chair). if there's a change in the measurement over time then we would attribute that change to the object rather than the ruler.