L5A - Survey Research, Design and Data Collection
- Describe survey design including question types, advantages and limitations
- Define and describe the main types of validity.
- Define and describe reliability including types and measurement coefficients.
Survey
- What is the purpose of the survey?
- Clinical question
- Preferentially use an existing survey (i.e. "the psychometrics of the survey...")
- What survey mode will you use?
When to use a survey
- To say something about the population from which the sample was drawn
- To collect information from a large number of people or records
- To collect this information relatively inexpensively
- When you want the data collection procedure to be as standardised as possible
- When you do not need very in-depth information
Limitations
- No opportunity to explain questions that people don't understand
- Can't guarantee a response
Advantages
- Anonymity
- Allows sensitive questions to be broached
- Standardised data to monitor trends
- Simple Data Analysis can be used
- Data can be presented graphically
Survey Forms
Guttman Scale
- Agreement with one item implies agreement with the others on the list
Multiple Choice
- Single-response & multi-response
Cronbach's alpha is used to indicate a survey's reliability
- It is a measure of internal consistency
- Determines how much the items on a scale are measuring the same underlying dimension
- Commonly used when a procedure has multiple Likert questions and you want to determine if the scale is reliable
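As a sketch of the calculation, Cronbach's alpha can be computed directly from raw item scores using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The data below are invented for illustration, not from the source.

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item scores.

    items: list of k lists, one per question, each holding that
    item's scores across the same n respondents.
    """
    k = len(items)
    n = len(items[0])

    def pvar(xs):
        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(pvar(col) for col in items)
    # total score per respondent across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / pvar(totals))

# Illustrative data: two Likert items answered by three respondents
print(round(cronbach_alpha([[2, 4, 6], [3, 4, 5]]), 3))  # → 0.889
```

Values close to 1 suggest the items measure the same underlying dimension; 0.7 is a commonly quoted minimum for an acceptable scale.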
Kappa coefficient is used to measure reliability
- Standard psychometric measure for agreement between raters and categorical diagnoses
- Measures agreement that has been corrected for the agreement that is to be expected due to chance
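A minimal sketch of Cohen's kappa for two raters, kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is the chance agreement expected from each rater's marginal frequencies. The ratings below are made up for illustration.

```python
def cohen_kappa(rater1, rater2):
    """Cohen's kappa: rater agreement corrected for chance agreement."""
    n = len(rater1)
    # observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # expected chance agreement from each rater's marginal frequencies
    categories = set(rater1) | set(rater2)
    p_e = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Illustrative categorical diagnoses from two raters on four patients
print(cohen_kappa(["yes", "yes", "no", "no"], ["yes", "no", "no", "no"]))  # → 0.5
```

Kappa of 1 means perfect agreement; 0 means agreement no better than chance.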
Correlation Coefficient
- Measures reproducibility (a comparison of the results is made)
Cronbach's alpha
E.g. The questionnaire was employed to measure different underlying constructs. One construct, 'enthusiasm', consisted of six questions. The scale had a high level of internal consistency, as determined by a Cronbach's alpha of 0.823.
Validity and Reliability
- Many years are devoted to developing legitimised questionnaires that become accepted as valid and reliable
- Generally, questionnaires that are widely used and highly regarded have these two qualities
Validity
- How well a survey actually measures what it sets out to measure.
- How well does it reflect the reality it claims to represent
Construct Validity
Overarching term used to assess the validity of the measurement procedure used to measure the variable of interest
- It doesn't measure irrelevant factors
- It incorporates various other forms of validity (content, convergent, divergent and criterion)
Convergent validity
- The measurement procedure used, when compared with a related measurement procedure, produces similar results
- Indicated by a strong correlation between the scores produced by related studies
Divergent validity
- The measurement procedure used, when compared with an unrelated measurement procedure, successfully discriminates between the two
- Indicated by a weak or non-existent relationship between the scores produced by unrelated studies
Content Validity
The extent to which elements within a measurement procedure are relevant and representative of the construct that they will be used to measure
- This is assessed by a critical review by an expert panel and by comparison with the existing literature
Representativeness reflects the extent to which your measurement procedure:
- Over/under-represents some elements of the study more than others
- Excludes elements that are required to measure the construct you are interested in
Relevance
Simply means that elements within your measurement procedure match the construct that you are interested in measuring
Criterion Validity
Reflects the use of a well-established measurement procedure as the basis to create a new measurement procedure to measure a construct that you are interested in
E.g. Selecting 19 items from a 42-item survey that is well regarded
Concurrent Validity
Two different procedures are carried out at the same time; concurrent validity is reached when the scores from the new measurement are directly related to the scores from the well-established procedure that the former is drawn from
Test for consistency between the measures
Reasons for using criterion validity
- To create a shorter version of a well-established measurement procedure
- To account for a new context, location, culture
- To help test the theoretical relatedness and construct validity of a well established measurement procedure.
Predictive Validity
Involves establishing that the scores from a measurement procedure make accurate predictions about the construct they represent
E.g. Universities use high school grades to select students. This is seen as a predictive measure of future grades/outcomes.
Face Validity
A subjective, superficial assessment of whether the measurement procedure used to measure a variable is a valid way of assessing it.
- Weak form of Validity
- Not tested using statistical procedures
Reliability
- Is essentially concerned with error in measurement
- Repeatability of measurement and reproducibility of the results
Testing for Reliability
- Two measurements are applied under the "same" conditions and compared
- The results of the comparison are presented as correlation coefficients
Test-retest Reliability
Measurement is applied on two occasions, each at a separate time
- Perfect Reliability (impossible) = full agreement
- Correlation coefficient is 1.0
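The agreement between the two occasions can be quantified with Pearson's correlation coefficient; the following is a small sketch, with scores invented for illustration.

```python
def pearson_r(x, y):
    """Pearson correlation between two sets of scores (e.g. test vs re-test)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # covariance numerator and the two standard-deviation terms
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores from the same four participants on two occasions;
# near-identical rankings give a correlation close to the perfect 1.0
print(pearson_r([10, 20, 30, 40], [11, 19, 31, 41]))
```

A coefficient near 1.0 indicates high test-retest reliability; discrepancies (changed participant state, rehearsal effects) pull it below 1.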
Reasons for Discrepancy
- State of the participant(s) has changed
- Rehearsed responses (participant(s) have an awareness of the test)
- Participant did not understand the question on either occasion
- Participant did not want to answer truthfully
Inter-Rater reliability
- Two raters (one interviewer and one observer) execute the same measurement procedure at the same time
- At the end of the procedure, the observer then repeats questions that they feel were not probed adequately by the interviewer
Problems
- Not independent
- Raters can agree by chance