CHAPTER 6 TRAINING EVALUATION
Training effectiveness refers to the benefits that the company and the trainees receive from training.
Training outcomes or criteria refer to measures that the trainer and the company use to evaluate training programs.
Training evaluation refers to the process of collecting the outcomes needed to determine whether training is effective.
Evaluation design refers to the collection of information, including what, when, how, and from whom, that will be used to determine the effectiveness of the training program.
REASONS FOR EVALUATING TRAINING
Companies are investing millions of dollars in training programs to help gain a competitive advantage.
Pilot testing refers to the process of previewing the training program with potential trainees.
Summative evaluation refers to an evaluation conducted to determine the extent to which trainees have changed as a result of participating in the training program.
THE EVALUATION PROCESS
1. Conduct a needs analysis.
2. Develop measurable learning objectives and analyze transfer of training.
3. Develop outcome measures.
4. Choose an evaluation strategy.
5. Plan and execute the evaluation.
Reaction outcomes refer to trainees' perceptions of the program, including the facilities, trainers, and content.
Cognitive outcomes are used to determine the degree to which trainees are familiar with the principles, facts, techniques, procedures, and processes emphasized in the training program.
Skill-based outcomes are used to assess the level of technical or motor skills and behaviors.
Affective outcomes include attitudes and motivation. Affective outcomes that might be collected in an evaluation include tolerance for diversity, employee engagement, motivation to learn, safety attitudes, and customer service orientation.
Results are used to determine the training program’s payoff for the company.
Return on investment (ROI) refers to comparing the training's monetary benefits with the cost of the training.
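The comparison can be expressed as net benefits divided by costs. The sketch below uses invented figures (a program costing $50,000 that produces $80,000 in monetary benefits) purely for illustration:

```python
def roi_percent(monetary_benefits: float, training_cost: float) -> float:
    """ROI as a percentage: net benefits divided by program cost."""
    return (monetary_benefits - training_cost) / training_cost * 100

# Hypothetical program: $80,000 in benefits against a $50,000 cost.
print(roi_percent(80_000, 50_000))  # 60.0 -> every $1 invested returns $1.60
```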
Criteria relevance refers to the extent to which training outcomes are related to the learned capabilities emphasized in the training program.
Criteria contamination refers to the extent that training outcomes measure inappropriate capabilities or are affected by extraneous conditions.
Reliability refers to the degree to which outcomes can be measured consistently over time.
Discrimination refers to the degree to which trainees' performance on the outcome actually reflects true differences in performance.
Practicality refers to the ease with which the outcome measures can be collected.
Formative evaluation refers to the evaluation of training that takes place during program design and development.
Methods to Control for Threats to Validity
Pretests and Post-tests
Random assignment refers to assigning employees to the training or comparison group on the basis of chance alone.
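A minimal sketch of random assignment, using an invented employee list: shuffle the pool, then split it evenly between the two groups so that chance alone decides membership.

```python
import random

def random_assignment(employees, seed=None):
    """Shuffle the employee pool and split it into training and comparison groups."""
    rng = random.Random(seed)          # seed only to make the example repeatable
    pool = list(employees)
    rng.shuffle(pool)                  # chance alone determines order
    mid = len(pool) // 2
    return pool[:mid], pool[mid:]      # first half trains, second half is the comparison group

# Hypothetical employee names for illustration.
training, comparison = random_assignment(
    ["Ana", "Ben", "Chen", "Dee", "Eli", "Fay"], seed=1
)
```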
Types of Evaluation Designs
Pretest/Post-test with Comparison Group
Time series refers to an evaluation design in which training outcomes are collected at periodic intervals both before and after training.
The Solomon four-group design combines the pretest/post-test comparison group and the post-test-only control group designs.
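Scoring a pretest/post-test design with a comparison group amounts to a difference-in-differences calculation: the trained group's improvement minus the comparison group's improvement. The test scores below are hypothetical:

```python
def training_effect(trained_pre, trained_post, comparison_pre, comparison_post):
    """Difference-in-differences: the trained group's average change
    minus the comparison group's average change."""
    def mean(scores):
        return sum(scores) / len(scores)
    trained_change = mean(trained_post) - mean(trained_pre)
    comparison_change = mean(comparison_post) - mean(comparison_pre)
    return trained_change - comparison_change

# Hypothetical test scores for three trainees per group.
effect = training_effect([60, 65, 70], [80, 85, 90],   # trained group: +20 on average
                         [62, 66, 70], [64, 68, 72])   # comparison group: +2 on average
print(effect)  # 18.0 -> improvement attributable to training, not extraneous conditions
```

Subtracting the comparison group's change helps rule out threats to validity such as maturation or outside events that would have raised scores even without training.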
MEASURING HUMAN CAPITAL AND TRAINING ACTIVITY
Big data refers to complex data sets developed by compiling data across different organizational systems, including marketing and sales.
Workforce analytics refers to the practice of using quantitative methods to analyze data about the workforce and training activity.
A dashboard refers to a computer interface designed to receive and analyze the data from departments within the company to provide information to managers and other decision-makers.
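As a sketch of the kind of roll-up such an interface might perform, the snippet below aggregates per-department training hours and completion rates; the department names and record fields are invented for illustration.

```python
from collections import defaultdict

# Hypothetical training records compiled from different organizational systems.
records = [
    {"department": "Sales",   "hours": 8,  "completed": True},
    {"department": "Sales",   "hours": 4,  "completed": False},
    {"department": "Support", "hours": 12, "completed": True},
]

def training_summary(records):
    """Roll up total hours and completion rate per department for a dashboard view."""
    tallies = defaultdict(lambda: {"hours": 0, "completed": 0, "total": 0})
    for rec in records:
        t = tallies[rec["department"]]
        t["hours"] += rec["hours"]
        t["total"] += 1
        t["completed"] += int(rec["completed"])
    return {
        dept: {"hours": t["hours"], "completion_rate": t["completed"] / t["total"]}
        for dept, t in tallies.items()
    }

summary = training_summary(records)
```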