CHAPTER 6 TRAINING EVALUATION

Training effectiveness refers to the benefits that the company and the trainees receive from training.

Training outcomes or criteria refer to measures that the trainer and the company use to evaluate training programs.

Training evaluation refers to the process of collecting the outcomes needed to determine whether training is effective.

Evaluation design refers to from whom, what, when, and how the information needed to determine the training program's effectiveness will be collected.

REASONS FOR EVALUATING TRAINING

Companies are investing millions of dollars in training programs to help gain a competitive advantage.

Pilot testing refers to the process of previewing the training program with potential trainees.

Summative evaluation refers to an evaluation conducted to determine the extent to which trainees have changed as a result of participating in the training program.

The Evaluation Process

Conduct a Needs Analysis

Develop Measurable Learning Objectives and Analyze Transfer of Training

Develop Outcome Measures

Choose an Evaluation Strategy

Plan and Execute the Evaluation

Reaction outcomes refer to trainees’ perceptions of the program, including the facilities, trainers, and content.

Cognitive outcomes are used to determine the degree to which trainees are familiar with the principles, facts, techniques, procedures, and processes emphasized in the training program.

Skill-based outcomes are used to assess the level of technical or motor skills and behaviors.

Affective outcomes include attitudes and motivation. Affective outcomes that might be collected in an evaluation include tolerance for diversity, employee engagement, motivation to learn, safety attitudes, and customer service orientation.

Results are used to determine the training program’s payoff for the company.

Return on investment (ROI) refers to comparing the training’s monetary benefits with the cost of the training.
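The ROI comparison is typically expressed as a percentage: net benefits divided by costs. A minimal sketch, using invented benefit and cost figures for illustration:

```python
# Hypothetical ROI calculation; the benefit and cost figures are invented.
def training_roi(monetary_benefits, training_costs):
    """Return ROI as a percentage: (benefits - costs) / costs * 100."""
    return (monetary_benefits - training_costs) / training_costs * 100

roi = training_roi(monetary_benefits=150_000, training_costs=50_000)
print(f"ROI: {roi:.0f}%")  # prints ROI: 200%
```

A program that returns $150,000 in measured benefits on $50,000 of costs thus yields a 200 percent return.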

Criteria relevance refers to the extent to which training outcomes are related to the learned capabilities emphasized in the training program.

Criterion contamination refers to the extent that training outcomes measure inappropriate capabilities or are affected by extraneous conditions.

Reliability refers to the degree to which outcomes can be measured consistently over time.

Discrimination refers to the degree to which trainees’ performance on the outcome actually reflects true differences in performance.

Practicality refers to the ease with which the outcome measures can be collected.

Formative evaluation refers to the evaluation of training that takes place during program design and development.

Methods to Control for Threats to Validity

Pretests and Post-tests

Pretraining measure

Post-training measure

Random assignment refers to assigning employees to the training or comparison group on the basis of chance alone.
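Random assignment can be sketched as a simple shuffle-and-split; the employee names below are hypothetical:

```python
import random

# Hypothetical employee list; chance alone decides group membership.
employees = ["Ana", "Ben", "Chen", "Dana", "Eli", "Fay", "Gil", "Hana"]

random.seed(42)           # fixed seed so the split is reproducible
random.shuffle(employees)
midpoint = len(employees) // 2
training_group = employees[:midpoint]
comparison_group = employees[midpoint:]
print("training:", training_group)
print("comparison:", comparison_group)
```

Because every employee has an equal chance of landing in either group, preexisting differences between employees tend to balance out across the two groups.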

Types of Evaluation Designs

Post-test Only

Pretest/Post-test

Pretest/Post-test with Comparison Group
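The pretest/post-test with comparison group design can be illustrated with hypothetical scores: the training effect is estimated as the difference between the two groups' average gains.

```python
# Hypothetical pretest/post-test scores for a training and a comparison group.
training   = {"pre": [60, 55, 70, 65], "post": [80, 78, 88, 82]}
comparison = {"pre": [62, 58, 68, 66], "post": [65, 60, 70, 68]}

def mean_gain(group):
    """Average post-test minus pretest improvement for one group."""
    gains = [post - pre for pre, post in zip(group["pre"], group["post"])]
    return sum(gains) / len(gains)

# Subtracting the comparison group's gain controls for changes that
# would have occurred even without training (e.g., experience on the job).
effect = mean_gain(training) - mean_gain(comparison)
print(f"estimated training effect: {effect:.2f} points")
```

With these invented numbers, trainees improve about 19.5 points on average versus about 2.25 for the comparison group, so roughly 17 points of the gain is attributable to training.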

Time series refers to an evaluation design in which training outcomes are collected at periodic intervals both before and after training.

The Solomon four-group design combines the pretest/post-test comparison group and the post-test-only control group designs.

MEASURING HUMAN CAPITAL AND TRAINING ACTIVITY

Big data refers to complex data sets developed by compiling data across different organizational systems, including marketing and sales.

Workforce analytics refers to the practice of using quantitative methods and scientific methods to analyze data from human resource databases and other company databases to make evidence-based human capital decisions.

Dashboard refers to a computer interface designed to receive and analyze data from departments within the company to provide information to managers and other decision-makers.