Unit 3a - Communication strategy effectiveness
INTRO:
Reputation is evaluated by stakeholders based on
5 elements of reputation:
A company's financial performance
The quality of management
The social and environmental responsibility performance
The employee quality
The quality of goods/services provided
Stakeholders' view of the org (reputation) is impacted by the org's ability to manage relationships with stakeholders around these elements.
CC cannot change these elements, but it can assist in managing the relationships through strategy, process & engagement
CC can only influence these elements & comm around these elements
It is also NB to realize that employees evaluate the org (reputation) around the same elements, excluding employee quality
Employees can assist the org in the formation of its image among external stakeholders
OBJ 1: The evolution of PR measurement and evaluation
:!!:
The measurement & evaluation of PR parallels the development of PR itself
Measurement & evaluation help shape the discipline as it evolves
:!!:
PR has its beginnings in the 18th & 19th centuries
with presidential campaigners measuring public opinion of candidates
:!!:
The early 20th century saw the 1st PR agencies open
Companies such as AT&T began with media monitoring
:!!:
Around WW1 -> PR & Propaganda
:!!:
1920s:
Arthur Page introduced opinion research
Also the development of surveys
:!!:
1930s & 1940s:
PR still heavily relied on government
Ushered in the age of stakeholder dialogue
:!!:
Thereafter spin doctoring emerged & brands began over-promising, somewhat inauthentically:
Cutlip & Center began writing about evaluation
:!!:
1960s - more academic writing about research & development began to appear & PR was established as an official academic discipline
:!!:
1980s - PR Institute began publishing & PR evaluation & research became a major professional and academic practice
:!!:
From 1990s to present:
The rise of 'new media' has been felt & landscape has shifted dramatically
This necessitated an adaptation of PR practice & thus of measurement & evaluation
OBJ 2: Determine how Lindenmann's 'Effective yardstick' can be applied to CC strategy:
His paper is based on 2 premises:
It is possible to measure PR effectiveness
PR measurement studies can be done at relatively modest cost & do not have to take too much time
The right way to evaluate PR impact is to do it in a logical, step by step manner (systematically)
He therefore developed a PR effectiveness Yardstick -> A straightforward set of guidelines/standards to follow to measure PR effectiveness
-> This involves a 2-step process:
Setting PR objectives
Determine at what levels you wish to measure PR effectiveness
STEP 1: SETTING OBJECTIVES
To measure, you need to know exactly what you are measuring against
Thus, determine the goals & objectives of your program - what are you seeking to accomplish? This normally falls into 4 categories:
You are trying to get certain messages, themes or ideas out
To a certain key or target audience group
Via certain pre-selected or specific comm channels
For the above you normally have certain short-term or long-term ends -> you want TPs to respond in a certain way
Thus, to begin to assess impact you need to ID messages, target publics & channels of communication
Then you need to use these markers to determine how effective you have been in achieving what you set out to achieve (your short- or long-term ends)
STEP 2: DETERMINING LEVELS OF PR MEASUREMENT
Now you must determine exactly what you want to measure
There are 3 different levels of PR measurement:
The basic level for measuring outputs
The intermediate level for measuring outgrowths
The advanced level for measuring outcomes
These levels are like marks on a ruler (yardstick) -> each level identifies a higher (more sophisticated) level of measurement of PR success
He ends off by stating that:
There is no single simple method to measure PR effectiveness
-> various tools & techniques are needed
It is NB that before you attempt to evaluate anything, you need to set objectives. This is called formative evaluation. One must think of evaluation before planning & implementing, not only while or after implementation.
BASIC LEVEL 1:
It is relatively easy/simple - thus basic. It measures outputs.
:<3: How PR practitioners present themselves & how they handle activities or events, such as:
Media placements: amount of exposure received through media & total number of placements
Impressions: total number of impressions
Target publics: the likelihood of having reached specific TPs
:<3: Measures what was actually done (literal, tangible elements) e.g.
Were brochures for the TP attractive?
Were media conferences well attended?
Did the media use the media releases?
Did messages get transmitted to the TP?
:<3: Methods used: content analysis to track/measure media placements, or simple public opinion polls to measure whether the TP has been exposed to certain messages.
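Basic-level (output) measurement mostly amounts to counting. A minimal sketch, with entirely hypothetical outlet names and figures, of totalling media placements and impressions and checking how many placements carried the key message:

```python
# Basic-level output measurement: totalling media placements and
# impressions. All outlets and figures below are invented examples.

placements = [
    {"outlet": "Daily News", "impressions": 120_000, "mentions_key_message": True},
    {"outlet": "Biz Weekly", "impressions": 45_000, "mentions_key_message": True},
    {"outlet": "Metro FM", "impressions": 300_000, "mentions_key_message": False},
]

total_placements = len(placements)
total_impressions = sum(p["impressions"] for p in placements)
on_message = sum(1 for p in placements if p["mentions_key_message"])

print(f"Placements: {total_placements}")
print(f"Total impressions: {total_impressions}")
print(f"Placements carrying the key message: {on_message}/{total_placements}")
```

Note that this only measures exposure, not whether the TP received or understood anything - that is what the intermediate and advanced levels are for.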
INTERMEDIATE LEVEL 2:
More sophisticated.
It measures outgrowths, measuring whether or not TPs:
have received the messages - reception
paid attention to them - awareness
have understood them - comprehension
have retained the messages - retention
Methods used:
-> Mix of qualitative & quantitative data collection techniques e.g. focus groups, depth interviews with opinion-leader groups, extensive polling of key TPs by telephone, face-to-face or email.
ADVANCED LEVEL 3:
It is the most advanced/sophisticated level.
It measures outcomes:
Opinion change
Attitude change
Behaviour change
Methods used:
Before & after polls e.g. pre- and post-tests
Experimental & quasi-experimental research designs
Unobtrusive data collection e.g. observation, participation & role playing
Advanced data analysis techniques e.g. perceptual analysis, psychographic analysis, conjoint analysis (determining how people make decisions & what they value)
Comprehensive, multi-faceted communication audits
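The logic of a before-and-after poll can be sketched in a few lines. The proportions below are invented for illustration; they stand for the share of the TP agreeing with a favourable statement before and after the campaign:

```python
# Sketch of outcome-level (advanced) measurement: comparing pre- and
# post-campaign poll results to estimate opinion, attitude and
# behaviour change. All proportions are hypothetical.

pre_poll = {"favourable_opinion": 0.32, "positive_attitude": 0.41, "desired_behaviour": 0.18}
post_poll = {"favourable_opinion": 0.47, "positive_attitude": 0.52, "desired_behaviour": 0.25}

for effect in pre_poll:
    change = post_poll[effect] - pre_poll[effect]
    print(f"{effect}: {pre_poll[effect]:.0%} -> {post_poll[effect]:.0%} ({change:+.0%})")
```

A real pre/post design would also need a comparable sample at both points in time; otherwise the measured change cannot be attributed to the campaign.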
OBJ 3: Identify & explain the stages of the communication campaign planning cycle
1. AUDIT
Research to identify issues & set benchmarks
2. OBJECTIVES
Comm objectives for each stakeholder
3. PLANNING & EXECUTION
Design & execution of campaigns
4. MEASUREMENT & EVALUATION
Continuous measurement
5. RESULTS
Taking stock of results against initial objectives
OBJ 4: Analyse the roles of research & evaluation in a comm campaign planning cycle
INTRO:
:warning: Research is used throughout planning, implementation & evaluation of comm campaigns -> it is a circular process
:warning: Research is the act of gathering information; it includes evaluation
:warning: For this discussion the term research is used for the gathering of info before action & evaluation as research done after action
:warning:
Advantages of seeing research & evaluation as part of planning cycle:
Each cycle of activity can be more effective than the previous cycle if the results of evaluation are used to make adjustments to the campaign - helps deliver results
Helps improve on past performance
It draws attention to research & evaluation at different stages of the cycle.
Gives credibility with senior managers of the org
:star:
It gathers info before planning (formative research - can use summative evaluation from previous campaign/program/strategy)
e.g. surveys to understand problems or issues or to help segment stakeholder audiences
:star:
It gathers info before implementing (formative measurement & evaluation)
e.g. focus groups to explore the feasibility of a campaign or to pre-test or refine message
:star:
It gathers info during implementation (measurement & evaluation)
:star:
It gathers info after implementation (summative measurement & evaluation)
To determine if objectives were met
To be used as research before planning of next phase/campaign
Evaluation is also NB to improve the perception of the value of CC & to prove return on investment
PP 130
Looking at only the concept of evaluation and its role in comm campaign cycle:
Evaluation is a form of research.
It can be defined as the use of research for informing & assessing the conceptualization, design, execution and effects of comm programs or campaigns.
It plays a role in the last three stages of the planning cycle:
Planning & execution or referred to as PREPARATION
Measurement & evaluation or referred to as IMPLEMENTATION
Results or referred to as IMPACT
THE STAGES & LEVELS OF EVALUATION -> ALSO THE SEQUENCE OF EFFECTS:
Preparation:
(measuring input)
-Adequacy of background info for designing program
-Appropriateness of message & activity content
-Quality of message & activity presentations
Implementation:
(measuring output)
-Number of messages sent to media & activities designed
-Number of messages placed & activities implemented
-Number who receive messages & activities
-Number who attend to messages & activities
Impact:
(measuring outcomes)
-Number who learn message content
-Number who change opinions
-Number who change attitudes
-Number who behave as desired
-Number who repeat behaviour
-Social & cultural change
Research and evaluation methods:
For this discussion the term 'research' is used for the gathering of info before action & 'evaluation' as research done after action
This objective specifically refers to evaluation methods used after action, and specifically to measuring impact, but the textbook also discusses methods linked to research before action.
INFORMAL VS FORMAL RESEARCH
Informal:
Casual interactions with key stakeholders
-> To define issues
-> to better understand problems
Disadvantages:
-> Not a systematic effort
-> Not representative
-> Info can be biased in terms of who was & was not asked - thus conclusions are biased.
Formal:
More systematic data-gathering methods
Sensitive to representativeness & the sampling of stakeholders
e.g. Focus groups, surveys & content analysis
Research methods:
FOCUS GROUPS (primary, formal, qualitative)
Semi-structured group discussion to understand underlying motivations of an issue or problem
Are recorded & interpreted, not counted
SURVEYS (primary, formal, quantitative + qualitative)
To record in numbers the level of awareness, attitudes or behaviour of the population in relation to certain issues or circumstances
Can be analytical, to explain why certain circumstances, attitudes & behaviours exist & then test hypotheses
Can be both analytical & descriptive at the same time
e.g. Questionnaires
CONTENT ANALYSIS (primary, formal, quantitative)
Scientific method to describe comm content in numerical form
To monitor and trace media coverage of issues & orgs
Looks at frequency of coverage & overall tone of reporting
Often carried out with the use of scientific software packages
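A toy illustration of what content analysis produces: counting the frequency of coverage per topic and the overall tone across a set of coded news items. In practice items are coded by trained coders or software; the codes below are invented:

```python
# Toy content analysis: tallying frequency of coverage and overall
# tone across coded media items. Topics and tone codes are invented.

from collections import Counter

coded_items = [
    {"topic": "merger", "tone": "positive"},
    {"topic": "merger", "tone": "neutral"},
    {"topic": "layoffs", "tone": "negative"},
    {"topic": "merger", "tone": "positive"},
]

frequency = Counter(item["topic"] for item in coded_items)
tone = Counter(item["tone"] for item in coded_items)

print("Coverage frequency:", dict(frequency))
print("Overall tone:", dict(tone))
```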
OBJ 7: Differentiate between the research methods for campaign evaluation
Herewith a discussion of methods used in evaluation & specifically on impact level.
Here research focuses on the number of people in the TP who have seen the message, digested its contents and changed their views & behaviours favourably
This needs to be done continuously
QUALITATIVE METHOD
Open in design, as respondents are invited to describe an org in their own words
This leads to rich descriptions & insights
QUANTITATIVE METHOD
Closed techniques of data collection & analysis that ask respondents to rate the org & its campaigns on a set of pre-defined questions & scales
Advantage: it facilitates comparison across respondents & allows practitioners to generalize an overall picture of program/campaign effects
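The closed, pre-defined scales are what make comparison possible. A minimal sketch with invented respondent data, averaging 1-5 ratings per question so scores can be compared across effects (and across campaigns):

```python
# Sketch of quantitative evaluation: respondents rate a campaign on
# pre-defined 1-5 scales; mean scores allow comparison across effects.
# Question labels and ratings are invented for illustration.

responses = {
    "awareness (cognitive)": [4, 5, 3, 4],
    "liking (affective)": [3, 4, 4, 5],
    "intent (behavioural)": [2, 3, 3, 4],
}

for question, ratings in responses.items():
    mean = sum(ratings) / len(ratings)
    print(f"{question}: mean rating {mean:.2f} (n={len(ratings)})")
```

The question labels deliberately mirror the cognitive/affective/behavioural split of campaign objectives, so each mean maps back to a stated objective.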
General effects at the basis of evaluation:
When evaluating, CCPs should not only choose their methods - they need to determine what general effects should form the basis of their evaluation
It is NB to define these effects in precise terms
These can be measured in aided or unaided ways and in comparison with other companies
This is found in their objectives and can be split into:
Cognitive e.g. awareness / knowledge
Affective e.g. liking or emotions
Behavioural e.g. behavioural actions
CCPs might want to measure net effects such as org trust, legitimacy, reputation
Trust & legitimacy depend on cognitive & affective effects
But measuring reputation requires a whole discussion of its own