Theories in Scientific Research
Theories
Theories are explanations of a natural or social behavior, event, or phenomenon.
Theories should explain why things happen, rather than just describe or predict.
It is possible to predict events or behaviors using a set of predictors, without necessarily explaining why such events are taking place.
Explanations require causation, i.e., an understanding of cause-effect relationships.
Establishing causation requires three conditions: 1) correlation between the two constructs, 2) temporal precedence (the cause must precede the effect in time), and 3) rejection of alternative hypotheses (through testing).
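As a minimal sketch of the first condition only (the variables, values, and names below are hypothetical illustrations), the snippet computes the correlation between two measured variables with NumPy; temporal precedence and the rejection of alternative explanations must come from research design, not from computation.

```python
import numpy as np

# Hypothetical measurements of two variables operationalizing the
# constructs "exam preparation" and "exam performance".
preparation = np.array([2, 5, 1, 8, 6, 3, 7, 4])    # hours studied
exam_score = np.array([55, 70, 50, 90, 80, 60, 85, 65])

# Condition 1: correlation between the two constructs.
r = np.corrcoef(preparation, exam_score)[0, 1]
print(f"correlation r = {r:.2f}")

# Conditions 2 and 3 cannot be established from correlation alone:
# temporal precedence requires knowing that preparation occurred before
# the exam, and alternative explanations (e.g., prior ability) must be
# ruled out through the study's design.
```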
Idiographic explanations
- explain a single situation or event in idiosyncratic detail. The explanations may be detailed, accurate, and valid, but they may not apply to other similar situations, even involving the same person, and are hence not generalizable.
Nomothetic explanations
- seek to explain a class of situations or events rather than a specific situation or event. Example: students who do poorly in exams do so because of lack of preparation, or because they suffer from nervousness, attention deficit, or some other medical disorder.
Because nomothetic explanations are designed to be generalizable across situations, events, or people, they tend to be less precise, less complete, and less detailed.
Because theories are intended to serve as generalized explanations for patterns of events, behaviors, or phenomena, theoretical explanations are generally nomothetic in nature.
Theories must go well beyond constructs to include propositions, explanations, and boundary conditions. Data, facts, and findings operate at the empirical or observational level, while theories operate at a conceptual level and are based on logic rather than observations.
First, theories provide the underlying logic for the occurrence of natural or social phenomena by explaining the key drivers and key outcomes of the target phenomenon.
Second, they aid in sense-making by helping us synthesize prior empirical findings within a theoretical framework and reconcile contradictory findings by discovering contingent factors that influence the relationship between two constructs in different studies.
Third, theories provide guidance for further research by helping identify constructs and relationships that are worthy of further research. Fourth, theories can contribute to cumulative knowledge building by bridging gaps between other theories and by causing existing theories to be reevaluated in a new light.
Theories may not always provide accurate explanations of the phenomenon of interest, because they are based on a limited set of constructs and relationships.
Building Blocks of a Theory
David Whetten (1989) suggests that there are four building blocks of a theory: constructs, propositions, logic, and boundary conditions/assumptions.
Constructs capture the "what" of theories, propositions capture the "how," logic represents the "why," and boundary conditions/assumptions examine the "who, when, and where."
Constructs are abstract concepts specified at a high level of abstraction that are chosen specifically to explain the phenomenon of interest.
All constructs must have a clear and unambiguous operational definition that specifies exactly how the construct will be measured and at what level of analysis.
Measurable representations of abstract constructs are called
variables
For example, an IQ score is a variable that is purported to measure the abstract construct of intelligence.
Propositions
- are associations postulated between constructs based on deductive logic. Propositions are stated in declarative form, should ideally indicate a cause-effect relationship, and must be testable. Propositions are stated at the theoretical level and can only be tested by examining the corresponding relationship between measurable variables of those constructs.
The empirical formulation of propositions, stated as relationships between variables, is called
hypotheses
.
Logic
- provides the basis for justifying the propositions as postulated. Logic acts like a "glue" that connects the theoretical constructs and provides meaning and relevance to the relationships between them. Logic represents the "explanation" that lies at the core of a theory.
All theories are constrained by
assumptions
about values, time, and space, and
boundary conditions
that govern where the theory can be applied and where it cannot be applied.
Economic and political theories are not directly comparable, and researchers should not use economic theories if their objective is to understand the power structure or its evolution in an organization.
Theories may have implicit cultural assumptions, temporal assumptions, and spatial assumptions.
Attributes of a Good Theory
Logical consistency
- if some of the building blocks of a theory (constructs, propositions, logic, and boundary conditions/assumptions) are inconsistent with each other, then the theory is a poor theory.
Explanatory power
- How much does a given theory explain (or predict) reality? Good theories obviously explain the target phenomenon better than rival theories, as often measured by the variance explained (R²) in regression equations.
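A minimal sketch of how variance explained is computed in practice, assuming hypothetical data and a simple least-squares fit with NumPy:

```python
import numpy as np

# Hypothetical predictor and outcome measurements.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])

# Fit a simple linear regression y = a*x + b.
a, b = np.polyfit(x, y, deg=1)
y_hat = a * x + b

# R^2: the proportion of variance in y explained by the model.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - np.mean(y)) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"variance explained (R^2) = {r_squared:.3f}")
```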
Falsifiability
- British philosopher Karl Popper stated in the 1940s that for theories to be valid, they must be falsifiable. Falsifiability ensures that a theory is potentially disprovable if empirical data do not match its theoretical propositions, which is what allows researchers to test it empirically. Theories cannot be theories unless they are empirically testable.
Parsimony
- examines how much of a phenomenon is explained with how few variables. The concept is attributed to the 14th-century English logician Father William of Ockham. Ockham's razor states that among competing explanations that sufficiently explain the observed evidence, the simplest theory is preferable.
Parsimony relates to the degrees of freedom in a given theory. Parsimonious theories have higher degrees of freedom, which allow them to be more easily generalized to other contexts, settings, and populations.
The explanation of a complex social phenomenon can always be increased by adding more and more constructs, but at the cost of parsimony.
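One way to see this trade-off is through adjusted R², which penalizes explained variance for every added predictor; the sketch below uses purely hypothetical values:

```python
def adjusted_r_squared(r2: float, n: int, k: int) -> float:
    """Penalize R^2 for the number of predictors k, given sample size n."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Hypothetical: adding constructs raises raw R^2 slightly, but the
# adjusted value falls because parsimony is lost.
print(adjusted_r_squared(0.60, n=50, k=3))   # fewer predictors: ~0.574
print(adjusted_r_squared(0.62, n=50, k=10))  # more predictors: ~0.523
```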
Approaches to Theorizing
Steinfeld and Fulk (1990) recommend four such approaches.
The first approach is to build theories inductively based on observed patterns of events or behaviors. It is sometimes called "grounded theory building" because the theory is grounded in empirical observations. This technique is heavily dependent on the observational and interpretive abilities of the researcher, and the resulting theory may be subjective and non-confirmable.
The second approach to theory building is to conduct a bottom-up conceptual analysis to identify different sets of predictors relevant to the phenomenon of interest using a predefined framework. This is also an inductive approach that relies heavily on the inductive abilities of the researcher, and interpretation may be biased by the researcher's prior knowledge of the phenomenon being studied.
The third approach to theorizing is to extend or modify existing theories to explain a new context, such as by extending theories of individual learning to explain organizational learning. This deductive approach leverages the rich inventory of social science theories developed by prior theoreticians, and is an efficient way of building new theories by building on existing ones.
The fourth approach is to apply existing theories in entirely new contexts by drawing upon the structural similarities between the two contexts. This approach relies on reasoning by analogy, and is probably the most creative way of theorizing using a deductive approach.
In 1987, Markus used analogical similarities between a nuclear explosion and the uncontrolled growth of networks or network-based businesses to propose a critical mass theory of network growth. Markus argued that a network requires a critical mass of users to sustain its growth; without such critical mass, users may leave the network, causing its eventual demise.
Examples of Social Science Theories
Agency theory
- a classic theory in the organizational economics literature, was originally proposed by Ross in 1973 to explain two-party relationships in which the parties' goals are not congruent with each other.
The goal of agency theory is to specify optimal contracts and the conditions under which such contracts may help minimize the effect of goal incongruence.
The two parties in this theory are the principal and the agent; the principal employs the agent to perform certain tasks on its behalf.
Agency theory recommends using outcome-based contracts, such as a commission or a fee payable upon task completion, or mixed contracts that combine behavior-based and outcome-based incentives.
Agency theory also recommends tools that principals may employ to improve the efficacy of behavior-based contracts, such as investing in monitoring mechanisms to counter the information asymmetry caused by moral hazard, designing renewable contracts contingent on the agent's performance, or improving the structure of the assigned task to make it more programmable and therefore more observable.
Theory of Planned Behavior
- postulated by Ajzen (1991), the theory of planned behavior (TPB) is a generalized theory of human behavior in the social psychology literature that can be used to study a wide range of individual behaviors.
It presumes that individual behavior represents conscious reasoned choice, and is shaped by cognitive thinking and social pressures. The theory postulates that behaviors are based on one's intention regarding that behavior, which in turn is a function of the person's attitude toward the behavior, the subjective norm regarding that behavior, and the perception of control over that behavior.
Attitude is defined as the individual's overall positive or negative feelings about performing the behavior in question, which may be assessed as the sum of one's beliefs regarding the different consequences of that behavior, weighted by the desirability of those consequences.
Subjective norm refers to one's perception of whether people important to that person expect the person to perform the intended behavior, and is represented as a weighted combination of the expected norms of different referent groups, such as friends, colleagues, or supervisors at work.
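The weighted-combination structure of attitude, subjective norm, and perceived behavioral control can be sketched as follows; all beliefs, referents, scores, and weights here are hypothetical illustrations, not Ajzen's measurement instrument:

```python
# Hypothetical belief strengths about the consequences of a behavior,
# and the desirability (evaluation) of each consequence.
beliefs = [0.8, 0.6, 0.3]        # perceived likelihood of each consequence
desirability = [0.9, 0.5, -0.4]  # how desirable each consequence is

# Attitude: sum of beliefs weighted by desirability of consequences.
attitude = sum(b * d for b, d in zip(beliefs, desirability))

# Subjective norm: expected norms of referent groups (e.g., friends,
# colleagues, supervisors) weighted by motivation to comply with each.
referent_norms = [1.0, 0.7, 0.4]
motivation_to_comply = [0.6, 0.8, 0.9]
subjective_norm = sum(n * m for n, m in zip(referent_norms, motivation_to_comply))

# Hypothetical perceived behavioral control score and relative weights.
perceived_control = 0.7
w1, w2, w3 = 0.5, 0.3, 0.2  # illustrative weights, estimated empirically in practice

# Intention as a weighted function of the three antecedents.
intention = w1 * attitude + w2 * subjective_norm + w3 * perceived_control
print(f"attitude={attitude:.2f}, norm={subjective_norm:.2f}, intention={intention:.2f}")
```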
TPB is an extension of an earlier theory called the theory of reasoned action, which included attitude and subjective norm as key drivers of intention, but not behavioral control.
Innovation diffusion theory (IDT)
- is a seminal theory in the communications literature that explains how innovations are adopted within a population of potential adopters.
French sociologist Gabriel Tarde first studied the concept and Everett Rogers developed the theory in 1962 based on observations of 508 diffusion studies.
The four key elements in this theory are: innovation, communication channels, time, and social system.
IDT views innovation diffusion as a process of communication where people in a social system learn about a new innovation and its potential benefits through communication channels and are persuaded to adopt it.
Diffusion is a temporal process; the diffusion process starts off slow among a few early adopters, then picks up speed as the innovation is adopted by the mainstream population, and finally slows down as the adopter population reaches saturation.
Adopters are not all identical and can be classified into innovators, early adopters, early majority, late majority, and laggards based on their time of adoption.
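These categories are conventionally defined by how far an individual's adoption time falls from the mean adoption time, in standard deviations; the sketch below illustrates that classification with hypothetical adoption times:

```python
import numpy as np

# Hypothetical adoption times (e.g., months after product launch).
adoption_times = np.array([1, 2, 4, 6, 7, 8, 9, 10, 11, 13, 15, 20])

mean, sd = adoption_times.mean(), adoption_times.std()

def rogers_category(t: float) -> str:
    """Classify an adopter by standard deviations from the mean adoption time."""
    z = (t - mean) / sd
    if z < -2:
        return "innovator"       # roughly the first 2.5% of adopters
    if z < -1:
        return "early adopter"   # next ~13.5%
    if z < 0:
        return "early majority"  # next ~34%
    if z < 1:
        return "late majority"   # next ~34%
    return "laggard"             # last ~16%

for t in adoption_times:
    print(t, rogers_category(t))
```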
In 1995, Rogers suggested that innovation adoption is a process consisting of five stages:
1) knowledge- when adopters first learn about an innovation from mass-media or interpersonal channels
2) persuasion- when they are persuaded by prior adopters to try the innovation
3) decision- their decision to accept or reject the innovation
4) implementation- their initial utilization of the innovation
5) confirmation- their decision to continue using it to its fullest potential
Five innovation characteristics are presumed to shape adopters' innovation adoption decisions:
1) relative advantage- the expected benefits of an innovation relative to prior innovations
2) compatibility- the extent to which the innovation fits with the adopter's work habits, beliefs, and values
3) complexity- the extent to which the innovation is difficult to learn and use. Complexity is negatively correlated with innovation adoption, while the other four factors are positively correlated.
4) trialability- the extent to which the innovation can be tested on a trial basis
5) observability- the extent to which the results of using the innovation can be clearly observed
Innovation adoption also depends on personal factors such as the adopter's risk-taking propensity, education level, cosmopolitanism, and communication influence.
IDT has been criticized for having a "pro-innovation bias," that is for presuming that all innovations are beneficial and will be eventually diffused across the entire population.
Elaboration Likelihood Model
was developed by Petty and Cacioppo in 1986 and is a dual-process theory of attitude formation or change in the psychology literature. It explains how individuals can be influenced to change their attitude toward a certain object, event, or behavior, and the relative efficacy of such change strategies.
ELM posits that one's attitude may be shaped by two "routes" of influence, the central route and the peripheral route, which differ in the amount of thoughtful information processing or "elaboration" required of people. The central route requires a person to think about issue-related arguments in an informational message and carefully scrutinize the merits and relevance of those arguments before forming an informed judgment about the target object.
In the peripheral route, subjects rely on external "cues" such as number of prior users, endorsements from experts, or likeability of the endorser, rather than on the quality of arguments, in framing their attitude towards the target object.
Whether people are influenced by the central or the peripheral route depends upon their ability and motivation to elaborate on the central merits of an argument. This is called elaboration likelihood.
People in a state of high elaboration likelihood are more likely to thoughtfully process the information presented and are therefore more influenced by argument quality, while people in a state of low elaboration likelihood are more influenced by peripheral cues. Elaboration likelihood is a situational characteristic, not a personal trait.
General Deterrence Theory (GDT)
- Two utilitarian philosophers of the 18th century, Cesare Beccaria and Jeremy Bentham, formulated GDT as both an explanation of crime and a method for reducing it. GDT examines why certain individuals engage in deviant, anti-social, or criminal behaviors. The theory holds that people are fundamentally rational and freely choose deviant behaviors based on a rational cost-benefit calculation.
GDT focuses on the criminal decision making process and situational factors that influence that process.
The focus of GDT is not how to rehabilitate criminals and avert future criminal behaviors, but how to make criminal activities less attractive and therefore prevent crimes.
Examples include "target hardening," such as installing deadbolts and building self-defense skills; legal deterrents, such as eliminating parole for certain crimes, "three strikes" laws, and the death penalty; and increasing the chances of apprehension through means such as neighborhood watch programs and special task forces on drugs or gang-related crimes.
The theory has interesting implications not only for traditional crimes, but also for contemporary white-collar crimes such as insider trading, software piracy, and illegal sharing of music.