Objectivity in Science
What is Objectivity?
Historical Understandings
- C18: Objectivity as being 'true to nature', giving an idealised vision of the world
- C19: Objectivity as 'mechanical objectivity', where we represent the world exactly as it is
- C20: 'Trained judgement', where you follow rules and omit certain features to produce a USEFUL picture of the world
L1 Daston and Galison
- Here, the authors are giving a historical overview of the evolution of the meaning of 'objectivity'. Importantly, they argue that changing notions of objectivity generated new ideas of the scientific self, and prompted shifts in epistemic values
- While the term only came into use in the C19, this does not invalidate the work of earlier natural philosophers; they simply weren't concerned with the effect of the self on their observations
- C18: truth-to-nature framework encouraged the viewing of objects through a standardised lens (so atlases mattered)
- C19: mechanical objectivity was about removing human interference
- C20: trained judgement, where images/objects were manipulated by observers to become USEFUL
Why it Matters
- We use the term objective frequently in scientific debates and discussions, and that is a testament to the value we place on the concept
- We should try to find a core meaning to match our everyday sense/understanding of the word!
- We need to understand the term, because it relates to the social and political manipulations/uses of science
L1 Douglas
- Argues that there are 8 distinct senses of objectivity that can be used operationally, and because they cannot be collapsed into one another, objectivity has no single CORE definition (irreducible complexity)
- Douglas too acknowledges that trust isn't sufficient, so she seeks to provide these 8 notions to generate clarity on the concept
- See annotated biblio for the 8 core notions, divided into 3 categories: (1) the relationship between observer and the world (2) values in an individual's thought processes (3) community and objectivity
- Finally, Douglas explains that having objectivity doesn't mean that subjectivity is counterproductive!
The View from Nowhere (Thomas Nagel) or "Big O Objectivity"
- The idea that we view the world not from a human perspective, but as it really IS
- Science can be viewed this way, because we think it shows us how the world TRULY is
Problems
- Relies on implausible metaphysics and assumes that a world exists that is completely SEPARATE from humans
- How does one even get to this point where we can view the world from nowhere?
- Finally, it seems that in daily practice we can STILL invoke the term 'objective' while adopting a human lens (e.g. inflation)
Objectivity as Trust (alternative to Big O Objectivity)
- Basically, to claim that something is objective because we can trust it (e.g. trusting experts)
- Here, we focus on PROCESSES; we believe something is objective because we TRUST the processes that led to the product
Problems/Issues
- Centrally, it's just the question of WHY something being rule-governed should give us epistemic trust
(1) Separate Virtues Problem (i.e. that TRUST and RULES are separate virtues in the process)
- By examining processes, we believe that there are certain rules in place to allow researchers to produce an objective result
- But does having rules mean that we should have trust in the product? The link isn't clear (example of errant marking in MCQ)
Also consider Kuhn's SSR
- Kuhn suggests that scientific revolutions happen when paradigms change, and he rebuts the claim that science is driven by irrational factors by asserting that this process of change is governed by rules
- But, there's really no clear-cut way of saying 1 paradigm is better than another, even if we had a rule-governed process!
(2) Conflicting Virtues Problem
- What do we do when the processes turn out to have led to an errant product/outcome?
- Should we just stick to the old outcome, because it has already generated trust anyway?
- In policymaking, we often desire processes that regulate interactions FAIRLY; so we are okay with having rules that don't tell us how the world really is
Case Study of NICE:
- When it was shown that their rule of not spending > GBP30,000/QALY on drug funding was wrong (the figure should have been GBP13,000), NICE stuck to their guns and ignored it
- Argument that it is easier and more convenient for all to stick with something we've known/used for a long time (we already have trust there)!
Can/Should we do without objectivity?
- Hacking: An elevator word that doesn't tell us what matters. The term is empty and distracting, and doesn't tell us why we should trust the scientist
- Brown: That objectivity, in promoting a value-free notion of science, gives a warped view of science and the role of values to the public
What stops objectivity?
- Values: The values of a researcher can influence the research they produce
- Interests: Maybe the researcher is paid to do some sort of research and can only publish favourable results
- Biases: Confirmation bias, fitting the evidence to the theory etc.
The Value-Free Ideal
- So, this suggests that science can only be objective/good when not under the influence of NON-EPISTEMIC values
--> Note here: (1) what is 'good' science? (2) what are 'non-epistemic' values?
BUT... NEVs can influence science in non-problematic ways
- Tuskegee Syphilis Study: NEVs can tell us how NOT to conduct research unethically in future
- Using the A-bombs in 1945: A question that also appeals to MORAL values!
The trick:
- The value-free ideal specifies that the internal reasoning practices of science must be value-free
- So, when you have evidence/data, and you are trying to draw out conclusions, NEVs cannot interfere with this 'core of reasoning'
Here comes the problem of INDUCTIVE RISK
- Rudner 1953: Because science works by induction, claims will always be underdetermined (i.e. evidence will never give 100% certainty to a claim) --> Scientists must risk accepting false hypotheses or rejecting true ones, and they appeal to NEVs to make the trade-off
- Douglas 2000: Specifies that there is UNCERTAINTY in the collection of evidence, and this means NEVs are needed to determine which mistakes matter and to reach a resolution (so inductive risk occurs throughout the process); see the decision-theoretic sketch below
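A minimal decision-theoretic sketch of the Rudner/Douglas point (my own illustrative gloss, not a formalism taken from either text; the loss terms are assumed for illustration): how sure we need to be before accepting a hypothesis H falls out of the relative costs of the two possible errors.

% p = P(H | evidence); L_{FA} = loss from falsely accepting H; L_{FR} = loss from falsely rejecting H
\mathbb{E}[\text{loss} \mid \text{accept}] = (1 - p)\,L_{FA}, \qquad \mathbb{E}[\text{loss} \mid \text{reject}] = p\,L_{FR}
% Accepting H minimises expected loss iff
(1 - p)\,L_{FA} < p\,L_{FR} \;\Longleftrightarrow\; p > \frac{L_{FA}}{L_{FA} + L_{FR}}

So the acceptance threshold is fixed by the ratio of error costs, which is a non-epistemic value judgement: e.g. if falsely accepting is judged four times worse than falsely rejecting, we require p > 4/5 = 0.8.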
Case Study: Betz on the IPCC
- The big question is whether the IPCC and the reports it produces show that scientists need not take inductive risks, since they can use HEDGED hypotheses and qualify their degree of uncertainty
Concerns for Betz (See L2 John)
- 2nd-Order Uncertainty: Claims about how uncertain you are are themselves claims, and they can matter
- Deferred IR: It seems to just become a case where the scientist defers the taking of inductive risks to the policymakers
- Evidence: The IPCC's choice of WHAT evidence to include seems to involve the use of NEVs (they ignore things simply because they were not peer-reviewed)
L2 Betz
- Argues that the VFI can hold up in science as long as scientists make their uncertainties explicit by using hedged hypotheses
- The worry Betz addresses is that scientists must assert plain hypotheses in order to produce policy-relevant science
- But, as in the IPCC reports, scientists can instead communicate their uncertainty (hedged hypotheses) and so avoid taking IR
- On 2nd-order uncertainty: Betz believes this isn't a problem for policy relevance (we take many empirical statements as plain facts without questioning their validity, and they are policy-relevant as they are)!
Personal Thoughts
- The process of articulating the uncertainty in the report seems to require the use of NEVs (what makes something 60% uncertain rather than 50%?)
- Betz is focussing solely on the part about statistical significance, but seems to neglect the other internal processes of science (evidence gathering, interpretation)
- And I question the policy-relevance of hedged hypotheses
L2 John
- Argues that the IPCC case study does not show that scientists can do science while adhering to the VFI
- Betz assumes a false binary: if you use plain hypotheses you must use NEVs, and if you use hedged hypotheses you avoid them
--> Even hedged hypotheses are only value-free IF the move from the evidence claims to the hedged hypothesis does not itself involve significant risk of error
- John notes that for the IPCC (Steele's argument), scientists still need to translate uncertainties into a qualitative measure
- Also, being a consensus-making body, the IPCC needs a framework for selecting evidence (which involves NEVs)!
- Finally, asking if a value-free IPCC report is valuable to policymakers (is policy impotence a price worth paying for value-free science?)
Important
- In order to undermine Betz, one must show that climate scientists MUST take inductive risks, that this is something they cannot AVOID (rather than simply showing that it is something they DO)!
- One way to consider this is to use Steele's argument that the scientist QUA policy advisor takes inductive risk!
L2 Winsberg
- This argues that uncertainty quantification (UQ) in climate modelling doesn't really achieve its goal of separating epistemic from normative values
- In climate modelling, the systems are so complex and intertwined that it is impossible for climate modellers to avoid taking inductive risks --> because they need to interpret the data, choose which model to use, etc.!
- Trying to screen off these values simply doesn't work because of the sheer complexity of the models, so, in short, one can never avoid using NEVs in the process of generating evidence for climate science! It is very much value-laden
L2 Havstad and Brown
- Uses the case study of the IPCC's Pragmatic Enlightened Model to show how scientists can take inductive risks and STILL be objective/policy-neutral
- The PEM acts like a policy map, whereby scientists lay out all of the viable policy pathways, with their different predicted outcomes, for policymakers
- It explicitly admits that science is value-laden in proposing policies, and yet leaves the decision-making to policymakers
- But its issue is that this map ends up being very complicated, and may lose its value. It's also not an attempt to rescue the VFI (not that it matters)!
p. 110 also specifically explains why climate scientists have a responsibility to make value judgements
But note also that PEM is just Betz's argument in a different form (it too calls for the deferral of value-laden decisions to policymakers)
Appeal to the Authority of a Collective Community
- Oreskes further adds that scientific knowledge is the consensus of a well-established community, which produces this knowledge through a process of organised scepticism
- Thus, when we say science is an appeal to authority, it is an appeal to the authority of a collective community
- This can make values and objectivity compatible
L2 Douglas
- Argues that the presence of inductive risk (AIR) means that scientists must employ NEVs when doing science, particularly when the science has political or social repercussions
- NEVs are important when scientists are trying to decide whether it is worse to risk a false negative or a false positive
- Here, we focus on the role of NEVs on the internal reasoning processes of science: (1) choosing levels of statistical significance (2) characterisation of evidence that is borderline (3) interpreting results for social outcomes!
L2 Rudner
- Rudner makes a distinction between using values in the INTERNAL and EXTERNAL processes of science, and he centres on the move from data to conclusions
- Underdetermination means scientists always ask: "how sure do we need to be before we accept a hypothesis" --> This depends on "how serious the mistake is"
- Fact-value dichotomy seems to be untenable (as per Quine), because the scientist qua scientist often makes value judgements about the facts they have
- What is needed instead is for scientists to be explicit about the value judgements they are making!
L2 Steele
- Expands on Rudner's argument: Rudner neglects how scientists as POLICY ADVISORS need to consider the various practical implications of their suggestions to decide how wide their credibility gap should be
- As policy advisors, there are new dimensions in which the scientist has to be strategic, and this strengthens the argument that scientists need to use NEVs! --> Strengthens Rudner, because now it's not just about the truth of the world, but also about the POLICY consequences of science
- IPCC: Scientists need to use NEVs to convert their beliefs from probabilities to something usable
- Reporting: Scientists need NEVs to choose how they should report intervals, because they need to recognise the pragmatic consequences of being wrong
- Regardless, someone MUST traverse the fact-value line; it boils down to an issue of division of labour!
Going Social
Model 1: Values or biases cancel out through the interaction of scientists
Model 2: That social processes guarantee the use of acceptable values/norms
Consider: Solomon and Plate Tectonics
- How did we get a consensus on plate tectonics, despite the presence of all sorts of biases among scientists? (confirmation bias, salience bias, availability bias, authority bias etc.)
- Well, the scientists were interacting with one another and the world!
- Is this, however, just a case of scientists being lucky? Solomon's account is NOT a normative description of how biases are reduced/made good...
Concerns with Going Social:
- Social interactions can remove some 'rubbish values', but cannot cleanse the system completely
- Social interactions can actually work to REINFORCE negative/bad biases!
Is Objectivity Just Male Subjectivity?
- That the notion of objectivity is just being defined based on a specific/particular perspective - that of the white male
- So, objectivity is just working to serve particular interests over others!
Standpoint Theory/Epistemology
- Depending on where you are, you can see certain things and can't see other things
- People can see things by virtue of WHERE they are physically located!
Sandra Harding on Standpoint Theory
- This applies to claims too: Certain claims can only be known from some social/cultural/economic perspective
- Links to postcolonial thought (it articulates this awareness of standpoint theory)
- Also links to epistemic injustice (whereby people are disrespected in their status as a knower)
How it links to objectivity:
- A claim is STRONGLY objective when: it is well-established from every perspective possible
- A claim is weakly objective when: it is correct from within a singular standpoint
--> Many scholars support this notion that objectivity is achieved when people from different perspectives agree on something! And this is not IMPOSSIBLE to achieve (it's hard, but not impossible)
Alexandrova on Strong Objectivity:
- Alexandrova argues that some claims are mixed claims, in that they are causal, but have moral/political dimensions to them
- In such cases, the value claims must be made explicit, and then checked with people affected by the work to see if it is something they endorse. (so, the claim can be objective even if there are values, as long as the concepts are agreed from different perspectives)
Nozick's INVARIANCE to explain strong objectivity:
- Given some fact/claim, if it remains invariant under different forms of admissible transformations, the fact/claim can be said to be objective (see the formal sketch after this block)
EG: The claim that WWII ended in 1945 is strongly objective, because no matter how much we debate about what caused the end of WWII, we all agree that it ended in 1945
- So, there is a sense that objectivity is when perspectives CONVERGE
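A compact way of putting the invariance idea (my own formal gloss of the notes above, not Nozick's exact formulation; the symbols are illustrative):

% T = the class of admissible transformations (changes of perspective/standpoint)
p \text{ is objective} \iff t(p) = p \quad \text{for all } t \in T
% Degrees of objectivity: the wider the class T under which p stays invariant, the more objective p is

Read alongside strong vs weak objectivity: strong objectivity asks for invariance across every admissible standpoint, weak objectivity only for invariance within a single standpoint (e.g. 'WWII ended in 1945' stays fixed however the perspective on its causes is transformed).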
Concerns:
- How do we decide WHOSE knowledge/perspectives to consider? (e.g. conspiracy theories?)
- It seems then that agreement itself doesn't give us truth; objectivity must be more than mere agreement.
L4 Nozick
- Here, Nozick describes what makes a fact or belief objective, and the significance this has for science
- For him, there are 3 conditions that make a fact objective: accessibility from different angles, intersubjective agreement and independence
- So, with the concept of invariance, a fact is objective IF, under admissible transformations, it remains the same
- Importantly, for Nozick, subjectivity-objectivity is a spectrum, not a binary: we can speak of DEGREES of objectivity
- Objective beliefs are not assessed the same way as facts; a belief is objective when biases that lead it further from the truth are absent
--> Important: Not about being FREE of values, but about having good values that counter biases and so secure objectivity
- For science, it is precisely because observations are value-laden and path-dependent that it is able to progress (it is a cumulative effort after all)!
L4 Harding Ch 1
- Harding traces postwar global changes and developments to argue that changing political environments generated new philosophies of science --> in a climate of social democratic movements, theories/ideas in PoS shifted too (new perspectives were created)
- An exceptionalist stance towards science emerged post-war, as S/T became critical to the global Cold War and scientists wanted to assert the autonomy of science
- 1970s: Failure of development projects, counterculture (anti-authoritarian social movements), globalisation!
- In this volatile climate, new, alternative epistemologies emerged: feminist studies, postcolonial studies, SSK! They sought to show how science and society co-produce each other
--> Harding asserts that these 3 case studies have shown how diversity strengthens objectivity, allowing us to achieve strong objectivity
NB: Harding and Longino
- Longino focusses on the social processes, trying to explain HOW social processes can lead us to objectivity
- Harding isn't interested in the interactions of different standpoints; instead, she asserts that for claims to be objective, they need to meet this stronger standard
NB: The concern with standpoint epistemology is that theorists may adopt EXTREME views: that because someone does not come from a certain perspective, they will NEVER understand some types of knowledge
Case Study: Sheep Farmers in Cumbria
- How the scientists, when trying to give recommendations and help the farmers, completely ignored the farmers' own expertise
- Implementation of misguided policies, poor recommendations based on bad readings!
- A clash of worldviews/knowledge systems is evident here
L4 Wynne
- Argues that scholarly literature has neglected the intellectual value of lay knowledge. We cannot ignore its importance if we are to understand how natural knowledge is produced from social contexts!
- Rebuts Giddens's argument that a lack of public dissent means public trust in experts/systems! (it could be an issue of transparency, or technological fatalism on the public's part)
- Risk is a social construct manufactured by the expert institutions, and so, risk, to the public, is about the behaviour and trustworthiness of experts!
- We must stop seeing the lay-expert divide as a divide, so as to redistribute power AWAY from expert institutions
- The case study of the Cumbrian sheep farmers shows how scientists ignore local knowledge at their own peril, as this increased distrust in scientists. Also, farmers followed the scientists' advice not because they trusted them, but because they had no choice. And the scientists' failure to respect the farmers led to the questionable implementation of certain scientific policies or experiments!
- Case study of Andean potato farmers also shows the strong contrast between local, indigenous knowledge and universal scientific knowledge. Van der Ploeg explains that indigenous knowledge is often ignored because it is seen as incompatible with standardised scientific knowledge!
Link to P6 Science and Activism
- Why does indigenous knowledge matter, or what is the role of the public in a risk society?
- Also consider how combining science with local/indigenous knowledge can make it easier for scientists/policymakers to accept it!