Experimental design and sampling in ecology
Randomization reduces bias (improves accuracy).
How would you do it? You need to list every single tree in the forest and give each a number (e.g. 1 to 50000). Then take a random number from your computer (runif() in R can be handy here) and that is your chosen tree. Anything less than this is not randomization.
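This can be sketched in a couple of lines of R (the forest of 50,000 trees is the hypothetical figure from the text):

```r
# Hypothetical forest: every tree numbered 1 to 50,000
n_trees <- 50000

# Option 1: draw a uniform random number and scale it to a tree ID
chosen <- ceiling(runif(1) * n_trees)

# Option 2 (simpler and idiomatic): sample tree IDs directly, without replacement
chosen_sample <- sample(1:n_trees, size = 10)

chosen          # one randomly chosen tree
chosen_sample   # ten randomly chosen trees
```

sample() is usually preferable in practice, since it guarantees whole numbers and no duplicates.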
Possible drawbacks:
treatment effects might be confounded with background effects;
the risk of misidentifying a plot and applying the wrong treatment increases;
it might be logistically difficult if we need machinery or have to apply irrigation.
Replication increases reliability (improves precision).
Replicates:
repeated measurements must not form part of a time series;
must not be grouped together in one place (they might not be spatially independent);
must be of an appropriate spatial scale;
ideally, one replicate of each treatment should be grouped together into a block, and each treatment repeated in different blocks;
should give a sample size of at least 30, or 10 replicates per treatment combination.
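The blocking idea above can be sketched in R; the three treatments and four blocks below are made-up examples:

```r
# Randomized complete block design: each treatment appears once per block,
# assigned at random to the plots *within* each block.
treatments <- c("control", "fertilizer", "irrigation")  # hypothetical treatments
n_blocks <- 4

design <- do.call(rbind, lapply(1:n_blocks, function(b) {
  data.frame(block = b,
             plot = seq_along(treatments),
             treatment = sample(treatments))  # random order within the block
}))

design  # 12 plots: every treatment exactly once in every block
```

Randomizing within blocks keeps the benefits of randomization while controlling for background gradients between blocks.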
Pseudoreplication can be temporal or spatial (measuring the same individual, or the same location, repeatedly).
One of the pillars of standard statistical analysis is the independence of errors, which pseudoreplication violates. A warning sign is a field experiment that reports suspiciously many degrees of freedom.
You can avoid pseudoreplication by:
Average away the pseudoreplicated samples and carry out the analysis on the means (the mean of the 50 plants per plot in our example).
Carry out separate analyses for each time period.
Use mixed-effects models (the most elegant approach, which also stops you from losing valuable information; we will study them later).
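The averaging approach can be sketched in R, using the running example of 50 plants per plot (the data below are simulated for illustration):

```r
# 50 plants measured in each of 6 plots, 3 plots per treatment.
# Plants within a plot are pseudoreplicates; the plot means are the
# true replicates (n = 3 per treatment).
set.seed(1)
dat <- data.frame(
  plot = rep(1:6, each = 50),
  treatment = rep(c("control", "fertilized"), each = 150),
  biomass = rnorm(300, mean = rep(c(10, 12), each = 150), sd = 2)
)

# Average away the pseudoreplication: one mean per plot
plot_means <- aggregate(biomass ~ plot + treatment, data = dat, FUN = mean)

t.test(biomass ~ treatment, data = plot_means)  # analysis on the 6 plot means
```

Note the honest degrees of freedom: the t-test uses 6 values, not 300.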
In a split-plot experiment different treatments are applied to plots of different sizes.
Typical arguments of power.t.test are: delta, the difference in means we want to detect; sd, the standard deviation of the sample; power, the desired power; and type, the type of t-test ("two.sample", "one.sample" or "paired").
power.t.test(delta = 0.1 * 5, sd = 2^0.5, power = 0.8, type = "one.sample")
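The analytical sample size can be cross-checked with a quick simulation (a sketch using the same hypothetical values, delta = 0.5 and sd = sqrt(2)):

```r
# Required sample size from the analytical power calculation
n <- ceiling(power.t.test(delta = 0.1 * 5, sd = 2^0.5, power = 0.8,
                          type = "one.sample")$n)

# Monte Carlo check: simulate many experiments with a true mean shift of 0.5
# and count how often the one-sample t-test rejects at alpha = 0.05
set.seed(42)
pvals <- replicate(2000, t.test(rnorm(n, mean = 0.5, sd = sqrt(2)))$p.value)
mean(pvals < 0.05)  # empirical power, close to the requested 0.8
```

Simulation is also the fallback when the planned analysis is more complex than a t-test and no closed-form power formula applies.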
The population of interest is the target population; the population actually sampled is the sampled population. Ideally they should be the same.
We need a sample size big enough to test our hypothesis: if it is too small, the test will lack power and our results may appear inconsistent with the literature.
A sampling unit might be a plant, a rabbit, a 10 m² quadrat…
A sampling frame lists all the sampling units in the population.
The general rule of thumb: whenever possible, use simple random sampling.
Types of sampling
The take home message is that, if possible, use simple random sampling. If the cost and convenience of randomization are very large, then little may be lost using systematic sampling (but watch out for periodic trends).
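The two designs can be contrasted in R; the transect of 100 quadrats and the sample of n = 10 are made-up numbers:

```r
# A hypothetical transect divided into 100 quadrats, of which we sample 10
N <- 100
n <- 10

# Simple random sampling: every quadrat equally likely, no pattern
random_units <- sort(sample(1:N, n))

# Systematic sampling: one random start, then every k-th quadrat (k = N/n)
start <- sample(1:(N / n), 1)
systematic_units <- seq(start, N, by = N / n)

random_units
systematic_units  # evenly spaced; beware if the habitat varies periodically
```

If the habitat has a periodic trend with a period near k, the systematic sample can hit only crests (or only troughs) and be badly biased, which is the warning in the text.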
WATCH OUT FOR
Haphazard sampling (e.g. the first rabbit to enter the cage), convenience sampling, accessibility sampling (taking observations that are readily available), judgment sampling (the researcher selects a series of "typical" sampling units based on experience; atypical units are excluded, thus underestimating the variability) and accidental sampling.
Not knowing your species
Understanding the study species is essential for considering biases and interpreting data.
Not knowing exactly why you are surveying
Think exactly what the question is and what data are required to answer it. How will the data be presented and analyzed?
Counting in one or a few large areas rather than a large number of small ones
A single count gives no measure of the natural variation and it is then hard to see how significant any changes are. This also applies to quadrats.
Not giving precise information as to where sampling occurred
Using a GPS is the easy solution to this.
Only sampling sites where the species is abundant
It seems obvious to concentrate upon sites where the species is known to occur. However, without knowing the density where it is scarce, it is impossible to determine the total population size.
Changing the methods in monitoring
Unless there is a careful comparison of the different methods, changing the methods prevents comparisons between the years.
Pretending that the samples taken within a site are replicates
For example, if the project involves comparing logged and unlogged forest, but you have just collected a number of samples in one area of each, then there is only one replicate of each treatment. It is unacceptable to compare the samples within the logged forest with the samples from the unlogged forest by pretending that each sample is a replicate (pseudoreplication).
Not having controls in management experiments
This is the greatest problem in interpreting the consequences of management.
Not being honest about the methods used
If you survey moths only on warm still nights or place pitfall traps in the locations that are most likely to be successful then this is fine, but say so. Someone else surveying on all nights or randomly locating traps may otherwise conclude that the species has declined.
Believing that the density of trapped individuals is the same as the absolute density
Assuming that the sampling efficiency is similar in different habitats
Differences in physical structure or vegetation structure will influence almost every surveying technique and thus confound comparisons.
Deviating from transect routes
Not knowing the assumptions of the survey techniques
Each technique has assumptions and it is important to consider these. For example, many mark–release–recapture methods assume the population to be closed (i.e. no gains or losses) yet are often applied in situations where this is clearly not the case.
Thinking that someone else will identify all your samples for you
Most taxonomists have a huge backlog of samples.
Assuming that others will collect data in exactly the same manner and with the same enthusiasm
Everyone collects data in a slightly different way, which will affect the results, including setting traps, erecting mist nets or counting plants within quadrats. It is essential to standardise and test.
Being too ambitious
A common problem is to start an extensive project that could never be completed. The partly completed project is usually far less worthwhile than a smaller, completed project would be. Collecting far more samples than can possibly be analysed is a common problem.
Not knowing the difference between accuracy and precision
Ideally one would like the result to be accurate and precise, but this is not always possible. A precise but biased (inaccurate) measure may be sufficient if one is looking for changes over time or in comparing sites. A precise but inaccurate measure of a population size for assessing threat is usually not of great use.
Believing the results
Practically every survey has biases and inaccuracies. The secret is to evaluate how much these matter.
Not storing information where it can be retrieved in the future
Not telling the world what you have found
There is no point in doing work unless the results are presented to the appropriate people by publishing the results or feeding results back to any key audiences.
Other common sampling techniques
Mark–recapture and transect sampling
The main assumption is that the proportion of marked animals in the second sample equals the proportion of marked animals in the whole population.
Three critical assumptions are made:
1—The population is closed: the study duration should be short for it to hold but long enough for mixing to occur.
2—Marks are not lost or overlooked.
3—All animals are equally likely to be captured in each sample: beware of animals becoming "trap shy" or "trap happy" after first capture. Some traps will attract one sex more than the other, or one age group more than another. Animals with small home ranges may never encounter a trap.
The more complex Jolly–Seber model accounts for violations of the closure assumption: it estimates migration rates, births and deaths. These models can be fitted as GLMs with Poisson errors using the R package Rcapture.
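The proportion assumption above leads directly to the classic Lincoln–Petersen estimator, N̂ = n₁n₂/m₂. A sketch with made-up capture numbers:

```r
# Hypothetical two-sample mark-recapture survey
n1 <- 120   # animals captured and marked in the first sample
n2 <- 100   # animals captured in the second sample
m2 <- 25    # marked animals among the second sample

# m2/n2 should equal n1/N, so solving for N gives:
N_hat <- n1 * n2 / m2                             # Lincoln-Petersen: 480

# Chapman's version is less biased when m2 is small
N_chapman <- (n1 + 1) * (n2 + 1) / (m2 + 1) - 1   # ~469.0

N_hat
N_chapman
```

Both estimators assume the closed population and equal catchability conditions listed above.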
Transect sampling is useful for estimating population densities.
(i) Strip transects, in which we walk along a line and assume that we detect every individual up to a distance w from it. If we can only detect a portion of the individuals within that distance, we call them line transects.
(ii) The researcher sits at a point and observes individuals (e.g. birds) within a given radius. These are circular plot surveys if we assume that we see everything up to a distance w, and point transects if we assume that some of the farther birds go unobserved.
The main assumptions are: (i) all objects are sighted once; (ii) the transects randomly sample the study area (e.g. choose a random starting point and walk parallel transects through the study region); (iii) the objects are distributed following a Poisson process (the occurrence of each object is not related to the others, and hence the variance equals the mean, s² = x̄).
Several models can be used to fit the decrease in sightings as a function of distance.
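One common choice (not specified in the notes) is a half-normal detection function, g(x) = exp(−x²/2σ²). A sketch fitting σ by maximum likelihood to simulated perpendicular distances:

```r
# Simulated line-transect data: perpendicular distances from the line,
# half-normal detection with a true scale of 20 m, truncated at w = 50 m
set.seed(7)
w <- 50
sigma_true <- 20
x <- abs(rnorm(5000, sd = sigma_true))
x <- x[x <= w]                         # keep only detections within w

# Negative log-likelihood of the truncated half-normal:
# f(x) = dnorm(x, sd = sigma) / (pnorm(w, sd = sigma) - 0.5) on [0, w]
nll <- function(sigma) {
  -sum(dnorm(x, sd = sigma, log = TRUE)) +
    length(x) * log(pnorm(w, sd = sigma) - 0.5)
}

sigma_hat <- optimize(nll, interval = c(1, 100))$minimum
sigma_hat  # should recover a value close to sigma_true = 20
```

From σ̂ one gets the effective strip half-width μ = ∫₀ʷ g(x) dx, and density follows as D ≈ n / (2μL), where L is the total transect length. Dedicated packages (e.g. Distance) automate this and offer alternative detection functions.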