This vignette demonstrates the latest features of the development version, which can be installed from GitHub.
library(devtools)
install_github("SPATIAL-Lab/assignR@*release")
We will introduce the basic functionality of assignR using data bundled with the package. We’ll review how to access data for known-origin biological samples and environmental models, use these to fit and apply functions estimating the probability of sample origin across a study region, and summarize these results to answer research and conservation questions. We’ll also demonstrate a quality analysis tool useful in study design, method comparison, and uncertainty analysis.
Let’s load assignR and another package we’ll need.
library(assignR)
library(terra)
Now use data from the package to plot a simplified North America boundary mask.
plot(naMap)
Let’s do the same for a growing season precipitation H isoscape for North America. Notice this is a spatial raster (SpatRaster) with two layers: the mean prediction and the standard error of the prediction. The layers are from waterisotopes.org, and their resolution has been reduced to speed up processing in these examples. Full-resolution isoscapes of several different types can be downloaded using the getIsoscapes function (refer to the help page for details).
plot(d2h_lrNA)
The package includes a database of H and O isotope data for known-origin samples (knownOrig.rda), which consists of three features (sites, samples, and sources). Let’s load it and have a look. First we’ll get the names of the data fields available in the tables.
names(knownOrig$sites)
## [1] "Site_ID" "Site_name" "State" "Country" "Site_comme"
names(knownOrig$samples)
## [1] "Sample_ID" "Sample_ID_orig" "Site_ID" "Dataset_ID" "Taxon" "Group"
## [7] "Source_quality" "Age_class" "Material_type" "Matrix" "d2H" "d2H.sd"
## [13] "d18O" "d18O.sd" "Sample_comments"
names(knownOrig$sources)
## [1] "Dataset_ID" "Dataset_name" "Citation" "Sampling_method"
## [5] "Sample_powdered" "Lipid_extraction" "Lipid_extraction_method" "Exchange"
## [9] "Exchange_method" "Exchange_T" "H_cal" "O_cal"
## [13] "Std_powdered" "Drying" "Analysis_method" "Analysis_type"
## [17] "Source_comments"
The sites feature is a spatial object that records the geographic location of all sites from which samples are available.
plot(wrld_simpl)
points(knownOrig$sites, col = "red")
Now let’s look at the species names available.
unique(knownOrig$samples$Taxon)
## [1] "Danaus plexippus" "Setophaga ruticilla" "Turdus migratorius" "Setophaga coronata auduboni"
## [5] "Poecile atricapillus" "Thryomanes bewickii" "Thryothorus ludovicianus" "Spizella passerina"
## [9] "Geothlypis trichas" "Setophaga pensylvanica" "Baeolophus bicolor" "Vermivora chrysoptera"
## [13] "Catharus guttatus" "Setophaga citrina" "Geothlypis formosa" "Geothlypis tolmiei"
## [17] "Oreothlypis ruficapilla" "Cardinalis cardinalis" "Oreothlypis celata" "Junco hyemalis oregonus"
## [21] "Seiurus aurocapilla" "Vireo olivaceus" "Melospiza melodia" "Catharus ustulatus"
## [25] "Catharus fuscescens" "Vireo griseus" "Cardellina pusilla" "Hylocichla mustelina"
## [29] "Icteria virens" "Setophaga petechia" "Melozone aberti" "Vermivora cyanoptera"
## [33] "Passer domesticus" "Aimophila ruficeps" "Poecile carolinensis" "Troglodytes aedon"
## [37] "Dumetella carolinensis" "Mniotilta varia" "Lanius ludovicianus" "Anthus spragueii"
## [41] "Euphagus carolinus" "Empidonax minimus" "Oreothlypis peregrina" "Aythya affinis"
## [45] "Cyanistes caeruleus" "Phasianus colchicus" "Lagopus lagopus" "Tetrao tetrix"
## [49] "Dryocopus maritus" "Serin serin" "Vanellus vanellus" "Corvus corone"
## [53] "Turdus merula" "Corvus monedula" "Columba palumbus" "Turtle Dove"
## [57] "Tetrastes bonasia" "Perdix perdix" "Anas platyrhyncos" "Branta canadensis"
## [61] "Columba livia" "Numenius arguata" "Turdus pilaris" "Turdus iliacus"
## [65] "Turdus philomelos" "Fringilla coelebs" "Buteo lagopus" "Accipiter striatus"
## [69] "Falco sparverius" "Accipiter gentillis" "Accipiter cooperii" "Buteo jamaicensis"
## [73] "Buteo platypterus" "Buteo swainsoni" "Circus cyaneus" "Falco columbarius"
## [77] "Falco mexicanus" "Falco perigrinus" "Wilsonia citrina" "Oporornis tolmiei"
## [81] "Wilsonia pusilla" "Homo sapiens" "Charadrius montanus" "Anas platyrhynchos"
## [85] "Locustella luscinioides" "Acrocephalus arundinaceus" "Acrocephalus scirpaceus" "Pipilo maculatus"
Load H isotope data for North American Loggerhead Shrike from the package database.
Ll_d = subOrigData(taxon = "Lanius ludovicianus", mask = naMap)
## 524 samples are found from 427 sites
## 524 samples from 427 sites in the transformed dataset
By default, the subOrigData function transforms all data to a common reference scale (defined by the standard materials and the assigned, calibrated values for those standards; by default VSMOW-SLAP) using data from co-analysis of different laboratory standards (see Magozzi et al., 2021). The calibrations used are documented in the function’s return object.
Ll_d$chains
## [[1]]
## [1] "OldEC.1_H_1" "EC_H_7" "EC_H_9" "VSMOW_H"
Information on these calibrations is contained in the stds.rda data file.
Transformation is important when blending data from different labs or papers because different reference scales have been used to calibrate published data and these calibrations are not always comparable. In this case all the data come from one paper:
Ll_d$sources[,1:3]
## Dataset_ID Dataset_name
## 2 2 Hobson et al. 2012 Plos
## Citation
## 2 Hobson KA, Van Wilgenburg SL, Wassenaar LI, Larson K. 2012. Linking hydrogen (d2H) isotopes in feathers and precipitation: sources of variance and consequences for assignment to isoscapes. Plos One 7:e35137
If we didn’t want to transform the data, and instead wished to use the reference scale from the original publication, we could specify that in our call to subOrigData. Keep in mind that any subsequent analyses using these data will be based on this calibration scale: for example, if you wish to assign samples of unknown origin, the values for those samples should be reported on the same scale.
Ll_d = subOrigData(taxon = "Lanius ludovicianus", mask = naMap, ref_scale = NULL)
## 524 samples are found from 427 sites
Ll_d$sources$H_cal
## [1] "OldEC.1_H_1"
For a real application you would want to explore the database to find measurements that are appropriate to your study system (same or similar taxon, geographic region, measurement approach, etc.) or collect and import known-origin data that are specific to your system.
We need to start by assessing how the environmental (precipitation) isoscape values correlate with the sample values. calRaster fits a linear model relating the precipitation isoscape values to the sample values, and applies it to produce a calibrated, sample-type-specific isoscape.
d2h_Ll = calRaster(known = Ll_d, isoscape = d2h_lrNA, mask = naMap)
##
##
## ---------------------------------------
## ------------------------------------------
## rescale function uses linear regression model,
## the summary of this model is:
## -------------------------------------------
## --------------------------------------
##
## Call:
## lm(formula = tissue.iso ~ isoscape.iso[, 1], weights = tissue.iso.wt)
##
## Residuals:
## Min 1Q Median 3Q Max
## -149.640 -16.474 0.743 19.946 107.625
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.20058 1.81952 1.209 0.227
## isoscape.iso[, 1] 1.12628 0.04019 28.026 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 32.41 on 522 degrees of freedom
## Multiple R-squared: 0.6007, Adjusted R-squared: 0.6
## F-statistic: 785.4 on 1 and 522 DF, p-value: < 2.2e-16
## NULL
Let’s create some hypothetical samples to use in demonstrating how we can evaluate the probability that the samples originated from different parts of the isoscape. The isotope values are drawn from a random distribution with a standard deviation of 8 per mil, which is a pretty reasonable variance for conspecific residents at a single location. We’ll also add made-up values for the analytical uncertainty for each sample and a column recording the calibration scale used for our measurements. If you had real measured data for your study samples you would load them here, instead.
id = letters[1:5]
set.seed(123)
d2H = rnorm(5, -110, 8)
d2H.sd = runif(5, 1.5, 2.5)
d2H_cal = rep("UT_H_1", 5)
Ll_un = data.frame(id, d2H, d2H.sd, d2H_cal)
print(Ll_un)
## id d2H d2H.sd d2H_cal
## 1 a -114.48381 2.456833 UT_H_1
## 2 b -111.84142 1.953334 UT_H_1
## 3 c -97.53033 2.177571 UT_H_1
## 4 d -109.43593 2.072633 UT_H_1
## 5 e -108.96570 1.602925 UT_H_1
As discussed above, one issue that must be considered with any organic H or O isotope data is the reference scale used by the laboratory that produced the data. The reference scale for your unknown samples should be the same as that for the known-origin data used in calRaster. Remember that the scale for our known-origin data Ll_d is OldEC.1_H_1. Let’s assume that our fake data were normalized to the UT_H_1 scale. The refTrans function allows us to convert between the two.
Ll_un = refTrans(Ll_un, ref_scale = "OldEC.1_H_1")
print(Ll_un)
## $data
## id d2H d2H.sd d2H_cal
## 1 a -127.8800 3.374984 OldEC.1_H_1
## 2 b -125.0795 2.970657 OldEC.1_H_1
## 3 c -109.6746 2.808334 OldEC.1_H_1
## 4 d -122.5874 3.016841 OldEC.1_H_1
## 5 e -121.7123 2.594943 OldEC.1_H_1
##
## $chains
## $chains[[1]]
## [1] "UT_H_1" "UT_H_2" "US_H_5" "US_H_6" "VSMOW_H" "EC_H_9" "EC_H_7" "OldEC.1_H_1"
##
##
## attr(,"class")
## [1] "refTrans"
Notice that both the d2H values and the uncertainties have been updated to reflect the scale transformation.
Now we will produce posterior probability density maps for the unknown samples. For reference on the Bayesian inversion method, see Wunder, 2010.
Ll_prob = pdRaster(d2h_Ll, Ll_un)
## NULL
Cell values in these maps are small because each cell’s value represents the probability that this one cell, out of all of them on the map, is the actual origin of the sample. Together, all cell values on the map sum to ‘1’, reflecting the assumption that the sample originated somewhere in the study area. Let’s check this for sample ‘a’.
global(Ll_prob[[1]], 'sum', na.rm = TRUE)
## sum
## a 1
Check out the help page for pdRaster for additional options, including the use of informative prior probabilities.
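As a hedged sketch of that prior option (this assumes pdRaster accepts a prior as a SpatRaster of relative probabilities on the isoscape grid — check the help page for the exact requirements), we could down-weight part of the study area. The weighting used here is entirely made up for illustration.

```r
# Build an arbitrary spatial prior on the calibrated isoscape grid.
flat = d2h_Ll$isoscape.rescale[[1]] * 0 + 1            # flat surface; NA cells stay NA
prior = ifel(init(flat, "y") > 50, flat * 0.1, flat)   # down-weight cells north of 50 N

# Assumed usage: supply the prior surface via pdRaster's `prior` argument.
Ll_prob_prior = pdRaster(d2h_Ll, Ll_un, prior = prior)
```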
We can also use multiple isoscapes to (potentially) add power to our analyses. We will start by calibrating a H isoscape for the monarch butterfly, Danaus plexippus.
Dp_d = subOrigData(taxon = "Danaus plexippus")
## 208 samples are found from 32 sites
## 150 samples from 31 sites in the transformed dataset
d2h_Dp = calRaster(Dp_d, d2h_lrNA)
##
##
## ---------------------------------------
## ------------------------------------------
## rescale function uses linear regression model,
## the summary of this model is:
## -------------------------------------------
## --------------------------------------
##
## Call:
## lm(formula = tissue.iso ~ isoscape.iso[, 1], weights = tissue.iso.wt)
##
## Weighted Residuals:
## Min 1Q Median 3Q Max
## -11.030 -3.301 0.118 3.086 9.086
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -51.98592 1.95937 -26.53 <2e-16 ***
## isoscape.iso[, 1] 0.74467 0.04055 18.37 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 4.438 on 148 degrees of freedom
## Multiple R-squared: 0.695, Adjusted R-squared: 0.693
## F-statistic: 337.3 on 1 and 148 DF, p-value: < 2.2e-16
## NULL
Our second isoscape represents 87Sr/86Sr values across our study region, the state of Michigan. It was published by Bataille and Bowen, 2012, obtained from waterisotopes.org, cropped and aggregated to a coarser resolution, and a rough estimate of uncertainty was added.
In this case, we do not have any known-origin tissue samples to work with. However, our isoscape was developed to approximate the bioavailable Sr pool, and Sr isotopes are not strongly fractionated in food webs. Thus, our analysis will assume that the isoscape provides a good representation of the expected Sr values for our study species without calibration.
Let’s look at the Sr isoscape and compare it with our butterfly H isoscape.
plot(sr_MI$weathered.mean)
crs(sr_MI, describe = TRUE)
## name authority code area extent
## 1 unknown <NA> <NA> <NA> NA, NA, NA, NA
crs(d2h_Dp$isoscape.rescale, describe = TRUE)
## name authority code area extent
## 1 WGS 84 EPSG 4326 World -180, 180, 90, -90
Notice that we have two different spatial data objects, one for Sr and one for d2H, and that they have different extents and projections. In order to conduct a multi-isotope analysis, we’ll first combine these into a single object using the isoStack function. In addition to combining the objects, this function resolves differences in their projection, resolution, and extent. It’s always a good idea to check that the properties of the isoStack components are consistent with your expectations.
Dp_multi = isoStack(d2h_Dp, sr_MI)
lapply(Dp_multi, crs, describe = TRUE)
## [[1]]
## name authority code area extent
## 1 WGS 84 EPSG 4326 World -180, 180, 90, -90
##
## [[2]]
## name authority code area extent
## 1 WGS 84 EPSG 4326 World -180, 180, 90, -90
Now we’ll generate a couple of hypothetical unknown samples to use in our analysis. It is important that our isotopic markers appear here in the same order as in the isoStack object we created above.
Dp_unk = data.frame("ID" = c("A", "B"), "d2H" = c(-86, -96), "Sr" = c(0.7089, 0.7375))
We are ready to make our probability maps. First let’s see how our posterior probabilities would look if we only used the hydrogen isotope data.
Dp_pd_Honly = pdRaster(Dp_multi[[1]], Dp_unk[,-3])
## NULL
We see pretty clear distinctions between the two samples, driven by a strong SW-NE gradient in the tissue isoscape H values across the state.
What if we add the Sr information to the analysis? The syntax for running pdRaster is the same, but now we provide our isoStack object in place of the single isoscape. The function will use the spatial covariance of the isoscape values to approximate the error covariance for the two (or more) markers and return posterior probabilities based on the multivariate normal probability density function evaluated at each grid cell.
Dp_pd_multi = pdRaster(Dp_multi, Dp_unk)
## NULL
Note that the addition of Sr data greatly strengthens the geographic constraints on our hypothetical unknown samples: the difference between the highest and lowest posterior probabilities is much larger than with H only, and the pattern of high probabilities reflects the regionalization characteristic of the Sr isoscape. This is especially true for sample B, which has a fairly distinctive, high 87Sr/86Sr value.
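To illustrate the per-cell calculation described above, here is how a bivariate normal density could be evaluated at a single grid cell. This is illustrative only: the package’s internals may differ, the cell values and covariance matrix are made up, and it assumes the mvtnorm package is installed.

```r
library(mvtnorm)

obs   = c(-86, 0.7089)                     # sample A measurements: d2H, 87Sr/86Sr
cell  = c(-90, 0.7100)                     # hypothetical isoscape values at one cell
Sigma = matrix(c(25, 1e-3,
                 1e-3, 1e-4), 2, 2)        # made-up error covariance matrix

# Density of the observations given this cell's expected values;
# normalizing such densities across all cells yields the posterior map.
dmvnorm(obs, mean = cell, sigma = Sigma)
```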
Many of the functions in assignR are designed to help you analyze and draw inference from the posterior probability surfaces we’ve created above. For the following examples we’ll return to our single-isoscape, Loggerhead shrike analysis, but the tools work identically for multi-isoscape results.
The oddsRatio tool compares the posterior probabilities for two different locations or regions. This might be useful in answering real-world questions, for example “is this sample more likely from France or Spain?”, or “how likely is this hypothesized location relative to other possibilities?”.
Let’s compare probabilities for two spatial areas - the states of Utah and New Mexico. First we’ll extract the state boundaries from package data and plot them.
s1 = states[states$STATE_ABBR == "UT",]
s2 = states[states$STATE_ABBR == "NM",]
plot(naMap)
plot(s1, col = c("red"), add = TRUE)
plot(s2, col = c("blue"), add = TRUE)
Now we can get the odds ratio for the two regions. The result reports the odds ratio for the regions (first relative to second) for each of the 5 unknown samples, plus the ratio of the areas of the regions. If the isotope values (and prior) were completely uninformative, the odds ratios would equal the ratio of areas.
s12 = rbind(s1, s2)
oddsRatio(Ll_prob, s12)
## $`P1/P2 odds ratio`
## a b c d e
## 1 134.1156 102.686 23.71262 80.98042 74.50407
##
## $`Ratio of numbers of cells in two polygons`
## [1] 1
Here you can see that even though Utah is quite a bit smaller, the isotopic evidence suggests it’s much more likely to be the origin of each sample. This result is consistent with what you might infer from a first-order comparison of the state map with the posterior probability maps, above.
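If you want to verify the relative sizes of the two polygons yourself, terra’s expanse function returns geometry areas (in m^2 for unprojected data); this is just a sanity check, not part of the oddsRatio workflow.

```r
# Ratio of Utah's area to New Mexico's area; sum() handles multipart polygons.
sum(expanse(s1)) / sum(expanse(s2))
```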
Comparisons can also be made using points. Let’s create two points (one within each state) and compare their odds. This result also shows the odds ratio for each point relative to the most- and least-likely grid cells on the posterior probability map.
pp1 = c(-112,40)
pp2 = c(-105,33)
pp12 = vect(rbind(pp1,pp2))
crs(pp12) = crs(naMap)
oddsRatio(Ll_prob, pp12)
## $`P1/P2 odds ratio`
## a b c d e
## 1 134.1156 102.686 23.71262 80.98042 74.50407
##
## $`Odds relative to the max/min pixel`
## ratioToMax.a ratioToMax.b ratioToMax.c ratioToMax.d ratioToMax.e ratioToMin.a ratioToMin.b ratioToMin.c ratioToMin.d
## P1 0.142997946 0.190449317 0.62132338 0.243445905 0.264687566 765569677 368790304 6636345.8 192532681
## P2 0.001068176 0.001859184 0.02641222 0.003015486 0.003564495 5718706 3600165 282108.5 2384841
## ratioToMin.e
## P1 153243364
## P2 2063698
The odds of the first point being the location of origin are pretty high for each sample, and much higher than for the second point.
A common goal in movement research is to characterize the distance or direction of movement for individuals. The wDist tool and its helper methods are designed to leverage the information in the posterior probability surfaces for this purpose.
The analyses conducted in assignR cannot determine a single, unique location of origin for a given sample, but they do give the probability that each location on the map is the location of origin. If we know the collection location for a sample, we can calculate the distance and direction between each possible location of origin and the collection site and, weighting these by their posterior probabilities, generate a distribution (and summary statistics for that distribution) describing the distance and direction of travel.
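The core of that weighting can be sketched directly with terra. This is conceptual only: wDist performs this calculation (plus bearings and weighted quantiles) internally, and it assumes terra’s distance(raster, points) method, which returns the distance from every cell to the point.

```r
p  = Ll_prob[[1]]                        # posterior probability surface for sample a
dk = distance(p, pp12[1])                # distance (m) from each cell to the first point

# Because the posterior surface sums to 1, this weighted sum is the
# probability-weighted mean distance between origin and collection site.
global(p * dk, "sum", na.rm = TRUE)$sum
```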
Let’s do a weighted distance analysis for our first two unknown-origin loggerhead shrike samples. Since these are pretend samples, we’ll pretend that the two point locations we defined above for the oddsRatio analysis are the locations at which these samples were collected. Here are those locations plotted with the corresponding posterior probability maps.
# View the data
plot(Ll_prob[[1]], main = names(Ll_prob)[1])
points(pp12[1], cex = 2)
plot(Ll_prob[[2]], main = names(Ll_prob)[2])
points(pp12[2], cex = 2)
Now let’s run the analysis and use the functions c and plot to view the summary statistics and distributions returned by wDist.
wd = wDist(Ll_prob[[1:2]], pp12)
c(wd)[c(1,2,4,6,8,10,12,14,16)] #only showing select columns for formatting!
## Sample_ID wMeanDist w10Dist w50Dist w90Dist wMeanBear w10Bear w50Bear w90Bear
## 1 a 2876781 1290783 3150518 4138590 -169.8924 115.5482 -159.2225 -105.5870
## 2 b 3518625 2008390 3618131 4953292 178.3488 113.3519 177.4966 -118.4943
plot(wd)
## NULL
Comparing these statistics and plots with the data shows how the wDist metrics nicely summarize the direction and distance of movement. Both individuals almost certainly moved south from their location of origin to the collection location. Individual a’s migration may have been a little shorter than b’s, and in a more southwesterly direction; these patterns are dominated more by the difference in collection locations than by the probability surfaces for location of origin. Also notice the multi-modal distance distribution for individual a: these can be common in wDist summaries, so it’s a good idea to look at the distributions themselves before choosing and interpreting summary statistics.
Researchers often want to classify their study area into regions that are and are not likely to be the origin of the sample (effectively ‘assigning’ the sample to a part of the area). This requires choosing a subjective threshold to define how much of the study domain is represented in the assignment region. qtlRaster offers two choices.
Let’s extract 10% of the study area, giving maps that show the 10% of grid cells with the highest posterior probability for each sample.
qtlRaster(Ll_prob, threshold = 0.1)
## class : SpatRaster
## dimensions : 13, 28, 5 (nrow, ncol, nlyr)
## resolution : 3.999998, 3.999998 (x, y)
## extent : -164.6667, -52.66672, 20.99996, 72.99993 (xmin, xmax, ymin, ymax)
## coord. ref. : lon/lat WGS 84 (EPSG:4326)
## source(s) : memory
## names : a, b, c, d, e
## min values : FALSE, FALSE, FALSE, FALSE, FALSE
## max values : TRUE, TRUE, TRUE, TRUE, TRUE
Now we’ll instead extract 80% of the posterior probability density, giving maps that show the smallest region within which there is an 80% chance each sample originated.
qtlRaster(Ll_prob, threshold = 0.8, thresholdType = "prob")
## class : SpatRaster
## dimensions : 13, 28, 5 (nrow, ncol, nlyr)
## resolution : 3.999998, 3.999998 (x, y)
## extent : -164.6667, -52.66672, 20.99996, 72.99993 (xmin, xmax, ymin, ymax)
## coord. ref. : lon/lat WGS 84 (EPSG:4326)
## source(s) : memory
## names : a, b, c, d, e
## min values : FALSE, FALSE, FALSE, FALSE, FALSE
## max values : TRUE, TRUE, TRUE, TRUE, TRUE
Comparing the two results, the probability-based assignment regions are broader. This suggests that we’ll need to assign to more than 10% of the study area if we want to correctly assign 80% or more of our samples. We’ll revisit this below and see how we can choose thresholds that are as specific as possible while achieving a desired level of assignment ‘quality’.
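One way to make the comparison quantitative is to compute the fraction of non-NA study-area cells included in each sample’s 80% probability region. This is a hedged sketch built from the objects above, not a package function.

```r
# Logical rasters: TRUE inside each sample's 80% probability region.
q80 = qtlRaster(Ll_prob, threshold = 0.8, thresholdType = "prob")

# Per-sample fraction of valid study-area cells inside the region;
# summing a logical SpatRaster counts TRUE cells.
global(q80, "sum", na.rm = TRUE)$sum /
  global(!is.na(Ll_prob), "sum", na.rm = TRUE)$sum
```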
Most studies involve multiple unknown samples, and often it is desirable to summarize the results from these individuals. jointP and unionP offer two options for summarizing posterior probabilities from multiple samples.
jointP calculates the probability that all samples came from each grid cell in the analysis area. Note that this summarization will only be useful if all samples are truly derived from a single population of common geographic origin.
jointP(Ll_prob)
## class : SpatRaster
## dimensions : 13, 28, 1 (nrow, ncol, nlyr)
## resolution : 3.999998, 3.999998 (x, y)
## extent : -164.6667, -52.66672, 20.99996, 72.99993 (xmin, xmax, ymin, ymax)
## coord. ref. : lon/lat WGS 84 (EPSG:4326)
## source(s) : memory
## name : Joint_Probability
## min value : 7.179075e-46
## max value : 2.648995e-02
unionP calculates the probability that any sample came from each grid cell in the analysis area. In this case we’ll save the output to a variable for later use.
Ll_up = unionP(Ll_prob)
The results from unionP highlight a broader region, as you might expect.
Any of the other post-hoc analysis tools can be applied to the summarized results. Here we’ll use qtlRaster to identify the 10% of the study area that is most likely to be the origin of one or more samples.
qtlRaster(Ll_up, threshold = 0.1)
## class : SpatRaster
## dimensions : 13, 28, 1 (nrow, ncol, nlyr)
## resolution : 3.999998, 3.999998 (x, y)
## extent : -164.6667, -52.66672, 20.99996, 72.99993 (xmin, xmax, ymin, ymax)
## coord. ref. : lon/lat WGS 84 (EPSG:4326)
## source(s) : memory
## name : lyr1
## min value : FALSE
## max value : TRUE
How good are the geographic assignments? What area or probability threshold should be used? Is it better to use isoscape A or B for my analysis? The QA function is designed to help answer these questions.
QA uses known-origin data to test the quality of isotope-based assignments and returns a set of metrics from this test. The default method conducts a split-sample test, iteratively splitting the dataset and using part to calibrate the isoscape(s) and the rest to evaluate assignment quality. The option recal = FALSE allows QA to be run without the calRaster calibration step. This provides a less complete assessment of methodological error but allows evaluation of assignments to tissue isoscapes made outside of the QA function, for example those calibrated using a different known-origin dataset or made through direct spatial modeling of tissue data.
We will run quality assessment on the Loggerhead shrike known-origin dataset and precipitation isoscape. These analyses take some time to run, depending on the number of stations and iterations used.
qa1 = QA(Ll_d, d2h_lrNA, valiStation = 8, valiTime = 4, by = 5, mask = naMap, name = "normal")
##
We can plot the result using plot.
plot(qa1)
The first three panels show three metrics: granularity (higher is better), bias (closer to 1:1 is better), and sensitivity (higher is better). The fourth panel shows the posterior probabilities at the known locations of origin relative to random (=1, higher is better). More information is provided in Ma et al., 2020.
A researcher might refer to the sensitivity plot, for example, to assess what qtlRaster area threshold would be required to obtain 90% correct assignments in their study system. Here it’s somewhere between 0.25 and 0.3.
How would using a different isoscape or different known origin dataset affect the analysis? Multiple QA objects can be compared to make these types of assessments.
Let’s modify our isoscape to add some random noise.
dv = values(d2h_lrNA[[1]])
dv = dv + rnorm(length(dv), 0, 15)
d2h_fuzzy = setValues(d2h_lrNA[[1]], dv)
plot(d2h_fuzzy)
We’ll combine the fuzzy isoscape with the uncertainty layer from the original isoscape, then rerun QA using the new version. Obviously this is not something you’d do in real work, but as an example it allows us to ask the question “how would the quality of my assignments change if my isoscape predictions were of reduced quality?”.
d2h_fuzzy = c(d2h_fuzzy, d2h_lrNA[[2]])
qa2 = QA(Ll_d, d2h_fuzzy, valiStation = 8, valiTime = 4, by = 5, mask = naMap, name = "fuzzy")
##
Now we can plot the two QA results together to compare.
plot(qa1, qa2)
## NULL
Assignments made using the fuzzy isoscape are generally poorer than those made without fuzzing. Hopefully that’s not a surprise, but you might encounter cases where decisions about how to design your project or conduct your data analysis do have previously unknown or unexpected consequences. These types of comparisons can help reveal them!
Questions or comments? gabe.bowen@utah.edu