diff --git a/_posts/2021-03-25-point-count-data-analysis/index.Rmd b/_posts/2021-03-25-point-count-data-analysis/index.Rmd
index e660995..41873cf 100644
--- a/_posts/2021-03-25-point-count-data-analysis/index.Rmd
+++ b/_posts/2021-03-25-point-count-data-analysis/index.Rmd
@@ -17,7 +17,10 @@ output:
 knitr::opts_chunk$set(echo = TRUE, cache = TRUE)
 ```
 
-![](https://github.com/psolymos/qpad-workshop/raw/main/images/thumb.jpg)
+```{r echo=FALSE}
+knitr::include_graphics("thumb.jpg")
+```
+
 This course is aimed towards researchers analyzing field observations, who are often faced by data heterogeneities due to field sampling protocols changing from one project to another, or through time over the lifespan of projects, or trying to combine 'legacy' data sets with new data collected by recording units.
diff --git a/_posts/2021-03-25-point-count-data-analysis/index.html b/_posts/2021-03-25-point-count-data-analysis/index.html
index bccbeda..b12590f 100644
--- a/_posts/2021-03-25-point-count-data-analysis/index.html
+++ b/_posts/2021-03-25-point-count-data-analysis/index.html
@@ -116,7 +116,7 @@
@@ -1498,7 +1498,9 @@

Contents

-

+
+

+

This course is aimed at researchers analyzing field observations, who often face data heterogeneities because field sampling protocols change from one project to another or over the lifespan of a project, or because they are combining ‘legacy’ data sets with new data collected by recording units.

Such heterogeneities can bias analyses when data sets are integrated inadequately, or can lead to information loss when data are filtered and reduced to a common standard. Accounting for these issues is important for better inference regarding the status and trends of species and communities.

Analysts of such ‘messy’ data sets need to feel comfortable manipulating the data, need a full understanding of the mechanics of the models being used (i.e., critically interpreting the results and acknowledging assumptions and limitations), and should be able to make informed choices when faced with methodological challenges.

diff --git a/_posts/2021-03-25-point-count-data-analysis/index_cache/html5/__packages b/_posts/2021-03-25-point-count-data-analysis/index_cache/html5/__packages
new file mode 100644
index 0000000..d44ddce
--- /dev/null
+++ b/_posts/2021-03-25-point-count-data-analysis/index_cache/html5/__packages
@@ -0,0 +1,7 @@
+base
+methods
+datasets
+utils
+grDevices
+graphics
+stats
diff --git a/_posts/2021-03-25-point-count-data-analysis/index_cache/html5/unnamed-chunk-1_424d4ac1732531b492b095fdbe4797a8.RData b/_posts/2021-03-25-point-count-data-analysis/index_cache/html5/unnamed-chunk-1_424d4ac1732531b492b095fdbe4797a8.RData
new file mode 100644
index 0000000..9944648
Binary files /dev/null and b/_posts/2021-03-25-point-count-data-analysis/index_cache/html5/unnamed-chunk-1_424d4ac1732531b492b095fdbe4797a8.RData differ
diff --git a/_posts/2021-03-25-point-count-data-analysis/index_cache/html5/unnamed-chunk-1_424d4ac1732531b492b095fdbe4797a8.rdb b/_posts/2021-03-25-point-count-data-analysis/index_cache/html5/unnamed-chunk-1_424d4ac1732531b492b095fdbe4797a8.rdb
new file mode 100644
index 0000000..e69de29
diff --git a/_posts/2021-03-25-point-count-data-analysis/index_cache/html5/unnamed-chunk-1_424d4ac1732531b492b095fdbe4797a8.rdx b/_posts/2021-03-25-point-count-data-analysis/index_cache/html5/unnamed-chunk-1_424d4ac1732531b492b095fdbe4797a8.rdx
new file mode 100644
index 0000000..a0bc5dc
Binary files /dev/null and b/_posts/2021-03-25-point-count-data-analysis/index_cache/html5/unnamed-chunk-1_424d4ac1732531b492b095fdbe4797a8.rdx differ
diff --git a/_posts/2021-03-25-point-count-data-analysis/thumb.jpg b/_posts/2021-03-25-point-count-data-analysis/thumb.jpg
new file mode 100644
index 0000000..ad324fe
Binary files /dev/null and b/_posts/2021-03-25-point-count-data-analysis/thumb.jpg differ
diff --git a/docs/index.html b/docs/index.html
index cf7a766..a53302b 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -2217,7 +2217,7 @@

Training materi
- +

Point-count Data Analysis

diff --git a/docs/index.xml b/docs/index.xml
index 4d30a44..2eea466 100644
--- a/docs/index.xml
+++ b/docs/index.xml
@@ -17,6 +17,7 @@ Twelve hour
 https://bios2.github.io/bios2_trainings/posts/2021-03-25-point-count-data-analysis
 Thu, 25 Mar 2021 00:00:00 +0000
+
 Spatial Statistics in Ecology
diff --git a/docs/posts/2021-03-25-point-count-data-analysis/index.html b/docs/posts/2021-03-25-point-count-data-analysis/index.html
index 62a378b..875b796 100644
--- a/docs/posts/2021-03-25-point-count-data-analysis/index.html
+++ b/docs/posts/2021-03-25-point-count-data-analysis/index.html
@@ -103,14 +103,16 @@
+
-
+
+
@@ -128,7 +130,7 @@
@@ -2129,7 +2131,9 @@

Contents

-

+
+

+

This course is aimed at researchers analyzing field observations, who often face data heterogeneities because field sampling protocols change from one project to another or over the lifespan of a project, or because they are combining ‘legacy’ data sets with new data collected by recording units.

Such heterogeneities can bias analyses when data sets are integrated inadequately, or can lead to information loss when data are filtered and reduced to a common standard. Accounting for these issues is important for better inference regarding the status and trends of species and communities.

Analysts of such ‘messy’ data sets need to feel comfortable manipulating the data, need a full understanding of the mechanics of the models being used (i.e., critically interpreting the results and acknowledging assumptions and limitations), and should be able to make informed choices when faced with methodological challenges.

diff --git a/docs/posts/2021-03-25-point-count-data-analysis/thumb.jpg b/docs/posts/2021-03-25-point-count-data-analysis/thumb.jpg new file mode 100644 index 0000000..ad324fe Binary files /dev/null and b/docs/posts/2021-03-25-point-count-data-analysis/thumb.jpg differ diff --git a/docs/posts/posts.json b/docs/posts/posts.json index 996946c..8b708f5 100644 --- a/docs/posts/posts.json +++ b/docs/posts/posts.json @@ -14,33 +14,10 @@ "Training", "Twelve hour" ], - "contents": "\n\nContents\nInstructor\nOutline\nGet course materialsInstall required software\nGet the notes\n\nUseful resources\nReferences\nLicense\n\n\nThis course is aimed towards researchers analyzing field observations, who are often faced by data heterogeneities due to field sampling protocols changing from one project to another, or through time over the lifespan of projects, or trying to combine ‘legacy’ data sets with new data collected by recording units.\nSuch heterogeneities can bias analyses when data sets are integrated inadequately, or can lead to information loss when filtered and standardized to common standards. Accounting for these issues is important for better inference regarding status and trend of species and communities.\nAnalysts of such ‘messy’ data sets need to feel comfortable with manipulating the data, need a full understanding the mechanics of the models being used (i.e. critically interpreting the results and acknowledging assumptions and limitations), and should be able to make informed choices when faced with methodological challenges.\nThe course emphasizes critical thinking and active learning through hands on programming exercises. We will use publicly available data sets to demonstrate the data manipulation and analysis. We will use freely available and open-source R packages.\nThe expected outcome of the course is a solid foundation for further professional development via increased confidence in applying these methods for field observations.\nInstructor\nDr. 
Peter SolymosBoreal Avian Modelling Project and the Alberta Biodiversity Monitoring InstituteDepartment of Biological Sciences, University of Alberta\nOutline\nEach day will consist of 3 sessions, roughly one hour each, with short breaks in between.\n\nThe video recordings from the workshop can be found on YouTube.\n\nSession\nTopic\nFiles\nVideos\nDay 1\nNaive techniques\n\n\n\n1. Introductions\nSlides\nVideo\n\n2. Organizing point count data\nNotes\nPart 1, Part 2\n\n3. Regression techniques\nNotes\nPart 1, Part 2\nDay 2\nBehavioral complexities\n\n\n\n1. Statistical assumptions and nuisance variables\nSlides\nVideo\n\n2. Behavioral complexities\nNotes\nbSims, Video\n\n3. Removal modeling techniques\nNotes\nVideo\n\n4. Finite mixture models and testing assumptions\nNotes\nMixtures, Testing\nDay 3\nThe detection process\n\n\n\n1. The detection process\nSlides\nVideo\n\n2. Distance sampling and density\nNotes\nVideo\n\n3. Estimating population density\nNotes\nVideo\n\n4. Assumptions\nNotes\nVideo\nDay 4\nComing full circle\n\n\n\n1. QPAD overview\nSlides\nVideo\n\n2. Models with detectability offsets\nNotes\nOffsets, Models\n\n3. Model validation and error propagation\nNotes\nValidation, Error\n\n4. Recordings, roadsides, closing remarks\nNotes\nVideo\nGet course materials\nInstall required software\nFollow the instructions at the R website to download and install the most up-to-date base R version suitable for your operating system (the latest R version at the time of writing these instructions is 4.0.4).\nThen run the following script in R:\nsource(\"https://raw.githubusercontent.com/psolymos/qpad-workshop/main/src/install.R\")\nHaving RStudio is not absolutely necessary, but it will make life easier. RStudio is also available for different operating systems. 
Pick the open source desktop edition from here (the latest RStudio Desktop version at the time of writing these instructions is 1.4.1106).\nPrior exposure to R programming is not necessary, but knowledge of basic R object types and their manipulation (arrays, data frames, indexing) is useful for following hands-on exercises. Software Carpentry’s Data types and structures in R is a good resource to brush up your R skills.\nGet the notes\nIf you don’t want to use git:\nDownload the workshop archive release into a folder\nExtract the zip archive\nOpen the workshop.Rproj file in RStudio (or open any other R GUI/console and setwd() to the directory where you downloaded the file)\n(You can delete the archive)\nIf you want to use git: fork or clone the repository\ncd into/your/dir\ngit clone https://github.com/psolymos/qpad-workshop.git\nUseful resources\nUsing the QPAD package to get offsets based on estimates from the Boreal Avian Modelling Project’s database\nNA-POPS: Point count Offsets for Population Sizes of North America landbirds\nReferences\nSólymos, P., Toms, J. D., Matsuoka, S. M., Cumming, S. G., Barker, N. K. S., Thogmartin, W. E., Stralberg, D., Crosby, A. D., Dénes, F. V., Haché, S., Mahon, C. L., Schmiegelow, F. K. A., and Bayne, E. M., 2020. Lessons learned from comparing spatially explicit models and the Partners in Flight approach to estimate population sizes of boreal birds in Alberta, Canada. Condor, 122: 1-22. PDF\nSólymos, P., Matsuoka, S. M., Cumming, S. G., Stralberg, D., Fontaine, P., Schmiegelow, F. K. A., Song, S. J., and Bayne, E. M., 2018. Evaluating time-removal models for estimating availability of boreal birds during point-count surveys: sample size requirements and model complexity. Condor, 120: 765-786. PDF\nSólymos, P., Matsuoka, S. M., Stralberg, D., Barker, N. K. S., and Bayne, E. M., 2018. Phylogeny and species traits predict bird detectability. Ecography, 41: 1595-1603. PDF\nVan Wilgenburg, S. L., Sólymos, P., Kardynal, K. J. 
and Frey, M. D., 2017. Paired sampling standardizes point count data from humans and acoustic recorders. Avian Conservation and Ecology, 12(1):13. PDF\nYip, D. A., Leston, L., Bayne, E. M., Sólymos, P. and Grover, A., 2017. Experimentally derived detection distances from audio recordings and human observers enable integrated analysis of point count data. Avian Conservation and Ecology, 12(1):11. PDF\nSólymos, P., and Lele, S. R., 2016. Revisiting resource selection probability functions and single-visit methods: clarification and extensions. Methods in Ecology and Evolution, 7:196-205. PDF\nMatsuoka, S. M., Mahon, C. L., Handel, C. M., Sólymos, P., Bayne, E. M., Fontaine, P. C., and Ralph, C. J., 2014. Reviving common standards in point-count surveys for broad inference across studies. Condor 116:599-608. PDF\nSólymos, P., Matsuoka, S. M., Bayne, E. M., Lele, S. R., Fontaine, P., Cumming, S. G., Stralberg, D., Schmiegelow, F. K. A. & Song, S. J., 2013. Calibrating indices of avian density from non-standardized survey data: making the most of a messy situation. Methods in Ecology and Evolution 4:1047-1058. PDF\nMatsuoka, S. M., Bayne, E. M., Sólymos, P., Fontaine, P., Cumming, S. G., Schmiegelow, F. K. A., & Song, S. A., 2012. Using binomial distance-sampling models to estimate the effective detection radius of point-counts surveys across boreal Canada. Auk 129:268-282. PDF\nLicense\nThe course material is licensed under Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. 
Source code is under MIT license.\n\n\n\n", - "preview": {}, - "last_modified": "2021-04-16T10:28:51-04:00", - "input_file": {} - }, - { - "path": "posts/2021-01-13-spatial-statistics-in-ecology/", - "title": "Spatial Statistics in Ecology", - "description": "Introduction to spatial statistics offered by Philipe Marchand to BIOS2 Fellows in January 2021.", - "author": [ - { - "name": "Philipe Marchand", - "url": {} - } - ], - "date": "2021-01-13", - "categories": [ - "Training", - "co-PI contributed", - "Six hour" - ], - "contents": "\n\nContents\nSpatial correlation of a variable\nIntrinsic or induced dependence\nDifferent ways to model spatial effects\n\nGeostatistical models\nVariogram\nTheoretical models for the variogram\nEmpirical variogram\nRegression model with spatial correlation\n\nGeostatistical models in R\nRegression with spatial correlation\nExercise\n\nKriging\nSolutions\n\nSpatial correlation of a variable\nCorrelation between measurements of a variable taken at nearby points often occurs in environmental data. 
This principle is sometimes referred to as the “first law of geography” and is expressed in the following quote from Waldo Tobler: “Everything is related to everything else, but near things are more related than distant things”.\nIn statistics, we often refer to autocorrelation as the correlation between measurements of the same variable taken at different times (temporal autocorrelation) or places (spatial autocorrelation).\nIntrinsic or induced dependence\nThere are two basic types of spatial dependence on a measured variable \\(y\\): an intrinsic dependence on \\(y\\), or a dependence induced by external variables influencing \\(y\\), which are themselves spatially correlated.\nFor example, suppose that the abundance of a species is correlated between two sites located near each other:\nthis spatial dependence can be induced if it is due to a spatial correlation of habitat factors that are favorable or unfavorable to the species;\nor it can be intrinsic if it is due to the dispersion of individuals to nearby sites.\nIn many cases, both types of dependence affect a given variable.\nIf the dependence is simply induced and the external variables that cause it are included in the model explaining \\(y\\), then the model residuals will be independent and we can use all the methods already seen that ignore spatial correlation.\nHowever, if the dependence is intrinsic or due to unmeasured external factors, then the spatial correlation of the residuals in the model will have to be taken into account.\nDifferent ways to model spatial effects\nIn this training, we will directly model the spatial correlations of our data. It is useful to compare this approach to other ways of including spatial aspects in a statistical model.\nFirst, we could include predictors in the model that represent position (e.g., longitude, latitude). 
Such predictors may be useful for detecting a systematic large-scale trend or gradient, whether or not the trend is linear (e.g., with a generalized additive model).\nIn contrast to this approach, the models we will see now serve to model a spatial correlation in the random fluctuations of a variable (i.e., in the residuals after removing any systematic effect).\nMixed models use random effects to represent the non-independence of data on the basis of their grouping, i.e., after accounting for systematic fixed effects, data from the same group are more similar (their residual variation is correlated) than data from different groups. These groups were sometimes defined according to spatial criteria (observations grouped into sites).\nHowever, in the context of a random group effect, all groups are as different from each other, e.g., two sites within 100 km of each other are no more or less similar than two sites 2 km apart.\nThe methods we will see here and in the next parts of the training therefore allow us to model non-independence on a continuous scale (closer = more correlated) rather than just discrete (hierarchy of groups).\nGeostatistical models\nGeostatistics refers to a group of techniques that originated in the earth sciences. Geostatistics is concerned with variables that are continuously distributed in space and where a number of points are sampled to estimate this distribution. A classic example of these techniques comes from the mining field, where the aim was to create a map of the concentration of ore at a site from samples taken at different points on the site.\nFor these models, we will assume that \\(z(x, y)\\) is a stationary spatial variable measured at points with coordinates \\(x\\) and \\(y\\).\nVariogram\nA central aspect of geostatistics is the estimation of the variogram \\(\\gamma_z\\) . 
The variogram is equal to half the mean square difference between the values of \\(z\\) for two points \\((x_i, y_i)\\) and \\((x_j, y_j)\\) separated by a distance \\(h\\).\n\\[\\gamma_z(h) = \\frac{1}{2} \\text{E} \\left[ \\left( z(x_i, y_i) - z(x_j, y_j) \\right)^2 \\right]_{d_{ij} = h}\\]\nIn this equation, the \\(\\text{E}\\) function with the index \\(d_{ij}=h\\) designates the statistical expectation (i.e., the mean) of the squared deviation between the values of \\(z\\) for points separated by a distance \\(h\\).\nIf we want instead to express the autocorrelation \\(\\rho_z(h)\\) between measures of \\(z\\) separated by a distance \\(h\\), it is related to the variogram by the equation:\n\\[\\gamma_z = \\sigma_z^2(1 - \\rho_z)\\] ,\nwhere \\(\\sigma_z^2\\) is the global variance of \\(z\\).\nNote that \\(\\gamma_z = \\sigma_z^2\\) when we reach a distance where the measurements of \\(z\\) are independent, so \\(\\rho_z = 0\\). In this case, we can see that \\(\\gamma_z\\) is similar to a variance, although it is sometimes called “semivariogram” or “semivariance” because of the 1/2 factor in the above equation.\nTheoretical models for the variogram\nSeveral parametric models have been proposed to represent the spatial correlation as a function of the distance between sampling points. Let us first consider a correlation that decreases exponentially:\n\\[\\rho_z(h) = e^{-h/r}\\]\nHere, \\(\\rho_z = 1\\) for \\(h = 0\\) and the correlation is multiplied by \\(1/e \\approx 0.37\\) each time the distance increases by \\(r\\). In this context, \\(r\\) is called the range of the correlation.\nFrom the above equation, we can calculate the corresponding variogram.\n\\[\\gamma_z(h) = \\sigma_z^2 (1 - e^{-h/r})\\]\nHere is a graphical representation of this variogram.\n\n\n\nBecause of the exponential function, the value of \\(\\gamma\\) at large distances approaches the global variance \\(\\sigma_z^2\\) without exactly reaching it. 
This asymptote is called a sill in the geostatistical context and is represented by the symbol \\(s\\).\nFinally, it is sometimes unrealistic to assume a perfect correlation when the distance tends towards 0, because of a possible variation of \\(z\\) at a very small scale. A nugget effect, denoted \\(n\\), can be added to the model so that \\(\\gamma\\) approaches \\(n\\) (rather than 0) if \\(h\\) tends towards 0. The term nugget comes from the mining origin of these techniques, where a nugget could be the source of a sudden small-scale variation in the concentration of a mineral.\nBy adding the nugget effect, the remainder of the variogram is “compressed” to keep the same sill, resulting in the following equation.\n\\[\\gamma_z(h) = n + (s - n) (1 - e^{-h/r})\\]\nIn the gstat package that we use below, the term \\((s-n)\\) is called a partial sill or psill for the exponential portion of the variogram.\n\n\n\nIn addition to the exponential model, two other common theoretical models for the variogram are the Gaussian model (where the correlation follows a half-normal curve), and the spherical model (where the variogram increases linearly at the start and then curves and reaches the plateau at a distance equal to its range \\(r\\)). 
The spherical model thus allows the correlation to be exactly 0 at large distances, rather than gradually approaching zero in the case of the other models.\nModel\n\\(\\rho(h)\\)\n\\(\\gamma(h)\\)\nExponential\n\\(\\exp\\left(-\\frac{h}{r}\\right)\\)\n\\(s \\left(1 - \\exp\\left(-\\frac{h}{r}\\right)\\right)\\)\nGaussian\n\\(\\exp\\left(-\\frac{h^2}{r^2}\\right)\\)\n\\(s \\left(1 - \\exp\\left(-\\frac{h^2}{r^2}\\right)\\right)\\)\nSpherical \\((h < r)\\) *\n\\(1 - \\frac{3}{2}\\frac{h}{r} + \\frac{1}{2}\\frac{h^3}{r^3}\\)\n\\(s \\left(\\frac{3}{2}\\frac{h}{r} - \\frac{1}{2}\\frac{h^3}{r^3} \\right)\\)\n* For the spherical model, \\(\\rho = 0\\) and \\(\\gamma = s\\) if \\(h \\ge r\\).\n\n\n\nEmpirical variogram\nTo estimate \\(\\gamma_z(h)\\) from empirical data, we need to define distance classes, thus grouping different distances within a margin of \\(\\pm \\delta\\) around a distance \\(h\\), then calculating the mean square deviation for the pairs of points in that distance class.\n\\[\\hat{\\gamma_z}(h) = \\frac{1}{2 N_{\\text{paires}}} \\sum \\left[ \\left( z(x_i, y_i) - z(x_j, y_j) \\right)^2 \\right]_{d_{ij} = h \\pm \\delta}\\]\nWe will see in the next section how to estimate a variogram in R.\nRegression model with spatial correlation\nThe following equation represents a multiple linear regression including residual spatial correlation:\n\\[v = \\beta_0 + \\sum_i \\beta_i u_i + z + \\epsilon\\]\nHere, \\(v\\) designates the response variable and \\(u\\) the predictors, to avoid confusion with the spatial coordinates \\(x\\) and \\(y\\).\nIn addition to the residual \\(\\epsilon\\) that is independent between observations, the model includes a term \\(z\\) that represents the spatially correlated portion of the residual variance.\nHere are suggested steps to apply this type of model:\nFit the regression model without spatial correlation.\nVerify the presence of spatial correlation from the empirical variogram of the residuals.\nFit one or more regression 
models with spatial correlation and select the one that shows the best fit to the data.\nGeostatistical models in R\nThe gstat package contains functions related to geostatistics. For this example, we will use the oxford dataset from this package, which contains measurements of physical and chemical properties for 126 soil samples from a site, along with their coordinates XCOORD and YCOORD.\n\n\nlibrary(gstat)\n\ndata(oxford)\nstr(oxford)\n\n\n'data.frame': 126 obs. of 22 variables:\n $ PROFILE : num 1 2 3 4 5 6 7 8 9 10 ...\n $ XCOORD : num 100 100 100 100 100 100 100 100 100 100 ...\n $ YCOORD : num 2100 2000 1900 1800 1700 1600 1500 1400 1300 1200 ...\n $ ELEV : num 598 597 610 615 610 595 580 590 598 588 ...\n $ PROFCLASS: Factor w/ 3 levels \"Cr\",\"Ct\",\"Ia\": 2 2 2 3 3 2 3 2 3 3 ...\n $ MAPCLASS : Factor w/ 3 levels \"Cr\",\"Ct\",\"Ia\": 2 3 3 3 3 2 2 3 3 3 ...\n $ VAL1 : num 3 3 4 4 3 3 4 4 4 3 ...\n $ CHR1 : num 3 3 3 3 3 2 2 3 3 3 ...\n $ LIME1 : num 4 4 4 4 4 0 2 1 0 4 ...\n $ VAL2 : num 4 4 5 8 8 4 8 4 8 8 ...\n $ CHR2 : num 4 4 4 2 2 4 2 4 2 2 ...\n $ LIME2 : num 4 4 4 5 5 4 5 4 5 5 ...\n $ DEPTHCM : num 61 91 46 20 20 91 30 61 38 25 ...\n $ DEP2LIME : num 20 20 20 20 20 20 20 20 40 20 ...\n $ PCLAY1 : num 15 25 20 20 18 25 25 35 35 12 ...\n $ PCLAY2 : num 10 10 20 10 10 20 10 20 10 10 ...\n $ MG1 : num 63 58 55 60 88 168 99 59 233 87 ...\n $ OM1 : num 5.7 5.6 5.8 6.2 8.4 6.4 7.1 3.8 5 9.2 ...\n $ CEC1 : num 20 22 17 23 27 27 21 14 27 20 ...\n $ PH1 : num 7.7 7.7 7.5 7.6 7.6 7 7.5 7.6 6.6 7.5 ...\n $ PHOS1 : num 13 9.2 10.5 8.8 13 9.3 10 9 15 12.6 ...\n $ POT1 : num 196 157 115 172 238 164 312 184 123 282 ...\n\nSuppose that we want to model the magnesium concentration (MG1), represented as a function of the spatial position in the following graph.\n\n\nlibrary(ggplot2)\nggplot(oxford, aes(x = YCOORD, y = XCOORD, size = MG1)) +\n geom_point() +\n coord_fixed()\n\n\n\n\nNote that the \\(x\\) and \\(y\\) axes have been inverted to save space. 
The coord_fixed() function of ggplot2 ensures that the scale is the same on both axes, which is useful for representing spatial data.\nWe can immediately see that these measurements were taken on a 100 m grid. It seems that the magnesium concentration is spatially correlated, although it may be a correlation induced by another variable. In particular, we know that the concentration of magnesium is negatively related to the soil pH (PH1).\n\n\nggplot(oxford, aes(x = PH1, y = MG1)) +\n geom_point()\n\n\n\n\nThe variogram function of gstat is used to estimate a variogram from empirical data. Here is the result obtained for the variable MG1.\n\n\nvar_mg <- variogram(MG1 ~ 1, locations = ~ XCOORD + YCOORD, data = oxford)\nvar_mg\n\n\n np dist gamma dir.hor dir.ver id\n1 225 100.0000 1601.404 0 0 var1\n2 200 141.4214 1950.805 0 0 var1\n3 548 215.0773 2171.231 0 0 var1\n4 623 303.6283 2422.245 0 0 var1\n5 258 360.5551 2704.366 0 0 var1\n6 144 400.0000 2948.774 0 0 var1\n7 570 427.5569 2994.621 0 0 var1\n8 291 500.0000 3402.058 0 0 var1\n9 366 522.8801 3844.165 0 0 var1\n10 200 577.1759 3603.060 0 0 var1\n11 458 619.8400 3816.595 0 0 var1\n12 90 670.8204 3345.739 0 0 var1\n\nThe formula MG1 ~ 1 indicates that no linear predictor is included in this model, while the argument locations indicates which variables in the data frame correspond to the spatial coordinates.\nIn the resulting table, gamma is the value of the variogram for the distance class centered on dist, while np is the number of pairs of points in that class. Here, since the points are located on a grid, we obtain regular distance classes (e.g.: 100 m for neighboring points on the grid, 141 m for diagonal neighbors, etc.).\nHere, we limit ourselves to the estimation of isotropic variograms, i.e. the variogram depends only on the distance between the two points and not on the direction. 
Although we do not have time to see it today, it is possible with gstat to estimate the variogram separately in different directions.\nWe can illustrate the variogram with plot.\n\n\nplot(var_mg, col = \"black\")\n\n\n\n\nIf we want to estimate the residual spatial correlation of MG1 after including the effect of PH1, we can add that predictor to the formula.\n\n\nvar_mg <- variogram(MG1 ~ PH1, locations = ~ XCOORD + YCOORD, data = oxford)\nplot(var_mg, col = \"black\")\n\n\n\n\nIncluding the effect of pH, the range of the spatial correlation seems to decrease, while the plateau is reached around 300 m. It even seems that the variogram decreases beyond 400 m. In general, we assume that the variance between two points does not decrease with distance, unless there is a periodic spatial pattern.\nThe function fit.variogram accepts as arguments a variogram estimated from the data, as well as a theoretical model described in a vgm function, and then estimates the parameters of that model according to the data. The fitting is done by the method of least squares.\nFor example, vgm(\"Exp\") means we want to fit an exponential model.\n\n\nvfit <- fit.variogram(var_mg, vgm(\"Exp\"))\nvfit\n\n\n model psill range\n1 Nug 0.000 0.00000\n2 Exp 1951.496 95.11235\n\nThere is no nugget effect, because psill = 0 for the Nug (nugget) part of the model. The exponential part has a sill at 1951 and a range of 95 m.\nTo compare different models, a vector of model names can be given to vgm. 
In the following example, we include the exponential, gaussian (“Gau”) and spherical (“Sph”) models.\n\n\nvfit <- fit.variogram(var_mg, vgm(c(\"Exp\", \"Gau\", \"Sph\")))\nvfit\n\n\n model psill range\n1 Nug 0.000 0.00000\n2 Exp 1951.496 95.11235\n\nThe function gives us the result of the model with the best fit (lowest sum of squared deviations), which here is the same exponential model.\nFinally, we can superimpose the theoretical model and the empirical variogram on the same graph.\n\n\nplot(var_mg, vfit, col = \"black\")\n\n\n\n\nRegression with spatial correlation\nWe have seen above that the gstat package allows us to estimate the variogram of the residuals of a linear model. In our example, the magnesium concentration was modeled as a function of pH, with spatially correlated residuals.\nAnother tool to fit this same type of model is the gls function of the nlme package, which is included with the installation of R.\nThis function applies the generalized least squares method to fit linear regression models when the residuals are not independent or when the residual variance is not the same for all observations. Since the estimates of the coefficients depend on the estimated correlations between the residuals and the residuals themselves depend on the coefficients, the model is fitted by an iterative algorithm:\nA classical linear regression model (without correlation) is fitted to obtain residuals.\nThe spatial correlation model (variogram) is fitted with those residuals.\nThe regression coefficients are re-estimated, now taking into account the correlations.\nSteps 2 and 3 are repeated until the estimates are stable at a desired precision.\nHere is the application of this method to the same model for the magnesium concentration in the oxford dataset. 
In the correlation argument of gls, we specify an exponential correlation model as a function of our spatial coordinates and we include a possible nugget effect.\nIn addition to the exponential correlation corExp, the gls function can also estimate a Gaussian (corGaus) or spherical (corSpher) model.\n\n\nlibrary(nlme)\ngls_mg <- gls(MG1 ~ PH1, oxford, \n correlation = corExp(form = ~ XCOORD + YCOORD, nugget = TRUE))\nsummary(gls_mg)\n\n\nGeneralized least squares fit by REML\n Model: MG1 ~ PH1 \n Data: oxford \n AIC BIC logLik\n 1278.65 1292.751 -634.325\n\nCorrelation Structure: Exponential spatial correlation\n Formula: ~XCOORD + YCOORD \n Parameter estimate(s):\n range nugget \n478.0322959 0.2944753 \n\nCoefficients:\n Value Std.Error t-value p-value\n(Intercept) 391.1387 50.42343 7.757084 0\nPH1 -41.0836 6.15662 -6.673079 0\n\n Correlation: \n (Intr)\nPH1 -0.891\n\nStandardized residuals:\n Min Q1 Med Q3 Max \n-2.1846957 -0.6684520 -0.3687813 0.4627580 3.1918604 \n\nResidual standard error: 53.8233 \nDegrees of freedom: 126 total; 124 residual\n\nTo compare this result with the adjusted variogram above, the parameters given by gls must be transformed. The range has the same meaning in both cases and corresponds to 478 m for the result of gls. The global variance of the residuals is the square of Residual standard error. The nugget effect here (0.294) is expressed as a fraction of that variance. 
Finally, to obtain the partial sill of the exponential part, the nugget effect must be subtracted from the total variance.\nAfter performing these calculations, we can give these parameters to the vgm function of gstat to superimpose this variogram estimated by gls on our variogram of the residuals of the classical linear model.\n\n\ngls_range <- 478\ngls_var <- 53.823^2\ngls_nugget <- 0.294 * gls_var\ngls_psill <- gls_var - gls_nugget\n\ngls_vgm <- vgm(\"Exp\", psill = gls_psill, range = gls_range, nugget = gls_nugget)\n\nplot(var_mg, gls_vgm, col = \"black\", ylim = c(0, 4000))\n\n\n\n\nDoes the model fit the data less well here? In fact, this empirical variogram represented by the points was obtained from the residuals of the linear model ignoring the spatial correlation, so it is a biased estimate of the actual spatial correlations. The method is still adequate to quickly check if spatial correlations are present. However, to simultaneously fit the regression coefficients and the spatial correlation parameters, the generalized least squares (GLS) approach is preferable and will produce more accurate estimates.\nFinally, note that the result of the gls model also gives the AIC, which we can use to compare the fit of different models (with different predictors or different forms of spatial correlation).\nExercise\nThe bryo_belg.csv dataset is adapted from the data of this study:\n\nNeyens, T., Diggle, P.J., Faes, C., Beenaerts, N., Artois, T. et Giorgi, E. (2019) Mapping species richness using opportunistic samples: a case study on ground-floor bryophyte species richness in the Belgian province of Limburg. Scientific Reports 9, 19122. 
https://doi.org/10.1038/s41598-019-55593-x\n\nThis data frame shows the species richness of ground bryophytes (richness) for different sampling points in the Belgian province of Limburg, with their position (x, y) in km, in addition to information on the proportion of forest (forest) and wetlands (wetland) in the 1 km² cell containing the sampling point.\n\n\nbryo_belg <- read.csv(\"data/bryo_belg.csv\")\nhead(bryo_belg)\n\n\n richness forest wetland x y\n1 9 0.2556721 0.5036614 228.9516 220.8869\n2 6 0.6449114 0.1172068 227.6714 219.8613\n3 5 0.5039905 0.6327003 228.8252 220.1073\n4 3 0.5987329 0.2432942 229.2775 218.9035\n5 2 0.7600775 0.1163538 209.2435 215.2414\n6 10 0.6865434 0.0000000 210.4142 216.5579\n\nFor this exercise, we will use the square root of the species richness as the response variable. The square root transformation often helps homogenize the variance of count data so that a linear regression can be applied.\n(a) Fit a linear model of the transformed species richness as a function of the proportions of forest and wetland, without taking spatial correlations into account. What is the effect of the two predictors in this model?\n(b) Calculate the empirical variogram of the model residuals in (a). Does there appear to be a spatial correlation between the points?\nNote: The cutoff argument to the variogram function specifies the maximum distance at which the variogram is calculated. You can manually adjust this value to get a good view of the sill.\n(c) Re-fit the linear model in (a) with the gls function in the nlme package, trying different types of spatial correlations (exponential, Gaussian, spherical). Compare the models (including the one without spatial correlation) with the AIC.\n(d) What is the effect of the proportions of forest and wetland according to the model in (c)? 
Explain the differences between the conclusions of this model and the model in (a).\nKriging\nAs mentioned before, a common application of geostatistical models is to predict the value of the response variable at unsampled locations, a form of spatial interpolation called kriging (pronounced with a hard “g”).\nThere are three basic types of kriging, based on the assumptions made about the response variable:\nOrdinary kriging: Stationary variable with an unknown mean.\nSimple kriging: Stationary variable with a known mean.\nUniversal kriging: Variable with a trend given by a linear or non-linear model.\nFor all kriging methods, the prediction at a new point is a weighted mean of the values at known points. These weights are chosen so that kriging provides the best linear unbiased prediction of the response variable, provided the model assumptions (in particular the variogram) are correct. That is, among all possible unbiased predictions, the weights are chosen to minimize the mean square error. Kriging also provides an estimate of the uncertainty of each prediction.\nWhile we will not present the detailed kriging equations here, the weights depend on both the correlations (estimated by the variogram) between the sampled points and the new point, and the correlations between the sampled points themselves. In other words, sampled points near the new point are given more weight, but an isolated sampled point is also given more weight than one in a cluster, because sample points close to each other provide redundant information.\nKriging is an interpolation method, so the prediction at a sampled point will always be equal to the measured value (the measurement is assumed to have no error, only spatial variation). 
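In symbols (standard kriging notation; the weights λ_i do not appear explicitly in the course code), the weighted mean described above is:

```latex
% Kriging prediction at a new location s_0:
% a weighted mean of the n sampled values Z(s_i),
% with weights chosen to minimize mean square prediction error
\hat{Z}(s_0) = \sum_{i=1}^{n} \lambda_i \, Z(s_i),
\qquad \sum_{i=1}^{n} \lambda_i = 1 \quad \text{(ordinary kriging)}
```

At a sampled location the weights reduce to 1 for that point and 0 for all others, which yields the exact-interpolation property just described.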
However, in the presence of a nugget effect, any small displacement from the sampled location will show variability according to the nugget.\nIn the example below, we generate a new dataset composed of randomly-generated (x, y) coordinates within the study area as well as randomly-generated pH values based on the oxford data. We then apply the function krige to predict the magnesium values at these new points. Note that we specify the variogram derived from the GLS results in the model argument to krige.\n\n\nset.seed(14)\nnew_points <- data.frame(\n XCOORD = runif(100, min(oxford$XCOORD), max(oxford$XCOORD)),\n YCOORD = runif(100, min(oxford$YCOORD), max(oxford$YCOORD)),\n PH1 = rnorm(100, mean(oxford$PH1), sd(oxford$PH1))\n)\n\npred <- krige(MG1 ~ PH1, locations = ~ XCOORD + YCOORD, data = oxford,\n newdata = new_points, model = gls_vgm)\n\n\n[using universal kriging]\n\nhead(pred)\n\n\n XCOORD YCOORD var1.pred var1.var\n1 227.0169 162.1185 47.13065 1269.002\n2 418.9136 465.9013 79.68437 1427.269\n3 578.5943 2032.7477 60.30539 1264.471\n4 376.2734 1530.7193 127.22366 1412.875\n5 591.5336 421.6290 105.88124 1375.485\n6 355.7369 404.3378 127.73055 1250.114\n\nThe result of krige includes the new point coordinates, the prediction of the variable var1.pred along with its estimated variance var1.var. In the graph below, we show the mean MG1 predictions from kriging (triangles) along with the measurements (circles).\n\n\npred$MG1 <- pred$var1.pred\n\nggplot(oxford, aes(x = YCOORD, y = XCOORD, color = MG1)) +\n geom_point() +\n geom_point(data = pred, shape = 17, size = 2) +\n coord_fixed()\n\n\n\n\nThe estimated mean and variance from kriging can be used to simulate possible values of the variable at each new point, conditional on the sampled values. 
In the example below, we performed 4 conditional simulations by adding the argument nsim = 4 to the same krige instruction.\n\n\nsim_mg <- krige(MG1 ~ PH1, locations = ~ XCOORD + YCOORD, data = oxford,\n newdata = new_points, model = gls_vgm, nsim = 4)\n\n\ndrawing 4 GLS realisations of beta...\n[using conditional Gaussian simulation]\n\nhead(sim_mg)\n\n\n XCOORD YCOORD sim1 sim2 sim3 sim4\n1 227.0169 162.1185 13.22592 32.43060 42.81847 79.60594\n2 418.9136 465.9013 67.94216 15.53717 69.25356 63.42233\n3 578.5943 2032.7477 99.93083 77.98291 74.28468 58.98483\n4 376.2734 1530.7193 104.86240 155.50774 85.82552 143.07373\n5 591.5336 421.6290 78.14221 68.62827 147.33052 130.14264\n6 355.7369 404.3378 164.46754 117.26160 131.85158 143.58951\n\n\n\nlibrary(tidyr)\nsim_mg <- pivot_longer(sim_mg, cols = c(sim1, sim2, sim3, sim4), \n names_to = \"sim\", values_to = \"MG1\")\nggplot(sim_mg, aes(x = YCOORD, y = XCOORD, color = MG1)) +\n geom_point() +\n coord_fixed() +\n facet_wrap(~ sim)\n\n\n\n\nSolutions\n\n\nbryo_lm <- lm(sqrt(richness) ~ forest + wetland, data = bryo_belg)\nsummary(bryo_lm)\n\n\n\nCall:\nlm(formula = sqrt(richness) ~ forest + wetland, data = bryo_belg)\n\nResiduals:\n Min 1Q Median 3Q Max \n-1.8847 -0.4622 0.0545 0.4974 2.3116 \n\nCoefficients:\n Estimate Std. Error t value Pr(>|t|) \n(Intercept) 2.34159 0.08369 27.981 < 2e-16 ***\nforest 1.11883 0.13925 8.034 9.74e-15 ***\nwetland -0.59264 0.17216 -3.442 0.000635 ***\n---\nSignif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1\n\nResidual standard error: 0.7095 on 417 degrees of freedom\nMultiple R-squared: 0.2231, Adjusted R-squared: 0.2193 \nF-statistic: 59.86 on 2 and 417 DF, p-value: < 2.2e-16\n\nThe proportion of forest has a significant positive effect and the proportion of wetlands has a significant negative effect on bryophyte richness.\n\n\nplot(variogram(sqrt(richness) ~ forest + wetland, locations = ~ x + y,\n data = bryo_belg, cutoff = 50), col = \"black\")\n\n\n\n\nThe variogram is increasing from 0 to at least 40 km, so there appear to be spatial correlations in the model residuals.\n\n\nbryo_exp <- gls(sqrt(richness) ~ forest + wetland, data = bryo_belg,\n correlation = corExp(form = ~ x + y, nugget = TRUE))\nbryo_gaus <- gls(sqrt(richness) ~ forest + wetland, data = bryo_belg,\n correlation = corGaus(form = ~ x + y, nugget = TRUE))\nbryo_spher <- gls(sqrt(richness) ~ forest + wetland, data = bryo_belg,\n correlation = corSpher(form = ~ x + y, nugget = TRUE))\n\n\n\n\n\nAIC(bryo_lm)\n\n\n[1] 908.6358\n\nAIC(bryo_exp)\n\n\n[1] 867.822\n\nAIC(bryo_gaus)\n\n\n[1] 870.9592\n\nAIC(bryo_spher)\n\n\n[1] 866.9117\n\nThe spherical model has the smallest AIC.\n\n\nsummary(bryo_spher)\n\n\nGeneralized least squares fit by REML\n Model: sqrt(richness) ~ forest + wetland \n Data: bryo_belg \n AIC BIC logLik\n 866.9117 891.1102 -427.4558\n\nCorrelation Structure: Spherical spatial correlation\n Formula: ~x + y \n Parameter estimate(s):\n range nugget \n43.1725704 0.6063077 \n\nCoefficients:\n Value Std.Error t-value p-value\n(Intercept) 2.0368754 0.2481673 8.207671 0.000\nforest 0.6989805 0.1481691 4.717450 0.000\nwetland -0.2441117 0.1809121 -1.349339 0.178\n\n Correlation: \n (Intr) forest\nforest -0.251 \nwetland -0.235 0.241\n\nStandardized residuals:\n Min Q1 Med Q3 Max \n-1.75202529 -0.06568241 0.61415377 1.15239953 3.23320744 \n\nResidual standard error: 0.799832 \nDegrees of freedom: 420 total; 417 residual\n\nBoth effects are smaller in magnitude and the 
effect of wetlands is no longer significant. As is the case for other types of non-independent residuals, the “effective sample size” here is less than the number of points, since points close to each other provide redundant information. Therefore, the relationship between predictors and response is less clear than suggested by the model that assumed all these points were independent.\nNote that the results for all three gls models are quite similar, so the choice to include spatial correlations was more important than the exact shape assumed for the variogram.\n\n\n\n", - "preview": "posts/2021-01-13-spatial-statistics-in-ecology/spatial-statistics-in-ecology_files/figure-html5/unnamed-chunk-1-1.png", - "last_modified": "2021-04-16T10:18:46-04:00", - "input_file": {}, - "preview_width": 1248, - "preview_height": 768 + "contents": "\n\nContents\nInstructor\nOutline\nGet course materials\nInstall required software\nGet the notes\n\nUseful resources\nReferences\nLicense\n\n\n\n\nThis course is aimed towards researchers analyzing field observations, who are often faced with data heterogeneities due to field sampling protocols changing from one project to another, or through time over the lifespan of projects, or trying to combine ‘legacy’ data sets with new data collected by recording units.\nSuch heterogeneities can bias analyses when data sets are integrated inadequately, or can lead to information loss when filtered and standardized to common standards. Accounting for these issues is important for better inference regarding status and trend of species and communities.\nAnalysts of such ‘messy’ data sets need to feel comfortable with manipulating the data, need a full understanding of the mechanics of the models being used (i.e. 
critically interpreting the results and acknowledging assumptions and limitations), and should be able to make informed choices when faced with methodological challenges.\nThe course emphasizes critical thinking and active learning through hands-on programming exercises. We will use publicly available data sets to demonstrate the data manipulation and analysis. We will use freely available and open-source R packages.\nThe expected outcome of the course is a solid foundation for further professional development via increased confidence in applying these methods to field observations.\nInstructor\nDr. Peter Solymos\nBoreal Avian Modelling Project and the Alberta Biodiversity Monitoring Institute\nDepartment of Biological Sciences, University of Alberta\nOutline\nEach day will consist of 3 sessions, roughly one hour each, with short breaks in between.\n\nThe video recordings from the workshop can be found on YouTube.\n\nSession\nTopic\nFiles\nVideos\nDay 1\nNaive techniques\n\n\n\n1. Introductions\nSlides\nVideo\n\n2. Organizing point count data\nNotes\nPart 1, Part 2\n\n3. Regression techniques\nNotes\nPart 1, Part 2\nDay 2\nBehavioral complexities\n\n\n\n1. Statistical assumptions and nuisance variables\nSlides\nVideo\n\n2. Behavioral complexities\nNotes\nbSims, Video\n\n3. Removal modeling techniques\nNotes\nVideo\n\n4. Finite mixture models and testing assumptions\nNotes\nMixtures, Testing\nDay 3\nThe detection process\n\n\n\n1. The detection process\nSlides\nVideo\n\n2. Distance sampling and density\nNotes\nVideo\n\n3. Estimating population density\nNotes\nVideo\n\n4. Assumptions\nNotes\nVideo\nDay 4\nComing full circle\n\n\n\n1. QPAD overview\nSlides\nVideo\n\n2. Models with detectability offsets\nNotes\nOffsets, Models\n\n3. Model validation and error propagation\nNotes\nValidation, Error\n\n4. 
Recordings, roadsides, closing remarks\nNotes\nVideo\nGet course materials\nInstall required software\nFollow the instructions at the R website to download and install the most up-to-date base R version suitable for your operating system (the latest R version at the time of writing these instructions is 4.0.4).\nThen run the following script in R:\nsource(\"https://raw.githubusercontent.com/psolymos/qpad-workshop/main/src/install.R\")\nHaving RStudio is not absolutely necessary, but it will make life easier. RStudio is also available for different operating systems. Pick the open source desktop edition from here (the latest RStudio Desktop version at the time of writing these instructions is 1.4.1106).\nPrior exposure to R programming is not necessary, but knowledge of basic R object types and their manipulation (arrays, data frames, indexing) is useful for following hands-on exercises. Software Carpentry’s Data types and structures in R is a good resource to brush up your R skills.\nGet the notes\nIf you don’t want to use git:\nDownload the workshop archive release into a folder\nExtract the zip archive\nOpen the workshop.Rproj file in RStudio (or open any other R GUI/console and setwd() to the directory where you downloaded the file)\n(You can delete the archive)\nIf you want to use git: fork or clone the repository\ncd into/your/dir\ngit clone https://github.com/psolymos/qpad-workshop.git\nUseful resources\nUsing the QPAD package to get offsets based on estimates from the Boreal Avian Modelling Project’s database\nNA-POPS: Point count Offsets for Population Sizes of North America landbirds\nReferences\nSólymos, P., Toms, J. D., Matsuoka, S. M., Cumming, S. G., Barker, N. K. S., Thogmartin, W. E., Stralberg, D., Crosby, A. D., Dénes, F. V., Haché, S., Mahon, C. L., Schmiegelow, F. K. A., and Bayne, E. M., 2020. 
Lessons learned from comparing spatially explicit models and the Partners in Flight approach to estimate population sizes of boreal birds in Alberta, Canada. Condor, 122: 1-22. PDF\nSólymos, P., Matsuoka, S. M., Cumming, S. G., Stralberg, D., Fontaine, P., Schmiegelow, F. K. A., Song, S. J., and Bayne, E. M., 2018. Evaluating time-removal models for estimating availability of boreal birds during point-count surveys: sample size requirements and model complexity. Condor, 120: 765-786. PDF\nSólymos, P., Matsuoka, S. M., Stralberg, D., Barker, N. K. S., and Bayne, E. M., 2018. Phylogeny and species traits predict bird detectability. Ecography, 41: 1595-1603. PDF\nVan Wilgenburg, S. L., Sólymos, P., Kardynal, K. J. and Frey, M. D., 2017. Paired sampling standardizes point count data from humans and acoustic recorders. Avian Conservation and Ecology, 12(1):13. PDF\nYip, D. A., Leston, L., Bayne, E. M., Sólymos, P. and Grover, A., 2017. Experimentally derived detection distances from audio recordings and human observers enable integrated analysis of point count data. Avian Conservation and Ecology, 12(1):11. PDF\nSólymos, P., and Lele, S. R., 2016. Revisiting resource selection probability functions and single-visit methods: clarification and extensions. Methods in Ecology and Evolution, 7:196-205. PDF\nMatsuoka, S. M., Mahon, C. L., Handel, C. M., Sólymos, P., Bayne, E. M., Fontaine, P. C., and Ralph, C. J., 2014. Reviving common standards in point-count surveys for broad inference across studies. Condor 116:599-608. PDF\nSólymos, P., Matsuoka, S. M., Bayne, E. M., Lele, S. R., Fontaine, P., Cumming, S. G., Stralberg, D., Schmiegelow, F. K. A. & Song, S. J., 2013. Calibrating indices of avian density from non-standardized survey data: making the most of a messy situation. Methods in Ecology and Evolution 4:1047-1058. PDF\nMatsuoka, S. M., Bayne, E. M., Sólymos, P., Fontaine, P., Cumming, S. G., Schmiegelow, F. K. A., & Song, S. A., 2012. 
Using binomial distance-sampling models to estimate the effective detection radius of point-count surveys across boreal Canada. Auk 129:268-282. PDF\nLicense\nThe course material is licensed under the Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. Source code is under the MIT license.\n\n\n\n", + "preview": "posts/2021-03-25-point-count-data-analysis/thumb.jpg", + "last_modified": "2021-04-16T09:32:25-06:00", + "input_file": "index.utf8.md" }, { "path": "posts/2021-01-12-spatial-statistics-in-ecology/", diff --git a/docs/sitemap.xml b/docs/sitemap.xml index 02baf59..104b26b 100644 --- a/docs/sitemap.xml +++ b/docs/sitemap.xml @@ -10,11 +10,7 @@ https://bios2.github.io/bios2_trainings/posts/2021-03-25-point-count-data-analysis/ - 2021-04-16T10:28:51-04:00 - - - https://bios2.github.io/bios2_trainings/posts/2021-01-13-spatial-statistics-in-ecology/ - 2021-04-16T10:18:46-04:00 + 2021-04-16T09:32:25-06:00 https://bios2.github.io/bios2_trainings/posts/2021-01-12-spatial-statistics-in-ecology/