Jesse Alston
- Assistant Professor, Conservation and Management of Large Mammals
- Member of the Graduate Faculty
Degrees
- Certificate, Teaching and Learning
  - University of Wyoming, Laramie, Wyoming, United States
- Ph.D., Ecology
  - University of Wyoming, Laramie, Wyoming, United States
  - Dissertation: Energetic Drivers of Behavior and Body Size in Bats
- B.A., Environmental Studies
  - Davidson College, Davidson, North Carolina, United States
Work Experience
- Center for Advanced Systems Understanding (2021 - 2022)
Interests
Teaching
Ecology, Conservation Biology, Wildlife Management, Mammals
Research
Movement Ecology, Physiological Ecology, Macroecology, Global Change Biology, Conservation Biology
Courses
2025-26 Courses
- Dissertation (RNR 920, Spring 2026)
- Thesis (RNR 910, Spring 2026)
- Wildlife, Conservation, & Culture (RNR 160D1, Spring 2026)
- Dissertation (RNR 920, Fall 2025)
- Independent Study (RNR 599, Fall 2025)
- Thesis (RNR 910, Fall 2025)
- Wildlife Ecology, Conservation, & Management (WFSC 444, Fall 2025)
- Wildlife Ecology (WFSC 544, Fall 2025)
2024-25 Courses
- Directed Research (RNR 392, Spring 2025)
- Dissertation (RNR 920, Spring 2025)
- Internship (RNR 393, Spring 2025)
- Thesis (RNR 910, Spring 2025)
- Directed Research (RNR 392, Fall 2024)
- Thesis (RNR 910, Fall 2024)
- Wildlife Ecology, Conservation, & Management (WFSC 444, Fall 2024)
- Wildlife Ecology (WFSC 544, Fall 2024)
2023-24 Courses
- Dissertation (RNR 920, Spring 2024)
- Wildlife & Fisheries Seminar (WFSC 596B, Spring 2024)
- Honors Thesis (RNR 498H, Fall 2023)
- Wildlife Ecology, Conservation, & Management (WFSC 444, Fall 2023)
- Wildlife Ecology (WFSC 544, Fall 2023)
2022-23 Courses
- Honors Thesis (RNR 498H, Spring 2023)
- Wildlife & Fisheries Seminar (WFSC 496B, Spring 2023)
- Wildlife & Fisheries Seminar (WFSC 596B, Spring 2023)
Scholarly Contributions
Journals/Publications
- et al. (2025). SNAPSHOT USA 2019–2023: The First Five Years of Data From a Coordinated Camera Trap Survey of the United States. Global Ecology and Biogeography, 34(1). doi:10.1111/geb.13941. Motivation: SNAPSHOT USA is an annual, multicontributor camera trap survey of mammals across the United States. The growing SNAPSHOT USA dataset is intended for tracking the spatial and temporal responses of mammal populations to changes in land use, land cover and climate. These data will be useful for exploring the drivers of spatial and temporal changes in relative abundance and distribution, as well as the impacts of species interactions on daily activity patterns. Main Types of Variables Contained: SNAPSHOT USA 2019–2023 contains 987,979 records of camera trap image sequence data and 9,694 records of camera trap deployment metadata. Spatial Location and Grain: Data were collected across the United States of America in all 50 states, 12 ecoregions and many ecosystems. Time Period and Grain: Data were collected between 1 August and 29 December each year from 2019 to 2023. Major Taxa and Level of Measurement: The dataset includes a wide range of taxa but is primarily focused on medium to large mammals. Software Format: SNAPSHOT USA 2019–2023 comprises two .csv files. The original data can be found within the SNAPSHOT USA Initiative in the Wildlife Insights platform.
- Burgin, C. J., Zijlstra, J. S., Becker, M. A., Handika, H., Alston, J. M., Widness, J., Liphardt, S., Huckaby, D. G., & Upham, N. S. (2025). How many mammal species are there now? Updates and trends in taxonomic, nomenclatural, and geographic knowledge. Journal of Mammalogy, 106(5). doi:10.1093/jmammal/gyaf047. The Mammal Diversity Database (MDD) is an open-access resource providing up-to-date taxonomic, nomenclatural, and geographic data for global mammal species. Since its launch in 2018, the MDD has transformed the traditionally static process of updating mammalian taxonomy into regular online releases reflecting the latest published research. To build on this foundation, we here present version 2.0 of the MDD (MDD2), which catalogs 6,759 living and recently extinct mammal species, representing net increases of 4.1% and 24.8% over MDD version 1.0 and Mammal Species of the World, 3rd edition (MSW3), respectively. Additionally, we identify a net increase of 68.8% (+2,754; 3,149 splits + de novo, 395 lumps) species since 1980 at a rate of ∼65 species/yr based on past totals from 14 mammalian compendia, leading to projections of ∼7,079 species by 2030 and ∼8,376 by 2050 if these trends continue. Key updates in MDD2 include: (i) codings of US state, country, continent, and biogeographic realm geographic categories for each species; (ii) a comprehensive nomenclatural dataset for 50,230 valid and synonymous species-rank names, curated with type locality and specimen information for the first time; and (iii) integration between the MDD and the databases Hesperomys and Batnames for greater data accuracy and completeness. These updates bridge critical gaps in the taxonomic and nomenclatural information needed for ongoing revisions and assessments of mammalian species diversity. Using these data, we evaluate temporal and geographic trends over the past 267 years, identifying 4 major time periods of change in mammalian taxonomy and nomenclature: (i) the initial monographic description of traditionally charismatic species (1758 to 1880); (ii) the peak of descriptive taxonomy, describing subspecies, and publishing in journals (1881 to 1939); (iii) the shift toward revisionary taxonomy and recognizing polytypic species (1940 to 1999); and (iv) the current technology-driven period of integrative revisionary taxonomy (2000 to present). Geographically, new species recognition since MSW3 has been concentrated in equatorial, mountainous, and island regions—highlighting areas of high mammal endemism (e.g., Madagascar, Philippines, Andes, East Africa, Himalayas, Atlantic Forest). However, gaps in 21st-century taxonomic activity are identified in West and Central Africa, India, and some parts of Indonesia. Additionally, lagging conservation assessments are alarming, with 25% of the MDD2-recognized mammal species allocated to the “understudied” conservation threat categories of Data Deficient (11%) or Not Evaluated (14%), underscoring the need for greater taxonomic integration with conservation organizations. Governance advancements in MDD2 include the establishment of external taxonomic subcommittees to guide data collection and curation, a rewritten website that improves access and scalability, a cross-platform mobile application that provides offline access, and new partnerships to continue linking MDD data to global biodiversity infrastructure. By providing up-to-date mammalian taxonomic and nomenclatural data—including links to the text of original name descriptions, type localities, and type specimen collections—the MDD provides an integrative resource for mammalogists and conservationists to more easily track the status of their study organisms.
- Hollins, J. P., Fleming, C. H., Calabrese, J. M., Harris, L. N., Moore, J. S., Malley, B. K., Noonan, M. J., Fagan, W. F., Alston, J. M., & Hussey, N. E. (2025). Home-range spillover in habitats with impassable boundaries: Causes, biases and corrections using autocorrelated kernel density estimation. Methods in Ecology and Evolution. doi:10.1111/2041-210X.70082. An animal's home-range plays a fundamental role in determining its resource use and overlap with conspecifics, competitors and predators, and is therefore a common focus of movement ecology studies. Autocorrelated kernel density estimation addresses many of the shortcomings of traditional home-range estimators when animal tracking data are autocorrelated, but other challenges in home-range estimation remain. One such issue is known as ‘spillover bias’, in which home-range estimates do not respect impassable movement boundaries (e.g. shorelines and fences), and occurs in all forms of kernel density estimation. While several approaches to addressing spillover bias are used when estimating home ranges, these approaches introduce bias throughout the remaining home-range area, depending on the amount of spillover removed, or are otherwise inaccessible to most ecologists. Here, we introduce local corrections to home-range kernels to mitigate spillover bias in (autocorrelated) kernel density estimation in the continuous time movement model (ctmm) package, and demonstrate their performance using simulations with known home-range extents and distributions, and a real-world case study. Simulation results showed that local corrections minimized bias in bounded home-range area estimates, and resulted in more accurate distributions when compared with commonly used post hoc corrections, particularly at small–intermediate sample sizes. Comparison of the impacts of local vs. post hoc corrections to bounded home-ranges estimated from lake trout (Salvelinus namaycush) demonstrated that local corrections constrained the redistribution of probability mass within the remaining home-range area, resulting in proportionally smaller home-range areas compared with when post hoc corrections are used.
- Burton, A., Beirne, C., Gaynor, K., Sun, C., Granados, A., Allen, M., Alston, J., Alvarenga, G., Amir, Z., Anhalt-Depies, C., Appel, C., Arroyo-Arce, S., Balme, G., Bar-Massada, A., Barcelos, D., Barr, E., Barthelmess, E., Baruzzi, C., Basak, S., Beenaerts, N., et al. (2024). Mammal responses to global changes in human activity vary by trophic group and landscape. Nature Ecology and Evolution, 8(5). doi:10.1038/s41559-024-02363-2. Wildlife must adapt to human presence to survive in the Anthropocene, so it is critical to understand species responses to humans in different contexts. We used camera trapping as a lens to view mammal responses to changes in human activity during the COVID-19 pandemic. Across 163 species sampled in 102 projects around the world, changes in the amount and timing of animal activity varied widely. Under higher human activity, mammals were less active in undeveloped areas but unexpectedly more active in developed areas while exhibiting greater nocturnality. Carnivores were most sensitive, showing the strongest decreases in activity and greatest increases in nocturnality. Wildlife managers must consider how habituation and uneven sensitivity across species may cause fundamental differences in human–wildlife interactions along gradients of human influence.
- Kays, R., Snider, M., Hess, G., Cove, M., Jensen, A., Shamon, H., McShea, W., Rooney, B., Allen, M., Pekins, C., Wilmers, C., Pendergast, M., Green, A., Suraci, J., Leslie, M., Nasrallah, S., Farkas, D., Jordan, M., Grigione, M., LaScaleia, M., et al. (2024). Climate, food and humans predict communities of mammals in the United States. Diversity and Distributions, 30(9). doi:10.1111/ddi.13900. Aim: The assembly of species into communities and ecoregions is the result of interacting factors that affect plant and animal distribution and abundance at biogeographic scales. Here, we empirically derive ecoregions for mammals to test whether human disturbance has become more important than climate and habitat resources in structuring communities. Location: Conterminous United States. Time Period: 2010–2021. Major Taxa Studied: Twenty-five species of mammals. Methods: We analysed data from 25 mammal species recorded by camera traps at 6645 locations across the conterminous United States in a joint modelling framework to estimate relative abundance of each species. We then used a clustering analysis to describe 8 broad and 16 narrow mammal communities. Results: Climate was the most important predictor of mammal abundance overall, while human population density and agriculture were less important, with mixed effects across species. Seed production by forests also predicted mammal abundance, especially hard-mast tree species. The mammal community maps are similar to those of plants, with an east–west split driven by different dominant species of deer and squirrels. Communities vary along gradients of temperature in the east and precipitation in the west. Most fine-scale mammal community boundaries aligned with established plant ecoregions and were distinguished by the presence of regional specialists or shifts in relative abundance of widespread species. Maps of potential ecosystem services provided by these communities suggest high herbivory in the Rocky Mountains and eastern forests, high invertebrate predation in the subtropical south and greater predation pressure on large vertebrates in the west. Main Conclusions: Our results highlight the importance of climate to modern mammals and suggest that climate change will have strong impacts on these communities. Our new empirical approach to recognizing ecoregions has potential to be applied to expanded communities of mammals or other taxa.
- Alston, J. M., Keinath, D. A., Willis, C. K., Lausen, C. L., O'Keefe, J. M., Tyburec, J. D., Broders, H. G., Moosman, P. R., Carter, T. C., Chambers, C. L., Gillam, E. H., Geluso, K., Weller, T. H., Burles, D. W., & Goheen, J. R. (2023). Environmental drivers of body size in North American bats. Functional Ecology, 37(4), 1020-1032. doi:10.1111/1365-2435.14287. Bergmann's rule—which posits that larger animals live in colder areas—is thought to influence variation in body size within species across space and time, but evidence for this claim is mixed. We used Bayesian hierarchical models to test four competing hypotheses for spatiotemporal variation in body size within 20 bat species across North America: (1) the heat conservation hypothesis, which posits that increased body size facilitates body heat conservation (and which is the traditional explanation for the mechanism underlying Bergmann's rule); (2) the heat mortality hypothesis, which posits that increased body size increases susceptibility to acute heat stress; (3) the resource availability hypothesis, which posits that increased body size is enabled in areas with more abundant food; and (4) the starvation resistance hypothesis, which posits that increased body size reduces susceptibility to starvation during acute food shortages. Spatial variation in body mass was most consistently (and negatively) correlated with mean annual temperature, supporting the heat conservation hypothesis. Across time, variation in body mass was most consistently (and positively) correlated with net primary productivity, supporting the resource availability hypothesis. Climate change could influence body size in animals through both changes in mean annual temperature and resource availability. Rapid reductions in body size associated with increasing temperatures have occurred in short-lived, fecund species, but such reductions will be obscured by changes in resource availability in longer-lived, less fecund species.
- Alston, J., Fleming, C., Kays, R., Streicher, J., Downs, C., Ramesh, T., Reineking, B., & Calabrese, J. (2023). Mitigating pseudoreplication and bias in resource selection functions with autocorrelation-informed weighting. Methods in Ecology and Evolution, 14(2). doi:10.1111/2041-210X.14025. Resource selection functions (RSFs) are among the most commonly used statistical tools in both basic and applied animal ecology. They are typically parameterized using animal tracking data, and advances in animal tracking technology have led to increasing levels of autocorrelation between locations in such data sets. Because RSFs assume that data are independent and identically distributed, such autocorrelation can cause misleadingly narrow confidence intervals and biased parameter estimates. Data thinning, generalized estimating equations and step selection functions (SSFs) have been suggested as techniques for mitigating the statistical problems posed by autocorrelation, but these approaches have notable limitations that include statistical inefficiency, unclear or arbitrary targets for adequate levels of statistical independence, constraints in input data and (in the case of SSFs) scale-dependent inference. To remedy these problems, we introduce a method for likelihood weighting of animal locations to mitigate the negative consequences of autocorrelation on RSFs. In this study, we demonstrate that this method weights each observed location in an animal's movement track according to its level of non-independence, expanding confidence intervals and reducing bias that can arise when there are missing data in the movement track. Ecologists and conservation biologists can use this method to improve the quality of inferences derived from RSFs. We also provide a complete, annotated analytical workflow to help new users apply our method to their own animal tracking data using the ctmm R package.
- Lujan, E., Nielsen, R., Short, Z., Wicks, S., Watetu, W., Khasoha, L., Palmer, T., Goheen, J., & Alston, J. (2023). Symbiotic acacia ants drive nesting behavior by birds in an African savanna. Biotropica, 55(6). doi:10.1111/btp.13276. Mutualisms between plants and ants are common features of tropical ecosystems around the globe and can have cascading effects on interactions with the ecological communities in which they occur. In an African savanna, we assessed whether acacia ants influence nest site selection by tree-nesting birds. Birds selected nest sites in trees inhabited by ant species that vigorously defend against browsing mammals. Future research could address the extent to which hatching and fledging rates depend on the species of ant symbiont, and why ants tolerate nesting birds but not other tree associates (especially insects). Abstract in Swahili is available with online material.
- Wells, H. B., Crego, R. D., Alston, J. M., Ndung'u, S. K., Khasoha, L. M., Reed, C. G., Hassan, A. A., Kurukura, S., Ekadeli, J., Namoni, M., et al. (2023). Wild herbivores enhance resistance to invasion by exotic cacti in an African savanna. Journal of Ecology, 111(1), 33-44. doi:10.1111/1365-2745.14010. Whether wild herbivores confer biotic resistance to invasion by exotic plants remains a key question in ecology. There is evidence that wild herbivores can impede invasion by exotic plants, but it is unclear whether and how this generalises across ecosystems with varying wild herbivore diversity and functional groups of plants, particularly over long-term (decadal) time frames. Using data from three long-term (13- to 26-year) exclosure experiments in central Kenya, we tested the effects of wild herbivores on the density of exotic invasive cacti, Opuntia stricta and O. ficus-indica (collectively, Opuntia), which are among the worst invasive species globally. We also examined relationships between wild herbivore richness and elephant occurrence probability with the probability of O. stricta presence at the landscape level (6,150 km²). Opuntia densities were 74% to 99% lower in almost all plots accessible to wild herbivores compared to exclosure plots. Opuntia densities also increased more rapidly across time in plots excluding wild herbivores. These effects were largely driven by megaherbivores (≥1000 kg), particularly elephants. At the landscape level, modelled Opuntia stricta occurrence probability was negatively correlated with estimated species richness of wild herbivores and elephant occurrence probability. On average, O. stricta occurrence probability fell from ~0.56 to ~0.45 as wild herbivore richness increased from 6 to 10 species and fell from ~0.57 to ~0.40 as elephant occurrence probability increased from ~0.41 to ~0.84. These multi-scale results suggest that any facilitative effects of Opuntia by wild herbivores (e.g. seed/vegetative dispersal) are overridden by suppression (e.g. consumption, uprooting, trampling). Synthesis: Our experimental and observational findings that wild herbivores confer resistance to invasion by exotic cacti add to evidence that conserving and restoring native herbivore assemblages (particularly megaherbivores) can increase community resistance to plant invasions.
- Alston, J. M., Dillon, M. E., Keinath, D. A., Abernethy, I. M., & Goheen, J. R. (2022). Daily torpor reduces the energetic consequences of microhabitat selection for a widespread bat. Ecology, 103(6), e3677. doi:10.1002/ecy.3677. Homeothermy requires increased metabolic rates as temperatures decline below the thermoneutral zone, so homeotherms typically select microhabitats within or near their thermoneutral zones during periods of inactivity. However, many mammals and birds are heterotherms that relax internal controls on body temperature and go into torpor when maintaining a high, stable body temperature is energetically costly. Such heterotherms should be less tied to microhabitats near their thermoneutral zones and, because heterotherms spend more time in torpor and expend less energy at colder temperatures, heterotherms may even select microhabitats in which temperatures are well below their thermoneutral zones. We studied how temperature and daily torpor influence the selection of microhabitats (i.e., diurnal roosts) by a heterothermic bat (Myotis thysanodes). We (1) quantified the relationship between ambient temperature and daily duration of torpor, (2) simulated daily energy expenditure over a range of microhabitat temperatures, and (3) quantified the influence of microhabitat temperature on microhabitat selection. Warm microhabitats substantially reduced the energy expenditure of simulated homeothermic bats, and heterothermic bats modulated their use of daily torpor to maintain a constant level of energy expenditure across microhabitats of different temperatures. Daily torpor expanded the range of energetically economical microhabitats, such that microhabitat selection was independent of microhabitat temperature. Our work adds to a growing literature documenting the functions of torpor beyond its historical conceptualization as a last-resort measure to save energy during periods of extended or acute energetic stress.
- Alston, J. M., Reed, C. G., Khasoha, L. M., Brown, B. R., Busienei, G., Carlson, N., Coverdale, T. C., Dudenhoeffer, M., Dyck, M. A., Ekeno, J., Hassan, A. A., Hohbein, R., Jakopak, R. P., Kimiti, B., Kurukura, S., Lokeny, P., Louthan, A. M., Musila, S., Musili, P. M., Tindall, T., et al. (2022). Ecological consequences of large herbivore exclusion in an African savanna: 12 years of data from the UHURU experiment. Ecology, 103(4), e3649. doi:10.1002/ecy.3649. Diverse communities of large mammalian herbivores (LMH), once widespread, are now rare. LMH exert strong direct and indirect effects on community structure and ecosystem functions, and measuring these effects is important for testing ecological theory and for understanding past, current, and future environmental change. This in turn requires long-term experimental manipulations, owing to the slow and often nonlinear responses of populations and assemblages to LMH removal. Moreover, the effects of particular species or body-size classes within diverse LMH guilds are difficult to pinpoint, and the magnitude and even direction of these effects often depends on environmental context. Since 2008, we have maintained the Ungulate Herbivory Under Rainfall Uncertainty (UHURU) experiment, a series of size-selective LMH exclosures replicated across a rainfall/productivity gradient in a semiarid Kenyan savanna. The goals of the UHURU experiment are to measure the effects of removing successively smaller size classes of LMH (mimicking the process of size-biased extirpation) and to establish how these effects are shaped by spatial and temporal variation in rainfall. The UHURU experiment comprises three LMH-exclusion treatments and an unfenced control, applied to nine randomized blocks of contiguous 1-ha plots (n = 36). The fenced treatments are MEGA (exclusion of megaherbivores, elephant and giraffe), MESO (exclusion of herbivores ≥40 kg), and TOTAL (exclusion of herbivores ≥5 kg). Each block is replicated three times at three sites across the 20-km rainfall gradient, which has fluctuated over the course of the experiment. The first 5 years of data were published previously (Ecological Archives E095-064) and have been used in numerous studies. Since that publication, we have (1) continued to collect data following the original protocols, (2) improved the taxonomic resolution and accuracy of plant and small-mammal identifications, and (3) begun collecting several new data sets. Here, we present updated and extended raw data from the first 12 years of the UHURU experiment (2008–2019). Data include daily rainfall data throughout the experiment; annual surveys of understory plant communities; annual censuses of woody-plant communities; annual measurements of individually tagged woody plants; monthly monitoring of flowering and fruiting phenology; every-other-month small-mammal mark–recapture data; and quarterly large-mammal dung surveys. There are no copyright restrictions; notification of when and how data are used is appreciated and users of UHURU data should cite this data paper when using the data.
- Kays, R., Cove, M. V., Diaz, J., Todd, K., Bresnan, C., Snider, M., Lee, T. E., Jasper, J. G., Douglas, B., Crupi, A. P., Weiss, K. C., Rowe, H., Sprague, T., Schipper, J., Lepczyk, C. A., Fantle-Lepczyk, J. E., Davenport, J., Zimova, M., Farris, Z., Williamson, J., et al. (2022). SNAPSHOT USA 2020: A second coordinated national camera trap survey of the United States during the COVID-19 pandemic. Ecology, 103(10), e3775. doi:10.1002/ecy.3775. Managing wildlife populations in the face of global change requires regular data on the abundance and distribution of wild animals, but acquiring these over appropriate spatial scales in a sustainable way has proven challenging. Here we present the data from Snapshot USA 2020, a second annual national mammal survey of the USA. This project involved 152 scientists setting camera traps in a standardized protocol at 1485 locations across 103 arrays in 43 states for a total of 52,710 trap-nights of survey effort. Most (58) of these arrays were also sampled during the same months (September and October) in 2019, providing a direct comparison of animal populations in 2 years that includes data from both during and before the COVID-19 pandemic. All data were managed by the eMammal system, with all species identifications checked by at least two reviewers. In total, we recorded 117,415 detections of 78 species of wild mammals, 9236 detections of at least 43 species of birds, 15,851 detections of six domestic animals and 23,825 detections of humans or their vehicles. Spatial differences across arrays explained more variation in the relative abundance than temporal variation across years for all 38 species modeled, although there are examples of significant site-level differences among years for many species. Temporal results show how species allocate their time and can be used to study species interactions, including between humans and wildlife. These data provide a snapshot of the mammal community of the USA for 2020 and will be useful for exploring the drivers of spatial and temporal changes in relative abundance and distribution, and the impacts of species interactions on daily activity patterns. There are no copyright restrictions, and please cite this paper when using these data, or a subset of these data, for publication.
- Silva, I., Fleming, C. H., Noonan, M. J., Alston, J. M., Folta, C., Fagan, W. F., & Calabrese, J. M. (2022). Autocorrelation-informed home range estimation: A review and practical guide. Methods in Ecology and Evolution, 13(3), 534-544.
- Alston, J. M., & Rick, J. A. (2021).
A Beginner's Guide to Conducting Reproducible Research in Ecology, Evolution, and Conservation
. Bulletin of the Ecological Society of America, 102(2), 1-14. doi:10.1002/bes2.1801More infoReplication is a fundamental tenet of science, but there is increasing fear among scientists that too few scientific studies can be replicated. This has been termed the “replication crisis” (Ioannidis 2005, Schooler 2014). Scientific papers often include inadequate detail to enable replication (Haddaway and Verhoeven 2015, Archmiller et al. 2020), many attempted replications of well-known scientific studies have failed in a wide variety of disciplines (Moonesinghe et al. 2007, Hewitt 2012, Bohannon 2015, Open Science Collaboration 2015), and rates of paper retractions are increasing (Cokol et al. 2008, Steen et al. 2013). Because of this, researchers are working to develop new ways for researchers, research institutions, research funders, and journals to overcome this problem (Peng 2011, Fiedler et al. 2012, Sandve et al. 2013, Stodden et al. 2013). Because replicating studies with new independent data is expensive, rarely published in high-impact journals, and sometimes even methodologically impossible, computationally reproducible research (most often termed simply “reproducible research”) is often suggested as a pathway for increasing our ability to assess the validity and rigor of scientific results (Peng 2011). Research is reproducible when others can reproduce the results of a scientific study given only the original data, code, and documentation (Essawy et al. 2020). This approach focuses on the research process after data collection is complete, and it has many (though not all) of the advantages of replicating studies with independent data while minimizing the largest barrier (i.e., the financial and time costs of collecting new data). Replicating studies remains the gold standard for rigorous scientific research, but reproducibility is increasingly viewed as a minimum standard that all scientists should strive toward (Peng 2011, Sandve et al. 
2013, Archmiller et al. 2020, Culina et al. 2020). This commentary describes basic requirements for such reproducible research in the fields of ecology and evolutionary biology. In it, we make the case for why all research should be reproducible, explain why research is often not reproducible, and present a simple three-part framework all researchers can use to make their research more reproducible. These principles are applicable to researchers working in all sub-disciplines within ecology and evolutionary biology with data sets of all sizes and levels of complexity. Reproducible research is a by-product of careful attention to detail throughout the research process and allows researchers to ensure that they can repeat the same analysis multiple times with the same results, at any point in that process. Because of this, researchers who conduct reproducible research are the primary beneficiaries of this practice. First, reproducible research helps researchers remember how and why they performed specific analyses during the course of a project. This enables easier explanation of work to collaborators, supervisors, and reviewers, and it allows collaborators to conduct supplementary analyses more quickly and more efficiently. Second, reproducible research enables researchers to quickly and simply modify analyses and figures. This is often requested by supervisors, collaborators, and reviewers across all stages of a research project, and expediting this process saves substantial amounts of time. When analyses are reproducible, creating a new figure may be as easy as changing one value in a line of code and re-running a script, rather than spending hours recreating a figure from scratch. Third, reproducible research enables quick reconfiguration of previously conducted research tasks so that new projects that require similar tasks become much simpler and easier. Science is an iterative process, and many of the same tasks are performed over and over. 
Conducting research reproducibly enables researchers to re-use earlier materials (e.g., analysis code, file organization systems) to execute these common research tasks more efficiently in subsequent iterations. Fourth, conducting reproducible research is a strong indicator to fellow researchers of rigor, trustworthiness, and transparency in scientific research. This can increase the quality and speed of peer review, because reviewers can directly access the analytical process described in a manuscript. Peer reviewers' work becomes easier, and they may be able to answer methodological questions without asking the authors. Reviewers can check whether the code matches the methods described in the text of a manuscript to make sure that authors correctly performed the analyses as described, and it increases the probability that errors are caught during the peer-review process, decreasing the likelihood of corrections or retractions after publication. Reproducible research also protects researchers from accusations of research misconduct due to analytical errors, because it is unlikely that researchers would openly share fraudulent code and data with the rest of the research community. Fifth, reproducible research increases paper citation rates (Piwowar et al. 2007, McKiernan et al. 2016) and allows other researchers to cite code and data in addition to publications. This enables a given research project to have more impact than it would if the data or methods were hidden from the public. For example, researchers can re-use code from a paper with similar methods and organize their data in the same manner as the original paper and then cite code from the original paper in their manuscript. A third team of researchers may conduct a meta-analysis on the phenomenon described in these two research papers and thus use and cite both of these papers and the data from those papers in their meta-analysis. 
Papers are more likely to be cited in these re-use cases if full information about data and analyses is available (Whitlock 2011, Culina et al. 2018). Reproducible research also benefits others in the scientific community. Sharing data, code, and detailed research methods and results leads to faster progress in methodological development and innovation because research is more accessible to more scientists (Parr and Cummings 2005, Roche et al. 2015, Mislan et al. 2016). First, reproducible research allows others to learn from your work. Scientific research has a steep learning curve, and allowing others to access data and code gives them a head start on performing similar analyses. For example, researchers who are new to an analytical technique can use code shared with the research community by researchers with more experience with that technique to learn how to rigorously perform and validate these analyses. This allows researchers to conduct research that is more rigorous from the outset, rather than having to spend months or years trying to figure out current “best practices” through trial and error. Modifying existing resources can also save time and effort for experienced researchers—even experienced coders can modify existing code much faster than they can write code from scratch. Sharing code thus allows experienced researchers to perform similar analyses more quickly. Second, reproducible research allows others to understand and reproduce a researcher's work. Allowing others to access data and code makes it easier for other scientists to perform follow-up studies to increase the strength of evidence for the phenomenon of interest. It also increases the likelihood that similar studies are compatible with one another, and that a group of studies can together provide evidence in support of or in opposition to a concept. 
In addition, sharing data and code increases the utility of these studies for meta-analyses that are important for generalizing and contextualizing the findings of studies on a topic. Meta-analyses in ecology and evolutionary biology are often hindered by incompatibility of data between studies, or lack of documentation for how those data were obtained (Stewart 2010, Culina et al. 2018). Well-documented, reproducible findings enhance the likelihood that data can be used in future meta-analyses (Gerstner et al. 2017). Third, reproducible research allows others to protect themselves from your mistakes. Mistakes happen in science. Allowing others to access data and code gives them a better chance to critically analyze the work, which can lead to coauthors or reviewers discovering mistakes during the revision process, or other scientists discovering mistakes after publication. This prevents mistakes from compounding over time and provides protection for collaborators, research institutions, funding organizations, journals, and others who may be affected when such mistakes happen. There are a number of reasons that most research is not reproducible. Rapidly developing technologies and analytical tools, novel interdisciplinary approaches, unique ecological study systems, and increasingly complex data sets and research questions hinder reproducibility, as does pressure on scientists to publish novel research quickly. This multitude of barriers can be simplified into four primary themes: (1) complexity, (2) technological change, (3) human error, and (4) concerns over intellectual property rights. Each of these concerns can contribute to making research less reproducible and can be valid in some scenarios. However, each of these factors can also be addressed easily via well-developed tools, protocols, and institutional norms concerning reproducible research. 
Science is difficult, and scientific research requires specialized (and often proprietary) knowledge and tools that may not be available to everyone who would like to reproduce research. For example, studies in the fields of ecology and evolutionary biology often involve study systems, mathematical models, and statistical techniques that require a large amount of domain knowledge to understand, and these analyses can therefore be difficult to reproduce for those with limited understanding of any of the necessary underlying bases of knowledge. Some analyses may require high-performance computing clusters that use several different programming languages and software packages, or that are designed for specific hardware configurations. Other analyses may be performed using proprietary software programs such as SAS statistical software (SAS Institute Inc., Cary, North Carolina, USA) or ArcGIS (Esri, Redlands, California, USA) that require expensive software licenses. Lack of knowledge, lack of institutional infrastructure, and lack of funding all make research less reproducible. However, most of these issues can be mitigated fairly easily. Researchers can cite primers on complex subjects or analyses to reduce knowledge barriers. They can also thoroughly annotate analytical code with comments explaining each step in an analysis or provide extensive documentation on research software. Using open software (when possible) makes research more accessible for other researchers as well. Hardware and software used to analyze data both change over time, and they often change quickly. When old tools become obsolete, research becomes less reproducible. For example, reproducing research performed in 1960 using that era's computational tools would require a completely new set of tools today. Even research performed just a few years ago may have been conducted using software that is no longer available or is incompatible with other software that has since been updated. 
One minor update in a piece of software used in one minor analysis in an analytical workflow can render an entire project less reproducible. However, this too can be mitigated by using established tools in reproducible research. Careful documentation of versions of software used in analyses is a baseline requirement that anyone can meet. There are also more advanced tools that can help overcome such challenges in making research reproducible, including software containers, which are described in further detail below. Though fraudulent research is often cited as a reason to make research more reproducible (Ioannidis 2005, Laine et al. 2007, Crocker and Cooper 2011), many more innocent reasons exist as to why research is often difficult to reproduce (Elliott 2014). People forget small details of how they performed analyses. They fail to describe data collection protocols or analyses completely despite their best efforts and multiple reviewers checking their work. They fail to collect or thoroughly document data that seem unimportant during collection but later turn out to be vital for unforeseen reasons. Science is performed by fallible humans, and a wide variety of common events can render research less reproducible. While not all of these challenges can be avoided by performing research reproducibly, a well-documented research process can guard against small errors and sloppy analyses. For example, carefully recording details such as when and where data were collected, what decisions were made during data collection, and what labeling conventions were used can make a huge difference in making sure that those data can later be used appropriately or re-purposed. 
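The baseline practice mentioned above, documenting the versions of software used in an analysis, is easy to automate. A minimal sketch in Python (an analogous habit applies in R or any scripted workflow); the `record_versions` helper and the `session_info.txt` file name are our own illustrations, not from the article:

```python
import platform
from importlib import metadata

def record_versions(packages, path="session_info.txt"):
    """Log the Python version and the versions of the packages used in an analysis."""
    lines = [f"python {platform.python_version()}"]
    for pkg in packages:
        try:
            lines.append(f"{pkg} {metadata.version(pkg)}")
        except metadata.PackageNotFoundError:
            lines.append(f"{pkg} (not installed)")
    # Keep the log next to the analysis outputs so it is archived with them.
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")
    return lines

# Call once at the top of an analysis script:
session = record_versions(["pip"])
print(session[0])  # e.g., "python 3.11.4"
```

Running this at the top of every analysis script means the exact computing environment is captured automatically, rather than reconstructed from memory at publication time.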
Unintentional errors often occur during the data wrangling stage of a project, and these can be mitigated by keeping multiple copies of data to prevent data loss, carefully documenting the process for converting raw data into clean data, and double-checking a small test set of data before manipulating the data set as a whole. Researchers often hesitate to share data and code because doing so may allow other researchers to use data and code incorrectly or unethically. Other researchers may use publicly available data without notifying authors, leading to incorrect assumptions about the data that result in invalid analyses. Researchers may use publicly available data or code without citing the original data owners or code writers, who then do not receive proper credit for gathering expensive data or writing time-consuming code. Researchers may want to conceal data from others so that they can perform new analyses on those data in the future without worrying about others scooping them using the shared data. Rational self-interest can lead to hesitation to share data and code via many pathways, and we acknowledge that making data openly available is likely the most controversial aspect of reproducible research (Cassey and Blackburn 2006, Hampton et al. 2013, Mills et al. 2015, Mills et al. 2016, Whitlock et al. 2016). However, new tools for sharing data and code (outlined below and in Table 1) are making it easier for researchers to receive credit for doing so and to prevent others from using their data during an embargo period. Conducting reproducible research is not exceedingly difficult nor does it require encyclopedic knowledge of esoteric research tools and protocols. Whether they know it or not, most researchers already perform much of the work required to make research reproducible. 
To clarify this point, we outline below some basic steps toward making research more reproducible in three stages of a research project: (1) before data analysis, (2) during analysis, and (3) after analysis. We discuss practical tips that anyone can use, as well as more advanced tools for those who would like to move beyond basic requirements (Table 1). Most readers will recognize that reproducible research largely consists of widely accepted best practices for scientific research and that striving to meet a reasonable benchmark of reproducibility is both more valuable and more attainable than researchers may think. Reproducibility starts in the planning stage, with sound data management practices. It does not arise simply from sharing data and code online after a project is done. It is difficult to reproduce research when data are disorganized or missing, or when it is impossible to determine where or how data originated. First, data should be backed up at every stage of the research process and stored in multiple locations. This includes raw data (e.g., physical data sheets or initial spreadsheets), clean analysis-ready data (i.e., final data sets), and steps in between. Because it is entirely possible that researchers unintentionally alter or corrupt data while cleaning it up, raw data should always be kept as a backup. It is good practice to scan and save data sheets or laboratory notebook pages associated with a data set to ensure that these are kept paired with the digital data set. Ideally, different copies should be stored in different locations and using different storage media (e.g., paper copies and an external hard drive and cloud storage) to minimize risk of data loss from any single cause. Computers crash, hard drives are misplaced and stolen, and servers are hacked—researchers should not leave themselves vulnerable to those events. Digital data files should be stored in useful, flexible, portable, nonproprietary formats. 
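Backups are only useful if the copies stay identical to the original raw data. One simple safeguard is to record a checksum when raw data are archived, so any copy can later be verified bit-for-bit. A standard-library sketch (the file names and toy data are invented for illustration):

```python
import hashlib
from pathlib import Path

def checksum(path, algorithm="sha256"):
    """Return the hex digest of a file so copies can be verified against the original."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as fh:
        # Read in chunks so large data files do not need to fit in memory.
        for chunk in iter(lambda: fh.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Example: write a small raw data file, then confirm a backup copy is identical.
raw = Path("raw_data.csv")
raw.write_text("site,count\nA,10\nB,12\n")
backup = Path("raw_data_backup.csv")
backup.write_text(raw.read_text())
assert checksum(raw) == checksum(backup)  # identical bytes give identical digests
```

Storing the digest alongside each archived copy makes silent corruption or accidental edits detectable years later.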
Storing data digitally in a “flat” file format is almost always a good idea. Flat file formats are those that store data as plain text with one record per line (e.g., .csv or .txt files) and are the most portable formats across platforms, as they can be opened by anyone without proprietary software programs. For more complex data types, multi-dimensional relational formats such as JSON, HDF5, or other discipline-specific formats (e.g., BIOM and EML) may be appropriate. However, the complexity of these formats makes them difficult for many researchers to access and use appropriately, so it is best to stick with simpler file formats when possible. It is often useful to transform data into a “tidy” format (Wickham 2014) when cleaning up and standardizing raw data. Tidy data are in long format (i.e., variables in columns, observations in rows), have consistent data structure (e.g., character data are not mixed with numeric data for a single variable), and have informative and appropriately formatted headers (e.g., reasonably short variable names that do not include problematic characters like spaces, commas, and parentheses). Data in this format are easy to manipulate, model, and visualize during analysis. Metadata explaining what was done to clean up the data and what each of the variables means should be stored along with the data. Data are useless unless they can be interpreted (Roche et al. 2015), and metadata is how we maximize data interpretability across potential users. At a minimum, all data sets should include informative metadata that explains how and why data were collected, what variable names mean, whether a variable consists of raw or transformed data, and how observations are coded. Metadata should be placed in a sensible location that pairs it with the data set it describes. 
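The wide-to-long reshaping behind tidy data can be done with nothing more than the standard library. A sketch with an invented field table, where each year was recorded as its own column and each (site, year) observation becomes one row:

```python
import csv
import io

# A small wide-format table: one column per year, as often recorded in the field.
wide = """site,2019,2020
A,10,12
B,7,9
"""

# Reshape to tidy/long format: variables in columns, one observation per row.
tidy_rows = []
for row in csv.DictReader(io.StringIO(wide)):
    for year in ("2019", "2020"):
        tidy_rows.append({"site": row["site"], "year": int(year), "count": int(row[year])})

print(tidy_rows[0])  # {'site': 'A', 'year': 2019, 'count': 10}
```

The tidy version has one row per observation and consistent types in each column, which is the shape that modeling and plotting tools expect.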
A few rows of metadata above a table of observations within the same file may work in some cases, or a paired text file can be included in the same directory as the data if the metadata must be more detailed. In the latter case, it is best to stick with a simple .txt file for metadata to maximize portability. Finally, researchers should organize files in a sensible, user-friendly structure and make sure that all files have informative names. It should be easy to tell what is in a file or directory from its name, and a consistent naming protocol (e.g., ending the filename with the date created or version number) provides even more information when searching through files in a directory. A consistent naming protocol for both directories and files also makes coding simpler by placing data, analyses, and products in logical locations with logical names. It is often more useful to organize files in small blocks of similar files, rather than having one large directory full of hundreds of files. For example, Noble (2009) suggests organizing computational projects within a main directory for each project, with sub-directories for the manuscript (doc/), data files (data/), analyses (scripts/ or src/), and analysis products (results/) within that directory. While this specific organization scheme may differ for other types of research, keeping all of the research products and documentation for a given project organized in this way makes it much easier to find everything at all stages of the research process and to archive it or share it with others once the project is finished. Throughout the research process, from data acquisition to publication, version control can be used to record a project's history and provide a log of changes that have occurred over the life of a project or research group. 
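Returning to file organization: a directory scheme like the one Noble (2009) suggests can be generated once and reused for every new project. A minimal sketch (the project name is hypothetical, and the top-level README is our own addition):

```python
from pathlib import Path

def make_project(name, subdirs=("doc", "data", "scripts", "results")):
    """Create a project directory with the sub-directories suggested by Noble (2009)."""
    root = Path(name)
    for sub in subdirs:
        (root / sub).mkdir(parents=True, exist_ok=True)
    # A README at the top level documents what the project contains.
    (root / "README.txt").write_text(f"Project: {name}\nCreated with a standard layout.\n")
    return sorted(p.name for p in root.iterdir())

print(make_project("bat_energetics"))  # ['README.txt', 'data', 'doc', 'results', 'scripts']
```

Because every project shares the same layout, scripts can refer to `data/` and `results/` with relative paths and remain portable between projects and collaborators.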
Version control systems record changes to a file or set of files over time so that you can recall specific versions later, compare differences between versions of files, and even revert files back to previous states in the event of mistakes. Many researchers use version control systems to track changes in code and documents over time. The most popular version control system is Git, which is often used via hosting services such as GitHub, GitLab, and BitBucket (Table 1). These systems are relatively easy to set up and use, and they systematically store snapshots of data, code, and accompanying files throughout the duration of a project. Version control also enables a specific snapshot of data or code to be easily shared, so that code used for analyses at a specific point in time (e.g., when a manuscript is submitted) can be documented, even if that code is later updated. When possible, all data wrangling and analysis should be performed using coding scripts—as opposed to using interactive or point-and-click tools—so that every step is documented and repeatable by yourself and others. Code both performs operations on data and serves as a log of analytical activities. Because of this second function, code (unlike point-and-click programs) is inherently reproducible. Most errors are unintentional mistakes made during data wrangling or analysis, so having a record of these steps ensures that analyses can be checked for errors and are repeatable on future data sets. If operations are not possible to script, then they should be well-documented in a log file that is kept in the appropriate directory. Analytical code should be thoroughly annotated with comments. Comments embedded within code serve as metadata for that code, substantially increasing its usefulness. Comments should contain enough information for an informed stranger to easily understand what the code does, but not so much that sorting through comments is a chore. 
Code comments can be tested for this balance by a friend who is knowledgeable about the general area of research but is not a project collaborator. In most scripting languages, the first few lines of a script should include a description of what the script does and who wrote it, followed by small blocks that import data, packages, and external functions. Data cleaning and analytical code then follows those sections, and sections are demarcated using a consistent protocol and sufficient comments to explain what function each section of code performs. Following a clean, consistent coding style makes code easier to read. Many well-known organizations (e.g., RStudio, Google) offer style guidelines for software code that were developed by many expert coders. Researchers should take advantage of these while keeping in mind that all style guides are subjective to some extent. Researchers should work to develop a style that works for them. This includes using a consistent naming convention (e.g., camelCase or snake_case) to name objects and embedding meaningful information in object names (e.g., using “_mat” as a suffix for objects to denote matrices or “_df” to denote data frames). Code should also be written in relatively short lines and grouped into blocks, as our brains process narrow columns of data more easily than longer ones (Martin 2009). Blocks of code also keep related tasks together and can function like paragraphs to make code more comprehensible. There are several ways to prevent coding mistakes and make code easier to use. First, researchers should automate repetitive tasks. For example, if a set of analysis steps are being used repeatedly, those steps can be saved as a function and loaded at the top of the script. This reduces the size of a script and eliminates the possibility of accidentally altering some part of a function so that it works differently in different locations within a script. 
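As a small illustration of these conventions, here is an annotated, reusable function with snake_case names and units embedded in the names; the function and example values are our own, not from the paper:

```python
def mean_body_mass_g(mass_list_g):
    """Return mean body mass in grams.

    Parameters: mass_list_g, a list of individual body masses (grams).
    Kept as a single named, documented function so the same calculation
    is not re-typed (and possibly mistyped) at several points in a script.
    """
    if not mass_list_g:
        raise ValueError("mass_list_g is empty")
    return sum(mass_list_g) / len(mass_list_g)

# snake_case names with units embedded make intent clear at the call site:
site_a_masses_g = [12.1, 11.8, 12.6]
print(round(mean_body_mass_g(site_a_masses_g), 2))  # 12.17
```

Loading a function like this at the top of a script (or from a shared module) means that fixing a bug or changing the calculation happens in exactly one place.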
Similarly, researchers can use loops to make code more efficient by performing the same task on multiple values or objects in series (though it is also important to note that nesting too many loops inside one another can quickly make code incomprehensible). A third way to reduce mistakes is to reduce the number of hard-coded values that must be changed to replicate analyses on an updated or new data set. It is often best to read in the data file(s) and assign parameter values at the beginning of a script, so that those variables can then be used throughout the rest of the script. When operating on new data, these variables can then be changed once at the beginning of a script rather than multiple times in locations littered throughout the script. Because incompatibility between operating systems or program versions can inhibit the reproducibility of research, the current gold standard for ensuring that analyses can be used in the future is to create a software container, such as a Docker (Merkel 2014) or Singularity (Kurtzer et al. 2017) image (Table 1). Containers are standalone, portable environments that contain the entire computing environment used in an analysis: software, all of its dependencies, libraries, binaries, and configuration files, all bundled into one package. Containers can then be archived or shared, allowing them to be used in the future, even as packages, functions, or libraries change over time. If creating a software container is infeasible or a larger step than researchers are willing to take, it is important to thoroughly report all software packages used, including version numbers. After the steps above have been followed, it is time for the step most people associate with reproducible research: sharing research with others. As should be clear by now, sharing the data and code is far from the only component of reproducible research; however, once Steps 1 and 2 above are followed, it becomes the easiest step. 
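The loop-plus-parameters pattern described above might look like the following sketch, where file names and the threshold are placeholders assigned once at the top (the toy input files are created inline so the example is self-contained):

```python
import csv
from pathlib import Path

# --- Parameters: change these once here, not throughout the script ---
INPUT_FILES = ["site_a.csv", "site_b.csv"]  # placeholder file names
MIN_COUNT = 5                               # detection threshold

# Create toy input files so the sketch runs on its own.
Path("site_a.csv").write_text("species,count\nbat,8\nmouse,3\n")
Path("site_b.csv").write_text("species,count\nbat,2\nmouse,11\n")

# --- Analysis: the same task applied to each file in a loop ---
kept = {}
for fname in INPUT_FILES:
    with open(fname, newline="") as fh:
        rows = [r for r in csv.DictReader(fh) if int(r["count"]) >= MIN_COUNT]
    kept[fname] = [r["species"] for r in rows]

print(kept)  # {'site_a.csv': ['bat'], 'site_b.csv': ['mouse']}
```

Re-running the analysis on a new field season then means editing only the two lines under "Parameters", rather than hunting for hard-coded values scattered through the script.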
All input data, scripts, program versions, parameters, and important intermediate results should be made publicly and easily accessible. Various solutions are now available to make data sharing convenient, standardized, and accessible in a variety of research areas. There are many ways to do this, several of which are described below. Just as it is better to use scripts than interactive tools in analysis, it is better to produce tables and figures directly from code than to manipulate these using Adobe Illustrator, Microsoft PowerPoint, or other image editing programs. A large number of errors in finished manuscripts come from not remembering to change all relevant numbers or figures when a part of an analysis changes, and this task can be incredibly time-consuming when revising a manuscript. Truly reproducible figures and tables are created directly with code and integrated into documents in a way that allows automatic updating when analyses are re-run, creating a “dynamic” document. For example, documents written in LaTeX and markdown incorporate figures directly from a directory, so a figure will be updated in the document when the figure is updated in the directory (see Xie 2015 for a much lengthier discussion of dynamic documents). Both LaTeX and markdown can also be used to create presentations that can incorporate live-updated figures when code or data change, so that presentations can be reproducible as well. If using one of these tools is too large a leap, then simply producing figures directly from code—instead of adding annotations and arranging panels post hoc—can make a substantial difference in increasing the reproducibility of these products. Beyond creating dynamic documents, it is possible to make data wrangling, analysis, and creation of figures, tables, and manuscripts a “one-button” process using GNU Make (https://www.gnu.org/software/make/). 
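Even without LaTeX or markdown tooling, the dynamic-document idea can be approximated by having the analysis script itself write out the numbers and tables destined for the manuscript, so re-running the analysis refreshes the document fragment automatically. A standard-library sketch (the values and the `results_table.md` file name are invented):

```python
from pathlib import Path

# Analysis results computed earlier in the script (toy values).
counts = {"bat": 8, "mouse": 11}
total = sum(counts.values())

# Write a markdown table fragment; the manuscript includes this file,
# so the table updates whenever the analysis is re-run.
lines = ["| species | count |", "| --- | --- |"]
lines += [f"| {sp} | {n} |" for sp, n in counts.items()]
lines.append(f"| total | {total} |")
Path("results_table.md").write_text("\n".join(lines) + "\n")

print(lines[-1])  # | total | 19 |
```

Because no number in the table was typed by hand, a change upstream in the data or analysis cannot leave a stale value behind in the manuscript.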
GNU Make is a simple, yet powerful tool that can be used to coordinate and automate command-line processes, such as a series of independent scripts. For example, a Makefile can be written that will take the input data, clean and manipulate it, analyze it, produce figures and tables with results, and update a LaTeX or markdown manuscript document with those figures, tables, and any numbers included in the results. Setting up research projects to run in this way takes some time, but it can substantially expedite re-analyses and reduce copy-paste errors in manuscripts. Currently, code and data that can be used to replicate research are often found in the supplementary material of journal articles. Some journals (e.g., eLife) are even experimenting with embedding data and code in articles themselves. However, this is not a fail-safe method of archiving data and analyses. Supplementary materials can be lost if a journal switches publishers or when a publisher changes its website. In addition, research is only reproducible if it can be accessed, and many papers are published in journals that are locked behind paywalls that make them inaccessible to many researchers (Desjardins-Proulx et al. 2013, McKiernan et al. 2016, Alston 2019). To increase access to publications, authors can post preprints of final (but preacceptance) versions of manuscripts on a preprint server, or postprints of manuscripts on postprint servers. There are several widely used preprint servers (see Table 1 for three examples), and libraries at many research institutions host postprint servers. Similarly, data and code shared on personal websites are only available as long as websites are maintained and can be difficult to transfer when researchers migrate to another domain or website provider. Materials archived on personal websites are also often difficult for other scientists to find, as they are not usually linked to the published research and lack a permanent digital object identifier (DOI). 
To make research accessible to everyone, it is therefore better to use tools like data and code repositories than personal websites. Data archiving in online repositories has become more popular in recent years, a trend resulting from a combination of improvements in technology for sharing data, an increase in omics-scale data sets, and an increasing number of publisher and funding organizations who encourage or mandate data archiving (Whitlock et al. 2010, Whitlock 2011, Nosek et al. 2015). Data repositories are large databases that collect, manage, and store data sets for analysis, sharing, and reporting. Repositories may be either subject- or data-type-specific, or cross-disciplinary general repositories that accept multiple data types. Some are free, and others require a fee for depositing data. Journals often recommend appropriate repositories on their websites, and these recommendations should be consulted when submitting a manuscript. Three commonly used general purpose repositories are Dryad, Zenodo, and Figshare; each of these creates a DOI that allows data and code to be citable by others. Before choosing a repository, researchers should explore commonly used options in their specific fields of research. When data, code, software, and products of a research project are archived together, these are termed a “research compendium” (Gentleman and Lang 2007). Research compendia are increasingly common, although standards for what is included in research compendia differ between scientific fields. They provide a standardized and easily recognizable way to organize the digital materials of a research project, which enables other researchers to inspect, reproduce, and extend research (Marwick et al. 2018). In particular, the Open Science Framework (OSF; http://osf.io/) is a project management repository that goes beyond the repository features of Dryad, Zenodo, and Figshare to integrate and share components of a research project using collaborative tools. 
The goal of the OSF is to enable research to be shared at every step of the scientific process—from developing a research idea and designing a study, to storing and analyzing collected data and writing and publishing reports or papers (Sullivan et al. 2019). Open Science Framework is integrated with many other reproducible research tools, including widely used preprint servers, version control software, and publishers. While many researchers associate reproducible research primarily with a set of advanced tools for sharing research, reproducibility is just as much about simple work habits as the tools used to share data and code. We ourselves are not perfect reproducible researchers—we do not use all the tools mentioned in this commentary all the time and often fail to follow our own advice (almost always to our regret). Nevertheless, we recognize that reproducible research is a process rather than a destination and work hard to consistently increase the reproducibility of our work. We encourage others to do the same. Researchers can make strides toward a more reproducible research process by simply thinking carefully about data management and organization, coding practices, and processes for making figures and tables (Fig. 1). Time and expertise must be invested in learning and adopting these tools and tips, and this investment can be substantial. Nevertheless, we encourage our fellow researchers to work toward more open and reproducible research practices so we can all enjoy the resulting improvements in work habits, collaboration, scientific rigor, and trust in science. Many thanks to J.G. Harrison, B.J. Rick, A.L. Lewanski, E.A. Johnson, and F.S. Dobson for providing helpful comments on prepublication versions of this manuscript and to C.A. Buerkle for inspiring this project during his Computational Biology course at the University of Wyoming. - Alston, J. M., & Rick, J. A. (2021).
A Beginner’s Guide to Conducting Reproducible Research
. Bulletin of the Ecological Society of America, 102(2), 1-14. doi:10.32942/osf.io/h5r6n
Reproducible research is widely acknowledged as an important tool for improving science and reducing harm from the "replication crisis", yet research in most fields within biology remains largely irreproducible. In this article, we make the case for why all research should be reproducible, explain why research is often not reproducible, and offer a simple framework that researchers can use to make their research more reproducible. Researchers can increase the reproducibility of their work by improving data management practices, writing more readable code, and increasing use of the many available platforms for sharing data and code. While reproducible research is often associated with a set of advanced tools for sharing data and code, reproducibility is just as much about maintaining work habits that are already widely acknowledged as best practices for research. Increasing reproducibility will increase rigor, trustworthiness, and transparency while benefiting both practitioners of reproducible research and their fellow researchers.
- Cove, M. V., Kays, R., Bontrager, H., Bresnan, C., Lasky, M., Frerichs, T., Klann, R., Lee, T. E., Crockett, S. C., Crupi, A. P., Weiss, K. C., Rowe, H., Sprague, T., Schipper, J., Tellez, C., Lepczyk, C. A., Fantle-Lepczyk, J. E., LaPoint, S., Williamson, J., Fisher-Reid, M. C., et al. (2021). SNAPSHOT USA 2019: a coordinated national camera trap survey of the United States. Ecology, 102(6), e03353. doi:10.1002/ecy.3353. With the accelerating pace of global change, it is imperative that we obtain rapid inventories of the status and distribution of wildlife for ecological inferences and conservation planning. To address this challenge, we launched the SNAPSHOT USA project, a collaborative survey of terrestrial wildlife populations using camera traps across the United States. For our first annual survey, we compiled data across all 50 states during a 14-week period (17 August–24 November of 2019). We sampled wildlife at 1,509 camera trap sites from 110 camera trap arrays covering 12 different ecoregions across four development zones. This effort resulted in 166,036 unique detections of 83 species of mammals and 17 species of birds. All images were processed through the Smithsonian's eMammal camera trap data repository and included an expert review phase to ensure taxonomic accuracy of the data, resulting in each picture being reviewed at least twice. The results represent a timely and standardized camera trap survey of the United States. All of the 2019 survey data are made available herein. We are currently repeating surveys in fall 2020, opening up the opportunity to other institutions and cooperators to expand coverage of all the urban–wild gradients and ecophysiographic regions of the country. Future data will be available as the database is updated at eMammal.si.edu/snapshot-usa, as will future data paper submissions.
These data will be useful for local and macroecological research including the examination of community assembly, effects of environmental and anthropogenic landscape variables, effects of fragmentation and extinction debt dynamics, as well as species-specific population dynamics and conservation action plans. There are no copyright restrictions; please cite this paper when using the data for publication.
- Wells, H. B., Crego, R. D., Opedal, Ø. H., Khasoha, L. M., Alston, J. M., Reed, C. G., Weiner, S., Kurukura, S., Hassan, A. A., Namoni, M., Ekadeli, J., Kimuyu, D. M., Young, T. P., Kartzinel, T. R., Palmer, T. M., Pringle, R. M., & Goheen, J. R. (2021). Experimental evidence that effects of megaherbivores on mesoherbivore space use are influenced by species' traits. Journal of Animal Ecology, 90(11), 2510-2522. doi:10.1111/1365-2656.13565. The extinction of 80% of megaherbivore (>1,000 kg) species towards the end of the Pleistocene altered vegetation structure, fire dynamics, and nutrient cycling worldwide. Ecologists have proposed (re)introducing megaherbivores or their ecological analogues to restore lost ecosystem functions and reinforce extant but declining megaherbivore populations. However, the effects of megaherbivores on smaller herbivores are poorly understood. We used long-term exclusion experiments and multispecies hierarchical models fitted to dung counts to test (a) the effect of megaherbivores (elephant and giraffe) on the occurrence (dung presence) and use intensity (dung pile density) of mesoherbivores (2–1,000 kg), and (b) the extent to which the response of each mesoherbivore species was predictable based on its traits (diet and shoulder height) and phylogenetic relatedness. Megaherbivores increased the predicted occurrence and use intensity of zebras but reduced the occurrence and use intensity of several other mesoherbivore species. The negative effect of megaherbivores on mesoherbivore occurrence was stronger for shorter species, regardless of diet or relatedness. Megaherbivores substantially reduced the expected total use intensity (i.e. cumulative dung density of all species) of mesoherbivores, but only minimally reduced the expected species richness (i.e. cumulative predicted occurrence probabilities of all species) of mesoherbivores.
- Alston, J. M., Joyce, M. J., Merkle, J. A., & Moen, R. A. (2020). Temperature shapes movement and habitat selection by a heat-sensitive ungulate. Landscape Ecology, 35(9), 1961-1973. doi:10.1007/s10980-020-01072-y. Context: Warmer weather caused by climate change poses increasingly serious threats to the persistence of many species, but animals can modify behavior to mitigate at least some of the threats posed by warmer temperatures. Identifying and characterizing how animals modify behavior to avoid the negative consequences of acute heat will be crucial for understanding how animals will respond to warmer temperatures in the future. Objectives: We studied the extent to which moose (Alces alces), a species known to be sensitive to heat, mitigate heat on hot summer days via multiple different behaviors: (1) reduced movement, (2) increased visitation to shade, (3) increased visitation to water, or (4) a combination of these behaviors. Methods: We used GPS telemetry and a step-selection function to analyze movement and habitat selection by moose in northeastern Minnesota, USA. Results: Moose reduced movement, used areas of the landscape with more shade, and traveled nearer to mixed forests and bogs during periods of heat. Moose used shade far more than water to ameliorate heat, and the most pronounced changes in behavior occurred between 15 and 20 °C. Conclusions: Research characterizing the behaviors animals use to facilitate thermoregulation will aid conservation of heat-sensitive species in a warming world. The modeling framework presented in this study is a promising method for evaluating the influence of temperature on movement and habitat selection.
- Alston, J. M. (2019). Open access principles and practices benefit conservation. Conservation Letters, 12(6), e12672. doi:10.1111/conl.12672. Open access is often contentious in the scientific community, but its implications for conservation are under-discussed or omitted entirely from scientific discourse. Access to literature is a key factor impeding implementation of conservation research, and many open access models and concepts that are little known by most conservation researchers may facilitate implementation. Conservation professionals working outside academic institutions should have more access to research so that conservation is better supported by current science. In this perspective, I present elements missing from current discussions of open access and suggest potential pathways for journal publishers and researchers to make conservation publications more open. There are many promising avenues for open access to play a larger role in conservation research, including archiving pre-prints and post-prints, more permissive "green" open access policies, and increasing access to older articles. Collectively supporting open access practices will benefit our profession and the species we are working to protect.
- Alston, J. M., Abernethy, I. M., Keinath, D. A., & Goheen, J. R. (2019). Roost selection by male northern long-eared bats (Myotis septentrionalis) in a managed fire-adapted forest. Forest Ecology and Management, 446, 251-256. doi:10.1016/j.foreco.2019.05.034. Wildlife conservation in multi-use landscapes requires identifying and conserving critical resources that may otherwise be destroyed or degraded by human activity. Summer day-roost sites are critical resources for bats, so conserving roost sites is a focus of many bat conservation plans. Studies quantifying day-roost characteristics typically focus on female bats due to their much stronger influence on reproductive success, but large areas of species' ranges can be occupied predominantly by male bats due to sexual segregation. We used VHF telemetry to identify and characterize summer day-roost selection by male northern long-eared bats (Myotis septentrionalis) in a ponderosa pine (Pinus ponderosa) forest in South Dakota, USA. We tracked 18 bats to 43 tree roosts and used an information-theoretic approach to determine the relative importance of tree- and plot-level characteristics for roost site selection. Bats selected roost trees that were larger in diameter, more decayed, and under denser canopy than other trees available on the landscape. Much as studies of female northern long-eared bats have shown, protecting large-diameter snags within intact forest is important for the conservation of male northern long-eared bats. Unlike female-specific studies, however, many roosts in our study (39.5%) were located in short (≤3 m) snags. Protecting short snags may be a low-risk, high-reward strategy for conservation of resources important to male northern long-eared bats. Other tree-roosting bat species in fire-prone forests may benefit from forest management practices that promote these tree characteristics, particularly in high-elevation areas where populations largely consist of males.
- Alston, J. M., Maitland, B. M., Brito, B. T., Esmaeili, S., Ford, A. T., Hays, B., Jesmer, B. R., Molina, F. J., & Goheen, J. R. (2019). Reciprocity in restoration ecology: When might large carnivore reintroduction restore ecosystems? Biological Conservation, 234, 82-89.
- Alston, J. M., Millard, J. E., Rick, J. A., Husby, B. W., & Mundy, L. A. (2017). Observations of notable parental behaviours of northern spotted owls (Strix occidentalis caurina). Canadian Field-Naturalist, 131(3), 225-227.
Presentations
- Alston, J. M. (2025). Wildlife-livestock coexistence in East African savannas. Departmental seminar. Las Cruces, NM: Department of Fish, Wildlife, and Conservation Ecology, New Mexico State University.
- Alston, J. M., Mercer, M. M., Mollohan, C., Baldwin, K., & LeCount, A. (2025). Why did the bobcat cross the road? Urban bobcat behavior and roadkill mitigation strategies. Joint Annual Meeting of the Arizona and New Mexico Chapters of The Wildlife Society and the American Fisheries Society. Albuquerque, NM.
- Alston, J. (2024). Roosting behavior by male northern long-eared bats and fringed myotis in the Black Hills. Black Hills National Forest, US Forest Service.
- Alston, J. M. (2022). Large mammals as model organisms: movement and ecosystem management. Departmental seminar. Tucson, AZ: School of Natural Resources and the Environment, University of Arizona.
- Alston, J. M. (2022). Environmental drivers of body size in bats. Mississippi Bat Working Group Annual Meeting. Virtual: Mississippi Bat Working Group.
- Alston, J. M. (2022). Linking temperature to habitat selection in wildlife. Arizona Game and Fish Department Monthly Research Seminar. Virtual: Arizona Game and Fish Department.
- Alston, J. M. (2022). New analytical tools for studying habitat selection in terrestrial mammals. American Society of Mammalogists Annual Meeting. Tucson, AZ: American Society of Mammalogists.
- Alston, J. M. (2022). Wildlife-livestock coexistence in Laikipia, Kenya. Departmental seminar. Tucson, AZ: School of Natural Resources and the Environment, University of Arizona.
