Tools to Process the Munich ChronoType Questionnaire (MCTQ)
A complete toolkit to process the Munich ChronoType Questionnaire (MCTQ) for its three versions (standard, micro, and shift). MCTQ is a quantitative and validated tool to assess chronotypes using people's sleep behavior, originally presented by Till Roenneberg, Anna Wirz-Justice, and Martha Merrow (2003, doi:10.1177/0748730402239679).
Standardize Dates in Different Formats or with Missing Data
There are many different formats dates are commonly represented with: the order of day, month, and year can differ, different separators ("-", "/", or whitespace) can be used, months can be numbers, names, or abbreviations, and years can be given as two digits or four. datefixR takes dates in all these formats and converts them to R's built-in date class. If datefixR cannot standardize a date, for example because it is too malformed, the user is told which date could not be standardized and the ID of the corresponding row. datefixR also allows the imputation of missing days and months with user-controlled behavior.
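A minimal sketch of this workflow, assuming the `fix_date_char()` helper from current datefixR releases (dates are interpreted day-first by default):

```r
library(datefixR)

# Dates in several common formats, all standardized to R's Date class
fix_date_char("02/05/1992")
fix_date_char("1996-05-01")
fix_date_char("02 March 1992")

# Impute a missing day with user-controlled behavior (here: the 1st)
fix_date_char("July 2020", day.impute = 1)
```

For whole data frames, the package also offers a data-frame-level equivalent that reports the row ID of any date it cannot standardize.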
Read and Write ODS Files
Reads ODS (OpenDocument Spreadsheet) files into R as data frames. Also supports writing data frames to ODS files.
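A round trip with the package's `read_ods()`/`write_ods()` pair, using a temporary file for illustration:

```r
library(readODS)

df <- data.frame(id = 1:3, label = c("a", "b", "c"))
path <- tempfile(fileext = ".ods")

write_ods(df, path)      # write a data frame to an ODS file
back <- read_ods(path)   # read the first sheet back as a data frame
```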
Data Quality Reporting for Temporal Datasets
Generate reports that enable quick visual review of temporal shifts in record-level data. Time series plots showing aggregated values are automatically created for each data field (column) depending on its contents (e.g. min/max/mean values for numeric data, no. of distinct values for categorical data), as well as overviews for missing values, non-conformant values, and duplicated rows. The resulting reports are shareable and can contribute to forming a transparent record of the entire analysis process. It is designed with Electronic Health Records in mind, but can be used for any type of record-level temporal data (i.e. tabular data where each row represents a single “event”, one column contains the “event date”, and other columns contain any associated values for the event).
Phylogenetic Reconstruction and Time-dating
The phruta R package is designed to simplify the basic phylogenetic pipeline. Specifically, all code is run within the same program and data from intermediate steps are saved in independent folders. Furthermore, all code is run within the same environment which increases the reproducibility of your analysis. phruta retrieves gene sequences, combines newly downloaded and local gene sequences, and performs sequence alignments.
Manage Data from Cardiopulmonary Exercise Testing
Import, process, summarize and visualize raw data from metabolic carts. See Robergs, Dwyer, and Astorino (2010) doi:10.2165/11319670-000000000-00000 for more details on data processing.
Wrangle, Analyze, and Visualize Animal Movement Data
Tools to import, clean, and visualize movement data, particularly from motion capture systems such as Optitrack's Motive, the Straw Lab's Flydra, or from other sources. We provide functions to remove artifacts, standardize tunnel position and tunnel axes, select a region of interest, isolate specific trajectories, fill gaps in trajectory data, and calculate 3D and per-axis velocity. For experiments of visual guidance, we also provide functions that use subject position to estimate perception of visual stimuli.
Interface to Phylocom
Interface to Phylocom (https://phylodiversity.net/phylocom/), a library for analysis of phylogenetic community structure and character evolution. Includes low level methods for interacting with the three executables, as well as higher level interfaces for methods like aot, ecovolve, bladj, phylomatic, and more.
Create and Query a Local Copy of GenBank in R
Download large sections of GenBank https://www.ncbi.nlm.nih.gov/genbank/ and generate a local SQL-based database. A user can then query this database using restez functions or through rentrez https://CRAN.R-project.org/package=rentrez wrappers.
Automated Cleaning of Occurrence Records from Biological Collections
Automated flagging of common spatial and temporal errors in biological and paleontological collection data, for use in conservation, ecology, and paleontology. Includes automated tests to easily flag (and exclude) records assigned to country or province centroids, the open ocean, the headquarters of the Global Biodiversity Information Facility, urban areas, or the locations of biodiversity institutions (museums, zoos, botanical gardens, universities). It also identifies per-species outlier coordinates, zero coordinates, identical latitude/longitude pairs, and invalid coordinates, and implements an algorithm to identify data sets with a significant proportion of rounded coordinates. Especially suited for large data sets. The reference for the methodology is Zizka et al. (2019) doi:10.1111/2041-210X.13152.
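A small sketch of the flagging workflow, assuming the `clean_coordinates()` wrapper and its lowercase default column names (both hedged — check the package reference for the column names your version expects):

```r
library(CoordinateCleaner)

# Hypothetical occurrence records; the second row has the suspicious (0, 0) point
occ <- data.frame(species = c("Puma concolor", "Puma concolor"),
                  decimallongitude = c(-65.4, 0),
                  decimallatitude  = c(-10.3, 0))

# Run a subset of the automated tests (zero and identical coordinates)
flags <- clean_coordinates(occ,
                           lon   = "decimallongitude",
                           lat   = "decimallatitude",
                           tests = c("equal", "zeros"))

flags$.summary   # FALSE marks a flagged (potentially problematic) record
```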
Read Spectrometric Data and Metadata
Parse various reflectance/transmittance/absorbance spectra file formats to extract spectral data and metadata, as described in Gruson, White & Maia (2019) doi:10.21105/joss.01857. Among other formats, it can import files from Avantes https://www.avantes.com/, CRAIC https://www.microspectra.com/, and OceanInsight (formerly OceanOptics) https://www.oceaninsight.com/ brands.
Base Classes and Functions for Phylogenetic Tree Input and Output
treeio is an R package that makes it easier to import and store phylogenetic trees with associated data, and to link external data from different sources to a phylogeny. It also supports exporting phylogenetic trees with heterogeneous associated data to a single tree file and can serve as a platform for merging trees with associated data and converting file formats.
Parsing GenBank files into semantically useful objects
Export Data Frames to Excel xlsx Format
Zero-dependency data frame to xlsx exporter based on libxlsxwriter. Fast and no Java or Excel required.
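A one-call example with `write_xlsx()`, where each element of a named list becomes a worksheet:

```r
library(writexl)

path <- tempfile(fileext = ".xlsx")

# Each list element becomes a named sheet in the workbook
write_xlsx(list(iris = head(iris), cars = head(mtcars)), path)

file.exists(path)
```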
High Performance CommonMark and Github Markdown Rendering in R
The CommonMark specification defines a rationalized version of markdown syntax. This package uses the cmark reference implementation for converting markdown text into various formats including html, latex and groff man. In addition it exposes the markdown parse tree in xml format. Also includes opt-in support for GFM extensions including tables, autolinks, and strikethrough text.
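For example, rendering markdown to HTML, opting in to a GFM extension, and inspecting the parse tree:

```r
library(commonmark)

markdown_html("Hello *world*")
#> "<p>Hello <em>world</em></p>\n"

# Opt in to a GFM extension, e.g. tables
markdown_html("A | B\n--|--\n1 | 2", extensions = "table")

# The markdown parse tree, exposed as XML
markdown_xml("Hello *world*")
```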
Polyhedra Database
A polyhedra database scraped from various sources, provided as R6 objects with rgl visualization capabilities.
Linguistic Phonetic Fieldwork Tools
There are many typical tasks that have to be solved during phonetic research and experiments. These include creating a presentation that contains all stimuli, renaming and concatenating multiple sound files recorded during a session, automatic annotation in Praat TextGrids (one of the sound annotation standards provided by the Praat software, see Boersma & Weenink 2020 https://www.fon.hum.uva.nl/praat/), creating an html table with annotations and spectrograms, and converting between multiple formats (Praat TextGrid, ELAN, EXMARaLDA, Audacity, subtitles .srt, and FLEx flextext). All of these tasks can be solved by a mixture of different tools (any programming language has programs for automatic renaming, and Praat contains scripts for concatenating and renaming files, etc.). phonfieldwork provides functionality that makes it easier to solve those tasks independently of any additional tools. You can also compare its functionality with other packages: rPraat https://CRAN.R-project.org/package=rPraat, textgRid https://CRAN.R-project.org/package=textgRid.
Checks for Exclusion Criteria in Online Data
Data that are collected through online sources such as Mechanical Turk may require excluding rows because of IP address duplication, geolocation, or completion duration. This package facilitates exclusion of these data for Qualtrics datasets.
Deterministic Categorization of Items Based on External Code Data
Fast categorization of items based on external code data identified by regular expressions. A typical use case considers patients with medically coded data, such as codes from the International Classification of Diseases (ICD) or the Anatomic Therapeutic Chemical (ATC) classification system. Functions of the package rely on a triad of objects: (1) case data with unit IDs and possible dates of interest; (2) external code data for corresponding units in (1), with optional dates of interest; and (3) a classification scheme (classcodes object) with regular expressions to identify and categorize relevant codes from (2). It is easy to introduce new classification schemes (classcodes objects) or to use default schemes included in the package. Use cases include patient categorization based on comorbidity indices such as Charlson, Elixhauser, RxRisk V, or the comorbidity-polypharmacy score (CPS), as well as adverse events after hip and knee replacement surgery.
Quantitative PCR Analysis with the Tidyverse
For reproducible quantitative PCR (qPCR) analysis building on packages from the ’tidyverse’, notably ’dplyr’ and ’ggplot2’. It normalizes (by ddCq), summarizes, and plots pre-calculated Cq data, and plots raw amplification and melt curves from Roche Lightcycler (tm) machines. It does NOT (yet) calculate Cq data from amplification curves.
Tools to Manipulate and Query Semantic Data
The Resource Description Framework (RDF) is a widely used data representation model that forms the cornerstone of the Semantic Web. RDF represents data as a graph rather than the familiar data table or rectangle of relational databases. The rdflib package provides a friendly and concise user interface for performing common tasks on RDF data, such as reading, writing, and converting between the various serializations of RDF data, including rdfxml, turtle, nquads, ntriples, and json-ld; creating new RDF graphs; and performing graph queries using SPARQL. This package wraps the low-level redland R package, which provides direct bindings to the redland C library. Additionally, the package supports the newer and more developer-friendly JSON-LD format through the jsonld package. The package interface takes inspiration from the Python rdflib library.
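A short sketch of the graph-building and SPARQL workflow, assuming the `rdf()`, `rdf_add()`, and `rdf_query()` interface (the IRIs are illustrative):

```r
library(rdflib)

g <- rdf()   # an empty RDF graph

# Add one triple; subject/predicate IRIs here are only for illustration
g <- rdf_add(g,
             subject   = "http://example.org/alice",
             predicate = "http://xmlns.com/foaf/0.1/name",
             object    = "Alice")

# Query with SPARQL; results come back as a data frame
res <- rdf_query(g,
  "SELECT ?name WHERE { ?s <http://xmlns.com/foaf/0.1/name> ?name }")
res
```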
Tools for Spell Checking in R
Spell checking common document formats including latex, markdown, manual pages, and description files. Includes utilities to automate checking of documentation and vignettes as a unit test during R CMD check. Both British and American English are supported out of the box and other languages can be added. In addition, packages may define a wordlist to allow custom terminology without having to abuse punctuation.
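For instance, checking a single markdown file with `spell_check_files()` (a temporary file here, for illustration; `spell_check_package()` covers a whole source package):

```r
library(spelling)

path <- tempfile(fileext = ".md")
writeLines("This sentnce contains a mispelled word.", path)

issues <- spell_check_files(path, lang = "en_US")
issues$word   # the unrecognized words found in the file
```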
Recodes Sex/Gender Descriptions into a Standard Set
Provides functions and dictionaries for recoding free-text gender responses into more consistent categories.
Store and Retrieve Data.frames in a Git Repository
The git2rdata package is an R package for writing and reading data frames as plain text files. A metadata file stores important information. 1) Storing metadata allows the classes of variables to be maintained. By default, git2rdata optimizes the data for file storage. The optimization is most effective on data containing factors. The optimization makes the data less human readable; the user can turn it off when they prefer a human-readable format over smaller files. Details on the implementation are available in vignette("plain_text", package = "git2rdata"). 2) Storing metadata also allows smaller row-based diffs between two consecutive commits. This is a useful feature when storing data as plain text files under version control. Details on this part of the implementation are available in vignette("version_control", package = "git2rdata"). Although we envisioned git2rdata with a git workflow in mind, you can use it in combination with other version control systems like Subversion or Mercurial. 3) git2rdata is a useful tool in a reproducible and traceable workflow. vignette("workflow", package = "git2rdata") gives a toy example. 4) vignette("efficiency", package = "git2rdata") provides some insight into the efficiency of file storage, git repository size, and speed for writing and reading.
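A minimal round trip, assuming the `write_vc()`/`read_vc()` pair (the `sorting` argument keeps row order stable across commits for small diffs):

```r
library(git2rdata)

root <- tempfile("git2rdata-demo")
dir.create(root)

# Write a data frame plus its metadata as plain text files
write_vc(iris[1:6, ], file = "iris_head", root = root,
         sorting = "Sepal.Length")

# Read it back, with variable classes restored from the metadata
back <- read_vc("iris_head", root = root)
```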
Client for jq, a JSON Processor
Client for jq, a JSON processor (https://stedolan.github.io/jq/), written in C. jq allows the following with JSON data: index into, parse, do calculations, cut up and filter, change key names and values, perform conditionals and comparisons, and more.
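A few of those operations via the package's `jq()` function, passing a jq program as a string:

```r
library(jqr)

jq('{"a": 7, "b": [1, 2, 3]}', '.a')         # index into an object
jq('[{"id": 1}, {"id": 2}]', '[.[] | .id]')  # map over an array
jq('{"price": 10}', '.price * 2')            # do calculations
```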
Lightweight Qualitative Coding
A free, lightweight, open source option for analyzing text-based qualitative data. Enables analysis of interview transcripts, observation notes, memos, and other sources. Supports the work of social scientists, historians, humanists, and other researchers who use qualitative methods. Addresses the unique challenges of qualitative data analysis. Provides opportunities for researchers who otherwise might not develop software to build software development skills.
Tree Biomass Estimation at Extra-Tropical Forest Plots
Standardize and simplify the tree biomass estimation process across globally distributed extratropical forests.
Manipulation of Matched Phylogenies and Data using data.table
An implementation that combines trait data and a phylogenetic tree (or trees) into a single object of class treedata.table. The resulting object can be easily manipulated to simultaneously change the trait- and tree-level sampling. Currently implemented functions allow users to use a data.table syntax when performing operations on the trait dataset within the treedata.table object.
Analysis of Work Loops and Other Data from Muscle Physiology Experiments
Functions for the import, transformation, and analysis of data from muscle physiology experiments. The work loop technique is used to evaluate the mechanical work and power output of muscle. Josephson (1985) doi:10.1242/jeb.114.1.493 modernized the technique for application in comparative biomechanics. Although our initial motivation was to provide functions to analyze work loop experiment data, as we developed the package we incorporated the ability to analyze data from experiments that are often complementary to work loops. There are currently three supported experiment types: work loops, simple twitches, and tetanus trials. Data can be imported directly from .ddf files or via an object constructor function. Through either method, data can then be cleaned or transformed via methods typically used in studies of muscle physiology. Data can then be analyzed to determine the timing and magnitude of force development and relaxation (for isometric trials) or the magnitude of work, net power, and instantaneous power among other things (for work loops). Although we do not provide plotting functions, all resultant objects are designed to be friendly to visualization via either base-R plotting or tidyverse functions. This package has been peer-reviewed by rOpenSci (v. 1.1.0).
Extensible Style-Sheet Language Transformations
An extension for the xml2 package to transform XML documents by applying an xslt style-sheet.
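A compact example, assuming `xml_xslt()` applied to xml2 document objects:

```r
library(xml2)
library(xslt)

doc <- read_xml("<greeting>hello</greeting>")

style <- read_xml('
  <xsl:stylesheet version="1.0"
                  xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="/greeting">
      <p><xsl:value-of select="."/></p>
    </xsl:template>
  </xsl:stylesheet>')

out <- xml_xslt(doc, style)   # the transformed document
out
```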
Ecological Metadata as Linked Data
This is a utility for transforming Ecological Metadata Language (EML) files into JSON-LD and back into EML. Doing so creates a list-based representation of EML in R, so that EML data can easily be manipulated using standard R tools. This makes this package an effective backend for other R-based tools working with EML. By abstracting away the complexity of XML Schema, developers can build around native R list objects and not have to worry about satisfying many of the additional constraints set by the schema (such as element ordering, which is handled automatically). Additionally, the JSON-LD representation enables the use of developer-friendly JSON parsing and serialization that may facilitate the use of EML in contexts outside of R, as well as informatics-friendly serializations such as RDF and SPARQL queries.
Parse a BibTeX File to a Data Frame
Parse a BibTeX file to a data.frame to make it accessible for further analysis and visualization.
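For example, parsing a small BibTeX file written to a temporary location (column names in the result are uppercase field names):

```r
library(bib2df)

path <- tempfile(fileext = ".bib")
writeLines(c("@article{smith2020,",
             "  title  = {An Example Title},",
             "  author = {Smith, Jane},",
             "  year   = {2020}",
             "}"), path)

refs <- bib2df(path)   # one row per BibTeX entry
refs$TITLE
```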
JSON for Linking Data
JSON-LD is a lightweight syntax for expressing linked data. It is primarily intended for web-based programming environments, interoperable web services, and storing linked data in JSON-based databases. This package provides bindings to the JavaScript library for converting, expanding, and compacting JSON-LD documents.
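For instance, expanding a document so that context terms become full IRIs, via `jsonld_expand()`:

```r
library(jsonld)

doc <- '{
  "@context": {"name": "http://schema.org/name"},
  "name": "Jane Doe"
}'

expanded <- jsonld_expand(doc)   # terms expanded to full IRIs
expanded
```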
Positron Emission Tomography Time-Activity Curve Analysis
To facilitate the analysis of positron emission tomography (PET) time activity curve (TAC) data, and to encourage open science and replicability, this package supports data loading and analysis of multiple TAC file formats. Functions are available to analyze loaded TAC data for individual participants or in batches. Major functionality includes weighted TAC merging by region of interest (ROI), calculating models including standardized uptake value ratio (SUVR) and distribution volume ratio (DVR, Logan et al. 1996 doi:10.1097/00004647-199609000-00008), basic plotting functions and calculation of cut-off values (Aizenstein et al. 2008 doi:10.1001/archneur.65.11.1509). Please see the walkthrough vignette for a detailed overview of tacmagic functions.
Conduct Co-Localization Analysis of Fluorescence Microscopy Images
Automate the co-localization analysis of fluorescence microscopy images: select regions of interest, extract pixel intensities from the image channels, and calculate different co-localization statistics. The methods implemented in this package are based on Dunn et al. (2011) doi:10.1152/ajpcell.00462.2010.
Dendrograms for Evolutionary Analysis
Contains functions for developing phylogenetic trees as deeply-nested lists (“dendrogram” objects). Enables bi-directional conversion between dendrogram and “phylo” objects (see Paradis et al (2004) doi:10.1093/bioinformatics/btg412), and features several tools for command-line tree manipulation and import/export via Newick parenthetic text.
Generate Starting Trees For Combined Molecular, Morphological and Stratigraphic Data
Combine a list of taxa with a phylogeny to generate a starting tree for use in total evidence dating analyses.