The AVERT instrument platform is a robust hardware/software system for the field deployment of a range of instrumentation for volcanological observation (e.g., seismometers, GNSS antennas/receivers, and cameras). This repository contains the source code for managing the individual nodes in a network: data acquisition and archival with custom drivers, system scheduling and power management, and data telemetry.
An alpha version of this system was deployed at Cleveland volcano, Alaska, where data were telemetered back via a satellite uplink (initially BGAN and later Starlink) to our data server at the Lamont-Doherty Earth Observatory in New York. The current iteration is in operation at Poás volcano, Costa Rica, where it is being used to manage a small network of sites in and around the active crater region.
The node management software provides utilities for:
- system-level configuration, such as network interfaces and systemd daemons for data acquisition and telemetry;
- data acquisition and archival via custom drivers for a variety of instruments;
- managing data telemetry;
- controlling auxiliary systems, such as power management via network-attached relay switches.
The data acquisition package requires a minimum Python version of 3.11—this is the default distribution installed with Debian 12 (Bookworm), which is the operating system used by the single-board computers (SBCs). Throughout, it is assumed that the SBC has already gone through the basic setup process required to be an AVERT node.
We use the open-source tool uv to create a lightweight virtual environment that isolates the (limited) project dependencies from global Python packages. For certain instruments, some additional packages must be installed globally and included in the virtual environment (primarily because these can be installed directly as binaries built for the target system architecture). This pre-installation is done using apt:
```
sudo apt install python3-numpy python3-pil
```

Install uv with:

```
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Create and activate a new virtual environment (we recommend the home directory); note that the venv Python version should match the system Python (3.11 on Debian 12) so that the globally installed packages are importable:

```
uv venv --system-site-packages --python=3.11 ~/.venv
source ~/.venv/bin/activate
```

The node management software has to be installed from source (i.e., this GitHub repository). First, clone the software. For the sake of neatness, we choose to store the source repository in the /opt directory, which requires some editing of permissions/ownership of the files. Then, having made sure your virtual environment is activated (see above), install using uv:
```
git clone https://github.com/AVERT-System/nodectl
sudo mv nodectl /opt/.
sudo chown -R user:avert /opt/nodectl
cd /opt/nodectl
uv pip install .
```

Note: This installation process will require an internet connection in order to download and install the package dependencies.
Once the package has been installed, you will have access to the nodectl command-line utility, which provides a number of services:
- `nodectl init`: used to initialise a new node. Creates a new node config file, or can be used to initialise a node with a pre-written config file in the AVERT platform format;
- `nodectl config`: used to manage an existing config. The config specifies some basic metadata about the node, such as its geographical location. Internally, we use TOML (Tom's Obvious Minimal Language), a standard, human-readable config file format used by Python, to structure our config files. Instruments attached to each node are managed separately, via the `components` entry point;
- `nodectl schedule`: used to manage data acquisition, with options for either continuous data acquisition (e.g., for instruments streaming continuously over serial connections) or acquisition at set intervals (e.g., useful for images). Under the hood, we use the Linux `systemd` management tool to create services that are robust to power outages and occasional driver crashes;
- `nodectl telemeter`: a utility that can be used to move files from one node to another, or from the site to a remote server for archival;
- `nodectl query`: a utility that is used behind the scenes to provide a uniform interface to the various instruments attached to the system;
- `nodectl relay`: used to control any network-attached relays that may be in the system;
- `nodectl components`: used to manage the different instruments that are attached to the system.
The process of configuring a node for deployment is detailed step-by-step below. Note: the nodectl package must have been installed, along with any additional dependencies (e.g., for nodes with camera components, a number of additional packages are needed). All of the commands are run in the terminal, but configuration files can be carefully constructed and installed manually by copying them into the relevant .config directory (default ~/.config/avert).
1. `nodectl init`: initialise the new node. Here, you will need to provide the data archive location, the node name, and geographic information. These can be changed later with `nodectl config edit`.
2. `nodectl relay add`: add in any network-attached power control relays. The specific switches can be configured after components have been added, with `nodectl relay switch`.
3. `nodectl telemetry add`: add in information about telemetry equipment, which is defined by the local transceiver IP, the remote destination IP, and the type of connection, e.g., Local Area Network (LAN) by radio or satellite. These are then associated with instruments in the following step.
4. `nodectl components register`: each individual component instrument is now added in turn. Each component is assigned a unique ID, which is used to help track data sources throughout the network.
5. `nodectl relay switch`: if a relay is used in the system to control power to specific components, this should now be set up.
6. `nodectl schedule add`: specify, for each component (using the component ID), the type of data acquisition schedule (either continuous or interval). This will populate, from templates, a systemd service (named `<component-id>-continuous-data.service` or `<component-id>-interval-data.service`). To start the data acquisition, run `sudo systemctl start <service-name>` and `sudo systemctl enable <service-name>` if continuous, or `sudo systemctl enable <service-name>.timer` if interval. The status of the service can be checked with `sudo systemctl status <service-name>`. Throughout, `<service-name>` should be replaced with the actual service name.
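For orientation, a continuous-acquisition service generated in the final step might resemble the minimal unit file below. This is a sketch based on common `systemd` conventions, not the actual template shipped with nodectl; the `ExecStart` command and paths are illustrative assumptions:

```ini
# /etc/systemd/system/<component-id>-continuous-data.service
# Illustrative sketch only -- the real unit is generated from a template
# by `nodectl schedule add`, and its contents may differ.
[Unit]
Description=Continuous data acquisition for <component-id>
After=network-online.target

[Service]
Type=simple
# Hypothetical acquisition command, run from the project virtual environment
ExecStart=/home/user/.venv/bin/nodectl query <component-id>
# Automatic restarts give the robustness to driver crashes described above
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target
```

The `Restart=on-failure` directive is what makes such services resilient to occasional driver crashes, while `systemd`'s handling of boot-time unit activation covers recovery from power outages.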
You can contact us directly at: avert-system [ at ] proton.me
Any additional comments/questions can be directed to:
- Conor Bacon - conor.bacon [ at ] norsar.no
This package is written and maintained by the AVERT System Team, Copyright AVERT System Team 2025. It is distributed under the GPLv3 License. Please see the LICENSE file for a complete description of the rights and freedoms that this provides the user.