MIDAS API is a RESTful service for managing instrumentation data in the U.S. Army Corps of Engineers' Monitoring Instrumentation Data Acquisition System (MIDAS) platform. It enables the collection, storage, and access of time-series measurements from field instruments and serves as the backend for the MIDAS web application.
This guide is intended for internal developers. It includes setup instructions, usage details, and contribution practices.
Purpose: MIDAS API centralizes instrumentation data collection and management. It enables storing time-series measurements from various instruments, organizing them into projects (monitoring projects or sites), and providing data to the MIDAS web UI and other services. Key features include:
- REST API Endpoints: Provides structured endpoints to manage instruments, projects, measurements, and related entities.
- Authentication & Authorization: By default, uses JSON Web Tokens (JWT) for request authentication. This can be disabled in development for testing convenience. An API key mechanism is also used for trusted internal components (e.g., data loaders).
- Tech Stack: Written in Go (Echo framework) for high performance and clarity. Uses PostgreSQL for data persistence and embeds database migrations written in Go (not managed by Flyway).
- Containerized Development: Docker Compose is used to run the API along with its dependencies for local development and testing.
- Mocked AWS Services via LocalStack: LocalStack is used to emulate AWS services (such as S3 with event notifications, SQS, and SES). Note: LocalStack does not persist data across restarts.
- Environment Configuration: The `env_files/` directory contains a complete set of `.env` templates organized by service. These should be used as a reference or starting point when configuring each environment (e.g., `api.env`, `report.env`, etc.). Note that the following environment variables are not included, but are required for some features to be enabled:
  - For Survey123 webhooks, add `SURVEY123_IP_WHITELIST`
  - For the SeedLink `sl-client`, add `SEEDLINK_SERVER_URI`
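For example, if you enable those features locally, the entries might look like the following (the values shown are placeholders, and which `env_files/*.env` file each belongs in depends on your setup):

```sh
# Placeholder values; add to the relevant env_files/*.env as needed
SURVEY123_IP_WHITELIST=203.0.113.10,203.0.113.11
SEEDLINK_SERVER_URI=seedlink.example.org:18000
```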
The fastest way to set up the entire stack is by using Docker Compose. A convenience script, `compose.sh`, is provided to streamline common Compose tasks (build, start, stop, etc.).
From the project root, run:
```sh
./compose.sh up
```

This builds and launches:

- API service: The Go server, available at `http://localhost:$API_PORT/v4`.
- Postgres database: Accessible at `localhost:5432` using the credentials from your `.env`.
- LocalStack (mock AWS): Provides emulated services such as S3, SQS, and SES for local development. These do not persist state across restarts.
The .env file defines credentials and connection details. Example database connection URI:
```
postgresql://postgres:postgres@localhost:5432/postgres?sslmode=disable
```
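As a quick sanity check, you can connect to the local database with that URI, assuming the `psql` client is installed on your machine:

```sh
# Verify connectivity to the local Postgres container
psql "postgresql://postgres:postgres@localhost:5432/postgres?sslmode=disable" -c "SELECT 1;"
```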
You can preview the API endpoints and test requests directly using the built-in Swagger UI:
- Navigate to: http://localhost:8080/v4/docs
If authentication is enabled, a default token is injected when running locally to allow immediate use of the endpoints.
All core web API endpoints are prefixed with /v4. Endpoints outside of this versioned namespace exist for backward compatibility during the transition and should not be relied upon in new development.
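As a rough example of what a request against the versioned API looks like (the `/projects` path and the token variable are illustrative placeholders, not confirmed details of this project):

```sh
# Illustrative request; the endpoint path and token value are placeholders
curl -H "Authorization: Bearer $MIDAS_TOKEN" \
  "http://localhost:8080/v4/projects"
```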
This project includes a suite of regression tests to verify that the API endpoints and business logic work as expected. Tests are written in Go and are mostly integration tests that spin up the API and send HTTP requests against it.
The easiest way to run the test suite is through Docker Compose:
```sh
./compose.sh test [-rm] [<go test flags>]
```

- The `-rm` flag removes the container after the test run.
- You can pass Go test flags to filter test cases. For example:

```sh
./compose.sh test -rm -run Project
```

This runs only tests with names matching "Project".
To run tests directly from your machine:
- Make sure PostgreSQL is running and accessible (use the same database defined in your `.env`).
- Export the required environment variables:

```sh
export INSTRUMENTATION_DB_USER=postgres
export INSTRUMENTATION_DB_PASS=postgres
export INSTRUMENTATION_DB_NAME=postgres
export INSTRUMENTATION_DB_HOST=localhost
export INSTRUMENTATION_DB_SSLMODE=disable
```

- Run the tests with:

```sh
go test ./...
```

Important: The tests may reset or migrate the database schema. Avoid pointing to a development database with real data.
This project uses sqlc to generate type-safe Go code from SQL queries.
The typical workflow for adding a new database query is:
- Write the SQL query in a `.sql` file under the `api/queries/` directory.
- Run the following command to regenerate the Go code:

```sh
./compose.sh gen
```

This will output `.gen.go` files in `api/internal/db`, containing type-safe Go functions corresponding to your SQL queries.
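As a minimal sketch of that workflow (the file name, query name, and table below are hypothetical, not taken from this repository):

```sh
# Hypothetical sqlc-annotated query; names and schema are illustrative
cat > api/queries/example.sql <<'SQL'
-- name: GetInstrumentByID :one
SELECT * FROM instrument WHERE id = $1;
SQL

# Regenerate the type-safe Go bindings in api/internal/db
./compose.sh gen
```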
Notes:
- Generated files always end with the `.gen.go` suffix.
- In some cases, manual queries or custom logic may be required. These should be written in separate `.manual.go` files within `api/internal/db`.
- Avoid editing `.gen.go` files directly, as they will be overwritten by `sqlc generate`.
For more about how sqlc works, see the official repo: https://github.com/sqlc-dev/sqlc
OpenAPI documentation is automatically generated using the Huma v2 library.
Huma uses Go generics and reflection to define request/response models and operation metadata in a structured, type-safe way. Endpoint definitions in the API handlers include inline descriptions, path/query/JSON body definitions, and example values. The OpenAPI spec is served automatically at runtime and is available via Swagger UI at:
http://localhost:8080/v4/docs
Any changes to operation schemas or endpoint behavior are reflected automatically in the documentation with no separate generation step.
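If you need the raw specification, for example to feed a client generator, Huma serves it alongside the docs; the exact path below assumes Huma's default layout and has not been verified against this deployment:

```sh
# Fetch the generated OpenAPI spec (path assumes Huma defaults)
curl http://localhost:8080/v4/openapi.json
```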
The MIDAS UI and the /report Node.js PDF renderer use the openapi-ts project to consume the OpenAPI specification generated by the API.
This tool is used to generate:
- TypeScript types for all request/response objects defined in the API
- A typed fetch client to call endpoints with full "over-the-wire" type safety
This ensures consistency between the backend and frontend, enabling developers to safely rely on strong typing across the stack. Generated types and clients are typically refreshed as part of the frontend build or setup process when the OpenAPI spec changes.
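The refresh itself is typically a single CLI invocation along these lines (the input spec URL and output directory are illustrative placeholders):

```sh
# Regenerate TypeScript types and the typed fetch client.
# Input spec URL and output directory are placeholders.
npx @hey-api/openapi-ts -i http://localhost:8080/v4/openapi.json -o src/client
```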
The `./report` directory contains an asynchronous Lambda function that is invoked via AWS SQS (Simple Queue Service). Locally, the API service mocks this call with the Lambda RIE (Runtime Interface Emulator). If `AUTH_JWT_MOCKED` (an API service environment variable) is true, the API container will attempt to invoke the Lambda via the RIE endpoint instead of the queue.
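The RIE exposes Lambda's standard HTTP invocation route, so the mocked call is roughly equivalent to the following (the host port mapping and payload shape are assumptions for illustration; the invocation path is the RIE's documented default):

```sh
# Invoke the report Lambda through the Runtime Interface Emulator.
# Port 9000 and the payload fields are assumptions, not confirmed values.
curl -X POST "http://localhost:9000/2015-03-31/functions/function/invocations" \
  -d '{"report_config_id": "example"}'
```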
Development deployments are done through CI (Continuous Integration) scripts using GitHub Actions. Each service is tested, built, and pushed to AWS ECR, and re-deploys on container push when the CI pipelines finish successfully. If a container does not automatically re-deploy on ECR push, it can be manually deployed from the AWS console or CLI. Unfortunately, CI for test and production environments is not currently available.
Test and production deployments are currently done manually. `./compose.sh build [local,develop,test,prod]` can be used to build the application with hardened images sourced from Ironbank. Afterwards, the built images should be pushed to test and/or production via the CLI.
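Pushing the built images follows the standard ECR workflow; the account ID, region, and repository name below are placeholders:

```sh
# Authenticate to ECR and push a built image (all identifiers are placeholders)
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/midas-api:test
```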
Many of the Ironbank images only support the amd64 architecture. When running on aarch64 platforms (such as Apple Silicon Macs), emulated cross-platform builds under QEMU can be very unreliable and often fail with repeated segmentation faults. It is recommended that you run a dedicated x86_64 builder VM.
For example, if you are running on an M1 MacBook with Colima installed via the Homebrew package manager:
```sh
brew install colima qemu lima lima-additional-guestagents

# stop the default colima VM if it's already started
colima stop

colima start --profile amd64-builder \
  --arch x86_64 \
  --vm-type qemu \
  --runtime docker \
  --cpu 8 \
  --memory 16 \
  --disk 50 # provision resources appropriate for your system

colima ls
# <list of your VMs including amd64-builder>

docker context use colima-amd64-builder

docker buildx create colima-amd64-builder \
  --name colima-amd64 \
  --driver docker-container \
  --platform linux/amd64 \
  --use

./compose.sh build test
```

Note that you should only use this Colima VM for builds, as running the services locally in the VM (`./compose.sh up`) will result in very high CPU usage. For this reason, some alternative images are used for local development.
SQS worker that parses CSV files of time-series measurements in AWS S3 and posts their contents to the core API.
It works with LocalStack, which provides an SQS-compatible interface for local testing. Variables noted "Used for local testing" typically do not need to be provided when deployed, for example to AWS; they can be omitted completely or set to "" if not required.