AprilTag code for 2026
- Two Raspberry Pi 5 devices
- Two Arducam cameras connected to each Raspberry Pi
- ROS 2 Humble
- Code runs inside Docker containers orchestrated with docker-compose
- Development is done on a Linux machine
- Once ready, the code is tested on the Raspberry Pi
From the terminal (the Raspberry Pi IP may be different from what is shown):
./build_front.sh
scp frc4048-ros2-front.tar pi@10.40.48.200:
scp docker-compose.yaml pi@10.40.48.200:
ssh pi@10.40.48.200
docker load -i frc4048-ros2-front.tar
docker compose down
docker compose up

Cleaning up old images on the rpi: if the following shows many old images
docker images
you can do some cleanup (images take space):
docker image prune -a
docker load -i frc4048-ros2-front.tar

If you then want to connect to any container and display ROS topics:
ssh pi@10.40.48.200
docker compose ps
docker exec -it camera1 /bin/bash
source /opt/ros/humble/setup.bash
source ros2-ws/install/setup.bash
ros2 topic list

We run the following Docker containers:
- camera1
- camera2
- tcp
- lifesigns
Each camera container publishes robot pose to /tf based on its detections.
The tcp node subscribes to /tf and sends the robot poses to the RoboRio using a TCP socket.
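The actual wire format between the tcp node and the RoboRio is not documented here. As an illustration only, a minimal sketch of packing a 2D pose into a fixed-size binary message; the field layout, endianness, and the choice of doubles on the wire are all assumptions:

```python
# Hypothetical sketch: serialize a robot pose for the TCP link to the RoboRio.
# The message layout (little-endian doubles: x, y, heading, timestamp) is an
# assumption, not necessarily the protocol the tcp node actually uses.
import socket
import struct

POSE_FORMAT = "<4d"  # x (m), y (m), heading (rad), timestamp (s)

def pack_pose(x: float, y: float, heading: float, stamp: float) -> bytes:
    """Pack one pose into a 32-byte message."""
    return struct.pack(POSE_FORMAT, x, y, heading, stamp)

def unpack_pose(msg: bytes):
    """Inverse of pack_pose, e.g. for the receiving side."""
    return struct.unpack(POSE_FORMAT, msg)

def send_pose(sock: socket.socket, x, y, heading, stamp):
    """Send one packed pose over an already-connected TCP socket."""
    sock.sendall(pack_pose(x, y, heading, stamp))
```

A fixed-size binary frame keeps parsing on the RoboRio side trivial; any agreed-upon format works as long as both ends match.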
lifesigns is a node that publishes a counter through NetworkTables to prove Raspberry Pi connectivity.
This counter should be displayed on the Elastic dashboard.
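A minimal sketch of the lifesigns idea: an ever-increasing counter written to NetworkTables so the dashboard can prove the Pi is alive. Only the counter logic is live code here; the NetworkTables calls (pynetworktables), table/key names, and server IP are assumptions shown as comments:

```python
# Sketch of the lifesigns heartbeat. Only the counter logic runs here;
# the NetworkTables publishing is illustrated in the comments below.
import itertools
import time

def heartbeat_values(start=0):
    """Yield an ever-increasing counter value, one per publish cycle."""
    return itertools.count(start)

# Publishing loop (illustrative; table name, key, and server IP are assumptions):
# from networktables import NetworkTables
# NetworkTables.initialize(server="10.40.48.2")
# table = NetworkTables.getTable("SmartDashboard")
# for value in heartbeat_values():
#     table.putNumber("lifesigns", value)   # shows up on the Elastic dashboard
#     time.sleep(1.0)
```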
The main node is redshift_cam_node, which performs the pose calculation and publishes it.
It is started through a launch file called camera_launch.
To start it:
ros2 launch redshift_odometry camera_launch.py

Parameters:
- camera_instance:='cam1'
  Which camera to run
- camera_type:='A'
  A = Arducam, L = Logitech (for testing)

Example with parameters:
ros2 launch redshift_odometry camera_launch.py camera_instance:='cam'$CAM camera_type:='A'

- Multiple instances of the camera container are run, each one with a different camera.
- Each instance uses its own ROS namespace.
camera_instance is hardcoded in the launch file. If you want to add a camera, the launch file must be updated.
The goal of this node is to find the pose of the middle of the robot relative to the world (field). The simple explanation:
- we have a world -> tag transformation (static, from the TagTable)
- we get a tag -> camera transformation (dynamic, from the detection)
- we have a camera -> (middle of) robot transformation (static)
From these 3 transformations, we can find the world -> robot transformation, which is the pose!
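The three transformations above chain by matrix multiplication. A minimal numpy sketch of that chain; the example poses are made up, and a yaw-only rotation is used for brevity (the real node works with full 3D transforms):

```python
# Sketch of the pose math with 4x4 homogeneous transforms.
import numpy as np

def make_tf(x, y, yaw):
    """Build a 4x4 homogeneous transform from a planar pose."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[0, 0], T[0, 1] = c, -s
    T[1, 0], T[1, 1] = s, c
    T[0, 3], T[1, 3] = x, y
    return T

# Example values only -- not real field/robot geometry:
T_world_tag = make_tf(4.0, 2.0, 0.0)      # world -> tag (static, TagTable)
T_tag_camera = make_tf(-1.0, 0.0, 0.0)    # tag -> camera (dynamic, detection)
T_camera_robot = make_tf(-0.3, 0.0, 0.0)  # camera -> middle of robot (static)

# Chaining the three gives world -> robot: the robot pose on the field.
T_world_robot = T_world_tag @ T_tag_camera @ T_camera_robot
x, y = T_world_robot[0, 3], T_world_robot[1, 3]
```

With zero rotations the translations simply add, which makes the chain easy to sanity-check by hand.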
Camera driver node
- Publishes:
/cam1/camera_info
/cam1/image
Rectify image node (removes lens distortion)
- Subscribes:
/cam1/camera_info
/cam1/image
- Publishes:
/cam1/image_rect
AprilTag detection node
- Subscribes:
/cam1/image_rect
/cam1/camera_info
- Publishes:
/cam1/detections
/tf
Robot odometry node
- Subscribes:
/tf
/tf_static
/cam1/detections
- Publishes:
/pose
/tf
Each camera container also publishes static transforms:
- World → tag
- Robot → camera
- Development is easier on a Linux machine.
- Once things seem to be working, build the Docker image and test on the Raspberry Pi.
- Echo topics to make sure they are publishing information.
/cam1/detections should show a detection if the camera sees a tag.
- We don't need a static TF for the tags; we just use the TagTable to do the math. However, it is very helpful to publish them for diagnostics (you can see where the tags are in RViz). There is commented-out code in the launch file to publish that!
If there are no detections:
- Make sure there are no ERRORs or WARNINGs in the camera node logger output.
- Make sure the AprilTag YAML file is correct.
- Make sure the video device is correct (use v4l2-ctl if needed).
Use RViz to visualize transformations:
ros2 run rviz2 rviz2

ros2 node list
ros2 node info /cam1/odometry
ros2 topic list
ros2 topic echo /cam1/detections
v4l2-ctl --list-devices
ros2 run image_view image_view --ros-args -r image:=/cam1/image_rect