This repository was archived by the owner on Jul 3, 2019. It is now read-only.

Software System Overview

Carl Hofmeister edited this page Oct 30, 2017 · 2 revisions

Work in Progress

Here is an overview of how the software systems work together. A big focus in 2018 is testing, so we need a software system that supports testing easily and is modular enough that the majority of components can be developed early on, before the whole framework is completed.

Below is the currently proposed framework:

(framework diagram)

As you can see, there are four conceptual layers to the system. First, at the bottom, is the "physical" layer, which isn't software at all; it is just what I'm calling all the physical devices our software interacts with: sensors, cameras, motors, servos, etc.

Because we want it to be easy to develop software components from a standard computer, we need another layer between the high-level software and the physical devices. This second layer is the "microcontroller" layer. Microcontrollers typically have built-in hardware for measuring analog voltages or communicating over SPI or I2C, so they are better suited than your laptop to interacting with sensors and actuators. Microcontrollers do the job of reading raw data from sensors, optionally preprocessing the values, and sending the results up to the next layer. From this next layer up, we are dealing with software that runs on a standard computer (laptop, desktop, Raspberry Pi). Most of our microcontrollers will be connected to the main computers with USB cables.
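To make the hand-off concrete, here is a minimal sketch of the host side of that USB link. The line-based "name:value" wire format here is purely an assumption for illustration; the actual protocol between our microcontrollers and the main computer may differ.

```python
# Host-side parsing of a hypothetical line-based wire format.
# Assumes the microcontroller sends one reading per line as
# "sensor_name:value\n"; the real rover protocol may differ.

def parse_reading(line: bytes):
    """Parse one "name:value" line from a microcontroller into a tuple."""
    name, _, raw = line.decode("ascii").strip().partition(":")
    return name, float(raw)

# Example: a temperature value already preprocessed on the microcontroller
print(parse_reading(b"temp_c:23.5\n"))  # -> ('temp_c', 23.5)
```

On the rover, the same function would be fed lines read from the USB serial port instead of a literal byte string.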

The third layer is the driver layer. It takes low-level data from the microcontrollers and translates it into messages for the robocluster system. Any interaction with devices that doesn't require a microcontroller can also happen here.
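A driver's "translate" step might look something like the sketch below. The robocluster API itself isn't documented on this page, so the publish mechanism here is a plain callback and all names (including the ADC scale factor) are placeholders.

```python
# Sketch of a driver-layer component: turn raw microcontroller readings
# into messages for the rest of the system. The publish callable stands
# in for whatever robocluster actually provides; names are illustrative.

class SensorDriver:
    def __init__(self, publish):
        self.publish = publish  # callable(topic, value)

    def handle_raw(self, name, raw_value):
        # Example translation: convert a raw 10-bit ADC count (0-1023)
        # to a voltage before publishing. The 5 V reference is assumed.
        if name == "battery_adc":
            self.publish("battery_voltage", raw_value * (5.0 / 1023))
        else:
            self.publish(name, raw_value)

messages = []
driver = SensorDriver(lambda topic, value: messages.append((topic, value)))
driver.handle_raw("battery_adc", 1023)
print(messages)  # -> [('battery_voltage', 5.0)]
```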

Finally, the "logic" layer handles most of the control and information processing. Sensor data is fed into this layer in whatever format is most convenient for the particular task that uses it. This is the layer that handles navigation, translating joystick movements into wheel or arm movement, sensor fusion, etc. All the lower layers are either sending sensor data to this layer or receiving commands from it.
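As one small example of a logic-layer task mentioned above, here is a sketch of translating joystick movements into wheel speeds. The arcade-drive mixing formula is a common choice for differential-drive robots, not necessarily what our rover uses.

```python
# Logic-layer sketch: map joystick axes to wheel speeds using the common
# arcade-drive mix. The formula is illustrative, not the rover's actual one.

def arcade_drive(forward, turn):
    """Map joystick axes (each in [-1, 1]) to (left, right) wheel speeds."""
    left = max(-1.0, min(1.0, forward + turn))
    right = max(-1.0, min(1.0, forward - turn))
    return left, right

print(arcade_drive(1.0, 0.0))  # straight ahead -> (1.0, 1.0)
print(arcade_drive(0.0, 0.5))  # spin in place -> (0.5, -0.5)
```

The logic layer would publish these wheel speeds back down through a driver, which forwards them to the motor-controller microcontroller.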

The downside of this multi-layered architecture is performance: every data packet and command has to pass through multiple components, which adds latency. Experience with previous iterations of the rover software, however, suggests this design will be performant enough, and it offers real benefits for development. Each layer has a single, specific responsibility: the rover logic assumes it will get the data it needs and that the commands it sends will be interpreted correctly, and focuses only on the behaviour of the rover. Microcontrollers only have to worry about interacting with circuits correctly, not about high-level concepts such as turning the rover or inverse kinematics for the arm.

Having a driver layer between the microcontrollers and the rover logic may technically be unnecessary, but it makes testing the rover logic much easier. In addition to drivers that forward microcontroller data to the rover logic, we can write drivers that generate simulated data or play back messages recorded during previous rover operation sessions. The high-level logic and the low-level control can then be developed and tested independently and in parallel.
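The interchangeable-driver idea can be sketched as follows: if the rover logic only ever consumes a stream of (sensor, value) messages, then a simulated source or a playback source can stand in for a real microcontroller without the logic noticing. All names here are illustrative.

```python
# Sketch of interchangeable data sources for testing the rover logic.
# The logic consumes an iterator of (sensor, value) messages, so real,
# simulated, and playback drivers are drop-in replacements for each other.

import itertools
import math

def simulated_gyro(n):
    """Generate n fake gyro readings, e.g. for testing navigation logic."""
    for i in range(n):
        yield ("gyro_z", math.sin(i / 10))

def playback(recorded):
    """Replay messages recorded during a previous rover session."""
    yield from recorded

session = [("gyro_z", 0.1), ("gyro_z", 0.2)]
# Rover logic consumes either source the same way:
for sensor, value in itertools.chain(simulated_gyro(2), playback(session)):
    print(sensor, value)
```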
