
Welcome to the motion-planning wiki! This wiki is used by the Duckietown team at ISU to track the code used for the motion planning class.

Our current architecture plan is to provide an interface that accepts input coordinates and enqueues them onto the path. Points stay in the queue until the robot reaches them, at which point they are dequeued. This will use the action communication pattern, which is covered in more detail here.
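
Below is a minimal sketch of that queue behaviour. It is plain Python with hypothetical names (`WaypointQueue`, `reach_tolerance`); the real node would expose this logic through a ROS action server rather than a bare class.

```python
# Hypothetical sketch of the waypoint queue described above; the class name
# and reach tolerance are illustrative, not the final node's interface.
from collections import deque
import math

class WaypointQueue:
    def __init__(self, reach_tolerance=0.05):
        # Distance (metres) at which a waypoint counts as "reached".
        self.reach_tolerance = reach_tolerance
        self._queue = deque()

    def enqueue(self, x, y):
        # New goal coordinates are appended to the back of the path.
        self._queue.append((x, y))

    def current_goal(self):
        # The robot always drives toward the front of the queue.
        return self._queue[0] if self._queue else None

    def update(self, robot_x, robot_y):
        # Dequeue the front waypoint once the robot is within tolerance of it,
        # then return the next goal (or None when the path is finished).
        goal = self.current_goal()
        if goal is not None and math.hypot(goal[0] - robot_x,
                                           goal[1] - robot_y) < self.reach_tolerance:
            self._queue.popleft()
        return self.current_goal()
```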

This functionality is not yet implemented, and we are currently working on the following:

  • Simple motion of the bot
  • A node to set wheel speed (PID speed control loop; see the sketch after this list)
  • Wheel odometry node
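
As a rough illustration of the wheel-speed item above, here is a minimal PID loop of the kind that node could run. The gains and the measured-speed input are placeholders, not tuned values from the Duckiebot.

```python
# Hypothetical PID speed controller sketch; gains are untuned placeholders.
class WheelSpeedPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self._integral = 0.0
        self._prev_error = 0.0

    def step(self, target_speed, measured_speed, dt):
        # Standard PID update: proportional, integral and derivative terms on
        # the speed error, producing a motor command (e.g. a duty cycle).
        error = target_speed - measured_speed
        self._integral += error * dt
        derivative = (error - self._prev_error) / dt if dt > 0 else 0.0
        self._prev_error = error
        return self.kp * error + self.ki * self._integral + self.kd * derivative
```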

Here is a simple flow chart:


```mermaid
---
title: Motion Planning
---
flowchart LR

    odo -- odo_estimate ---> spn
    spn -- trajectory --> ppc
    lwenc[left_wheel_encoder_node] -- tick --> wenc
    rwenc[right_wheel_encoder_node] -- tick --> wenc

    subgraph perception
    imu[IMU]-- some topic ---> odo[Odometry]
    wenc[OdometryEncoder]-- some other topic---> odo
    end

    subgraph planning
    spn[some planning node]
    end

    subgraph control
    ppc[Pure Pursuit Controller]
    end
```

Where:

  • OdometryEncoder:
    • takes input from the wheel encoder tick topics (already-implemented nodes) and uses them to determine how far each wheel has turned (see the dead-reckoning sketch after this list)
    • publishes the parameters needed for integration by the Odometry node. The message type will be
  • IMU:
    • not yet integrated, but should be updated by Duckietown
  • Odometry
    • estimates how the robot has moved since the last update. For now it will just act as a pass-through for the wheel encoder node, but it will eventually take the IMU into account using a Kalman filter
  • Pure Pursuit Controller: takes in the type defined in packages/as_msgs/msg/trajectory_point.msg, which means that the planning node will have to output these messages (see the pure pursuit sketch below)
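
For the OdometryEncoder and Odometry bullets above, here is a minimal dead-reckoning sketch showing how encoder ticks could be turned into a pose update. The wheel radius, baseline, and ticks-per-revolution values are assumptions for illustration, not the Duckiebot's calibrated parameters.

```python
# Hypothetical dead-reckoning sketch: encoder ticks -> differential-drive pose
# update. Constants below are illustrative, not calibrated Duckiebot values.
import math

TICKS_PER_REV = 135      # encoder resolution (assumed)
WHEEL_RADIUS = 0.0318    # wheel radius in metres (assumed)
BASELINE = 0.1           # distance between the wheels in metres (assumed)

def ticks_to_distance(delta_ticks):
    # Arc length rolled by one wheel for a given number of ticks.
    return 2 * math.pi * WHEEL_RADIUS * delta_ticks / TICKS_PER_REV

def update_pose(x, y, theta, delta_left_ticks, delta_right_ticks):
    # Distance travelled by each wheel since the last update.
    d_left = ticks_to_distance(delta_left_ticks)
    d_right = ticks_to_distance(delta_right_ticks)
    # Differential-drive kinematics: mean forward motion and heading change.
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / BASELINE
    x += d_center * math.cos(theta)
    y += d_center * math.sin(theta)
    theta += d_theta
    return x, y, theta
```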
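And for the Pure Pursuit Controller bullet, a minimal sketch of the pure pursuit steering computation. The function signature is hypothetical; the real controller would read the goal point from a trajectory_point.msg message.

```python
# Hypothetical pure pursuit sketch: given the robot pose and a goal point,
# return the (linear, angular) velocity command.
import math

def pure_pursuit(robot_x, robot_y, robot_theta, goal_x, goal_y, speed):
    # Transform the goal point into the robot's frame.
    dx = goal_x - robot_x
    dy = goal_y - robot_y
    local_x = math.cos(robot_theta) * dx + math.sin(robot_theta) * dy
    local_y = -math.sin(robot_theta) * dx + math.cos(robot_theta) * dy
    # Use the distance to the goal point as the lookahead distance.
    lookahead = math.hypot(local_x, local_y)
    if lookahead == 0:
        return speed, 0.0
    # Pure pursuit curvature: kappa = 2 * y / L^2; angular rate = v * kappa.
    curvature = 2.0 * local_y / (lookahead ** 2)
    return speed, speed * curvature
```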
