
Using OpenCV for localization. #442

@AhmedSamara

Description

After much struggling, it seems that we won't be able to use the modified QRCodeStateEstimation library on the final robot, due to the ZBar library being incompatible with the BeagleBone and our struggles to get the Boost library working.

Looking at the source code for that library, though, it's not that complicated and should be easy to recreate in Python.
All you really need is a set of known points and the ability to identify them in the images captured by the webcam.
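For reference, the pose-from-points step itself is the easy part in OpenCV once the pixel locations of the corners are known. Here is a minimal sketch, assuming the QR code is a 10 cm square and that the four corner pixels and camera intrinsics have already been found (the sizes, pixel values, and intrinsics below are all placeholders):

```python
import cv2
import numpy as np

# Known 3D positions of the QR code corners in the marker's own frame
# (assumed 10 cm square, Z = 0 because the code is flat).
SIDE = 0.10  # metres -- placeholder value
object_points = np.array([
    [0.0,  0.0,  0.0],
    [SIDE, 0.0,  0.0],
    [SIDE, SIDE, 0.0],
    [0.0,  SIDE, 0.0],
], dtype=np.float32)

# Pixel coordinates of the same four corners, in the same order.
# These would come from whatever detector replaces ZBar.
corner_pixels = np.array([
    [320.0, 240.0],
    [400.0, 242.0],
    [398.0, 322.0],
    [318.0, 320.0],
], dtype=np.float32)

# Camera intrinsics from a prior cv2.calibrateCamera run (placeholder values).
camera_matrix = np.array([
    [600.0,   0.0, 320.0],
    [  0.0, 600.0, 240.0],
    [  0.0,   0.0,   1.0],
], dtype=np.float32)
dist_coeffs = np.zeros(5, dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_points, corner_pixels,
                              camera_matrix, dist_coeffs)
if ok:
    # tvec is the marker's position in the camera frame; invert to get
    # the camera's position relative to the marker for localization.
    R, _ = cv2.Rodrigues(rvec)
    camera_position = -R.T @ tvec
    print("camera position relative to QR code:", camera_position.ravel())
```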

This is the part that ZBar made really simple: when it finds a QR code, it also tells you where the vertices are, and that is the part I'm running into problems with now.
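Depending on how new the OpenCV build on the BeagleBone is, we might be able to get the same vertex output without ZBar at all: OpenCV 4.x ships a `cv2.QRCodeDetector` whose `detectAndDecode` returns both the decoded string and the corner points. A sketch, assuming a recent OpenCV and a webcam on index 0:

```python
import cv2

# Assumes OpenCV 4.x; older builds do not have QRCodeDetector.
detector = cv2.QRCodeDetector()

cap = cv2.VideoCapture(0)  # webcam index is an assumption
ok, frame = cap.read()
cap.release()

if ok:
    data, points, _ = detector.detectAndDecode(frame)
    if points is not None:
        # points holds the four corner vertices of the QR code in the frame,
        # which is the same information ZBar was providing.
        print("decoded:", data)
        print("vertices:", points.reshape(-1, 2))
```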

I was hoping to solve this problem using SURF points.
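As a concrete starting point, extracting SURF points from a reference image of one of our QR codes looks roughly like this (SURF lives in the opencv-contrib `xfeatures2d` module and is patented, so availability depends on how OpenCV was built; the file name and threshold are placeholders):

```python
import cv2

# SURF is in the contrib xfeatures2d module and may be missing from some builds.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

# Reference image of one of the QR codes (placeholder path).
reference = cv2.imread("qr_reference.png", cv2.IMREAD_GRAYSCALE)

keypoints, descriptors = surf.detectAndCompute(reference, None)
print("found %d SURF keypoints on the reference QR code" % len(keypoints))
```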

The change in how we're approaching this is explained in my senior design presentation:

(attached image: opencv)

Basically, even though I can now identify a bunch of known points on the QR code, there are still a few problems:

  • Not all of the SURF points are unique to each QR code.
  • When we find SURF points on the camera's captured frames, there are points on everything.
    How do we isolate the points that are just on the QR code? (See the sketch after this list.)
  • Is there something better than SURF points to use?
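One common way to handle the last two points is to match the frame's descriptors against the descriptors of a reference image of the QR code and keep only the matches that agree on a single homography; the ratio test and RANSAC throw away points on the background. ORB is a patent-free detector that works the same way, so it is used in the sketch below as a stand-in for SURF (paths and thresholds are placeholders):

```python
import cv2
import numpy as np

# ORB is used here as a patent-free stand-in for SURF; swap in
# cv2.xfeatures2d.SURF_create() if the contrib build is available.
detector = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)  # use NORM_L2 for SURF descriptors

reference = cv2.imread("qr_reference.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
frame = cv2.imread("webcam_frame.png", cv2.IMREAD_GRAYSCALE)      # placeholder path

kp_ref, des_ref = detector.detectAndCompute(reference, None)
kp_frame, des_frame = detector.detectAndCompute(frame, None)

# Lowe's ratio test discards ambiguous matches (e.g. points that look
# similar on several QR codes or on the background).
matches = matcher.knnMatch(des_ref, des_frame, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

if len(good) >= 4:
    src = np.float32([kp_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_frame[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC keeps only matches consistent with a single plane-to-plane
    # mapping, which isolates the points that actually lie on the QR code.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Project the reference image's corners into the frame to get the
    # QR code's vertices -- the same four points ZBar used to report.
    h, w = reference.shape
    ref_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    frame_corners = cv2.perspectiveTransform(ref_corners, H)
    print("QR code vertices in the frame:", frame_corners.reshape(-1, 2))
```

For the first point (codes not being unique), the same matching could be run against a separate reference image for each QR code, taking the reference with the most RANSAC inliers as the detected code.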
