Monday, January 22, 2018

Follow me feature

I've been playing with two RS5 robots, and it is possible to do some cool stuff with them.
I've implemented object tracking based on color, so one robot can follow the other. Take a look at the video: the blue robot is remote controlled and the pink robot is autonomous.





Color tracking can be tricky due to changing light conditions, but if the ambient light is stable it works really well.
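For anyone curious, the core of the color tracking is just HSV thresholding plus a contour search. The snippet below is a rough sketch of that idea; the HSV range, the frame variable and the area threshold are placeholders, not the exact values the robot uses.

Mat hsv, mask;
cvtColor(frame, hsv, CV_BGR2HSV);                                  // frame: the captured camera image
inRange(hsv, Scalar(140, 80, 80), Scalar(170, 255, 255), mask);    // rough pink range, tune for the target
erode(mask, mask, getStructuringElement(MORPH_RECT, Size(3, 3)));
dilate(mask, mask, getStructuringElement(MORPH_RECT, Size(5, 5)));
vector<vector<Point> > contours;
findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
for (size_t i = 0; i < contours.size(); i++) {
    if (contourArea(contours[i]) > 2000) {                         // big enough to be the other robot
        Moments mu = moments(contours[i], false);
        float targetX = mu.m10 / mu.m00;                           // steer so targetX stays near the image center
    }
}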

Hope you like it :)

Sunday, January 14, 2018

Old vs new robot

I still have my RS4 robot and it is fully functional.
You can see the size difference between them in the picture; they have the same capabilities.
I managed to better integrate all the components in the newer robot.



Monday, December 18, 2017

RS5 - 3D printed Raspberry Pi 3 robot

I always wanted to continue development and add new features to the RS4 robot; the initial idea was to make a more autonomous robot. I decided to do some more work, but the platform needed to be updated. As 3D printers are becoming cheap, that was the way to go: developing parts is much easier, with no more cutting and drilling.
I'm an electronics engineer, so 3D modelling is not something I'm used to, but I was able to learn a few things from tutorial videos and started 3D printing my first parts successfully. I'm using SketchUp; it is not the friendliest software, but with some dedication I was able to design and print some parts.
In these images you can see the complete assembly model, including all the parts, printable and not.




I also wanted a smaller robot, but with all the modules that I used in the previous one. I picked some similar but smaller wheels and NEMA 17 stepper motors for the motion. This time I'm using an STM32 microcontroller, much more powerful than the PIC I was using before. The STM32 board controls every module on the robot: balance/movement, head servos, eye color and distance readings.
In this video you can see all the modules being tested over a Bluetooth connection, with no Raspberry Pi present. The board communicates via UART, so using a Bluetooth UART converter is straightforward.
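Just to illustrate how simple the PC side of that link is: the sketch below opens a serial port on Linux and writes a command string. The port name, baud rate and the "V100" command are assumptions for the example; the real command set is whatever the firmware implements.

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

int main() {
    int fd = open("/dev/rfcomm0", O_RDWR | O_NOCTTY);  // Bluetooth UART converter bound to rfcomm0 (assumption)
    if (fd < 0) { perror("open"); return 1; }
    termios tty;
    tcgetattr(fd, &tty);
    cfmakeraw(&tty);                                    // raw mode: no echo or line editing
    cfsetispeed(&tty, B115200);                         // baud rate is an assumption
    cfsetospeed(&tty, B115200);
    tty.c_cflag |= (CLOCAL | CREAD);                    // enable receiver, ignore modem lines
    tcsetattr(fd, TCSANOW, &tty);
    const char* cmd = "V100\n";                         // placeholder "set speed" command
    write(fd, cmd, strlen(cmd));
    close(fd);
    return 0;
}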




At this point, adding the Raspberry Pi to the robot is not that hard: I just need to implement the UART commands, compile and install OpenCV, and the integration is complete. I'm using a Raspberry Pi 3, the old camera module and the Raspicam API http://www.uco.es/investiga/grupos/ava/node/40.
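A minimal capture loop with that library looks roughly like this (based on the raspicam::RaspiCam_Cv wrapper; check the library examples for the exact property names):

#include <raspicam/raspicam_cv.h>
#include <opencv2/opencv.hpp>

int main() {
    raspicam::RaspiCam_Cv camera;
    camera.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    camera.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    camera.set(CV_CAP_PROP_FORMAT, CV_8UC1);     // grayscale keeps the processing light
    if (!camera.open()) return 1;
    cv::Mat frame;
    while (true) {
        camera.grab();
        camera.retrieve(frame);
        // ... image processing (line following, sign reading) goes here ...
    }
    camera.release();
    return 0;
}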
Performance is not great but it is acceptable. I know there are some optimizations that can be used to get better OpenCV performance, and I'll try them later. At the moment I'm using 640x480 resolution. In the next video the robot is following a black line using image processing; a similar process was used before in the RS4 robot.



Here you can see some close-up photos of the robot, not the best color :)




The idea for this robot is to have a platform on which to implement cool features; it has the potential to become a really nice robot.
I would like to make it available to other people, but that would require some more time and money to develop it and bring it closer to a product. I'm not sure if there are people interested in a robot that requires some advanced programming skills.
Would you like to have one? Just let me know :)




Sunday, November 19, 2017

New robot for future work

Hello all,

I'm working on a new robot that will be my new platform for development. It has the same architecture as the previous one, a two-wheel self-balancing robot, but it is built using 3D printed parts, which makes everything easier to develop and replicate.
You can see it working in this video, remote control only for now.



It was designed to integrate a Raspberry Pi 3 and camera module; at the moment there's some work to do before the robot can be autonomous. I'll reveal more information in the next posts. My time for this hobby is limited, but I will try to keep it moving forward.


Best regards :)

Monday, February 10, 2014

Signs reading with OpenCV (Code)

Hello,

Some people have been asking for the robot's OpenCV source code. Here is the source code that the robot uses to read the signs and perform actions. Remember that this is test code; it is not carefully written.
The zip file also contains the source images needed in the process. I'm using Linux and Eclipse IDE.

The method used to read the signs was already described in my previous posts.

https://drive.google.com/open?id=0B-p_RLyuewtoSlpuMTBXZmVfVlk

Saturday, November 16, 2013

RS4 - Robot line following feature

The robot has a new feature, it can follow a black line painted on the floor.
I've created a new sign with a line; when the robot reads this sign it begins the line following process.



How it works

In fact, the line following feature is implemented in a very simple way. Because the line is black, it is easy to isolate from the ground, and this is done using simple binarization. Here are the steps of the implemented feature.

1. ROI

The first thing to do is to choose a ROI (region of interest), as shown in the next picture.



In this case the middle region of the image will look like this:


Moving the ROI up or down changes the robot's behavior in corners: if a region nearer the top is chosen, the robot will turn sooner; otherwise it will turn later. This requires some tuning and depends on the robot's speed and camera tilt angle.

Code looks like this:

Rect roi(0, 190, 640, 100);     // full width, 100 rows starting at y = 190 (640x480 frame)
greyImg(roi).copyTo(roiImg);    // work on the cropped grayscale region only

2. Threshold

The next thing to do is to threshold the ROI image. The threshold level has to be tuned; the idea is to get something like this:


I'm using morphological operations to reduce noise. Code looks like this:

threshold(roiImg, roiImg, thVal, 255, 0);   // 0 = THRESH_BINARY, thVal tuned by hand
bitwise_not(roiImg, roiImg);                // negative image: the black line becomes white
Mat erodeElmt = getStructuringElement(MORPH_RECT, Size(3, 3));
Mat dilateElmt = getStructuringElement(MORPH_RECT, Size(5, 5));
erode(roiImg, roiImg, erodeElmt);           // remove small white noise
dilate(roiImg, roiImg, dilateElmt);         // restore the line width

3. Find contours and center

The next step is to find the image contours; in this case there will be just one contour (a white quadrilateral). After finding the contour it is easy to find its center, which is used to steer the robot: if the contour center moves to one side, the robot must turn to follow it.

Code to find center:

findContours(roiImg, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
for (size_t i = 0; i < contours.size(); i++) {
    float area = contourArea(contours[i]);
    if (area > 2000) {
        Moments mu = moments(contours[i], false);
        Point2f center(mu.m10 / mu.m00, 240); // point in center (x only)
        circle(camera, center, 5, Scalar(0, 255, 0), -1, 8, 0);
    }
}
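From the contour center it is a small step to a steering command. The snippet below only illustrates the idea; the gain and the sendTurnCommand() helper are hypothetical, the real robot forwards the correction to the balance/motion controller.

void sendTurnCommand(float turn);             // hypothetical helper, e.g. sends the value over UART

void steerToLine(float lineCenterX) {         // lineCenterX = mu.m10 / mu.m00 from the code above
    const float imageCenterX = 320.0f;        // half of the 640 px frame width
    const float kP = 0.01f;                   // proportional gain, needs tuning
    float error = lineCenterX - imageCenterX; // positive means the line is to the right
    sendTurnCommand(kP * error);              // turn toward the line
}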






Sunday, November 10, 2013

RS4 - Robot Update (new features)


RS4 can now detect and read some signs and perform the associated actions. In these videos it follows and reads various signs.





How it can read signs using OpenCV


First it must locate a sign and move towards it; this is done by following the blue color around the signs. I'm having issues with color tracking because the AWB keeps changing the image lighting and at the moment I can't turn it off. The problem has already been reported on the raspberrypi.org forum and they are working on a solution.

When it is close to the sign, it performs the following steps to read it:

1. Find image contours


Apply GaussianBlur to get a smooth image and then use Canny to detect edges. This edge image is then passed to the OpenCV findContours method. The Canny output looks like this:


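A sketch of this step could look like the snippet below; the blur kernel and Canny thresholds are illustrative values that need tuning, and camera is the captured frame, as in the line-following code.

Mat gray, edges;
cvtColor(camera, gray, CV_BGR2GRAY);
GaussianBlur(gray, gray, Size(5, 5), 0);       // smooth to reduce noise before edge detection
Canny(gray, edges, 50, 150);                   // lower/upper thresholds to tune
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(edges, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));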

2. Find approximate rectangular contours


All the signs have a black rectangle around them, so the next step is to find rectangular-shaped contours. This is done using the approxPolyDP method in OpenCV. The resulting polygons are filtered by number of corners (4) and minimum area.
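As a rough sketch (the 2% tolerance and the area threshold are illustrative values):

vector<Point> approx;
vector<vector<Point> > candidates;
for (size_t i = 0; i < contours.size(); i++) {
    double peri = arcLength(contours[i], true);
    approxPolyDP(contours[i], approx, 0.02 * peri, true);   // tolerance = 2% of the perimeter
    if (approx.size() == 4 && contourArea(approx) > 5000 && isContourConvex(approx)) {
        candidates.push_back(approx);                       // likely the black frame of a sign
    }
}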



3. Isolate sign and correct perspective

Because the robot moves, it will not find perfectly aligned signs. A perspective correction is necessary before trying to match the sign against the previously loaded reference images. This is done with the warpPerspective method; details on how to use it are in the OpenCV documentation.
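A sketch of the correction, assuming the four corners come from the previous step (sorted into a consistent order) and that the reference images are square; the 200x200 size is just an example:

Point2f dst[4] = { Point2f(0, 0), Point2f(199, 0), Point2f(199, 199), Point2f(0, 199) };
Point2f src[4];
for (int i = 0; i < 4; i++) src[i] = approx[i];            // corners of the detected rectangle
Mat warp = getPerspectiveTransform(src, dst);
Mat corrected;
warpPerspective(gray, corrected, warp, Size(200, 200));    // upright view of the sign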

In the next image you can see the result of the process; "B" shows the corrected image.


4. Binarize and match

After isolating and correcting the area of interest, the resulting image is binarized and compared with all the reference images to look for a match. At the moment the system has 8 reference images to compare against.
The comparison is done with a binary XOR (bitwise_xor). The next images show the comparison result for a match and a non-match; using the countNonZero method it is possible to detect the matching sign.
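A sketch of the matching loop, reusing the corrected image from the previous step; the threshold values and the refImages container are assumptions (the 8 reference signs preloaded as binary images of the same size):

Mat binSign, diff;
threshold(corrected, binSign, 100, 255, 0);        // binarize the corrected sign image
int bestIdx = -1, bestDiff = INT_MAX;
for (size_t i = 0; i < refImages.size(); i++) {    // refImages: the preloaded reference signs
    bitwise_xor(binSign, refImages[i], diff);      // differing pixels become non-zero
    int nonZero = countNonZero(diff);
    if (nonZero < bestDiff) { bestDiff = nonZero; bestIdx = (int)i; }
}
// a low bestDiff means refImages[bestIdx] is the matching sign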

Match image


Non-match image


Result

This methodology works well and is fast enough to use on the Raspberry Pi. I've tried well-known methods like SURF and SIFT, but they are too slow for real-time applications on the Raspberry Pi.