A simple ROS package using OpenCV on a 1/10 RC car chassis with Ackermann steering that can detect and track road lines or lanes in driverless mode.
- Dependencies
- Environment Configuration
- Work Flow To Use This Repository
- Nodes
- Topics
- Launch
- Tools
- Troubleshooting
- Demonstration videos
OpenCV is a library, in our case for Python, that provides high-level functions for computer vision and image processing.
Adafruit ServoKit is a Python library that provides a high-level interface to low-level PWM controls. For this package, the library is used to control PWM servos and an ESC connected to channels of the PCA9685 I2C breakout board. more details here
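A minimal sketch of that interface (the channel numbers match the assignments described later in this README; the values themselves are placeholders):

```python
# Minimal ServoKit sketch: steering servo on channel 1, ESC on channel 2.
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)               # PCA9685 exposes 16 PWM channels
kit.servo[1].angle = 90                   # center the steering servo
kit.continuous_servo[2].throttle = 0.2    # gentle forward throttle on the ESC
```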
CV Bridge provides functions to easily convert (encode/decode) between ROS image message types and OpenCV-workable NumPy arrays.
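For reference, a typical round trip with CV Bridge looks like this (a sketch, not code from this package):

```python
from cv_bridge import CvBridge

bridge = CvBridge()

def image_callback(ros_image_msg):
    # sensor_msgs/Image -> NumPy array that OpenCV can work with
    cv_image = bridge.imgmsg_to_cv2(ros_image_msg, desired_encoding="bgr8")
    # ... process with OpenCV, then convert back for publishing
    return bridge.cv2_to_imgmsg(cv_image, encoding="bgr8")
```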
a. Check if you have OpenCV for python3
python3
then enter
import cv2
b. If no error occurs, you're good to go. Otherwise, issue the commands below to install the barebones version
sudo apt-get install python3-pip
pip3 install --upgrade pip
pip3 install opencv-python
c. Check again to see if OpenCV was installed correctly for python3
python3
then enter
import cv2
No errors should occur; if any do, make sure you used pip3 and not pip when running the install commands above
more details here
If you want to compile from source instead, follow the steps below
d. (OPTIONAL) Build instructions for OpenCV here
a. Create environment
python3 -m pip install --user virtualenv
sudo apt-get install python3-venv
python3 -m venv --system-site-packages env
source env/bin/activate
python3 -m pip install requests
b. Environment details
Get path to executable
which python
Get python version
python --version
List of packages
pip list
Site packages location
python -m site
c. Add PYTHONPATH
nano ~/.bash_profile
Add this line to bash file
export PYTHONPATH="<path to virtual env>/lib/python3.6"
d. Activate Environment (for new terminals)
source env/bin/activate
NOTE: WHILE IN THE VIRTUAL ENVIRONMENT, DO NOT USE "sudo" TO INSTALL PIP PACKAGES; THEY WILL INSTALL TO ROOT INSTEAD OF THE VIRTUAL ENVIRONMENT
pip install pyyaml
pip install rospkg
pip install --upgrade pip
pip install --upgrade pyinstaller
pip install adafruit-circuitpython-pca9685
pip install adafruit-circuitpython-servokit
more details here
Instructions found here
a. Generate an SSH key and provide it to Gitlab for access to repositories
ssh-keygen # Use all defaults
b. Then press enter until you get to an empty command line, then
cat $HOME/.ssh/id_rsa.pub
c. Then copy the SSH key and go back to GitLab. Click on your user profile at the top right corner of the screen, then
click on Preferences from the drop-down menu. A new panel will appear on the left-hand side of the screen; click on SSH Keys,
then paste your SSH key into the text field and submit it.
d. Create ROS workspace and obtain copy of ucsd_robo_car_simple_ros repository
mkdir projects && cd projects
mkdir catkin_ws && cd catkin_ws
mkdir src && cd src
git clone [email protected]:djnighti/ucsd_robo_car_simple_ros.git
e. Build ucsd_robo_car_simple_ros package:
cd ..
catkin_make
source devel/setup.bash
rospack profile
f. OPTIONALLY (RECOMMENDED) add some lines of code to the bash script so that every time a new terminal is opened, the virtual env is activated and this ROS package is compiled and sourced
nano ~/.bashrc
add the following lines of code at the end of the bash script
cd
source env/bin/activate
cd projects/catkin_ws
catkin_make
source devel/setup.bash
Then press
ctrl-x
Then press
y (yes)
and then press
enter
to save and quit
g. Now try this to make sure it was compiled correctly:
roscd ucsd_robo_car_simple_ros
h. Now give yourself permissions to access all files in repo:
chmod -R 777 .
i. (ONLY DO THIS AS NEEDED) Now as this remote repository is updated, enter the following commands to update the local repository on the Jetson:
roscd ucsd_robo_car_simple_ros
git stash
git pull
chmod -R 777 .
Associated file: x11_forwarding_steps.txt
Some Jetsons may not have this enabled, so if needed please read the steps in this file to set up X11 forwarding
- ALWAYS RUN ROSCORE IN A TERMINAL ON EVERY BOOT UP OF THE JETSON
roscore
- Calibrate the camera, throttle and steering values using the ros_racer_calibration_node
roslaunch ucsd_robo_car_simple_ros ros_racer_calibration_launch.launch
- Launch [ros racer launch](#ros-racer-launch)
roslaunch ucsd_robo_car_simple_ros ros_racer_launch.launch
- Tune parameters in step 2 until desired behavior is achieved
Associated file: throttle_client.py
This node subscribes to the throttle topic. A subscriber callback function validates and normalizes the throttle value, which is then sent to the hardware using the adafruit_servokit module on channel 2.
This node is also responsible for reading and setting the throttle calibration values.
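A condensed sketch of the pattern (simplified; the real logic, including calibration handling, lives in throttle_client.py):

```python
#!/usr/bin/env python3
# Simplified sketch of a throttle subscriber node, not the repo's exact code.
import rospy
from std_msgs.msg import Float32
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)

def throttle_callback(msg):
    # Clamp the incoming value to the valid [-1, 1] range before
    # handing it to the ESC on channel 2.
    throttle = max(-1.0, min(1.0, msg.data))
    kit.continuous_servo[2].throttle = throttle

if __name__ == "__main__":
    rospy.init_node("throttle_client")
    rospy.Subscriber("throttle", Float32, throttle_callback)
    rospy.spin()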
Associated file: steering_client.py
Similar to throttle_client, this node subscribes to the steering topic and passes the signals to the hardware. The steering servo is on channel 1.
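The steering side follows the same pattern; a sketch, assuming a linear mapping from the normalized command to a servo angle:

```python
#!/usr/bin/env python3
# Simplified sketch of a steering subscriber node, not the repo's exact code.
import rospy
from std_msgs.msg import Float32
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)

def steering_callback(msg):
    # Map the normalized [-1, 1] steering command to a servo angle in
    # [0, 180] degrees; the steering servo sits on channel 1.
    angle = (max(-1.0, min(1.0, msg.data)) + 1.0) * 90.0
    kit.servo[1].angle = angle

if __name__ == "__main__":
    rospy.init_node("steering_client")
    rospy.Subscriber("steering", Float32, steering_callback)
    rospy.spin()
```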
Plenty of information on how to use the adafruit_servokit libraries can be found here and here
Associated file: camera_server.py
This node simply reads from the camera with cv2's interface and publishes the image to the camera_rgb topic. Before publishing, the image is reformatted from the cv image format so it can be passed through the ROS topic message structure.
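In sketch form (camera index 0, a bgr8 encoding, and the publish rate are assumptions; the real node may differ):

```python
#!/usr/bin/env python3
# Simplified sketch of a camera publisher node, not the repo's exact code.
import rospy
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

if __name__ == "__main__":
    rospy.init_node("camera_server")
    pub = rospy.Publisher("camera_rgb", Image, queue_size=1)
    bridge = CvBridge()
    cap = cv2.VideoCapture(0)      # first USB camera
    rate = rospy.Rate(15)          # assumed publish rate
    while not rospy.is_shutdown():
        ret, frame = cap.read()
        if ret:
            pub.publish(bridge.cv2_to_imgmsg(frame, "bgr8"))
        rate.sleep()
    cap.release()
```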
Associated file: lane_detection.py
This node subscribes to the camera_rgb topic, uses OpenCV to identify line information in the image, and publishes the centroid of the detected lines to the centroid topic.
The color scheme is defined as follows:
- 2 contours : green bounding boxes and a blue average centroid
- 1 contour : green bounding box with a single red centroid
Below are the image post-processing techniques, the cv2 methods, and the logic applied, respectively.
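A minimal sketch of that pipeline, with HSV threshold values standing in for the calibrated ones:

```python
# Core of the detection idea as a sketch: HSV threshold, contour extraction,
# and centroid via image moments (cv2.findContours uses the OpenCV 4 signature).
import cv2
import numpy as np

def find_centroid(frame, low_hsv=(20, 80, 80), high_hsv=(40, 255, 255)):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(low_hsv), np.array(high_hsv))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"]   # x coordinate of the largest contour's centroid
```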
Associated file: lane_guidance.py
This node subscribes to the centroid topic, calculates the throttle and steering based on the centroid value, and then publishes them to their corresponding topics. Throttle is based on whether or not a centroid exists: the car goes faster when a centroid is present and slows down when there is none. Steering is based on a proportional controller implemented by calculating the error between the centroid found in lane_detection_node and the heading of the car.
Gains can be tweaked in the lane_guidance.py script.
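The control law amounts to something like the following sketch (the gain, threshold, and throttle values are illustrative, not the script's actual numbers):

```python
# Sketch of the proportional steering logic: centroid_error is assumed to be
# the normalized horizontal offset of the centroid in [-1, 1], kp the gain.
def compute_commands(centroid_error, kp=0.5,
                     zero_error_throttle=0.35, error_throttle=0.25):
    if centroid_error is None:
        return 0.0, error_throttle   # no centroid: slow down, steer straight
    steering = max(-1.0, min(1.0, kp * centroid_error))
    throttle = zero_error_throttle if abs(centroid_error) < 0.1 else error_throttle
    return steering, throttle
```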
Associated file: ros_racer_calibration_node.py
Calibrate the camera, throttle and steering in this node by using the sliders to find:
- the right color filter
- desired image dimensions
- throttle values for both the optimal condition (error = 0) and the non-optimal condition (error != 0), i.e. go fast when error = 0 and go slow when error != 0
- steering sensitivity: change the Kp value to adjust the steering sensitivity (as Kp --> 1 steering is more responsive, as Kp --> 0 steering is less responsive)
Property | Info |
---|---|
lowH, highH | Setting low and high values for Hue |
lowS, highS | Setting low and high values for Saturation |
lowV, highV | Setting low and high values for Value |
Inverted_filter | Specify to create an inverted color tracker |
min_width, max_width | Specify the width range of the line to be detected |
number_of_lines | Specify the number of lines to be detected |
error_threshold | Specify the acceptable error the robot will consider as approximately "no error" |
frame_width | Specify the width of image frame (horizontal cropping) |
rows_to_watch | Specify the number of rows (in pixels) to watch (vertical cropping) |
rows_offset | Specify the offset of the rows to watch (vertical pan) |
Steering_sensitivity | Specify the proportional gain of the steering |
Steering_value | Specify the steering value |
Throttle_mode | Toggle this slider at the end of calibration through the following 3 modes. |
Throttle_mode 0 | zero_throttle_mode (find value where car does not move) |
Throttle_mode 1 | zero_error_throttle_mode (find value for car to move when there is no error in steering) |
Throttle_mode 2 | error_throttle_mode (find value for car to move when there is some error in steering) |
Throttle_value | Specify the throttle value to be set in each of the throttle modes |
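Under the hood, sliders like these are plain cv2 trackbars; a stripped-down sketch of that pattern (window and slider names here are illustrative, not the node's exact ones):

```python
# Stripped-down slider pattern: cv2 trackbars feeding a live HSV mask preview.
import cv2
import numpy as np

cv2.namedWindow("calibration")
for name, maximum in [("lowH", 179), ("highH", 179),
                      ("lowS", 255), ("highS", 255),
                      ("lowV", 255), ("highV", 255)]:
    cv2.createTrackbar(name, "calibration", 0, maximum, lambda v: None)

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    low = np.array([cv2.getTrackbarPos(n, "calibration") for n in ("lowH", "lowS", "lowV")])
    high = np.array([cv2.getTrackbarPos(n, "calibration") for n in ("highH", "highS", "highV")])
    cv2.imshow("calibration", cv2.inRange(hsv, low, high))
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit the preview
        break
```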
More morphological transformations and examples can be found here and here
These values are saved automatically to a configuration file, so just press control-c when the car is calibrated.
Name | Msg Type | Info |
---|---|---|
/throttle | std_msgs.msg.Float32 | Float value from -1 to 1 for controlling throttle |
Name | Msg Type | Info |
---|---|---|
/steering | std_msgs.msg.Float32 | Float value from -1 to 1 for controlling steering |
Name | Msg Type | Info |
---|---|---|
/camera_rgb | sensor_msgs.msg.Image | Last image read from the USB camera |
Name | Msg Type | Info |
---|---|---|
/centroid | std_msgs.msg.Float32 | Float value that represents the error of the centroid's x coordinate in camera image space |
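Any of these topics can be exercised by hand for testing; for example, to command a gentle throttle from the terminal:
rostopic pub /throttle std_msgs/Float32 -- 0.2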
Associated file: throttle_and_steering_launch.launch
This file launches both throttle_client and steering_client separately because these topics can take some time to initialize, which can delay productivity. Launch this script once and use the other launch files listed below to get the robot moving.
roslaunch ucsd_robo_car_simple_ros throttle_and_steering_launch.launch
Associated file: laneDetection_launch.launch
This file will launch lane_detection_node, lane_guidance_node, camera_server and load the color filter parameters created using ros_racer_calibration_node
Before launching, please calibrate the robot first while on the stand!
roslaunch ucsd_robo_car_simple_ros laneDetection_launch.launch
Associated file: ros_racer_calibration_launch.launch
This file will launch camera_server, ros_racer_calibration_node and throttle and steering launch
roslaunch ucsd_robo_car_simple_ros ros_racer_calibration_launch.launch
Associated file: ros_racer_launch.launch
This file will launch throttle and steering launch and lane detection launch
roslaunch ucsd_robo_car_simple_ros ros_racer_launch.launch
For help with using ROS in the terminal and in console scripts, check out this google doc here to see tables of ROS commands and plenty of examples of using ROS in console scripts.
To run any individual program, enter this into the terminal and change file_name.py to whatever Python file is in the repo
rosrun ucsd_robo_car_simple_ros file_name.py
Associated file: decoder.py
This provides a solution for cv_bridge not working and decodes the incoming image into a NumPy array that is then passed to the camera_rgb topic. If cv_bridge is built with python3, then this file is not necessary.
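The idea is simply to reinterpret the raw message bytes as an image array; a sketch, assuming a bgr8-encoded sensor_msgs/Image:

```python
# Sketch of decoding a bgr8 sensor_msgs/Image without cv_bridge: reinterpret
# the raw byte buffer as a height x width x 3 NumPy array.
import numpy as np

def decode_image(msg):
    return np.frombuffer(msg.data, dtype=np.uint8).reshape(msg.height, msg.width, 3)
```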
If the cv2 windows do not open while running ros_racer_calibration_node or ros_racer_launch.launch, then follow the procedure below to potentially resolve the issue.
- Make sure the camera is plugged all the way into its USB socket
- See if the image feed is coming through in another application like cheese (enter
cheese
into a terminal window)
- Check to see if the camera topic is publishing data
rostopic echo /camera_rgb
- Restart ROS core
- Reboot if none of the above worked and try again
sudo reboot now
If the camera is still not working after trying the procedure above, then it could be a hardware issue. (Did the car crash?)
If the throttle and steering are unresponsive while running ros_racer_calibration_node or [ros racer launch](#ros-racer-launch), then follow the procedure below to potentially resolve the issue.
- Make sure ESC is turned on
- Make sure battery is plugged in
- Make sure battery has a charge
- Make sure the servo and ESC wires are plugged into the correct channels of the PWM board, in the correct orientation
- Check to see if the steering and throttle topics are publishing data
rostopic echo /steering
and
rostopic echo /throttle
- Verify that the throttle values found in ros_racer_calibration_node were loaded properly when running [ros racer launch](#ros-racer-launch) (values will be printed to the terminal first when running the launch file)
- Restart ROS core
- Reboot if none of the above worked and try again
sudo reboot now
If the throttle and steering are still not working after trying the procedure above, then it could be a hardware issue. (Did the car crash?)
Using bridge_object.imgmsg_to_cv2() threw errors on our Jetson Nano environment, so we had to resort to our own image decoder function. The function decodeImage() can be imported from decoder.py. If you don't want to use our function, the problem can be avoided by properly building CV_Bridge with Python 3 in the ROS package.
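A hypothetical usage inside an image callback (the arguments shown here are an assumption; check decoder.py for the actual signature):

```python
# Hypothetical usage: decodeImage is assumed to take the raw byte buffer and
# the frame dimensions, returning a NumPy BGR array (verify in decoder.py).
from decoder import decodeImage

def camera_callback(msg):
    frame = decodeImage(msg.data, msg.height, msg.width)
```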
An alternative solution can be found here
If you're having issues using python3, then there is a chance that the virtual environment (explained in Environment Configuration) was not set up properly. Try setting up another environment to see if that solves the issue.
More info found here