# Colorhist

This repository contains a new post-processing pipeline for ORK (Object Recognition Kitchen). Colorhist provides support for color detection and comparison of color histograms.

If you want to use this pipeline, there are a few things to take care of.
Installation tutorial:

Create the catkin workspace:

```dockerfile
RUN source /opt/ros/$ROS_DISTRO/setup.bash && \
    mkdir -p /root/catkin_ws/src && \
    cd /root/catkin_ws/src && \
    catkin_init_workspace && \
    cd /root/catkin_ws && \
    catkin_make && \
    echo 'source /root/catkin_ws/devel/setup.bash' >> /root/.bashrc
```

Install the ORK dependencies:

```dockerfile
RUN apt-get -y install libopenni-dev \
    ros-$ROS_DISTRO-ecto* \
    ros-$ROS_DISTRO-opencv-candidate \
    ros-$ROS_DISTRO-moveit-msgs \
    ros-$ROS_DISTRO-openni* && \
    source /root/catkin_ws/devel/setup.bash
```

Clone the ORK GitHub repositories:

```dockerfile
RUN cd /root/catkin_ws/src && \
    git clone http://github.com/wg-perception/object_recognition_core && \
    git clone http://github.com/wg-perception/capture && \
    git clone http://github.com/wg-perception/reconstruction && \
    git clone https://github.com/xMutzelx/ork_renderer.git && \
    git clone http://github.com/wg-perception/tabletop && \
    git clone https://github.com/xMutzelx/tod.git && \
    git clone http://github.com/wg-perception/transparent_objects && \
    git clone http://github.com/wg-perception/object_recognition_msgs && \
    git clone http://github.com/wg-perception/object_recognition_ros && \
    git clone http://github.com/wg-perception/object_recognition_ros_visualization && \
    git clone https://github.com/xMutzelx/ork_tutorials.git
```

Check out the tested package versions:

```dockerfile
RUN cd /root/catkin_ws/src/capture && \
    git checkout 0.3.1 && \
    cd /root/catkin_ws/src/object_recognition_core && \
    git checkout fb3b3df && \
    cd /root/catkin_ws/src/object_recognition_ros_visualization && \
    git checkout f072ccf && \
    cd /root/catkin_ws/src/ork_renderer && \
    git checkout glut_fix && \
    cd /root/catkin_ws/src/reconstruction && \
    git checkout 8adb948 && \
    cd /root/catkin_ws/src/tabletop && \
    git checkout 7d49e3e && \
    cd /root/catkin_ws/src/tod && \
    git checkout kinectv2_refactoring && \
    cd /root/catkin_ws/src/transparent_objects && \
    git checkout 75d7663
```

Install the ROS dependencies:

```dockerfile
RUN cd /root/catkin_ws && \
    apt-get update && apt-get install -y libvlccore-dev python-apt && \
    rosdep install --from-paths src --ignore-src --rosdistro $ROS_DISTRO -y
```

Build the workspace:

```dockerfile
RUN cd /root/catkin_ws && \
    source devel/setup.bash && \
    catkin_make -j4
```

Build Linemod:

```dockerfile
RUN cd /root/catkin_ws/src && \
    git clone https://github.com/xMutzelx/linemod.git && \
    cd /root/catkin_ws/src/linemod && \
    git checkout colorhist_ready && \
    cd /root/catkin_ws && \
    rosdep install --from-paths src -r -y -i && \
    source devel/setup.bash && \
    catkin_make -j4
```

Build Colorhist:

```dockerfile
RUN cd /root/catkin_ws/src && \
    git clone https://github.com/xMutzelx/colorhist.git && \
    cd /root/catkin_ws && \
    rosdep install --from-paths src -r -y -i && \
    source devel/setup.bash && \
    catkin_make -j4
```

I will soon publish a ready-to-use Docker container. A link will follow.

Training:

```shell
rosrun object_recognition_core training -c `rospack find object_recognition_colorhist`/conf/training.ork --visualize
```

The trainer takes the captured color pixels and creates a color histogram.

Detection:

```shell
rosrun object_recognition_core detection -c `rospack find object_recognition_colorhist`/conf/detection.ros.ork
```

The output from Linemod (object ID, segmented image) is analyzed and compared with the relevant models in the database. Now you can detect different variants of one object (Pringles red, green, purple, ...).

Caution: Everything is under construction! It might not work in the current state. A new release will follow soon.

Tutorial (I assume that you have basic knowledge of ORK and Linemod):

1. Add some objects to your database. Important: you can only use trained objects, not modeled objects (for example, meshes modeled in Blender).

2. Define one "parent" model in your database:
   - Add the field "Skip" to your database documents, under the path `by_object_id_and_mesh`:
     - `0`: train it in Linemod (parent model, example: Pringles purple)
     - `1`: do not train it in Linemod (child model, example: Pringles green, yellow, ...)
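For illustration, a parent document carrying this field might look roughly like the sketch below. This is a hypothetical example: the `_id` and `object_name` values are made up, and the surrounding fields depend on your ORK/CouchDB setup.

```json
{
  "_id": "7d7583e438b42e673d3c6a358e00009f",
  "object_name": "pringles_purple",
  "Skip": 0
}
```

A child model (for example, Pringles green) would carry `"Skip": 1` instead, so Linemod trains only the parent.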
3. Train Linemod: ``rosrun object_recognition_core training -c `rospack find object_recognition_linemod`/conf/training.ork --visualize``

4. Train Colorhist: ``rosrun object_recognition_core training -c `rospack find object_recognition_colorhist`/conf/training.ork --visualize``

5. Define the variations of an object by hand:
   - Add the field "Variations" to all parent database documents (example: Pringles purple), under the path `by_object_id_and_ColorHist`:
     - Add all child IDs (example: Pringles yellow, red, ...) to this field, separated by ";". Caution: do not use blanks, and add a ";" at the end (example: "7d7583e438b42e673d3c6a358e00009f;7d7583e438b42e673d3c6a358e03a12a;").
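As a quick sanity check, the "Variations" string can be assembled and validated in the shell before pasting it into the database document. This is a sketch; the child IDs below are hypothetical placeholders for the document IDs of your own child models.

```shell
#!/bin/sh
# Hypothetical child object IDs -- replace with your own document IDs.
CHILD_IDS="7d7583e438b42e673d3c6a358e00009f 7d7583e438b42e673d3c6a358e03a12a"

# Append each ID followed by ';' -- no blanks, trailing ';' included.
VARIATIONS=""
for id in $CHILD_IDS; do
    VARIATIONS="${VARIATIONS}${id};"
done

# Sanity check: hex IDs separated by ';', trailing ';', no blanks.
echo "$VARIATIONS" | grep -Eq '^([0-9a-f]+;)+$' && echo "format OK"
echo "$VARIATIONS"
```

The value printed on the last line is what goes into the "Variations" field.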
6. Test:
   - Start your camera: `roslaunch openni_launch openni.launch`
   - Enable depth registration: `rosrun rqt_reconfigure rqt_reconfigure` (camera -> driver -> depth_registration)
   - Start the detection: ``rosrun object_recognition_core detection -c `rospack find object_recognition_colorhist`/conf/detection.ros.ork``
   - Visualize the results in RViz: `rosrun rviz rviz`

The code for the Kinect v2 is already implemented, but it is commented out. I will soon add a flag for easy switching between the two cameras.
