
DR2

ISR-Lab project 2017

What we are going to do:

The robot we will work on is the Kobuki. We will use its depth camera within the ROS environment, so that its data can be consumed by AI components (making it compatible with SWI-Prolog, SICStus Prolog and DALI).

Needed hardware:

  1. Kobuki robot
  2. JetsonTK1 board
  3. Orbbec Astra 3D Camera

Software constraints:

  • We will use Python, Prolog and ROS.
  • We are going to follow the Sense-Think-Act paradigm.
  • In simulation, we will use Docker to avoid compatibility conflicts.
  • We will use ROS Indigo due to Jetson TK1 driver limitations (at the moment, Ubuntu 16 has some problems on this board).
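
The Sense-Think-Act paradigm mentioned above can be sketched in plain Python. This is only an illustrative stand-in: on the real robot, sensing would be a ROS topic callback and acting would be a velocity publisher; the function names and the toy world/state dictionaries here are assumptions, not actual project APIs.

```python
# Minimal Sense-Think-Act sketch (plain-Python stand-in, no ROS).

def sense(world):
    """Sense: read whether an obstacle is ahead (stand-in for a bumper callback)."""
    return world.get("obstacle_ahead", False)

def think(obstacle_ahead):
    """Think: decide the next action from the sensed state."""
    return "turn" if obstacle_ahead else "forward"

def act(action, state):
    """Act: apply the chosen action to a toy robot pose."""
    if action == "forward":
        state["x"] += 1
    else:
        state["heading"] = (state["heading"] + 90) % 360
    return state

def step(world, state):
    """One full Sense-Think-Act cycle."""
    return act(think(sense(world)), state)
```

For example, `step({"obstacle_ahead": True}, {"x": 0, "heading": 0})` leaves `x` unchanged and turns the heading to 90.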

Project Goals:

  1. Reconfigure JetsonTK1 board (Done)
  2. Configure Orbbec Astra 3D Camera (Done)
  3. Create a communication channel between the camera and ROS environment (Done)
  4. Create a Docker container with all the needed environment in it to avoid compatibility conflicts (Done)
  5. Integrate Prolog in ROS environment (Done)
  6. Create a simulation and run it on the robot (currently a dummy simulation that we plan to extend. Done)
  7. (Additional) Take camera timestamps and images (Done)
  8. (Optional) Facial recognition ability
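
For goal 7, timestamped image capture boils down to naming each saved frame after its ROS header stamp (seconds plus nanoseconds). The helper below is a hedged sketch of that naming scheme only; on the robot the stamp would come from the `sensor_msgs/Image` header, while here it is plain integers, and the function name is our own invention.

```python
# Build a sortable filename from a ROS-style (secs, nsecs) timestamp.

def stamped_filename(secs, nsecs, prefix="frame", ext="png"):
    """Zero-pad secs (10 digits) and nsecs (9 digits) so that
    lexicographic order of filenames matches chronological order."""
    return "{}_{:010d}_{:09d}.{}".format(prefix, secs, nsecs, ext)

print(stamped_filename(1500000000, 500))  # frame_1500000000_000000500.png
```

The zero-padding matters: without it, `frame_2_...` would sort after `frame_10_...` in a directory listing.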

Use case 1:

Kobuki moves around the room, avoiding obstacles thanks to its bumper sensor, and forwards camera data to the proper ROS topics.
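
The bumper-based avoidance of use case 1 can be sketched as a small policy mapping a bumper reading to a velocity command. This is a plain-Python illustration under stated assumptions: on the real robot the input would be a `kobuki_msgs/BumperEvent` callback and the output a `geometry_msgs/Twist` message; the velocity values and the `avoid` function are ours, not the project's code.

```python
# Toy avoidance policy: bumper reading -> (linear, angular) velocity.

def avoid(bumper):
    """Return a (linear, angular) velocity pair for a bumper reading.
    None means no contact; "LEFT"/"RIGHT"/"CENTER" mean a pressed bumper."""
    if bumper is None:
        return (0.2, 0.0)     # no contact: keep driving forward
    if bumper == "LEFT":
        return (-0.1, -0.5)   # hit on the left: back off, turning right
    if bumper == "RIGHT":
        return (-0.1, 0.5)    # hit on the right: back off, turning left
    return (-0.1, 0.0)        # centre hit: just back up
```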

(Optional) Use case 2:

Kobuki moves around the room, avoiding obstacles and "looking" around thanks to the camera. Kobuki forwards camera data to its Prolog brain, and is able to detect people's faces and recognize them by making queries to that brain.
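
The "query the Prolog brain" flow of use case 2 can be sketched as assert-then-query. In the real system this would go through an actual Prolog bridge (e.g. SWI-Prolog from Python); here a plain dictionary stands in for the knowledge base so only the control flow is shown. The class, the `known_face`-style predicate, and the method names are illustrative assumptions.

```python
# Plain-Python stand-in for the Prolog brain of use case 2.

class Brain:
    """Mimics asserting and querying known_face(FaceId, Name) facts."""

    def __init__(self):
        self.known_faces = {}  # face_id -> person name

    def assertz(self, face_id, name):
        """Like Prolog's assertz(known_face(FaceId, Name))."""
        self.known_faces[face_id] = name

    def query(self, face_id):
        """Like querying known_face(FaceId, Name): the name, or None."""
        return self.known_faces.get(face_id)
```

Usage: after `brain.assertz("f42", "alice")`, a later `brain.query("f42")` identifies the person, while an unknown face id yields `None` and could trigger a learning step.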


(OPTIONAL, only if you have all the needed hardware and do not want to run the simulation only) Robot Instructions:

  1. Start with a clean Ubuntu 14 installation on the Jetson TK1, then follow the instructions at GitLab or GitHub. (Note: the two repositories are mirrored, so they are identical and you can choose either one.)

  2. You now have a fully working ROS Indigo installation which is able to talk to the Kobuki base and the Astra camera, plus a pre-configured catkin workspace at ~/catkin_ws.

  3. It is recommended to connect to the Jetson using separate instances of the PuTTY client, one per ROS module you need to manage.
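
A typical split across those PuTTY sessions might look like the sketch below. The exact launch files depend on how the linked repositories configured the workspace; `kobuki_node minimal.launch` and `astra_launch astra.launch` are the usual ROS Indigo entry points for these devices, but treat them as assumptions and check the repository instructions.

```shell
# Terminal 1: the ROS master
roscore

# Terminal 2: the Kobuki base driver (assumed standard launch file)
roslaunch kobuki_node minimal.launch

# Terminal 3: the Astra camera driver (assumed standard launch file)
roslaunch astra_launch astra.launch

# Terminal 4: free for your own nodes, rostopic echo, etc.
```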

N.B. Due to some package incompatibilities, it has not been possible to install the ros-turtlebot packages on the Jetson TK1: Ubuntu's dpkg reports missing packages and cannot complete the apt-get installation. (Further, time-costly investigation might fix this problem.)

PC Instructions:

  1. To have and use an Indigo ROS container, follow the instructions at: https://github.com/agnsal/docker-IndigoROSdisPyPl
  2. To install the TurtleBot ROS packages:
sudo apt-get install ros-indigo-turtlebot ros-indigo-turtlebot-apps ros-indigo-turtlebot-interactions \
  ros-indigo-turtlebot-simulator ros-indigo-kobuki-ftdi ros-indigo-rocon-remocon ros-indigo-rocon-qt-library \
  ros-indigo-ar-track-alvar-msgs
  3. To run the ROS core:
roscore
  4. To launch the simulation in Gazebo:
roslaunch turtlebot_gazebo turtlebot_world.launch
  5. (TEST) To give commands to the Gazebo turtlebot manually (via keyboard):
roslaunch turtlebot_teleop keyboard_teleop.launch
  6. To download the TurtleBot code into the workspace, build it and run it, follow the instructions at: https://github.com/agnsal/kobukiROSindigo
  7. (IMPORTANT) To save all your work on the container, exit it and commit its state:
exit
docker commit IndigoROSdisPyPl
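
A hedged note on the commit step: `docker commit` takes the container name or ID first, and optionally an image name to tag the result with, which makes the saved state easier to find later. "IndigoROSdisPyPl" is assumed to be the container name from the linked repository; the `indigorosdispypl:saved` tag below is our own choice.

```shell
docker ps -a                                   # find the container name/ID
docker commit IndigoROSdisPyPl indigorosdispypl:saved   # save it as a tagged image
docker images                                  # verify the new image exists
```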
