
Building an autonomous robot that roams farm and forest tracks in Germany

Key Design Questions Going In

Which sensors to use to avoid collisions with obstacles on the path? Right now I focus on the pico flexx module from pmd.

Which battery type to use, and how to power the required microcontrollers, motors, sensors and telemetry? Lead-acid or LiPo is the question, charged from a solar panel.

What to use for telemetry (e.g. if the rover is stuck, if interaction with passers-by via speakerphone is needed, for condition monitoring, or to give general directions to the rover)? GSM is probably the best initially.

What configuration of wheels, motors and steering to use that is simple? Right now I'm thinking of 2 wheels with a geared motor with encoder for the front left and right, and swiveling wheels for the back.

How to protect against water, being run over and vandalism? The main cover will be the solar panel; underneath I will have to create a sturdy box that contains all electronics and sensors. I need to have a 50 cm flag sticking out in the back, white leds in the front, red ones in the back, and green and red ones blinking on the sides to make the device more visible to everyone. I plan to write "Vogelzählung - bitte Lärm vermeiden" ("bird count - please avoid noise") on top. If people approach, I would like to record in a closed circuit what they do to the machine, into the cloud - if nothing happens the recording will be deleted.

bill of materials

initial scope | part | purpose | cost | notes
y, o | scanning lidar YDLIDAR | scanning lidar to detect obstacles and to identify the path | 100,70€ | ydlidar won over rplidar due to slightly better mechanics and specs
 | raspberry pi noir camera v2 | capture the scene in combination with the lidar, analyze the detection performance and provide an image to the operator if the robot is stuck | 24,99€ |
 | lte surfstick + external antenna | ip-based connectivity for the raspberry pi, stream measurements and images, receive commands | |
y, o | raspberry pi | high-level functions and the robot operating system ros, interface to all usb devices, analyzing sensor signals, telemetry hub | 34,50€ |
y, o | 16gb sd card | for the pi os | 10€ |
y, o, r | arduino-compatible board POLOLU A-STAR 32U4 PRIME LV MICROSD | low-level functions like battery, motor control, imu, led display and sleep functions | 28,51€ |
 | adafruit bno055 breakout | imu - absolute orientation inertial measurement unit, also measures the temperature in the body, self-calibrating | 35,36€ | need to remember calibration values on power off
y, o, r | body and sides | wood box initially | |
y, o | 4 wheels | | 24,99€ |
y, o | 4x wheel adapter hex 12mm to 6mm | to connect the axles to the rc car wheels | 2x3,95€ |
y, o | silver steel axle, 6mm diameter, 30cm | | 3€ | needs to be cut to length
y, o | 4 plain bearings, 6mm | | 4x1,49€ |
y, o | 2 couplings, 6mm to 4mm | | 2x2,99€ |
y, o | 2x metal gear motor with encoder | controlled acceleration, deceleration and braking | 2x12,73€ |
y, o, r | dual motor driver | drives the motor voltage and power based on control signals | 29,94€ |
 | 2x bracket for gear motor | fixes the position of the gear motor on the base plate | 6,94€ |
y, o, r | fuse | to avoid a short circuit burning the weakest part of the circuit away | |
y, o, r | 2s lipo battery | to power the motors and potentially the rest | | make sure to only charge it when the temperature in the body is above 0°C
y, o | lipo connector cables | | |
 | lipo pouch | | |
y, o | 1 emergency shut-off switch | | 4,49€ |
------ not in the design anymore -----
 | pmd pico flexx | generating a depth map for obstacle detection and detection of the path | 200-300€ | reach not far enough

command reference

to change between booting to the command prompt or the desktop

sudo systemctl get-default
sudo systemctl set-default multi-user.target
sudo systemctl set-default graphical.target


to install the usb-cam driver for ros

sudo apt-get install ros-kinetic-usb-cam
rosrun usb_cam usb_cam_node

to run rviz, ydlidar

rosrun ydlidar ydlidar_client
roslaunch ydlidar lidar_view.launch
roslaunch ydlidar rviz.launch
roslaunch ydlidar lidar.launch
nohup rosrun usb_cam usb_cam_node &
roslaunch analyze_scan analyze.launch
rosrun rviz rviz -d /home/ubuntu/catkin_ws/src/ydlidar/launch/lidar.rviz

ros stuff

rosparam set /ydlidar_node/frequency 1

nohup rosbag record /usb_cam/image_raw/compressed /scan --split --duration=5m --lz4 &
rosbag play 2018-10-24-20-06-32.bag
rosbag reindex

rostopic list
rostopic echo battery_state
rostopic echo /scan -n1
rostopic info pose
rostopic pub /pose geometry_msgs/PoseStamped '{header: {stamp: now, frame_id: "world"}, pose: {position: {x: 0, y: 0, z: 1}, orientation: {w: 1.0}}}'

rosrun rosserial_python serial_node.py _port:=/dev/ttyACM0

to copy all files from pi to my machine (and into my vm on the way)

rsync -avz ubuntu@rover.local:catkin_ws/src/ana* /users/d039026/dropbox/rover

network and system stuff

sudo shutdown
ssh ubuntu@rover.local
sudo service network-manager restart
sudo pifi set-hostname rover
ps -aux

to make a new project

cd ~/catkin_ws/src
catkin_create_pkg analyze_scan
cd ~/catkin_ws
catkin_make

end-to-end install and test ubuntu and ros on a vmware image and on a pi and arduino and all sensors

This script is my record of all the steps taken from an empty sd card and an empty vmware fusion on a mac to get to a setup where ros and all needed libraries are installed and working with each other. It assumes that the mac with vmware and the pi are in the same wifi network (which could be the mobile phone hotspot or your home network) for testing.

download ubuntu mate 64 bit 16.04.5 xenial and install the iso file in vmware fusion 11 to run the ros environment on the laptop for testing.

download ubuntu mate arm 16.04.2 xenial and install the image via etcher on a 16GB Sandisk Ultra HC1 to run ros on a raspberry pi in the robot.

(? configure vm to make use of 2 cores and 3d hardware acceleration and 1024 MB to make rviz work well)

boot raspberry pi 3 b+ from sd card connected to monitor, keyboard and mouse to configure it

system configuration

connect to wifi network behrens

change keyboard to german keyboard

set user marcusbehrens

set machine name rover

set password

set login automatically

run the software updater - on a raspberry pi this will fail due to not enough space on the boot partition - use sudo apt update and sudo apt upgrade instead; it will take some time

right-click on the terminal application to add an icon to the desktop or at the top - we will need a lot of terminal windows to launch different ros drivers for the lidar and for rosserial and for rviz ...

activate ssh via sudo systemctl enable ssh to allow remote access to the pi or to the workstation - this allows, for example, remote controlling the robot via an ssh session

(failed: install ros using the two-line method - did not work, indigo seems to be the default and file not found)

install ros following this guide: using ros-kinetic-desktop to have ros installed

configure workspace called catkin_ws based on this guide:

do echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc to set your working directory permanently

install the ydlidar driver following the YDLIDAR ROS Manual for the X4 device to the letter, to be able to run the ydlidar node

run roscore in an extra window

try running the ydlidar node by connecting the ydlidar to any serial port and using roslaunch ydlidar lidar_view.launch

-> if you see the laserscan in the rviz window you know pi and ydlidar and ros know of each other and work together


sudo apt install python-pip to install the python package installer needed for rosserial_python (do not upgrade it with pip install --upgrade pip to the latest version - then it will not work anymore). on the pi pip is already part of the ubuntu image, but to make it work you need to add export LC_ALL=C to your .bashrc - otherwise you get a locale.Error.

pip install pyserial to install serial support for python

sudo apt-get install ros-kinetic-rosserial-python to install rosserial_python package which is required to read data from the arduino

sudo adduser marcusbehrens dialout to add your user to the dialout group so this user can access the serial port and reboot

----- on another pc

add arduino ide from

(on linux go through the additional install scripts in the arduino folder (not required on mac or windows) - but beware of how these configuration changes might interfere with the serial configuration for ydlidar or rosserial)

hook up arduino mega to your pc via usb

choose arduino mega from the boards menu

select the port to which the arduino mega is connected

upload blink sketch from examples to arduino mega

-> arduino led should be blinking

download rosserial library using library manager in arduino ide

upload helloworld example to arduino mega

------ on the pi or in the vm

hook up the arduino with helloworld example running via usb to the pi or the vm

run roscore in an extra terminal window

rosrun rosserial_python serial_node.py _port:=/dev/ttyACM0 to get data from the arduino

rostopic list in an extra window to find the chatter topic

rostopic echo chatter to see the data being posted

-> now you know the connection arduino to vm or pi is working

------ next hookup the motor shield to the arduino

using an excel pin map, hook up the motors and encoders to the shield and the arduino pins

try mc33926 demo example from pololu first to only drive the motors

now try motors9 to see it work together with ros and the pi

on the pi start roscore

start rosserial by using rosrun rosserial_python serial_node.py _port:=/dev/ttyACM0

sudo apt-get install ros-kinetic-teleop-twist-keyboard to install the remote control app
rosrun teleop_twist_keyboard teleop_twist_keyboard.py to start it

-> the robot should now be controlled with remote keyboard commands; this should work both from the vmware image and on the pi connected via ssh sessions over wireless

----- hook up bno055 to the arduino mega

connect the wires via a level shifter

start sensorapi example from adafruit library to see values via serial monitor

start orientation2 to see values posted as rostopic

----- hook up neo-6m-c to the arduino mega

connect the wires via a level shifter

start neogps example to test

--- bring the workstation (vmware image on mac) and the pi to consider the pi the ros master - this way you can run rviz, rosbag, the remote control app and rqt_plot on the vmware machine, taking the message broker data from the pi

configure the vmware network to be a bridged network - this makes the ubuntu machine show up in your network

follow this tutorial to try out connection:

now you can run rviz in the vm and the sensors and motors on the pi

---- install the webcam ros driver on the pi

sudo apt-get install ros-kinetic-usb-cam


ls /dev | grep video

rosrun usb_cam usb_cam_node

---- move analyze_scan to the pi

rsync -rv catkin_ws marcusbehrens@rover:.

--- create roslaunch on the pi to start up core and its sensors with one command

nov 21 - motors, encoders and the attitude and heading reference system

Ok, I pulled together the motor shield, the motors and the encoders and got them to work with the libraries provided by pololu rather quickly.

To measure the speed, the encoder provides the position, and every tenth of a second I check how much progress was made; from this I calculate the speed in an interrupt (using the TimerThree library, as TimerOne interferes with the PWM needed for the motors).
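The speed calculation itself is simple; here it is as a plain-Python sketch rather than the actual Arduino interrupt code. The counts-per-revolution and wheel diameter are made-up placeholder values, not the real hardware numbers:

```python
# Sketch of the encoder-based speed calculation (not the actual Arduino code).
# COUNTS_PER_REV and WHEEL_DIAMETER_M are assumed placeholder values.
import math

COUNTS_PER_REV = 1200        # encoder counts per output-shaft revolution (assumed)
WHEEL_DIAMETER_M = 0.12      # wheel diameter in meters (assumed)
WHEEL_CIRCUMFERENCE_M = math.pi * WHEEL_DIAMETER_M
DT = 0.1                     # the timer interrupt fires every tenth of a second

def speed_from_counts(prev_count, curr_count, dt=DT):
    """Return speed in m/s from two successive encoder readings."""
    delta = curr_count - prev_count
    revolutions = delta / COUNTS_PER_REV
    return revolutions * WHEEL_CIRCUMFERENCE_M / dt
```

On the real board the same math runs inside the TimerThree interrupt handler, with `curr_count` read from the encoder library.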

Then I added the PID control - I used the official pid library, and after some trials I got a decent set of parameters. I will need to adjust them once the wheels and the weight of the vehicle are on, as this will change the whole framework. Now I can ride at a constant speed controlled by the encoders and also accelerate and decelerate to desired speeds without too much rocking.
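The PID loop boils down to very little code. This is a minimal Python sketch of what the Arduino PID library does, not the library itself; the gains and the PWM output range are placeholders, not my tuned parameters:

```python
class PID:
    """Minimal PID controller, mirroring the update step of the Arduino PID
    library. Gains are placeholders and would need re-tuning once the wheels
    and the vehicle weight are added."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.05, out_min=-255, out_max=255):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt=0.1):
        """One control step: setpoint/measured speed in, motor command out."""
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        # clamp to the PWM range so the motor driver never sees an
        # out-of-range command
        return max(self.out_min, min(self.out_max, out))
```

Calling `update(target_speed, measured_speed)` once per encoder interval gives the new PWM duty for the motor driver.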

The big challenge was how to connect the bno055 sensor for heading and attitude to my boards. It failed via i2c from the bno055 to the 32u4-based board I have been using - all sorts of tricks (pull-up resistors) did not help. I can still try to go down from the default 400kHz to 50kHz to see if this helps.

It also failed via i2c from a nodemcu to the bno055. Actually, it did work out of the box with the serial.print example, but piping it through rosserial did not work for some reason. I adjusted the baud rate, but this did not help.

The next attempt could be i2c to the pi, but I have seen lots of reports where the pi and i2c to the bno055 are troublesome, and I might want to wake up the whole system on a movement interrupt from the bno055, which will not be possible via the pi.

I guess I will have to iterate a little on this or simply add an arduino nano or pro or mini to the mix to make it work.

nov 27 - hooking it all up

after a lot of experimentation I got everything hooked up to the arduino or to the raspberry pi. but the big problem now is hooking it all up in parallel and keeping it working. here is what i did in order:

as discussed before, i put a pid controller and the motor driver onto the arduino. then i added a library that uses the serial port to communicate with the raspberry via the rosserial protocol. this also meant no more debugging output from the arduino. but i was able to see the motors node created in ros on the raspberry and could even remote control it via the teleop package in ros. this was promising.

then the ordeal started with hooking up the bno055 - i wanted it to eventually wake a sleeping arduino, hence i attached it to the arduino. the default is i2c, but for whatever reason the i2c scanner and other examples did not find it. i checked the wiring and added smaller pull-up resistors to increase the speed, to no avail. so i hooked it up to another microcontroller that was 3.3v based - an esp8266. this worked right away, but when i tried to hook this up via rosserial to the raspberry i did not get it to work, although i made sure to use 57600 baud on both sides. oh well. i then figured out that with the arduino, the ydlidar sensor, the webcam and a potential usb lte stick i would not have any of the 4 available usb ports left on the pi to hook up another microcontroller - so i had to hook up the bno055 to the pi or the arduino anyway. oh well.

i procrastinated and then went ahead hooking up the gps - starting with the pi. i could not hook it up to the arduino, as there i still wanted to hook up the bno055 somehow, and of the 4 individually addressable interrupt pins on a 32u4, 2 were taken by the interrupts needed for the wheel encoders. using the serial port via uart (and not via usb) on the pi should have been easy, but it was not: i had to follow instructions to switch the 2 uarts around between bluetooth and the serial console, which took me 4 hours to get to work. then i added the bno055 to the arduino and, for a reason i cannot tell, it now worked - this was good. but when i then tried rosserial again, to at the same time see data coming from the arduino to the pi, it did not work anymore, where before it had worked so well. rosserial is hard to debug as you have no serial output. at this point i got really frustrated and decided to build up the pi image from scratch, to get rosserial working first and then add the other options. on the pi side i will also build up the image much more carefully, remembering and documenting each command.

as a bottom line, i think using an arduino mega, where you have plenty of interrupt lines and 4 serial ports, is better for hooking up sensors, which usually work out of the box with arduino. then use the raspberry only for the usb-based sensors and for bringing it all together. and use the logging available via rosserial for debugging the arduino sketch.

oct 27 - first test of the sensor in the wild

today i went out for the first time with the robot - with wheels, pi, webcam and lidar only. I pushed it like a stroller in front of me and guided it around the forest and the neighbourhood for about an hour.

to get started i used 2 usb battery packs to power the lidar and my mobile phone hotspot to have both the pi and my laptop in the same network. then i did an ssh from the laptop to the pi and triggered the webcam via "nohup rosrun usb_cam usb_cam_node &", the lidar via "nohup roslaunch ydlidar ydlidar.launch &" and the recording via "nohup rosbag record /usb_cam/image_raw/compressed /scan --split --duration=5m --lz4 &".

The nohup and the & take care of running the job in the background and not quitting when my ssh session ends. The record options compress the recording and split it every 5 minutes (if power fails, i do not lose the whole recording).

Back at home i stopped the recording and all else and then sat down in my basement to review 50 minutes of recording.

I did not get virtualbox to work and vmware costs 50€ - so i did not get a decent vm with rviz running on the recordings and looked at all of it on the pi itself.

Observations in the data:

leaves make the line rather rugged

the line is really not straight, but the closer it is, the more likely it is in the same position

the road can be tilted, based on shaking or based on the road itself - so the line is typically not horizontal

the key is a line that is not jagged but smooth (i need to find the right mathematical function for this)

these are patterns that I found to be re-occurring:

  1. the line from the floor is sometimes interrupted, with the right line low and the left line high - i attribute this to the laser making a full circle and then a short delay before it continues. so if i turn the laser around 180° this should go away TODO
  2. the half circle in the back that is me
  3. the 2 dots are either some other source of error or the stick that i use for pushing. i have to test this explicitly next time TODO

below is a situation where on the left there is a hill and on the right it goes downhill. the downhill on the right is not so easy to see, but points far away from the line towards the top right are an indication.

Next I want to create a node that takes the laserscan and comes up with a model of the world ahead of and around me that shows where it is safe to go and where not. i'm uncertain if this model should be based on angles and depth information (very similar to the laserscan - could be input to "traditional" navigation algorithms in ros) or if it should be more like ranges of angles where going forward is safe, plus a value indicating how close i am to trouble if going forward further.
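The second variant (ranges of safe angles) could start as small as this sketch. It is pure Python with no ros, one range reading per degree, and the clearance threshold and the "zero means no return" convention are my assumptions, not measured values:

```python
# Sketch: turn a laser scan (list of range readings, one per degree) into
# "safe" angular sectors. A beam counts as safe if nothing is closer than
# CLEARANCE_M. All numbers and conventions here are assumptions.
CLEARANCE_M = 1.0

def safe_sectors(ranges):
    """Return a list of (start_deg, end_deg) sectors that are safe to enter."""
    sectors = []
    start = None
    for deg, r in enumerate(ranges):
        # r == 0.0 is treated as "no return", i.e. nothing close (assumed)
        clear = r >= CLEARANCE_M or r == 0.0
        if clear and start is None:
            start = deg                      # a safe sector begins
        elif not clear and start is not None:
            sectors.append((start, deg - 1)) # the sector just ended
            start = None
    if start is not None:
        sectors.append((start, len(ranges) - 1))
    return sectors
```

A steering node could then pick the widest sector closest to the current heading, and the distance to the nearest obstacle inside it would be the "how close to trouble" value.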

What is really good with the pushbroom laser configuration is that the area in front of me comes in at a very high resolution compared to all the rest.

Coming up next: a hole in the ground! Not very visible in the lidar - in the image it is filled with leaves. I guess "smoothness" of the curve means that the first derivative does not have any jumps.

Our first real obstacle is approaching:

Paved Paths are of course very nice to sense:

software installation and first steps with ydlidar and webcam on a dummy robot

  1. Downloaded
  2. Used Etcher on a Mac to install image on an sd card
  3. Followed the instructions to boot to the terminal
  4. Was able to follow the manual for the ydlidar x4 device (make would fail if run in desktop environment)
  5. I reconfigured to have the desktop show up again (see step 3)
  6. then I ran the launch commands in the manual from step 4 (i initially forgot to unplug the device and plug it back in - but after a reboot i was ok)
  7. initially I failed to get the lidar to work, but then i added a power bank as an additional power supply
  8. Then I could see the lidar operating with the rviz application

My next goal was to do a test recording from the lidar sensor combined with a camera image. I initially thought a screen recording of both rviz with the lidar and a webcam view would be a good idea. But of the 4 screen recording tools for unix that i tried, none really worked well.

So i looked at rosbag, which allows recording all of what is captured.

First hurdle was to be able to play it back in rviz. After finding out how to start rviz relative to the ydlidar sensor i got this to work (see command list below).

Next hurdle was to install the usb cam driver. This was easy (see commands below).

Now I was able to generate a rosbag with the image and the lidar with timestamps that i could replay into rviz later. The data rate was 1MB per second, which at 9GB free meant that if i record 5GB it is about 80 minutes of recording - good enough for the first session.
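As a sanity check, the recording budget works out like this (a quick sketch; 1 MB/s is the rate measured above, the 5 GB budget leaves headroom within the 9 GB free):

```python
# Recording budget: rosbag writes about 1 MB/s after lz4 compression.
DATA_RATE_MB_S = 1.0
BUDGET_MB = 5000  # record at most 5 GB of the 9 GB free

minutes = BUDGET_MB / DATA_RATE_MB_S / 60
print(round(minutes))  # roughly 83 minutes, "about 80" as noted above
```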

Splitting up the recording into multiple files might be a good idea, so as not to end up with one big but corrupted file.


downloaded an image of raspbian stretch with ros kinetic and installed it on a 16gb sd card to go into a raspberry 3b.

followed the instructions in the ros manual to try out the ydlidar in the ros viewer.

October 12th, 2018: The pmd pico flexx seems to not provide enough range outdoors. If you look at you will see, that the left and right side of a farm track will be hardly visible. Due to this I now consider using a 360 rotating lidar. I plan to tilt the lidar foward to allow to understand the profile of the ground in front of me. I also plan to use 2 mirrors to use the rear-facing 90° for another level of scanning further into the distance looking forward. This way I get 2 scans 5 times a second and If I get a distant line like this: ^^^\___/^^^and a closer line like this ^\_____/^ then I get a good impression of the how the track runs.