Intelligent Robots (42): IP-camera

1. GStreamer
2. gst-launch
3. Using an IP camera in Linux
4. Using an IP camera in ROS

  • 1. GStreamer

GStreamer is a toolkit for building audio/video pipelines, for example streaming a media file to the internet, or streaming a test.avi file out through a V4L camera device. For V4L capture the main tools are:
1 - gst-launch-1.0 for capturing video
2 - FFmpeg for saving and editing video
3 - v4l2-ctl for controlling your video card
4 - mpv for viewing videos
5 - gst-inspect for listing elements

  • 2. gst-launch

GStreamer is mainly operated through the gst-launch CLI. This command-line tool builds a pipeline, initializes it, and runs it; for example, it can make an AVI media file look like a camera, loop an IP camera back as a USB camera, and so on.
1 - The simplest example:
$ gst-launch-1.0 fakesrc ! fakesink
This builds the simplest pipeline: it connects a single (fake) source to a single (fake) sink.
2 - Recording video: create an AVI file with raw video and no audio:
$ gst-launch-1.0 v4l2src device=$VIDEO_DEVICE ! $VIDEO_CAPABILITIES ! avimux ! filesink location=test.avi
3 - Recording audio: create an AVI file with raw audio and no video:
$ gst-launch-1.0 alsasrc device=$AUDIO_DEVICE ! $AUDIO_CAPABILITIES ! avimux ! filesink location=test.avi
4 - Recording audio and video:
$ gst-launch-1.0 v4l2src device=$VIDEO_DEVICE ! $VIDEO_CAPABILITIES ! mux. alsasrc device=$AUDIO_DEVICE ! $AUDIO_CAPABILITIES ! mux. avimux name=mux ! filesink location=test-$.avi
This pipeline has three parts: a video source leading to a named element (! name. with a full stop means "pipe to the named element"); an audio source leading to the same element; and a named muxer element leading to a file sink.
5 - Splitting one source into multiple outputs:
$ gst-launch-1.0 v4l2src device=$VIDEO_DEVICE ! $VIDEO_CAPABILITIES ! avimux ! tee name=network ! filesink location=test.avi network. ! tcpclientsink host=127.0.0.1 port=5678
This sends your stream to a file (filesink) and out over the network (tcpclientsink). To make this work, you’ll need another program listening on the specified port (e.g. nc -l 127.0.0.1 -p 5678; a minimal Python listener is sketched after this list).
6 - Viewing the picture from the camera:
$ gst-launch-0.10 v4l2src do-timestamp=true device=$VIDEO_DEVICE ! video/x-raw-yuv,format=(fourcc)UYVY,width=320,height=240 ! ffmpegcolorspace ! autovideosink
7 - Other options
gst-launch -v --gst-debug-level=3 prints more debug information.
See "IP camera, gstreamer and virtual video devices" for how to specify the frame-size parameters.
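
For example 5 above, instead of nc you can run a small Python TCP server to receive what tcpclientsink sends. This is a minimal sketch (port 5678 matches the pipeline above; the output file name network-copy.avi is just a placeholder):

#!/usr/bin/env python
# Minimal TCP listener for the tee/tcpclientsink example (item 5 above):
# accept one connection on 127.0.0.1:5678 and dump the received bytes
# (the muxed AVI stream) into a file.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 5678))
server.listen(1)
print("waiting for tcpclientsink to connect ...")
conn, addr = server.accept()
print("connected from", addr)
with open("network-copy.avi", "wb") as out:
    while True:
        data = conn.recv(4096)
        if not data:              # sender closed the connection
            break
        out.write(data)
conn.close()
server.close()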

  • 3. Using an IP camera in Linux

1. Installation
1.1 From the package:
$ sudo apt-get install v4l2loopback-dkms
1.2 From source:
If the above fails: the apt-get method may fail during install because some warnings are treated as errors (not sure whether it is a compiler-compatibility issue). In that case build from source:
$ sudo su
$ git clone https://github.com/umlaeute/v4l2loopback.git
$ cd v4l2loopback
$ make && make install
(I no longer remember whether sudo is needed for make install.)
2. Load the module
2.1 Load:
$ sudo modprobe v4l2loopback
2.2 If it reports not found:
$ sudo depmod v4l2loopback
$ sudo modprobe v4l2loopback
2.3 If it still fails:
$ sudo apt-get install linux-generic    <-- note: this may freeze/change your kernel
$ sudo apt-get install v4l2loopback-dkms
$ sudo depmod v4l2loopback
$ sudo modprobe v4l2loopback
or
$ sudo modprobe v4l2loopback video_nr=7
3. Verification
$ v4l2-ctl --list-devices
The virtual device shows up; here it is video1, while my existing USB camera is video0.
Dummy video device (0x0000) (platform:v4l2loopback-000):
/dev/video*
You can also check the kernel log:
$ dmesg | grep v4l2
If the output contains
libv4l2: error getting pixformat: Invalid argument
or:
v4l2loopback: module verification failed: signature and/or required key missing - tainting kernel
these messages can be ignored.

4. Create a media source
The simplest is:
$ gst-launch v4l2src device=/dev/video0 ! v4l2sink device=/dev/video1
4.1 To receive a stream:
$ gst-launch-0.10 udpsrc port=1234 ! theoradec ! ffmpegcolorspace ! ximagesink
To send a stream:
$ gst-launch-0.10 v4l2src ! ffmpegcolorspace ! theoraenc ! udpsink host=127.0.0.1 port=1234
Note: start the receiver first.
4.2 Display or save
Display the video:
$ gst-launch v4l2src ! xvimagesink
Save the video to a file:
$ gst-launch v4l2src ! video/x-raw-yuv,width=320,height=240 ! ffmpegcolorspace ! jpegenc ! avimux ! filesink location=osug-1.avi
4.3 A test video source masquerading as camera video1:
$ gst-launch videotestsrc ! v4l2sink device=/dev/video1
Check it, ok:
$ cheese --device=/dev/video1
Camera 0 masquerading as camera 1:
$ gst-launch v4l2src device=/dev/video0 ! v4l2sink device=/dev/video1
Check it, ok:
$ cheese --device=/dev/video1
A file masquerading as camera 1:
$ gst-launch filesrc location="/home/zdh991/tmp/osug-1.avi" ! avidemux ! v4l2sink device=/dev/video1
Check it, ok:
$ cheese --device=/dev/video1
An IP camera masquerading as camera 1:
$ gst-launch souphttpsrc location=http://192.168.1.126:80 ! jpegdec ! ffmpegcolorspace ! v4l2sink device=/dev/video1
Check it, ok:
$ cheese --device=/dev/video1
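
Besides cheese, you can also check the loopback device programmatically. A minimal sketch using OpenCV's Python bindings (an assumption: opencv-python is installed, and the loopback device is /dev/video1, i.e. index 1, as above):

#!/usr/bin/env python
# Quick sanity check of the v4l2loopback device: grab a few frames
# from /dev/video1 and report their size.
import cv2

cap = cv2.VideoCapture(1)   # index 1 -> /dev/video1, as created above
if not cap.isOpened():
    raise SystemExit("cannot open /dev/video1 - is the gst-launch feeder still running?")
for i in range(10):
    ok, frame = cap.read()
    if not ok:
        print("frame %d: read failed" % i)
        continue
    print("frame %d: %dx%d" % (i, frame.shape[1], frame.shape[0]))
cap.release()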

5. Testing with an iPhone as the IP camera
Reference: IP camera, gstreamer, and virtual video devices.
If you do not have an IP camera, you can use an iPhone camera instead.
Install an app such as Third Eye and let the iPhone join the local network and get an address assigned.
Once the iPhone has an IP address, test from the PC by entering iphone-ipaddress:8091 in a browser; if the iPhone camera's video shows up in real time, it is working.

Use gst-launch with the iPhone's ip:port to treat this camera as a Wi-Fi IP camera:
$ gst-launch souphttpsrc location="http://172.20.10.11:8091" ! jpegdec ! ffmpegcolorspace ! v4l2sink device=/dev/video1
Here 172.20.10.11:8091 is the IP camera's address and port. Note that the PC and the iPhone must be on the same subnet.

  • 4. Using an IP camera in ROS

ROS provides the gscam node; the images can then be viewed with image_view, for example:
$ gst-launch souphttpsrc location=http://[user]:[password]@[camera_ip]/mjpg/video.mjpg ! jpegdec ! v4l2sink device=/dev/video0
$ GSCAM_CONFIG="rtspsrc location=rtsp://CameraIP/ipcam.sdp ! video/x-raw-rgb,framerate=30/1 ! ffmpegcolorspace"
$ rosrun image_view image_view image:=/gscam/image_raw
gscam can also be combined with the rosbridge websocket, e.g. wss://ip:port.
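
To consume the gscam stream from your own node rather than image_view, a minimal rospy subscriber could look like the sketch below (assumptions: cv_bridge is installed and gscam is publishing /gscam/image_raw as above):

#!/usr/bin/env python
# Minimal subscriber for the gscam image topic used above.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def image_cb(msg):
    # convert the ROS image to an OpenCV BGR array
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    rospy.loginfo("got frame %s in frame_id %s" % (frame.shape, msg.header.frame_id))

rospy.init_node('gscam_listener')
rospy.Subscriber('/gscam/image_raw', Image, image_cb)
rospy.spin()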

  • A. gst-launch notes

This tool can only build relatively simple pipelines. In particular, it only emulates the interaction between a pipeline and an application above a certain level. It is very handy for quickly testing a pipeline. Note that gst-launch is primarily a debugging tool; real applications should create pipelines with the gst_parse_launch() API.
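
For reference, the same description string can be handed to the parse-launch API from application code. A minimal sketch with the GStreamer 1.0 Python bindings (PyGObject), using videoconvert, the 1.0 replacement for ffmpegcolorspace:

#!/usr/bin/env python
# Build and run a pipeline from a gst-launch style description string,
# the application-side equivalent of the command lines above.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch("videotestsrc ! videoconvert ! autovideosink")
pipeline.set_state(Gst.State.PLAYING)

# block until an error or end-of-stream, then shut down
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)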

The gst-launch command line consists of a list of options followed by a PIPELINE-DESCRIPTION. Put simply, a PIPELINE-DESCRIPTION is a series of elements separated by ! :
$ gst-launch-0.10 videotestsrc ! ffmpegcolorspace ! autovideosink
This uses three elements: videotestsrc, ffmpegcolorspace and autovideosink. GStreamer connects their output and input pads; if more than one input/output pad is available, the pads' Caps are used to find compatible ones.

Elements
Elements may have properties, written on the command line as property=value, with multiple properties separated by spaces. Use the gst-inspect tool to look up an element's properties.
$ gst-launch-0.10 videotestsrc pattern=11 ! ffmpegcolorspace ! autovideosink
An element can be given a name with the name property; this makes it possible to build more complex pipelines with branches. With a name, an element created earlier can be referred to again, which is essential when using elements with multiple pads (such as a demuxer or a tee).
$ gst-launch-0.10 videotestsrc ! ffmpegcolorspace ! tee name=t ! queue ! autovideosink t. ! queue ! autovideosink
This connects videotestsrc to ffmpegcolorspace and then to a tee element; the tee is named 't', and one branch goes to a queue and an autovideosink, while the other branch goes to another queue and autovideosink.

Pads
When connecting two elements, instead of letting GStreamer choose a Pad, you can specify the Pad directly. This is done by appending .padname after a named element (the element must be named first). Use gst-inspect to look up the names of an element's pads.
$ gst-launch-0.10.exe souphttpsrc location=http://docs.gstreamer.com/media/sintel_trailer-480p.webm ! matroskademux name=d d.video_00 ! matroskamux ! filesink location=sintel_video.mkv
This uses souphttpsrc to fetch a media file from the internet; the file is in WebM format. The file is opened with matroskademux; because the media contains both audio and video, two output Pads are created, named video_00 and audio_00. video_00 is connected to a matroskamux element to re-package the video stream, which is finally connected to a filesink, so the stream is saved to a file named sintel_video.mkv. In short: a WebM file is fetched, the audio is dropped, and only the video is stored in a new file. To keep the audio instead, do this:
$ gst-launch-0.10.exe souphttpsrc location=http://docs.gstreamer.com/media/sintel_trailer-480p.webm ! matroskademux name=d d.audio_00 ! vorbisparse ! matroskamux ! filesink location=sintel_audio.mka
Here the vorbisparse element extracts some information from the stream and puts it into the Pad's Caps, so that the next element, matroskamux, knows how to handle the stream. This step was not needed when extracting the video, because matroskademux had already done it.
Note that in the two examples above the media is neither decoded nor played; the data is merely moved around.

Caps filters
When an element has more than one Pad, connecting the next element can be ambiguous, because the downstream element may have more than one compatible input pad, or its input pad may be compatible with all of the output pads. In such cases GStreamer connects the first Pad that is available, which effectively means GStreamer picks one at random.
$ gst-launch-0.10 souphttpsrc location=http://docs.gstreamer.com/media/sintel_trailer-480p.webm ! matroskademux ! filesink location=test
This uses the same media file and demuxer as the previous example. The filesink input pad accepts any format, which means it can take any media type. So which pad of matroskademux gets connected to filesink, video_00 or audio_00? There is no way to know. To remove this ambiguity, the previous example used pad names; here a Caps filter is used instead:
$ gst-launch-0.10 souphttpsrc location=http://docs.gstreamer.com/media/sintel_trailer-480p.webm ! matroskademux ! video/x-vp8 ! matroskamux ! filesink location=sintel_video.mkv
A Caps filter acts like an element that does nothing except accept only the given Caps. In this example, a video/x-vp8 Caps filter is inserted between matroskademux and matroskamux, stating that we only want the output Pad of matroskademux that can produce this type of video.
Use the gst-inspect tool to see which Caps an element can accept and produce, and gst-discoverer to see which Caps a file contains. To see the Caps an element produces inside a running pipeline, use the -v option of gst-launch.
A pipeline that rescales video: the videoscale element resizes its input before passing it on. In this example a Caps filter sets the video size to 320x200:
$ gst-launch-0.10 uridecodebin uri=http://docs.gstreamer.com/media/sintel_trailer-480p.webm ! queue ! videoscale ! video/x-raw-yuv,width=320,height=200 ! ffmpegcolorspace ! autovideosink

Intelligent Robots (41): USB-camera

1. Camera drivers
2. Using the UVC driver in Linux
3. Using the uvc driver in ROS
4. Using the usb driver in ROS

  • 1. Camera drivers

1. UVC
The first concept is UVC: USB Video Device Class. Most USB cameras (including ones integrated on the motherboard) are UVC compatible, and UVC is implemented independently of the operating system (Linux, Windows, etc.).
2. V4L
The second concept is V4L, or V4L2: Video4Linux. It provides a set of APIs; the newer version is 2, hence V4L2.
3. usb_cam
The ROS usb_cam package is built on V4L. It provides the usb_cam_node node, which publishes the topic <camera_name>/image with message type sensor_msgs/Image. The main parameters are video_device, camera_name, pixel_format, camera_frame_id (the camera's tf frame) and brightness.
4. uvc_camera
The ROS uvc_camera package handles UVC video streams and also supports stereo vision with two cameras. It provides a UVC/USB camera node whose main topic is the image_raw video stream; the main parameters are device, fps and frame_id. This package is no longer maintained and has been replaced by libuvc_camera.
5. libuvc_camera
This one requires the user to have write permission on the camera's entry under /dev/bus/usb/, so it is best to run:
$ sudo -E rosrun libuvc_camera camera_node vendor:=<the camera's vendor id>
6. cv_camera
This package is the OpenCV-based camera driver provided by ROS.

  • 2. Using the UVC driver in Linux

1. Check the ID
$ lsusb
–Bus 001 Device 003: ID 1578:0076
–Bus 001 Device 004: ID 0547:4d35 Anchor Chips, Inc.
–Bus 001 Device 005: ID 04f2:b2ea Chicony Electronics Co., Ltd Integrated Camera [ThinkPad]
If you have ever developed a USB device driver, you will recognize 04f2:b2ea as the vendor and product fields of the device descriptor; the very common Logitech C270 is 046d:0825. You can Google this ID to check whether it is on the UVC supported-device list, or run locally:
$ lsusb -d 04f2:b2ea -v | grep "14 Video"
bFunctionClass         14 Video
bInterfaceClass        14 Video
bInterfaceClass        14 Video
bInterfaceClass        14 Video
bInterfaceClass        14 Video
bInterfaceClass        14 Video
bInterfaceClass        14 Video
bInterfaceClass        14 Video
This output shows the camera is UVC compatible; otherwise it is a non-UVC camera.
2. Check the device node
$ ls /dev | grep video
–video0
This shows the hardware is present.
3. Check the loaded module
$ lsmod | grep uvcvideo
uvcvideo               72275  0
videobuf2_core         39510  1 uvcvideo
videodev              108952  2 uvcvideo,videobuf2_core
videobuf2_vmalloc      13048  1 uvcvideo
4. If it does not show up, load the module explicitly:
$ sudo modprobe uvcvideo
5. Verify
$ lshal | grep video
system.hardware.primary_video.product = 1029  (0x405)  (int)
system.hardware.primary_video.vendor = 5549  (0x15ad)  (int)
info.linux.driver = 'uvcvideo'  (string)
info.linux.driver = 'uvcvideo'  (string)
info.capabilities = {'video4linux', 'video4linux.video_capture'} (string list)
info.category = 'video4linux'  (string)
info.subsystem = 'video4linux'  (string)
info.udi = '/org/freedesktop/Hal/devices/usb_device_4f2_b2ea_noserial_if0_video4linux'  (string)
linux.device_file = '/dev/video0'  (string)
linux.subsystem = 'video4linux'  (string)
linux.sysfs_path = '/sys/devices/pci0000:00/0000:00:11.0/0000:02:03.0/usb1/1-2/1-2:1.0/video4linux/video0'  (string)
video4linux.device = '/dev/video0'  (string)
video4linux.version = '2'  (string)
$ lshal | grep Cam
info.product = 'Integrated Camera [ThinkPad]'  (string)
usb_device.product = 'Integrated Camera [ThinkPad]'  (string)
usb.interface.description = 'Integrated Camera'  (string)
info.product = 'Integrated Camera'  (string)
info.product = 'Integrated Camera'  (string)
input.product = 'Integrated Camera'  (string)
info.product = '5.0M USB2.0 Camera'  (string)
usb_device.product = '5.0M USB2.0 Camera'  (string)
6. Output showing everything is OK:
$ dmesg | grep video
[  409.572621] Linux video capture interface: v2.00
[  409.663484] usbcore: registered new interface driver uvcvideo
[ 4822.404813] uvcvideo: Found UVC 1.00 device Integrated Camera (04f2:b2ea)
$ dmesg | grep Cam
[  168.027246] usb 1-1: Product: USB2-Camera
[  267.368271] usb 1-1: Product: USB2-Camera
[ 3700.024321] usb 1-1: Product: 5.0M USB2.0 Camera
[ 4822.259972] usb 1-2: Product: Integrated Camera
[ 4822.404813] uvcvideo: Found UVC 1.00 device Integrated Camera (04f2:b2ea)
[ 4822.530930] input: Integrated Camera as /devices/pci0000:00/0000:00:11.0/0000:02:03.0/usb1/1-2/1-2:1.0/input/input5
$ dmesg|tail -n20
7. Start the video
$ cheese --device=/dev/video0
Here cheese is used to view the video.

  • 3. Using the uvc driver in ROS

1. Install uvc_cam
$ cd ~/catkin_ws
$ git clone https://github.com/ericperko/uvc_cam.git
$ cd ..
$ rosmake uvc_cam
The [uvc_cam] package now lives under the user's catkin_ws, so make sure ROS_PACKAGE_PATH includes that location; you should be able to roscd into uvc_cam.
Of course, installing the version provided by ROS also works:
$ sudo apt-get install ros-hydro-uvc-camera
In that case the [uvc_camera] package lives under the share directory in /opt, but crucially you then cannot test the later rbx1 examples.
2. Driver test
Run the test launch file:
$ roslaunch uvc_cam test_uvc.launch
View the video:
$ rosrun image_view image_view image:=/camera/image_raw
ok
3. Further testing
3.1 Borrow the ready-made rbx1 launch file to run the [uvc_cam] driver (not [uvc_camera]):
$ roslaunch rbx1_vision uvc_cam.launch device:=/dev/video0
A /uvc_cam_node node now appears and many topics are added.
3.2 Or use [uvc_camera] instead of [uvc_cam]:
$ roslaunch rbx1_vision uvc_camera.launch device:=/dev/video0
Likewise, a /uvc_camera_node node appears and many topics are added.
3.3 Following the examples, the rest of this tutorial still uses [uvc_cam].
3.4 View the video:
Use image_view from the vision_opencv stack:
$ rosrun image_view image_view image:=/camera/rgb/image_color
Or in rqt: Plugins -> Image View -> select /camera/rgb/image_color
$ rqt
ok

  • 4. Using the usb driver in ROS

1. Install usb_cam
$ cd ~/catkin_ws/src
$ git clone https://github.com/bosch-ros-pkg/usb_cam.git
$ cd ~/catkin_ws
$ catkin_make
Or:
$ cd ~
$ rosmake usb_cam
2. Driver test
Run the test launch file:
$ roslaunch usb_cam usb_cam-test.launch
The usb_cam-test.launch script under usb_cam's launch directory sets pixel_format to yuyv; you may need to change it to mjpeg and save the file.
It can also be set externally:
$ rosparam set /usb_cam/pixel_format yuyv
View the video:
$ rosrun image_view image_view image:=/?????
3. For further testing, write a usb_cam.launch
$ cd ~/catkin_ws/src/rbx1/rbx1_vision/launch
Model it on the format of uvc_cam.launch:
<launch>
<arg name="device" default="/dev/video0" />
<node name="uvc_cam_node" pkg="uvc_cam" type="uvc_cam_node" output="screen">
<remap from="camera/image_raw" to="camera/rgb/image_color" />
<param name="device" value="$(arg device)" />
<param name="width" value="320" />
<param name="height" value="240" />
<param name="frame_rate" value="20" />
<param name="exposure" value="0" />
<param name="gain" value="100" />
</node>
</launch>
Write it out as the usb_cam.launch script:
$ nano usb_cam.launch
<launch>
<arg name="device" default="/dev/video0" />
<node name="usb_cam_node" pkg="usb_cam" type="usb_cam_node" output="screen">
<remap from="usb_cam_node/image_raw" to="camera/rgb/image_color" />
<param name="video_device" value="$(arg device)" />
<param name="image_width" value="320" />
<param name="image_height" value="240" />
<param name="framerate" value="20" />
<param name="brightness" value="128" />
<param name="contrast" value="128" />
<param name="saturation" value="70" />
</node>
</launch>
Here usb_cam is remapped to camera/rgb/image_color so that it behaves like uvc_cam; if needed, also add a parameter to change the image format:
<param name="pixel_format" value="yuyv" />
4. Further testing
Run the driver:
$ roslaunch rbx1_vision usb_cam.launch device:=/dev/video0
View the video:
$ rosrun image_view image_view image:=/camera/rgb/image_color

Intelligent Robots (37): IMU Sensor

1. setup robot
3. od_node
5. laser_node
7. imu_node

1. about imu sensor
2. about imu usage
3. about imu code
3.1 odom-node.cpp
3.2 imu-node.cpp
4. about imu debug
REF:
https://answers.ros.org/question/200480/imu-message-definition/
https://github.com/jeskesen/i2c_imu/blob/master/src/i2c_imu_node.cpp
http://codegist.net/code/flora-9dof-accelerometer%2Bgyroscope%2Bmagnetometer—lsm9ds0/

(If running in an Oracle VM VirtualBox: install the guest additions, enable a bridged network interface, map COM1 to the host serial device, then install Ubuntu 14.04 amd64 and ROS, set up the environment, initialize catkin_ws, and start the master.)

1. setup robot
$ roslaunch funbot r_rbt.launch

.. …

This launch file loads the serial driver for odom, the Ethernet driver for the laser, and the USB driver for the IMU.
It also statically publishes a tf transform from base_link to base_laser,
and a static tf transform from base_link to imu_link.
If a URDF robot model has already been created, do not publish these static transforms again.

3. od_node

5. laser_node

7. imu_node

1. about imu sensor

A big problem, both in the official ROS documentation and among developers, is the message format of the IMU versus odometry.
The odom message, nav_msgs/Odometry, names two frames, so there is no ambiguity for tf.
The imu message, sensor_msgs/Imu, has only a single coordinate frame, which is ambiguous:
Q:
I am confused about the IMU though: its heading estimate should be in the map frame as it is absolute, but its motion estimates should be in the base_link frame because they are relative to the robot? The imu message only has a single frame_id, so which should I set it to?
Q:
It seems clear that nav_msgs::Odometry.pose.pose.orientation goes from header.frame_id to child_frame_id (or parent to child), but there is no child_frame_id in the sensor_msgs::Imu message. The two obvious choices are map and odom, but I don't know if one is preferred over the other.

The key is to understand that there are still, in effect, two frames involved:
the sensor's linear acceleration and angular velocity are expressed in the sensor's own frame, e.g. imu_link;
the sensor's orientation, however, is the sensor relative to its parent world frame, e.g. odom or map.

For example, when writing an IMU driver, three frames can be defined: imu_frame_id, tf_parent_frame_id and tf_frame_id:

Typically:
**1. imu_frame_id would be something like imu_link; the accelerations and angular velocities are measured in this frame.
**2. the orientation is from a fixed inertial frame to the sensor; the "fixed inertial frame" is implicitly assumed to be oriented the same way as odom, map, or whatever other top-level fixed frame your application uses.
**3.1. the orientation is calculated by user code using data from both the accelerometer (pitch and roll) and the magnetometer (yaw), among others …
**3.2. packages like robot_pose_ekf do not use the orientation in an absolute way; they actually compute changes in heading.

So:
You should populate the frame_id of the sensor_msgs/Imu message with the frame in which the sensor is mounted.
If the IMU is aligned with base_link, you can simply fill the frame with base_link, or publish a static tf from base_link to imu_link.
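
As a concrete illustration of the two points above, a driver can stamp the Imu message with imu_link and separately broadcast a static base_link -> imu_link transform. A minimal rospy sketch (the topic name, the 50 Hz rate and the 5 cm mounting offset are placeholders, not values from this robot):

#!/usr/bin/env python
# Sketch: publish sensor_msgs/Imu stamped in the sensor frame (imu_link)
# and a static tf from base_link to imu_link describing where the sensor
# is mounted on the robot.
import rospy
import tf2_ros
from sensor_msgs.msg import Imu
from geometry_msgs.msg import TransformStamped

rospy.init_node('imu_frames_example')

static_br = tf2_ros.StaticTransformBroadcaster()
t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = 'base_link'
t.child_frame_id = 'imu_link'
t.transform.translation.x = 0.05     # placeholder: sensor 5 cm ahead of base_link
t.transform.rotation.w = 1.0         # placeholder: mounted without rotation
static_br.sendTransform(t)

pub = rospy.Publisher('imu/data', Imu, queue_size=10)
rate = rospy.Rate(50)
while not rospy.is_shutdown():
    msg = Imu()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = 'imu_link'  # accelerations and angular rates are in this frame
    # ... fill orientation, angular_velocity, linear_acceleration from the sensor here ...
    pub.publish(msg)
    rate.sleep()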

2. about imu usage

Mode 1 requires setting -1 to disable the sensor's orientation data. Mode 2 is the safe approach: robots such as TurtleBot, and algorithms such as robot_pose_ekf, treat the IMU orientation only as a relative quantity (differentiating it), never as an absolute one (integrating it). Mode 3 is the common case: even when a magnetometer is added, the demanding requirement of keeping the frame strictly level cannot always be met, and this still has to be handled.

*1. Orientation not provided
This is just gyros and accelerometers, no fusion yet.
In this case, ‘yaw’ or ‘heading’ still drifts since the gravity component for yaw is 0,
so you cannot determine absolute heading with only those 6 measurements.
We can set msg.orientation_covariance[0] = -1,
which tells other software not to use the orientation.
For example, turtlebot and robot_pose_ekf use the IMU only for the change in orientation, not the absolute orientation (a small code sketch follows at the end of this section).
This is ultimately safest, as any orientation can be fused in this manner.
Unfortunately we lose the ability to remove drift, so more complicated approaches exist.

*2. Orientation integrated with drift from where the IMU started. This IMU doesn’t guarantee a known reference position. It’s possible for the ROS software to startup after the IMU calibrated, so the reference position is completely unknown due to drift. These IMUs are uncommon, but essentially only integrate the gyros for orientation and do not use a gravity reference. This a purely inertial output and the best you could do in empty space. The issue using this type in ROS is that the message itself does not let you know that the orientation is relative rather than absolute.
This is the case that ROS handles today and is a form of dead reckoning/Odometry. Turtlebot and robot_pose_ekf all use the IMU only for the change in orientation, not the absolute orientation. This is ultimately safest, as any orientation can be fused in this manner. Unfortunately you lose the ability to remove drift, so more complicated approaches exist.

*3. Orientation integrated with drift but pitch and roll referenced to gravity. This IMU is the most common. It uses the gyros to determine the angle from startup, but references pitch and roll to gravity. The assumption here is that you will have to re-reference before you travel far enough for the Earth’s gravity vector to no longer be aligned with ‘down’ in your Cartesian frame. In this case, ‘yaw’ or ‘heading’ still drifts since the gravity component for yaw is 0, so you cannot determine absolute heading with only those 6 measurements. Where this becomes a problem depends on the specific application.

*4. Orientation reference to gravity and magnetic north
It is very common to add magnetometers to reference the heading to magnetic north.
The problem using this method is that the frame must still be tangential to the Earth’s surface,
and that the magnetic field has ‘declination’ that varies based on where the IMU is located on the Earth’s surface.

*5. Orientation fused from a more complicated system (GPS INS).
This can be the output of a full sensor fusion system, but we highly recommend NOT using this method.
If there is GPS, Odometry, INS, and more being fused into a globally corrected x, y, z, and orientation,
the output should primarily be a nav_msgs/Odometry.
However, if acceleration is required, the IMU message can be output, but we’d recommend it be the ‘raw’ sensor values.

What can we do to clarify this?
** Add an orientation_reference_frame_id. This is not a good idea since this is unknown and undefined for (1), (2), and (3). These don’t specify a reference that is repeatable. In the cases where you could provide it (4), (5), it still does not really solve the problem since the gravity and magnetometer reference frames are not Cartesian. So in order to accommodate ALL use cases, we’d have to reference something like this for gravity, but also factor in the magnetic field. Oh, and did I mention this varies over time?
** Let the sensor fusion stage handle it. This is ultimately the best solution. Since the sensor fusion stage has to handle custom orientation parameters, you might as well also define an imu_type parameter that can be changed to determine how best to use the IMU.
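
Tying this back to code: for case *1 (and the relative-only usage in *2), the only thing a driver has to do is mark the orientation as invalid so consumers ignore it, as described above. A small sketch (the gyro/accel numbers are placeholders):

# Sketch for case *1: no usable absolute orientation, so mark the
# orientation field as invalid; rates and accelerations stay valid.
from sensor_msgs.msg import Imu

msg = Imu()
msg.header.frame_id = 'imu_link'
msg.orientation_covariance = [-1.0, 0.0, 0.0,
                              0.0, 0.0, 0.0,
                              0.0, 0.0, 0.0]   # element 0 = -1 -> ignore msg.orientation
msg.angular_velocity.z = 0.01                  # placeholder gyro value (rad/s)
msg.linear_acceleration.z = 9.81               # placeholder accel value (m/s^2)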

3. about imu code

3.1 In the odom-node.cpp implementation, the pose is in the odom frame, the twist is in the base_link frame, and the tf is odom -> base_link.

geometry_msgs::TransformStamped odom_trans;
double angle_radi =od->position.positionr*3.1415926/180.0;
geometry_msgs::Quaternion odom_quat = tf::createQuaternionMsgFromYaw(angle_radi);

odom_trans.header.stamp = current_time;
odom_trans.header.frame_id = "odom";
odom_trans.child_frame_id = "base_link";
odom_trans.transform.translation.x = od->position.positionx;
odom_trans.transform.translation.y = od->position.positiony;
odom_trans.transform.translation.z = 0;
odom_trans.transform.rotation = odom_quat;
//send the transform
if (tf_broadcast)
odom_broadcaster->sendTransform(odom_trans);
—————————————————-
nav_msgs::Odometry odom;
odom.header.stamp = current_time;
odom.header.frame_id = "odom";
//set the position
odom.pose.pose.position.x = od->position.positionx;
odom.pose.pose.position.y = od->position.positiony;
odom.pose.pose.position.z = 0.0;
odom.pose.pose.orientation = odom_quat;
//set the velocity
odom.child_frame_id = "base_link";
odom.twist.twist.linear.x = od->speed.speedx;
odom.twist.twist.linear.y = od->speed.speedy;
odom.twist.twist.angular.z = od->speed.speedr;
//publish the message
odom_pub->publish(odom);

3.2 In the imu-node.cpp implementation, angular_velocity and linear_acceleration are in the imu_link frame, the orientation should be in the odom or map frame, and the tf is odom/map -> base_link.
tf::Quaternion orientation(xf, yf, zf, wf);
delta_rotation = matrix_orientation.inverse() * orientation;
imu.header.stamp = measurement_time;
imu.header.frame_id = frame_id; // imu_link
quaternionTFToMsg(delta_rotation, imu.orientation);
imu.angular_velocity.x = gxf;
imu.angular_velocity.y = gyf;
imu.angular_velocity.z = gzf;
imu.linear_acceleration.x = axf;
imu.linear_acceleration.y = ayf;
imu.linear_acceleration.z = azf;
// publish imu message
imu_pub.publish(imu);
————————————-
// publish tf transform
if (tf_broadcast) {
transform.setOrigin(tf::Vector3(0,0,0));
transform.setRotation(delta_rotation);
tf_br.sendTransform(tf::StampedTransform(transform, measurement_time, tf_parent_frame_id, tf_frame_id));
}

4. about imu debug

4.1 ERROR: Covariance specified for measurement on topic xxx is zero
Each measurement that is processed by the robot pose ekf needs to have a covariance associated with it. The diagonal elements of the covariance matrix can not be zero. This error is shown when one of the diagonal elements is zero. Messages with an invalid covariance will not be used to update the filter.
Creating the covariance matrices for individual sensors:
For IMU and Odometry, the covariance matrix can be formed from the datasheet.
For Visual Odometry, covariance matrix may be obtained from the measurement equation that relates the measured variables to the pose coordinates [1].
* None of the covariance matrices should be zero; they should have some value, but I am not sure how it gets passed.
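
One way to guard against this error in a driver is to make sure the diagonal entries are never exactly zero before publishing; a hedged sketch (the fallback variance 1e-3 is an arbitrary placeholder, not a datasheet value):

# Sketch: ensure the flat 36-element (6x6, row-major) covariance of an
# Odometry/PoseWithCovariance message has no zero diagonal entries.
def fill_covariance(cov, fallback=1e-3):
    cov = list(cov)
    for i in range(6):
        d = 7 * i                   # index of the i-th diagonal element
        if cov[d] == 0.0:
            cov[d] = fallback
    return cov

# usage: odom.pose.covariance = fill_covariance(odom.pose.covariance)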

4.2
The odometry covariance is a float[36], i.e. a 6x6 matrix. The diagonal terms express the trust you have in your sensor for each DoF. There are 6 DoF: position (x, y, z) and orientation (x, y, z).

Estimate your sensor's or algorithm's accuracy experimentally. If you see that your data are good to 1 cm in translation and 0.1 radian in rotation, you can use this matrix:

[0.01 0.0 0.0 0.0 0.0 0.0,
0.0 0.01 0.0 0.0 0.0 0.0,
0.0 0.0 0.01 0.0 0.0 0.0,
0.0 0.0 0.0 0.1 0.0 0.0,
0.0 0.0 0.0 0.0 0.1 0.0,
0.0 0.0 0.0 0.0 0.0 0.1]

If you have no information for one DoF, you can put a huge value there.

For example, segway_rmp:
this->odom_msg.pose.covariance[0] = 0.00001;
this->odom_msg.pose.covariance[7] = 0.00001;
this->odom_msg.pose.covariance[14] = 1000000000000.0;
this->odom_msg.pose.covariance[21] = 1000000000000.0;
this->odom_msg.pose.covariance[28] = 1000000000000.0;
this->odom_msg.pose.covariance[35] = 0.001;

Intelligent Robots (35): IMU Driver - MPU

1. Hardware
2. Software
3. Software - ROS
3.1 Arduino UNO part
3.2 PC ROS part
3.3 Usage

MPU 6050 is a sensor based on MEMS technology with I2C protocol communication.
The accelerometer works on the principle of the piezoelectric effect.
The gyroscopes work on the principle of Coriolis acceleration.
REF: http://blog.csdn.net/eaibot/article/details/51398074/

1. Hardware

Arduino-UNO + MPU-6050

XCL, XDA: these two pins are used when the MPU6050 acts as an I2C master controlling its own slave. Leave them unconnected.
SDA, SCL: the I2C interface. Connect to the UNO's A4 (SDA) and A5 (SCL) pins.
AD0: selects the I2C address; tied to ground the address is 0x68, tied to 3.3 V it is 0x69. Connect to ground.
INT: interrupt signal. The MPU6050 has a built-in 1024-byte FIFO buffer and can raise an interrupt to notify the Arduino when the buffer has data. Connect to UNO pin 2 (interrupt 0).
The shared example code sets the I2C bus speed to 400 kHz; 100 kHz should also work.

2. Software

Download the I2Cdev library: http://diyhacking.com/projects/I2Cdev.zip
Download the MPU6050 library: http://diyhacking.com/projects/MPU6050.zip
Unzip both libraries and copy them into the Arduino libraries directory.
File -> Examples -> MPU6050 -> Examples -> MPU6050_DMP6
Compile, upload, open the serial monitor and set the baud rate to 115200:
Send any character to begin DMP programming and demo:
ypr 28.37 -30.82 10.95
ypr 28.46 -30.82 10.95
ypr 28.56 -30.82 10.96
ypr 28.66 -30.82 10.96

Note:
the first readings are not accurate.
Note:
this is not raw data; it is DMP output.
DMP stands for Digital Motion Processing, a built-in motion processor. It processes the values from the accelerometer and gyroscope to give accurate 3D values.

3. Software - ROS

REF: https://github.com/fsteinhardt/mpu6050_serial_to_imu/blob/master/README.md

3.1 Arduino UNO part

Copy the previous .ino code into a new project.
The changes are:

*1 comment out the OUTPUT_READABLE_YAWPITCHROLL define:
#define OUTPUT_READABLE_YAWPITCHROLL ->
//#define OUTPUT_READABLE_YAWPITCHROLL
*2 uncomment OUTPUT_TEAPOT:
//#define OUTPUT_TEAPOT ->
#define OUTPUT_TEAPOT
*3 change the code:
//uint8_t teapotPacket[14] = { '$', 0x02, 0,0, 0,0, 0,0, 0,0, 0x00, 0x00, '\r', '\n' };
———–>>>>>>>>>>>>>>>
uint8_t teapotPacket[28] = { '$', 0x03, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0x00, 0x00, '\r', '\n' };
*4 change the code:
teapotPacket[2] = fifoBuffer[0];
teapotPacket[3] = fifoBuffer[1];
teapotPacket[4] = fifoBuffer[4];
teapotPacket[5] = fifoBuffer[5];
teapotPacket[6] = fifoBuffer[8];
teapotPacket[7] = fifoBuffer[9];
teapotPacket[8] = fifoBuffer[12];
teapotPacket[9] = fifoBuffer[13];
Serial.write(teapotPacket, 14);
teapotPacket[11]++; // packetCount, loops at 0xFF on purpose
————>>>>>>>>>>>>>>>>>>>>>>>>
teapotPacket[2] = fifoBuffer[0];
teapotPacket[3] = fifoBuffer[1];
teapotPacket[4] = fifoBuffer[4];
teapotPacket[5] = fifoBuffer[5];
teapotPacket[6] = fifoBuffer[8];
teapotPacket[7] = fifoBuffer[9];
teapotPacket[8] = fifoBuffer[12];
teapotPacket[9] = fifoBuffer[13];
// gyro values
teapotPacket[10] = fifoBuffer[16];
teapotPacket[11] = fifoBuffer[17];
teapotPacket[12] = fifoBuffer[20];
teapotPacket[13] = fifoBuffer[21];
teapotPacket[14] = fifoBuffer[24];
teapotPacket[15] = fifoBuffer[25];
// accelerometer values
teapotPacket[16] = fifoBuffer[28];
teapotPacket[17] = fifoBuffer[29];
teapotPacket[18] = fifoBuffer[32];
teapotPacket[19] = fifoBuffer[33];
teapotPacket[20] = fifoBuffer[36];
teapotPacket[21] = fifoBuffer[37];
//temperature
int16_t temperature = mpu.getTemperature();
teapotPacket[22] = temperature >> 8;
teapotPacket[23] = temperature & 0xFF;
Serial.write(teapotPacket, 28);
teapotPacket[25]++; // packetCount, loops at 0xFF on purpose
Compile and upload to the Arduino UNO.

Alternatively,
download the code directly: https://github.com/fsteinhardt/mpu6050_serial_to_imu

Unzip it and open the MPU6050.ino project in the Arduino IDE.
However, the Wire library version bundled with the Arduino IDE is not compatible with this code; several Wire functions have been renamed:
error: 'class TwoWire' has no member named 'setClock'
Two places need to be changed:
// Wire.setClock(400000); // 400kHz I2C clock. Comment this line if having compilation difficulties
TWBR = 24; // 400kHz I2C clock (200kHz if CPU is 8MHz)

// enable Arduino interrupt detection
Serial.println(F("Enabling interrupt detection (Arduino external interrupt 0)..."));
//attachInterrupt(digitalPinToInterrupt(INTERRUPT_PIN), dmpDataReady, RISING);
attachInterrupt(0, dmpDataReady, RISING);
With this latter version, the serial port becomes unstable after running for a long time.

3.2 PC ROS part

Copy the remaining unzipped code into ~/catkin_ws/src,
or:
$ cd ~/catkin_ws/src/
$ git clone https://github.com/fsteinhardt/mpu6050_serial_to_imu
$ cd ~/catkin_ws/
$ catkin_make

If needed:
$ sudo apt-get install ros-indigo-serial

Serial is a cross-platform, simple to use library for using serial ports on computers. It provides a C++, object-oriented interface for interacting with RS-232-like devices on Linux and Windows.
This is not the same as ros-indigo-rosserial.

$ source ~/.bashrc
$ roslaunch mpu6050_serial_to_imu demo.launch
$ rostopic echo /imu_data

You can see the MPU6050 values converted into the ROS /imu message:
header:
  seq: 36872
  stamp:
    secs: 1508381861
    nsecs: 177936189
  frame_id: imu_link
orientation:
  x: -0.0668025501072
  y: -0.0957503132522
  z: -0.175129953772
  w: 0.977523017675
orientation_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
angular_velocity:
  x: 0.0
  y: -0.0266316109008
  z: -0.0266316109008
angular_velocity_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
linear_acceleration:
  x: 0.185614013672
  y: 2.42735229492
  z: 11.7942736816
linear_acceleration_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

Published Topics
* imu/data ([sensor_msgs::Imu])
The measured accelerometer, gyro and orientation values.
* imu/temperature ([sensor_msgs::Temperature])
The measured temperature in degrees Celsius.

Services:
* set_zero_orientation ([std_srvs/Empty])
This service sets the current orientation as the new zero orientation
so that from now on only the difference to this orientation is sent.
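
From Python the service can be called like this (a sketch; the service name follows the description above):

#!/usr/bin/env python
# Re-zero the IMU orientation via the set_zero_orientation service.
import rospy
from std_srvs.srv import Empty

rospy.init_node('zero_imu_orientation')
rospy.wait_for_service('set_zero_orientation')
set_zero = rospy.ServiceProxy('set_zero_orientation', Empty)
set_zero()   # from now on orientation is reported relative to the current pose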

TF Published:
The resulting orientation can be published as a tf transform,
the frame names can be set using parameters.

Parameters:
* broadcast_tf (bool, default: true)
If true: publish the orientation of the IMU as a tf transform.
* tf_parent_frame_id (string, default: "imu_base")
Sets the name of the parent frame in the tf transform.
… we change this to base_link …
* tf_frame_id (string, default: "imu_link")
Sets the name of the IMU's own frame in the tf transform.
* imu_frame_id (string, default: "imu_link")
Sets the name of the frame for imu messages.

3.3 Usage

A tf is needed:

Sample Launch:


4. Calibration

Put the MPU6050 as horizontal as possible.
– Write down the offsets for that particular MPU6050 and use them with the library's "…setoffset…" functions in your projects.

Quaternion to angle: the node publishes a quaternion, angular velocity and linear acceleration; to see the left/right turning angle directly:

#! /usr/bin/python

import PyKDL
import rospy
from sensor_msgs.msg import Imu
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Twist
from math import *
import threading
import os
import subprocess
import yaml

def quat_to_angle(quat):
    rot = PyKDL.Rotation.Quaternion(quat.x, quat.y, quat.z, quat.w)
    return rot.GetRPY()[2]

def normalize_angle(angle):
    res = angle
    while res > pi:
        res -= 2.0*pi
    while res < -pi:
        res += 2.0*pi
    return res

class CalibrateRobot:
    def __init__(self):
        self.lock = threading.Lock()
        self.sub_imu = rospy.Subscriber('/imu/data', Imu, self.imu_cb)
        self.last_imu_angle = 0
        self.imu_angle = 0
        while not rospy.is_shutdown():
            rospy.sleep(0.1)
            rospy.loginfo("imu_angle:" + str(self.imu_angle*180/3.1415926))

    def imu_cb(self, msg):
        with self.lock:
            angle = quat_to_angle(msg.orientation)
            self.imu_angle = angle
            self.imu_time = msg.header.stamp

def main():
    rospy.init_node('scan_to_angle')
    CalibrateRobot()

if __name__ == '__main__':
    main()

---
[INFO] [WallTime: 1508384569.829275] imu_angle:-12.3451541995
[INFO] [WallTime: 1508384569.930287] imu_angle:-12.3600526937
[INFO] [WallTime: 1508384570.031151] imu_angle:-12.3667251945
[INFO] [WallTime: 1508384570.132128] imu_angle:-12.3829488555

As you can see, the reported angle drifts even when the sensor is stationary; this offset can be compensated by adding it back.