Robot Applications

Intelligent Robots (41): USB-camera

1. Camera drivers
2. Using the UVC driver in Linux
3. Using the UVC driver in ROS
4. Using the usb_cam driver in ROS

  • 1. Camera drivers

1. UVC
The first concept is UVC: the USB Video Device Class. Most USB cameras (including motherboard-integrated ones) are UVC-compliant, and UVC implementations are independent of the operating system (Linux, Windows, etc.).
2. V4L
The second concept is V4L, or V4L2: Video4Linux, which provides a set of kernel APIs for video devices; the current version is 2, hence V4L2.
3. usb_cam
The ROS usb_cam package is built on V4L. It provides the node usb_cam_node, which publishes the topic <camera_name>/image with message type sensor_msgs/Image. Its main parameters are video_device, camera_name, pixel_format, camera_frame_id (the camera's tf frame), and brightness (see the quick example after this list).
4. uvc_camera
The ROS uvc_camera package implements UVC video streaming and also supports stereo vision with two cameras. It provides UVC/USB camera nodes; the main published topic is the image_raw video stream, and the main parameters are device, fps, and frame_id. This package is no longer maintained and has been superseded by libuvc_camera.
5. libuvc_camera
This one requires the user to have write permission on the camera's node under /dev/bus/usb/, so it is best run as:
$ sudo -E rosrun libuvc_camera camera_node vendor:=<vendor ID>
6. cv_camera
This package is a ROS camera driver based on OpenCV.
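For a quick sanity check of the usb_cam parameters above, the node can also be started directly with ROS private-parameter syntax (a minimal sketch; adjust the device and pixel format to your own camera):
$ rosrun usb_cam usb_cam_node _video_device:=/dev/video0 _pixel_format:=yuyv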

  • 2. Using the UVC driver in Linux

1. Check the device ID
$ lsusb
Bus 001 Device 003: ID 1578:0076
Bus 001 Device 004: ID 0547:4d35 Anchor Chips, Inc.
Bus 001 Device 005: ID 04f2:b2ea Chicony Electronics Co., Ltd Integrated Camera [ThinkPad]
If you have ever written a USB device driver, you will recognize 04f2:b2ea as the vendor and product IDs from the device descriptor; the very common Logitech C270, for example, is 046d:0825. You can google the ID to check whether the device is on the UVC supported-device list, or check locally:
$ lsusb -d 04f2:b2ea -v | grep "14 Video"
bFunctionClass         14 Video
bInterfaceClass        14 Video
bInterfaceClass        14 Video
bInterfaceClass        14 Video
bInterfaceClass        14 Video
bInterfaceClass        14 Video
bInterfaceClass        14 Video
bInterfaceClass        14 Video
This output shows the camera is UVC-compliant; otherwise it is a non-UVC camera.
2. Check the device node
$ ls /dev | grep video
video0
This confirms the hardware is present.
3. Check that the kernel module is loaded
$ lsmod | grep uvcvideo
uvcvideo               72275  0
videobuf2_core         39510  1 uvcvideo
videodev              108952  2 uvcvideo,videobuf2_core
videobuf2_vmalloc      13048  1 uvcvideo
4. If nothing shows up, load the module explicitly:
$ sudo modprobe uvcvideo
5. Verify
$ lshal | grep video
system.hardware.primary_video.product = 1029  (0x405)  (int)
system.hardware.primary_video.vendor = 5549  (0x15ad)  (int)
info.linux.driver = 'uvcvideo'  (string)
info.linux.driver = 'uvcvideo'  (string)
info.capabilities = {'video4linux', 'video4linux.video_capture'} (string list)
info.category = 'video4linux'  (string)
info.subsystem = 'video4linux'  (string)
info.udi = '/org/freedesktop/Hal/devices/usb_device_4f2_b2ea_noserial_if0_video4linux'  (string)
linux.device_file = '/dev/video0'  (string)
linux.subsystem = 'video4linux'  (string)
linux.sysfs_path = '/sys/devices/pci0000:00/0000:00:11.0/0000:02:03.0/usb1/1-2/1-2:1.0/video4linux/video0'  (string)
video4linux.device = '/dev/video0'  (string)
video4linux.version = '2'  (string)
$ lshal | grep Cam
info.product = 'Integrated Camera [ThinkPad]'  (string)
usb_device.product = 'Integrated Camera [ThinkPad]'  (string)
usb.interface.description = 'Integrated Camera'  (string)
info.product = 'Integrated Camera'  (string)
info.product = 'Integrated Camera'  (string)
input.product = 'Integrated Camera'  (string)
info.product = '5.0M USB2.0 Camera'  (string)
usb_device.product = '5.0M USB2.0 Camera'  (string)
6. dmesg also shows everything is OK:
$ dmesg | grep video
[  409.572621] Linux video capture interface: v2.00
[  409.663484] usbcore: registered new interface driver uvcvideo
[ 4822.404813] uvcvideo: Found UVC 1.00 device Integrated Camera (04f2:b2ea)
$ dmesg | grep Cam
[  168.027246] usb 1-1: Product: USB2-Camera
[  267.368271] usb 1-1: Product: USB2-Camera
[ 3700.024321] usb 1-1: Product: 5.0M USB2.0 Camera
[ 4822.259972] usb 1-2: Product: Integrated Camera
[ 4822.404813] uvcvideo: Found UVC 1.00 device Integrated Camera (04f2:b2ea)
[ 4822.530930] input: Integrated Camera as /devices/pci0000:00/0000:00:11.0/0000:02:03.0/usb1/1-2/1-2:1.0/input/input5
$ dmesg | tail -n20
7. Start the video
$ cheese --device=/dev/video0
Here cheese is used to view the video stream.
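Before moving on to ROS, you can also list the pixel formats and resolutions the camera supports with v4l2-ctl (an optional extra check; v4l2-ctl comes from the v4l-utils package, so install that first if it is missing):
$ sudo apt-get install v4l-utils
$ v4l2-ctl --device=/dev/video0 --list-formats-ext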

  • 3. Using the UVC driver in ROS

1. Install uvc_cam
$ cd ~/catkin_ws
$ git clone https://github.com/ericperko/uvc_cam.git
$ cd ..
$ rosmake uvc_cam
The [uvc_cam] package now lives under your catkin_ws, so make sure ROS_PACKAGE_PATH includes that location; you should be able to roscd to uvc_cam.
Of course, installing the version packaged by ROS also works:
$ sudo apt-get install ros-hydro-uvc-camera
In that case the [uvc_camera] package lives under share in /opt, but then you cannot test the later rbx1 examples against it.
2. Test the driver
Run the test launch file:
$ roslaunch uvc_cam test_uvc.launch
View the video:
$ rosrun image_view image_view image:=/camera/image_raw
OK.
3. Further testing
3.1 Borrow the ready-made launch file from rbx1 to run the [uvc_cam] driver (not [uvc_camera]):
$ roslaunch rbx1_vision uvc_cam.launch device:=/dev/video0
This adds a /uvc_cam_node node, along with many new topics.
3.2 Or use [uvc_camera] instead of [uvc_cam]:
$ roslaunch rbx1_vision uvc_camera.launch device:=/dev/video0
Likewise, this adds a /uvc_camera_node node and many new topics.
3.3 Following the examples, [uvc_cam] is used from here on.
3.4 View the video:
Use image_view from the vision_opencv package:
$ rosrun image_view image_view image:=/camera/rgb/image_color
Or, in rqt: Plugins -> Image View -> select /camera/rgb/image_color:
$ rqt
OK.

  • 4. Using the usb_cam driver in ROS

1. Install usb_cam
$ cd ~/catkin_ws/src
$ git clone https://github.com/bosch-ros-pkg/usb_cam.git
$ cd ~/catkin_ws
$ catkin_make
Or:
$ cd ~
$ rosmake usb_cam
2. Test the driver
Run the test launch file:
$ roslaunch usb_cam usb_cam-test.launch
The usb_cam-test.launch script under usb_cam's launch directory sets pixel_format to yuyv; you may need to change it to mjpeg and save.
It can also be set externally:
$ rosparam set /usb_cam/pixel_format yuyv
View the video:
$ rosrun image_view image_view image:=/?????
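The exact topic name depends on the camera_name parameter; if unsure, list the available image topics first:
$ rostopic list | grep image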
3. For further testing, write a usb_cam.launch
$ cd ~/catkin_ws/src/rbx1/rbx1_vision/launch
Modeled on the format of uvc_cam.launch:
<launch>
  <arg name="device" default="/dev/video0" />
  <node name="uvc_cam_node" pkg="uvc_cam" type="uvc_cam_node" output="screen">
    <remap from="camera/image_raw" to="camera/rgb/image_color" />
    <param name="device" value="$(arg device)" />
    <param name="width" value="320" />
    <param name="height" value="240" />
    <param name="frame_rate" value="20" />
    <param name="exposure" value="0" />
    <param name="gain" value="100" />
  </node>
</launch>
Write it out as the usb_cam.launch script:
$ nano usb_cam.launch
<launch>
  <arg name="device" default="/dev/video0" />
  <node name="usb_cam_node" pkg="usb_cam" type="usb_cam_node" output="screen">
    <remap from="usb_cam_node/image_raw" to="camera/rgb/image_color" />
    <param name="video_device" value="$(arg device)" />
    <param name="image_width" value="320" />
    <param name="image_height" value="240" />
    <param name="framerate" value="20" />
    <param name="brightness" value="128" />
    <param name="contrast" value="128" />
    <param name="saturation" value="70" />
  </node>
</launch>
Here the remap makes usb_cam publish to camera/rgb/image_color just like uvc_cam. If needed, also add the image-format parameter:
<param name="pixel_format" value="yuyv" />
4. Further testing
Run the driver:
$ roslaunch rbx1_vision usb_cam.launch device:=/dev/video0
View the video:
$ rosrun image_view image_view image:=/camera/rgb/image_color

Intelligent Robots (37): IMU Sensors

1. setup robot
3. od_node
5. laser_node
7. imu_node

1. about imu sensor
2. about imu usage
3. about imu code
3.1 odom-node.cpp
3.2 imu-node.cpp
4. about imu debug
REF:
https://answers.ros.org/question/200480/imu-message-definition/
https://github.com/jeskesen/i2c_imu/blob/master/src/i2c_imu_node.cpp
http://codegist.net/code/flora-9dof-accelerometer%2Bgyroscope%2Bmagnetometer—lsm9ds0/

(If running inside Oracle VM VirtualBox: install the guest extension tools, enable a bridged network interface, map the VM's COM1 to a host serial device, then install Ubuntu 14.04 AMD64 and ROS, set up the environment, initialize catkin_ws, and start the master.)

1. setup robot
$ roslaunch funbot r_rbt.launch

.. …

This launch file loads the serial driver for odom, the Ethernet driver for the laser, and the USB driver for the IMU.
It also statically publishes a base_link -> base_laser tf transform,
and a base_link -> imu_link tf transform.
If a URDF model of the robot has already been created, do not publish these transforms again (a reference example follows below).
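For reference, such a static publisher can be written in a launch file like this (a sketch; the six numeric args are x y z yaw pitch roll, the zeros are placeholders for the real mounting offsets, and the trailing 100 is the publish period in ms):
<node pkg="tf" type="static_transform_publisher" name="base_link_to_imu_link"
      args="0 0 0 0 0 0 base_link imu_link 100" />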

3. od_node

5. laser_node

7. imu_node

1. about imu sensor

A big source of confusion in ROS, for maintainers and developers alike, is the message format of imu versus odom.
The odom message, nav_msgs/Odometry, names two frames, so its tf is unambiguous.
The imu message, sensor_msgs/Imu, carries only a single coordinate frame, which is ambiguous:
Q:
I am confused about the IMU though: its heading estimate should be in the map frame as it's absolute, but its motion estimates should be in the base_link frame because they are relative to the robot? The imu message only has a single frame_id, so which should I set it to?
Q:
It seems clear that nav_msgs::Odometry.pose.pose.orientation goes from header.frame_id to child_frame_id (or parent to child), but there is no child_frame_id in the sensor_msgs::Imu message, and I do not see one. The two obvious choices are map and odom, but I don't know if one is preferred over the other.

The key is to understand that there are still, in effect, two frame_ids:
the sensor's linear acceleration and angular velocity are expressed in the sensor frame, e.g. imu_link;
the sensor's orientation, however, is the sensor relative to its fixed parent frame, e.g. a world frame such as odom or map.

For example, when writing an IMU driver, define three frames: imu_frame_id, tf_parent_frame_id, and tf_frame_id.

Typically, then:
**1. imu_frame_id would be something like imu_link; the accelerations and angular velocities are measured in this frame.
**2. The orientation is from a fixed inertial frame to the sensor; the "fixed inertial frame" is implicitly assumed to be oriented the same way as odom, map, or whatever top-level fixed frame your application uses.
**3.1. The orientation is calculated by the user's code using data from both the accelerometer (pitch and roll) and the magnetometer (yaw), among others ...
**3.2. Packages like robot_pose_ekf do not use the orientation vector in an absolute style; they actually calculate changes in heading.

So:
You should populate the frame_id of the sensor_msgs/IMU message with the frame in which the sensor is mounted.
If the IMU is oriented the same way as the base link, you can just fill in base_link, or publish a static tf from base_link to imu_link.

2. about imu usage

Mode 1 below requires -1 to disable the sensor's orientation data. Mode 2 is the safe approach: robots like TurtleBot, and algorithms like robot_pose_ekf, treat the IMU orientation only as a relative value (differentiated), never as an absolute value (integrated). Mode 3 is the most common; even when extended with a magnetometer, it cannot satisfy the strict requirement of staying level, which has to be dealt with.

*1. Orientation not provided
This is just gyros and accelerometers, no fusion yet.
In this case, 'yaw' or 'heading' still drifts, since the gravity component for yaw is 0,
so you cannot determine absolute heading with only those 6 measurements.
We can set msg.orientation_covariance[0] = -1,
which tells other software not to use the orientation (see the sketch after this item).
For example, turtlebot and robot_pose_ekf both use the IMU only for the orientation change, not the absolute orientation.
This is ultimately safest, as any orientation can be fused in this manner.
Unfortunately we lose the ability to remove drift, so more complicated approaches exist.
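In driver code this looks roughly as follows (a minimal sketch; imu_pub is an existing publisher, and gx/gy/gz, ax/ay/az stand in for the actual sensor readings):

sensor_msgs::Imu imu;
imu.header.stamp = ros::Time::now();
imu.header.frame_id = "imu_link";      // the frame the sensor is mounted in
imu.orientation_covariance[0] = -1.0;  // flag: orientation is not usable
imu.angular_velocity.x = gx;           // rad/s, expressed in imu_link
imu.angular_velocity.y = gy;
imu.angular_velocity.z = gz;
imu.linear_acceleration.x = ax;        // m/s^2, expressed in imu_link
imu.linear_acceleration.y = ay;
imu.linear_acceleration.z = az;
imu_pub.publish(imu);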

*2. Orientation integrated with drift from where the IMU started. This IMU doesn't guarantee a known reference position. It's possible for the ROS software to start up after the IMU calibrated, so the reference position is completely unknown due to drift. These IMUs are uncommon; they essentially just integrate the gyros for orientation and do not use a gravity reference. This is a purely inertial output and the best you could do in empty space. The issue with using this type in ROS is that the message itself does not let you know that the orientation is relative rather than absolute.
This is the case that ROS handles today and is a form of dead reckoning/odometry. Turtlebot and robot_pose_ekf both use the IMU only for the change in orientation, not the absolute orientation. This is ultimately safest, as any orientation can be fused in this manner. Unfortunately you lose the ability to remove drift, so more complicated approaches exist.

*3. Orientation integrated with drift but pitch and roll referenced to gravity. This IMU is the most common. It uses the gyros to determine angle from startup, but references pitch and roll to gravity. The assumption here is that you will re-reference before you travel far enough that the Earth's gravity vector no longer aligns with 'down' in your Cartesian frame. In this case, 'yaw' or 'heading' still drifts, since the gravity component for yaw is 0, so you cannot determine absolute heading with only those 6 measurements. Where this becomes a problem depends on the specific application.

*4. Orientation referenced to gravity and magnetic north
It is very common to add magnetometers to reference the heading to magnetic north.
The problems with this method are that the frame must still be tangential to the Earth's surface,
and that the magnetic field has a 'declination' that varies with where on the Earth's surface the IMU is located.

*5. Orientation fused from a more complicated system (GPS INS).
This can be the output of a full sensor-fusion system, but we highly recommend NOT using this method.
If GPS, odometry, INS, and more are being fused into a globally corrected x, y, z, and orientation,
the output should primarily be a nav_msgs/Odometry.
However, if acceleration is required, the IMU message can be output, but we'd recommend it carry the 'raw' sensor values.

What can we do to clarify this?
** Add an orientation_reference_frame_id. This is not a good idea, since it is unknown and undefined for (1), (2), and (3); those don't specify a repeatable reference. In the cases where you could provide it, (4) and (5), it still does not really solve the problem, since the gravity and magnetometer reference frames are not Cartesian. So in order to accommodate ALL use cases, we'd have to reference something like this for gravity, but also factor in the magnetic field. Oh, and did I mention this varies over time?
** Let the sensor-fusion stage handle it. This is ultimately the best solution. Since the sensor-fusion stage has to handle custom orientation parameters anyway, you might as well also define an imu_type parameter that can be changed to determine how best to use the IMU.

3. about imu code

3.1 In the odom-node.cpp implementation, pose is in the odom frame, twist is in the base_link frame, and the tf is odom -> base_link.

geometry_msgs::TransformStamped odom_trans;
double angle_radi = od->position.positionr * 3.1415926 / 180.0;
geometry_msgs::Quaternion odom_quat = tf::createQuaternionMsgFromYaw(angle_radi);

odom_trans.header.stamp = current_time;
odom_trans.header.frame_id = "odom";
odom_trans.child_frame_id = "base_link";
odom_trans.transform.translation.x = od->position.positionx;
odom_trans.transform.translation.y = od->position.positiony;
odom_trans.transform.translation.z = 0;
odom_trans.transform.rotation = odom_quat;
//send the transform
if (tf_broadcast)
odom_broadcaster->sendTransform(odom_trans);
—————————————————-
nav_msgs::Odometry odom;
odom.header.stamp = current_time;
odom.header.frame_id = "odom";
//set the position
odom.pose.pose.position.x = od->position.positionx;
odom.pose.pose.position.y = od->position.positiony;
odom.pose.pose.position.z = 0.0;
odom.pose.pose.orientation = odom_quat;
//set the velocity
odom.child_frame_id = "base_link";
odom.twist.twist.linear.x = od->speed.speedx;
odom.twist.twist.linear.y = od->speed.speedy;
odom.twist.twist.angular.z = od->speed.speedr;
//publish the message
odom_pub->publish(odom);

3.2 In the imu-node.cpp implementation, angular_velocity and linear_acceleration are in the imu_link frame, the orientation should be relative to the odom or map frame, and the tf is odom/map -> base_link.
tf::Quaternion orientation(xf, yf, zf, wf);
delta_rotation = matrix_orientation.inverse() * orientation;
imu.header.stamp = measurement_time;
imu.header.frame_id = frame_id; // imu_link
quaternionTFToMsg(delta_rotation, imu.orientation);
imu.angular_velocity.x = gxf;
imu.angular_velocity.y = gyf;
imu.angular_velocity.z = gzf;
imu.linear_acceleration.x = axf;
imu.linear_acceleration.y = ayf;
imu.linear_acceleration.z = azf;
// publish imu message
imu_pub.publish(imu);
————————————-
// publish tf transform
if (tf_broadcast) {
transform.setOrigin(tf::Vector3(0,0,0));
transform.setRotation(delta_rotation);
tf_br.sendTransform(tf::StampedTransform(transform, measurement_time, tf_parent_frame_id, tf_frame_id));
}

4. about imu debug

4.1 ERROR: Covariance specified for measurement on topic xxx is zero
Each measurement that is processed by the robot pose ekf needs to have a covariance associated with it. The diagonal elements of the covariance matrix cannot be zero. This error is shown when one of the diagonal elements is zero. Messages with an invalid covariance will not be used to update the filter.
Creating the covariance matrices for individual sensors:
For IMU and odometry, the covariance matrix can be formed from the datasheet.
For visual odometry, the covariance matrix may be obtained from the measurement equation that relates the measured variables to the pose coordinates [1].
*None of the covariance matrices should be zero; they should all have some value, but I am not sure how that gets passed.

4.2
The odometry covariance is a float[36], i.e. a 6x6 matrix. The diagonal terms express the trust you have in your sensor for each DoF. You have 6 DoF: position (x, y, z) and orientation (about x, y, z).

Estimate your sensor's or algorithm's accuracy by experiment. If you see that your data are good to 1 cm in translation and 0.1 radian in rotation, you can use this matrix (an IMU analogue is sketched after the segway_rmp example below):

[0.01 0.0 0.0 0.0 0.0 0.0,
0.0 0.01 0.0 0.0 0.0 0.0,
0.0 0.0 0.01 0.0 0.0 0.0,
0.0 0.0 0.0 0.1 0.0 0.0,
0.0 0.0 0.0 0.0 0.1 0.0,
0.0 0.0 0.0 0.0 0.0 0.1]

If you have no information for one DoF, you can put a huge value.

For example, segway_rmp:
this->odom_msg.pose.covariance[0] = 0.00001;
this->odom_msg.pose.covariance[7] = 0.00001;
this->odom_msg.pose.covariance[14] = 1000000000000.0;
this->odom_msg.pose.covariance[21] = 1000000000000.0;
this->odom_msg.pose.covariance[28] = 1000000000000.0;
this->odom_msg.pose.covariance[35] = 0.001;
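The same indexing idea applies to sensor_msgs/Imu, whose three covariance arrays are float[9] (3x3) with the diagonal at indices 0, 4, and 8. A sketch with placeholder values standing in for real datasheet numbers:

imu.orientation_covariance[0] = 0.0025;        // roll variance, rad^2
imu.orientation_covariance[4] = 0.0025;        // pitch variance, rad^2
imu.orientation_covariance[8] = 0.0025;        // yaw variance, rad^2
imu.angular_velocity_covariance[0] = 0.0001;   // rad^2/s^2; likewise for [4] and [8]
imu.linear_acceleration_covariance[0] = 0.01;  // m^2/s^4; likewise for [4] and [8]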

Intelligent Robots (35): IMU Driver - MPU

1. Hardware
2. Software
3. Software - ROS
3.1 The Arduino UNO side
3.2 The PC ROS side
3.3 Usage

MPU 6050 is a sensor based on MEMS technology with I2C protocol communication.
The accelerometer works on the principle of the piezoelectric effect.
The gyroscopes work on the principle of Coriolis acceleration.
REF: http://blog.csdn.net/eaibot/article/details/51398074/

1. Hardware

Arduino-UNO + MPU-6050

XCL, XDA: these two pins are used by the MPU6050 to control slaves when it acts as an I2C master. Leave them unconnected.
SDA, SCL: the I2C interface. Connect to the UNO's A4 (SDA) and A5 (SCL) pins.
AD0: selects the I2C address: 0x68 when tied to ground, 0x69 when tied to 3.3 V. Tie it to ground.
INT: the interrupt line. The MPU6050 has a built-in 1024-byte FIFO buffer and can raise an interrupt to notify the Arduino when the buffer holds data. Connect to UNO pin 2 (interrupt 0).
The shared example code all configures a 400 kHz I2C bus speed; 100 kHz should work too.

2. Software

Download the I2Cdev library: http://diyhacking.com/projects/I2Cdev.zip
Download MPU6050: http://diyhacking.com/projects/MPU6050.zip
Unzip the two libraries and copy them into the Arduino libraries directory.
File -> Examples -> MPU6050 -> Examples -> MPU6050_DMP6
Compile, upload, open the serial monitor, and set the baud rate to 115200:
Send any character to begin DMP programming and demo:
ypr 28.37 -30.82 10.95
ypr 28.46 -30.82 10.95
ypr 28.56 -30.82 10.96
ypr 28.66 -30.82 10.96

Note:
The first readings are inaccurate.
Note:
This is not raw data; it is DMP output.
DMP means Digital Motion Processing, a built-in motion processor. It processes the values from the accelerometer and gyroscope to give accurate 3D values.

3. Software - ROS

REF: https://github.com/fsteinhardt/mpu6050_serial_to_imu/blob/master/README.md

3.1 The Arduino UNO side

Copy the .ino code from above into a new project.
The parts to modify:

*1 Comment out the define OUTPUT_READABLE_YAWPITCHROLL:
#define OUTPUT_READABLE_YAWPITCHROLL ->
//#define OUTPUT_READABLE_YAWPITCHROLL
*2 Uncomment OUTPUT_TEAPOT:
//#define OUTPUT_TEAPOT ->
#define OUTPUT_TEAPOT
*3 Change the packet definition:
//uint8_t teapotPacket[14] = { '$', 0x02, 0,0, 0,0, 0,0, 0,0, 0x00, 0x00, '\r', '\n' };
----------->>>>>>>>>>>>>>>
uint8_t teapotPacket[28] = { '$', 0x03, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0x00, 0x00, '\r', '\n' };
*4 Change the packet-filling code:
teapotPacket[2] = fifoBuffer[0];
teapotPacket[3] = fifoBuffer[1];
teapotPacket[4] = fifoBuffer[4];
teapotPacket[5] = fifoBuffer[5];
teapotPacket[6] = fifoBuffer[8];
teapotPacket[7] = fifoBuffer[9];
teapotPacket[8] = fifoBuffer[12];
teapotPacket[9] = fifoBuffer[13];
Serial.write(teapotPacket, 14);
teapotPacket[11]++; // packetCount, loops at 0xFF on purpose
————>>>>>>>>>>>>>>>>>>>>>>>>
teapotPacket[2] = fifoBuffer[0];
teapotPacket[3] = fifoBuffer[1];
teapotPacket[4] = fifoBuffer[4];
teapotPacket[5] = fifoBuffer[5];
teapotPacket[6] = fifoBuffer[8];
teapotPacket[7] = fifoBuffer[9];
teapotPacket[8] = fifoBuffer[12];
teapotPacket[9] = fifoBuffer[13];
// gyro values
teapotPacket[10] = fifoBuffer[16];
teapotPacket[11] = fifoBuffer[17];
teapotPacket[12] = fifoBuffer[20];
teapotPacket[13] = fifoBuffer[21];
teapotPacket[14] = fifoBuffer[24];
teapotPacket[15] = fifoBuffer[25];
// accelerometer values
teapotPacket[16] = fifoBuffer[28];
teapotPacket[17] = fifoBuffer[29];
teapotPacket[18] = fifoBuffer[32];
teapotPacket[19] = fifoBuffer[33];
teapotPacket[20] = fifoBuffer[36];
teapotPacket[21] = fifoBuffer[37];
//temperature
int16_t temperature = mpu.getTemperature();
teapotPacket[22] = temperature >> 8;
teapotPacket[23] = temperature & 0xFF;
Serial.write(teapotPacket, 28);
teapotPacket[25]++; // packetCount, loops at 0xFF on purpose
Compile and upload to the Arduino UNO.

Alternatively, download the code directly: https://github.com/fsteinhardt/mpu6050_serial_to_imu

Unzip it and open the MPU6050.ino project in the Arduino IDE.
However, the Wire library shipped with this Arduino IDE is incompatible with the code; several Wire functions were renamed:
error: 'class TwoWire' has no member named 'setClock'
Two places need changing:
// Wire.setClock(400000); // 400kHz I2C clock. Comment this line if having compilation difficulties
TWBR = 24; // 400kHz I2C clock (200kHz if CPU is 8MHz)

// enable Arduino interrupt detection
Serial.println(F("Enabling interrupt detection (Arduino external interrupt 0)..."));
//attachInterrupt(digitalPinToInterrupt(INTERRUPT_PIN), dmpDataReady, RISING);
attachInterrupt(0, dmpDataReady, RISING);
With this latter version, though, the serial link becomes unstable after running for a long time.

3.2 The PC ROS side

Copy the remaining unzipped code to ~/catkin_ws/src,
or:
$ cd ~/catkin_ws/src/
$ git clone https://github.com/fsteinhardt/mpu6050_serial_to_imu
$ cd ~/catkin_ws/
$ catkin_make

If needed:
$ sudo apt-get install ros-indigo-serial

Serial is a cross-platform, simple-to-use library for using serial ports on computers. It provides a C++, object-oriented interface for interacting with RS-232-like devices on Linux and Windows.
It is not the same as ros-indigo-rosserial.

$ source ~/.bashrc
$ roslaunch mpu6050_serial_to_imu demo.launch
$ rostopic echo /imu_data

You can see the MPU6050 values converted into a ROS Imu message:
header:
seq: 36872
stamp:
secs: 1508381861
nsecs: 177936189
frame_id: imu_link
orientation:
x: -0.0668025501072
y: -0.0957503132522
z: -0.175129953772
w: 0.977523017675
orientation_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
angular_velocity:
x: 0.0
y: -0.0266316109008
z: -0.0266316109008
angular_velocity_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
linear_acceleration:
x: 0.185614013672
y: 2.42735229492
z: 11.7942736816
linear_acceleration_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

Published Topics
* imu/data ([sensor_msgs::Imu])
The measured accelerometer, gyro and orientation values.
* imu/temperature ([sensor_msgs::Temperature])
The measured temperature in degrees Celsius.

Services:
* set_zero_orientation ([std_srvs/Empty])
This service sets the current orientation as the new zero orientation
so that from now on only the difference to this orientation is sent.
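For example (the service takes no arguments; depending on the launch file the name may be namespaced):
$ rosservice call /set_zero_orientation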

TF Published:
The resulting orientation can be published as a tf transform;
the frame names can be set using parameters.

Parameters:
* broadcast_tf (bool, default: true)
If true: publish the orientation of the IMU as a tf transform.
* tf_parent_frame_id (string, default: "imu_base")
Sets the name of the parent frame in the tf transform.
... WE CHANGE TO base_link ...
* tf_frame_id (string, default: "imu_link")
Sets the name of the own frame in the tf transform.
* imu_frame_id (string, default: "imu_link")
Sets the name of the frame for imu messages.

3.3 Usage

Needed a tf:
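A sketch: with tf_parent_frame_id changed to base_link as noted above, the node broadcasts base_link -> imu_link itself, so an external static publisher like this is only needed when broadcast_tf is false:
$ rosrun tf static_transform_publisher 0 0 0 0 0 0 base_link imu_link 100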

Sample Launch:
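A sketch assembled from the parameters documented above (the executable name mpu6050_serial_to_imu_node is an assumption from the package layout; serial-port settings are left at their defaults):
<launch>
  <node pkg="mpu6050_serial_to_imu" type="mpu6050_serial_to_imu_node"
        name="mpu6050_serial_to_imu_node" output="screen">
    <param name="broadcast_tf" value="true" />
    <param name="tf_parent_frame_id" value="base_link" />
    <param name="tf_frame_id" value="imu_link" />
    <param name="imu_frame_id" value="imu_link" />
  </node>
</launch>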


4. Calibration

Place the MPU6050 as horizontal as possible.
– Write down the offsets for that particular MPU6050 and use them with the library's "...setOffset..." functions in your projects (see the sketch below).
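A minimal sketch of applying such offsets at startup (the offset numbers are placeholders from one particular calibration run; the setter names come from the i2cdevlib MPU6050 class used above):

#include <Wire.h>
#include "I2Cdev.h"
#include "MPU6050.h"

MPU6050 mpu;

void setup() {
    Wire.begin();
    mpu.initialize();
    // offsets measured for this particular MPU6050 (placeholder values)
    mpu.setXAccelOffset(-1343);
    mpu.setYAccelOffset(-1155);
    mpu.setZAccelOffset(1033);
    mpu.setXGyroOffset(19);
    mpu.setYGyroOffset(-27);
    mpu.setZGyroOffset(16);
}

void loop() {
    // read and use the calibrated values as in the DMP example
}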

Quaternion to angle: the node publishes a quaternion, angular velocity, and linear acceleration. To see the left/right turn angle directly:

#! /usr/bin/python

import PyKDL
import rospy
from sensor_msgs.msg import Imu
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Twist
from math import *
import threading
import os
import subprocess
import yaml

def quat_to_angle(quat):
    # extract yaw from the quaternion
    rot = PyKDL.Rotation.Quaternion(quat.x, quat.y, quat.z, quat.w)
    return rot.GetRPY()[2]

def normalize_angle(angle):
    # wrap an angle to [-pi, pi]
    res = angle
    while res > pi:
        res -= 2.0*pi
    while res < -pi:
        res += 2.0*pi
    return res

class CalibrateRobot:
    def __init__(self):
        self.lock = threading.Lock()
        self.sub_imu = rospy.Subscriber('/imu/data', Imu, self.imu_cb)
        self.last_imu_angle = 0
        self.imu_angle = 0
        while not rospy.is_shutdown():
            rospy.sleep(0.1)
            rospy.loginfo("imu_angle:" + str(self.imu_angle*180/3.1415926))

    def imu_cb(self, msg):
        with self.lock:
            angle = quat_to_angle(msg.orientation)
            self.imu_angle = angle
            self.imu_time = msg.header.stamp

def main():
    rospy.init_node('scan_to_angle')
    CalibrateRobot()

if __name__ == '__main__':
    main()

---
[INFO] [WallTime: 1508384569.829275] imu_angle:-12.3451541995
[INFO] [WallTime: 1508384569.930287] imu_angle:-12.3600526937
[INFO] [WallTime: 1508384570.031151] imu_angle:-12.3667251945
[INFO] [WallTime: 1508384570.132128] imu_angle:-12.3829488555

As you can see, the reported angle drifts even though the pose never changes; this offset can be added back to compensate.

Intelligent Robots (33): IMU Driver - LPS

LpSensor-1.3.5 library (Ubuntu Linux 32-bit binary)
LpSensor-1.3.5 library (Ubuntu Linux 64-bit binary)
LpSensor-1.3.5 library (Ubuntu Linux armv7hf binary, suits Raspberry Pi)
LpSensor-1.3.5 library (Ubuntu Linux armv7hf binary, suits Jetson TX1)

$ tar xvzf ~/Downloads/LpSensor-1.3.5-Linux-x86-64.tar.gz
$ sudo apt-get install libbluetooth-dev   # note: not needed
$ sudo dpkg -i LpSensor-1.3.5-Linux-x86-64/liblpsensor-1.3.5-Linux.deb
$ dpkg -L liblpsensor

$ cd ~/catkin_ws/src
$ git clone https://github.com/larics/timesync_ros.git
$ git clone https://github.com/larics/lpms_imu.git

ERROR: unrecognized command line option '-std=c++14'

The package uses C++14 and needs g++ >= 4.9:

$ sudo add-apt-repository ppa:ubuntu-toolchain-r/test && sudo apt-get update
$ sudo apt-get install gcc-4.9 g++-4.9 cpp-4.9
GCC 4.9 is now installed, but it is not yet the default compiler.
To make it the default, update the symlinks as shown below.

We can fix this with simple symbolic links:
$ cd /usr/bin
$ sudo rm gcc g++ cpp
$ sudo ln -s gcc-4.9 gcc
$ sudo ln -s g++-4.9 g++
$ sudo ln -s cpp-4.9 cpp

If you check the gcc version before this fix, you will still find the old release:
$ gcc --version
4.8.4

REF:
https://www.lp-research.com/support/
https://www.lp-research.com/ros-and-lp-research-imus-simple/
https://www.lp-research.com/site/wp-content/uploads/2017/02/screencast_cropped.gif