robot slam service (09): automatic script startup

1. Start on boot
1.1 Method 1: Add an Upstart job
1.2 Method 2: Add to /etc/rc.local
1.3 Method 3: Add an init script

2. Graphical approach
A gnome-terminal autostart shortcut

3. The ROS way
The robot_upstart package

4. Serial communication inside an upstart auto-start service


1. Start on boot
I have a script at /path/to/my/script.sh, and I need it to run every time the system starts (even if no one logs in to the system).

1.1 Method 1: Add an Upstart job
Create /etc/init/myjob.conf
$ vi /etc/init/myjob.conf

with content like the following:
description "my job"
start on startup
task
exec /path/to/my/script.sh

Okay!
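For a long-running robot service rather than a one-shot task, a job along the following lines may be more appropriate; the respawn stanza and the runlevel-based start/stop conditions here are assumptions added for illustration, not part of the original example:

```
description "my robot job"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /path/to/my/script.sh
```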

1.2 Method 2: Add to /etc/rc.local
$ sudo vi /etc/rc.local

with content like the following:
# This script is executed at the end of each multiuser runlevel
/usr/sbin/openvpn --config /home/dehaou1404/openv/client.conf > /dev/null & # others
/path/to/my/script.sh || exit 1 # Added new one
exit 0

Okay!

1.3 Method 3: Add an init script
Create a new script /etc/init.d/myscript.sh.
$ sudo vi /etc/init.d/myscript.sh
Make it executable:
$ sudo chmod ugo+x /etc/init.d/myscript.sh

#!/bin/bash
# … here do whatever you want to do …
exit 0

Configure the init system to run this script at startup:
$ cd /etc/init.d/
$ sudo update-rc.d myscript.sh defaults 98
Adding system startup for /etc/init.d/myscript.sh ...
/etc/rc0.d/K98myscript.sh -> ../init.d/myscript.sh
/etc/rc1.d/K98myscript.sh -> ../init.d/myscript.sh
/etc/rc6.d/K98myscript.sh -> ../init.d/myscript.sh
/etc/rc2.d/S98myscript.sh -> ../init.d/myscript.sh
/etc/rc3.d/S98myscript.sh -> ../init.d/myscript.sh
/etc/rc4.d/S98myscript.sh -> ../init.d/myscript.sh
/etc/rc5.d/S98myscript.sh -> ../init.d/myscript.sh

when you want to remove:
$ sudo update-rc.d -f myscript.sh remove
Removing any system startup links for /etc/init.d/myscript.sh ...
/etc/rc0.d/K98myscript.sh
/etc/rc1.d/K98myscript.sh
/etc/rc2.d/S98myscript.sh
/etc/rc3.d/S98myscript.sh
/etc/rc4.d/S98myscript.sh
/etc/rc5.d/S98myscript.sh
/etc/rc6.d/K98myscript.sh

update-rc.d installs and removes System-V style init script links. It maintains links /etc/rc<runlevel>.d/[SK]NNname pointing to the script /etc/init.d/name, where runlevel is one of 0123456S, S or K marks a start or kill link, and NN is the two-digit sequence number init uses to decide the order in which to run the scripts.
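A minimal sketch of such an init script, with the LSB header block that update-rc.d expects on newer systems; the job name and echo messages are placeholders, and the dispatch is wrapped in a function only to keep it in one place:

```shell
#!/bin/bash
### BEGIN INIT INFO
# Provides:          myscript
# Required-Start:    $network $remote_fs
# Required-Stop:     $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: start the robot script at boot
### END INIT INFO

# Dispatch on the action passed by init (start/stop).
handle() {
  case "$1" in
    start) echo "starting robot job" ;;
    stop)  echo "stopping robot job" ;;
    *)     echo "Usage: {start|stop}"; return 1 ;;
  esac
}

handle "${1:-start}"
```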

bash sample:
#!/bin/bash
# This script likes to log things, so create a log file for it.
LOG="/home/funbot/funbot.log"
# Time/date-stamp the log.
echo -e "\n$(date +%Y:%m:%d-%T) - Starting ROS daemon at startup" >> $LOG
# For boot-up time calculations.
START=$(date +%s.%N)
# It is important to wait until the IP address of the machine is actually configured.
while true; do
  IP="$(ifconfig | grep 'inet addr:172.168.100.137' | cut -d: -f2 | awk '{ print $1 }')"
  if [ "$IP" ]; then
    echo "Found"
    break
  fi
  sleep 1
done
END=$(date +%s.%N)
echo " Waited $(echo "$END - $START" | bc) seconds for the IP to come up" >> $LOG
# ... here do whatever you want to do ...
exit 0

2. Graphical approach
A gnome-terminal autostart shortcut

2.1 Create a bash file containing the roslaunch start command.
$ vi 00.start.sh

2.2 Make the script executable:
$ sudo chmod +x /path/to/00.start.sh

2.3 In System -> Preferences -> Startup Applications, add:
gnome-terminal -x /home/funstep/00.start.sh
This registers the command so the script auto-starts when Ubuntu boots.

2.4 Reboot, and the application should start automatically on every boot.
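Equivalently, the Startup Applications entry can be written by hand as a .desktop file; the file name funbot.desktop below is a hypothetical choice, and the Exec line reuses the command from the text:

```
# ~/.config/autostart/funbot.desktop  (hypothetical file name)
[Desktop Entry]
Type=Application
Name=funbot startup
Exec=gnome-terminal -x /home/funstep/00.start.sh
X-GNOME-Autostart-enabled=true
```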

Notice 1:
It is difficult to open multiple terminals from a single script,
so it is recommended to use a single launch file that includes everything.
Notice 2:
If using the & ampersand (or wait) to run tasks in the background:
{ command1 & command2 & } &
executes a group of commands in the current shell;
( command1 & command2 ) &
executes the commands in a list (subshell).
Notice 3:
The script is bash rather than sh!
If debugging is needed: in a terminal, go to Edit -> Profile Preferences -> Title and Command tab, and set "When command exits" to "Hold the terminal open".
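The difference in Notice 2 between { ... } (current shell) and ( ... ) (subshell) is easy to see with variable scope; a small sketch:

```shell
#!/bin/bash
x=1
# A subshell gets its own copy of the environment,
# so assignments inside it are invisible outside:
( x=2 )
echo "after subshell: x=$x"   # still 1
# A brace group runs in the current shell:
{ x=3; }
echo "after group: x=$x"      # now 3
```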

3. The ROS way
The robot_upstart package

3.1 Intro
The robot_upstart package may be used to install/uninstall Ubuntu Linux upstart jobs which launch groups of roslaunch files.
This package aims to assist with creating simple platform-specific jobs to start robot’s ROS launch files when its PC powers up.
This package allows robot administrators to quickly and easily generate one or more background jobs to run roslaunch invocations on system startup.

Main advantages of robot_upstart:
It automatically takes care of waiting for the network and any other required services to come up.
roslaunch will automatically start a roscore if one is not already running.
So unless you have very specific reasons (i.e. no upstart support), it is the best choice.

3.2 Install
$ unzip robot-upstart-src.zip ./
$ catkin_make ...
$ rosrun robot_upstart install xxx/launch/yyy.launch
If this fails, then:
$ mv ./robot-upstart-src/ ~/
$ sudo apt-get install ros-indigo-robot-upstart
$ rosrun robot_upstart install xxx/launch/yyy.launch
If it fails again, then:
$ mv ~/robot-upstart-src ./
$ catkin_make ...
Now it works.
$ sudo apt-get install ros-indigo-robot-upstart
WARNING: The following packages cannot be authenticated!
ros-indigo-robot-upstart
Install these packages without verification? [y/N]
y

3.3 Usage
The basic or one-off usage is with the install script:
$ rosrun robot_upstart install funbot/launch/r_rbt.launch
or
$ rosrun robot_upstart install funbot/launch/r_rbt.launch --interface wlan2
------
Install files to the following paths:
/etc/init/funbot.conf
/etc/ros/indigo/funbot.d/.installed_files
/etc/ros/indigo/funbot.d/r_rbt.launch
/usr/sbin/funbot-start
/usr/sbin/funbot-stop
Now calling: /usr/bin/sudo /opt/ros/indigo/lib/robot_upstart/mutate_files
[sudo] password for funstep:
Filesystem operation succeeded.
This installs the launch file as a service that starts automatically at boot.
To uninstall:
$ rosrun robot_upstart uninstall funbot

If the job crashes on startup, or you otherwise want to see what it printed to the terminal during startup:
$ sudo vi /var/log/upstart/funbot.log

It will start automatically when you next boot your machine.
You can bring it up and down manually with the service command (robot_upstart also offers a simple Python API):
$ sudo service funbot start
$ sudo service funbot stop

3.4 Serial communication inside an upstart auto-start service
The robot_upstart job executes as root and uses setuidgid to run the roslaunch process as the appropriate unprivileged user:
https://github.com/clearpathrobotics/robot_upstart/blob/hydro-devel/tmpl/start#L80
Unfortunately, setuidgid does not pick up the user's group memberships, so you need a udev rule to either make the device world-readable/writable, or to specifically chown it to your ROS user (administrator, on Clearpath-configured robot PCs).

First, to give the current user permission to open the serial port, add the user to the serial port's group.
Check which group owns the serial port:
$ ls -l /dev/ttyS0
crw-rw---- 1 root dialout 4, 64 Jan 27 21:52 /dev/ttyS0
Check the current user's groups:
$ id -Gn
funstep adm lp cdrom floppy sudo audio video plugdev users netdev lpadmin scanner
Add the current user to the port's group, dialout:
$ sudo adduser funstep dialout
Log out and back in for the change to take effect.

Alternatively, there is a temporary fix:
Short-term solution: this command allows full read/write access to serial port ttyS0:
$ sudo chmod 666 /dev/ttyS0
Or solve it permanently:
Long-term solution: install a custom udev rule.
$ cd /etc/udev/rules.d
Create the new file (with sudo):
$ sudo vi 40-permissions.rules
KERNEL=="ttyS0", MODE="0666"
Then run this command to reload the udev rules:
$ sudo service udev restart
All users should now have full access to ttyS0.

Designing a machine vision system

1. Camera
1.1 Camera spatial resolution
1.2 Camera pixel resolution
1.3 Camera shutter speed
2. Lens
2.1 Lens pixel resolution
2.2 Lens spatial resolution
2.3 Lens focal length
2.4 Other factors
3. Light source
3.1 Type
3.2 Color
3.3 Shape
3.4 Direction
3.5 Angle
3.6 Light field
---

Machine vision requirements from a machining company:
A lathe machines a steel round bar. As the cutting tool wears, later parts come out larger and larger. The question: can a machine vision system continuously measure the workpiece diameter and raise an alarm whenever it goes out of tolerance?
The maximum workpiece diameter is 16 mm, and the required accuracy is 2 "si" (1 si = 0.01 mm, i.e. 0.02 mm).

1. Camera

Qualitative analysis:
Since the vision system only needs to process two-dimensional brightness information of the workpiece, choose a monochrome camera.
For cost reasons, choose a CCD area-array sensor.

1.1 Determine the camera's spatial resolution Rs = Sf/Nf = 0.02/2 = 0.01 mm/pixel, i.e. the pixel equivalent. Here Sf is the size of the smallest feature the vision system must recognize, 0.02 mm, and Nf is the number of pixels the algorithm needs to recognize that feature, 2.

1.2 Determine the camera's pixel resolution Rc = FOV/Rs = 16/0.01 = 1600 pixels. Here FOV is the camera's field of view, 16 mm, Rs is the spatial resolution, 0.01 mm/pixel, and Rc is the image resolution, 1600 pixels.

1.3 Determine the camera's shutter speed Exp = Rs/v = 0.01/0.05 = 0.2 ms = 200 us, i.e. the exposure time should be under 200 microseconds. Here Rs is the spatial resolution, 0.01 mm, and v is the maximum surface speed, 0.05 m/sec. (Surface speed v = pi*D*n/1000 = 3 m/min = 0.05 m/sec, where pi is 3.14, D is the rotating diameter of tool and workpiece, 16 mm, and n is the spindle speed, 60 r/min; the intermediate result v is in m/min.)

Finally, considering common formats, a 1600x1200, 2-megapixel, 1/2" monochrome CCD industrial camera can be chosen.

2. Lens

The first thing to check when choosing a lens is whether it matches the camera. In principle, the lens format must be equal to or larger than the camera's sensor format. In measurement applications especially, a slightly larger format is preferred, because a lens usually distorts most at its edges.

2.1 Determine the lens's pixel resolution. Lens resolution, also called resolving power, is an important measure of lens quality. It is usually expressed as the number of alternating black and white line pairs that can be distinguished within 1 mm on the image plane, in line pairs per millimeter (lp/mm).
* From the sensor size: N = 180 / sensor height = 180/4.8 = 38 lp/mm (a 1/2" camera's sensor is 6.4 mm x 4.8 mm).
* From the camera pixels: 1600/6.4 = 250 pixels/mm, i.e. the pixel density on the sensor is 250 pixels/mm; since a line pair needs one black and one white line, divide by 2, so the lens resolution should be at least 125 lp/mm.
Taking both together, the lens resolution must be no lower than 125 lp/mm.

2.2 Determine the lens's spatial resolution. The camera's spatial resolution is FOV / (number of pixels on the sensor); it is fundamentally unrelated to the lens resolution, and the two are only connected through the Nyquist sampling theorem. The vast majority of vision systems determine system resolution by the ratio FOV/CCD pixels.
Here the lens's spatial resolution must be at least as good as the camera's 0.01 mm/pixel.

2.3 Determine the lens focal length.
The mechanical design allows the lens to be mounted about 100 mm from the workpiece. With the chosen 1/2" sensor, a workpiece field of view FOV = 16 mm and a working distance WD = 30 mm, then:
f = 30 x 6.4 / 16 = 12 mm
For economy, choose a common-series 1/2" standard lens, with the focal length fixed at f = 12 mm.
(1/2" standard lenses typically have f = 12 mm; 1/3" ones, 8 mm.)

2.4 Other factors
Additionally, the lens's resolving distance and numerical aperture can be determined.
With 125 lp/mm, two lines separated by 1/125 = 8 um can be distinguished; this 8 um is the width of the two lines, so the resolving distance is 8/2 = 4 um, which is also the Airy-disk radius.
Numerical aperture NA = 125/1500 = 0.083.

3. Light source

To shield the imaging of the workpiece surface from ambient noise, the plan is to build a protective enclosure at the test station, with the lighting system providing a uniform, stable luminous flux while blocking outside interference.

3.1 Type
Choose an LED light source; this needs no elaboration, as halogen lamps and the like are obsolete.

3.2 Color
Since the workpiece surface is coated and strongly reflective, and the target lies in the diffuse-reflection region, the analysis suggests: a combined structured light dominated by front, low-angle, bright-field, bar-shaped red light.

3.3 Shape
Experiments combined the three common source shapes: bar, ring, and coaxial. Visually, there were no specular highlights and no shadows, close to what the naked eye sees. Analysis with histogram and frequency-domain tools showed good results.

3.4 Direction
By illumination direction there is front lighting and back lighting. With front lighting, the source and camera are on the same side of the workpiece, which is good for capturing surface-feature images and suits appearance inspection. With back lighting, the source and camera are on opposite sides of the workpiece, which easily yields high-contrast (i.e. sharp) images.
To ease the mechanical design and on-site installation while still capturing the feature images, the project uses front lighting.

3.5 Angle
By the angle between the light and the workpiece surface, with 45 degrees as the boundary, a larger angle is high-angle lighting and a smaller one is low-angle lighting. High-angle lighting is commonly used to inspect silk-screen prints, logos, barcodes, characters and so on. Low-angle lighting is mainly used to inspect raised or recessed surface features, such as contours, edges, engraving, and scratches.
The project chooses low-angle lighting.

3.6 Light field
By the relative positions of source, camera, and workpiece, there is bright-field and dark-field lighting. In bright-field lighting the camera sits on the path of the light reflected from the source; in dark-field lighting the camera sits away from the source's specular reflection.
Bright-field lighting is mainly used to detect defects that scatter or absorb light; mostly the background is bright and the defect dark. Dark-field lighting is mainly used to detect scattering defects on smooth surfaces; mostly the background is dark and the defect visible, as with surface dirt and protrusions. Experience suggests bright field for lifted edges and pits, and dark field for cracks and blowholes.
Bright field is chosen.

Intelligent robots (77): ROS TF transform (04)

How is the coordinate frame tree tracked when working with transforms?

The tf tree keeps changing over time, and every transform is stored with a time snapshot; by default, at most the last 10 seconds are kept.
lookupTransform() can be used to access the latest available transform in the tf tree, even without knowing at what time that transform was recorded.

For example, the following broadcasts a new frame transform, from the parent turtle1 to the new carrot1, where carrot1 is offset 2 meters to the left of turtle1:
br.sendTransform(tf::StampedTransform(transform, ros::Time::now(), "turtle1", "carrot1"));
Here the transform is stamped with the current time, ros::Time::now().

Afterwards, the various pieces of information about carrot1 can be used:
listener.lookupTransform("/turtle2", "/carrot1", ros::Time(0), transform);
Here a time equal to 0 is specified; for tf, Time(0) means the latest available transform in the buffer.

Of course, the transform at a specific time can also be requested; for example, change it to the current time:
listener.lookupTransform("/turtle2", "/carrot1", ros::Time::now(), transform);

But now lookupTransform will fail, repeatedly printing errors like:
You requested a transform that is 0.018 miliseconds in the past,
but the most recent transform in the tf buffer is 7.681 miliseconds old.
When trying to transform between /carrot1 and /turtle2.

Or an error like this:
Exception thrown:
Lookup would require extrapolation into the past.
Requested time 1516087296.727826475 but the earliest data is at time 1516087297.522204578,
when looking up transform from frame [laser_link] to frame [map]

或者:
Transform [sender=unknown_publisher]
For frame [laser_link]: No transform to fixed frame [map]. TF error: [Lookup would require extrapolation into the future. Requested time 1516092877.852408757 but the latest data is at time 1516092875.632156846, when looking up transform from frame [laser_link] to frame [map]]

Why?

Because every listener has a buffer that stores all the coordinate transforms coming from the different broadcasters.
When a broadcaster sends a transform, it takes a few milliseconds for that transform to actually enter the buffer.

So when requesting, at the time "now", a transform for the time "now", one should wait a few milliseconds for it to arrive:
ros::Time now = ros::Time::now();
listener.waitForTransform("/turtle2", "/carrot1", now, ros::Duration(3.0));
listener.lookupTransform("/turtle2", "/carrot1", now, transform);
This means: wait for the transform from /turtle2 to /carrot1 at time now, but wait no longer than the maximum duration of 3.0 seconds.

So waitForTransform() actually blocks until the transform between the two frames becomes available;
this usually takes a few milliseconds, or, in the worst case, if the transform never becomes available, it waits all the way to the 3.0-second timeout.

Summary: lookupTransform() is a lower-level method that returns the transform between two coordinate frames; this method is the core functionality of the tf library.
In practice, however, end users will most often use the transform*() methods, which are built on top of lookupTransform().

Intelligent robots (75): ROS TF transform (03)

PART I - a few coordinate frames
*1. map
*2. odom
*3. base_link

PART II - a few transforms
0. base_link -> base_laser
1. odom -> base_link
2. map -> base_link
*3. map -> odom

PART III - the map -> odom code

PART IV - what the map -> odom transform does

PART I - a few coordinate frames

On the relationships among map, odom, base_link, base_laser, fixed_frame and target_frame.

These frames are specified in http://www.ros.org/reps/rep-0105.html :
earth -> map -> odom -> base_link,
the map frame is the parent of odom, and odom is the parent of base_link.

1. map: represents a fixed, unchanging world; it locates the robot's absolute pose.
2. odom: also represents a fixed world, but locates the robot's relative pose.
It is the world as measured by odometry (odom is the world as measured by odometry), used to locate the robot's relative pose.
If the odometry, laser and so on were perfect, the odom frame would be indistinguishable from the map frame; that is, the map->odom transform would be zero.
3. base_link: is rigidly attached to the robot base.

*1. map
map is the fixed frame of the virtual world, usually connected to odom (or odom_combined). Its Z axis points straight up, at the sky. map and odom are both global frames, with the origin at the map origin (which is specified in the map's yaml file).

The map frame is useful as a long-term global reference, but discrete jumps in position estimators make it a poor reference frame for local sensing and acting.
In a typical setup, a localization component constantly recomputes the robot pose in the map frame based on sensor observations, therefore eliminating drift, but causing discrete jumps when new sensor information arrives.

*2. odom
In a typical setup the odom frame is computed based on an odometry source, such as wheel odometry, visual odometry or an inertial measurement unit.

odom is a good fixed reference frame locally: at any given moment the robot's position relative to it is well defined, but since the robot's pose in odom drifts over time, it is not a good global reference in the long run.
However, the pose of a robot in the odom frame is guaranteed to be continuous, meaning that the pose of a mobile platform in the odom frame always evolves in a smooth way, without discrete jumps.

Odom frame is called so because it is the frame where the odometry is used to update the pose of the robot.
The odom frame represents the starting point of the robot, and, the transform from odom to base_link represents the current position of the robot as measured by odometry.
The odometry measurements are a measure of how far the robot has traveled with respect to the /odom frame. The point of odometry messages is to give a measure of the distance the robot has traveled.
In this frame, there is no correction or jumps in the robot’s pose, it is continually updated based on the data coming from the motor’s encoders (sometimes fused with accelerometer and gyro).

*3. base_link
The base_link frame is rigidly attached to the mobile robot base, at any chosen position and orientation.
It usually sits at the very base of the tf tree; its origin conventionally represents the robot's center, and it is the frame of the robot's own body. All other frames are defined relative to base_link and rigidly attached to it.
Together they move relative to the global map; moving the robot amounts to publishing geometry_msgs::TransformStamped messages to tf announcing the transform of base_link relative to map, which the robot follows once it hears the change.

PART II - a few transforms

0. base_link -> base_laser: gives the laser's position relative to the robot; it can be provided directly, or through the urdf model.
1. odom -> base_link: from the odometry node; the robot's relative pose in odom.
2. map -> base_link: usually from a localization module such as amcl; the robot's absolute pose relative to map.
*3. map -> odom: if both 1 and 2 were published, base_link would have two parents, so the correction is usually published in this form instead.

The important rule for coordinate transforms is that each frame has exactly one parent. When two frames A and B both relate to C, the tree is A -> B -> C rather than both A -> C and B -> C.
Concretely, for the three frames map/odom/base_link, both map and odom really relate to base_link, but to obey the "one parent per frame" rule there can be only one publisher: either the odometry publishes odom->base_link, or the laser-based localization publishes map->base_link, but never both at once. To still express all three relationships, the published chain is A -> B -> C, i.e. map -> odom -> base_link.
That is:

1. The odom->base_link transform is computed and published by the odometry node.
2. The map->base_link transform is computed by the localization node, but not published; instead, using the odom->base_link transform,
3. the localization node computes map->odom, and publishes that.

Note:
In practice the canonical map->odom->base_link chain (1 plus 3) is not always used; occasionally the simpler forms 1 or 2 alone are enough.
For example, with only odometry and no laser, the robot can still run, publishing only 1. odom->base_link.
With only a laser and no odometry, it can run publishing only 2. map->base_link.
And with both laser and odometry, it is still possible to publish just 1. odom->base_link or just 2. map->base_link.

*0. base_link -> base_laser
It is normally static, published either by a static_transform_publisher in one of the launch files, or by
robot_state_publisher from the urdf model, as in turtlebot_description.

*1. odom -> base_link
The odom->base_link transform is computed and broadcast by the odometry source. The link between odom and base_link is the job of the odometry node.
It may be broadcast by robot_pose_ekf, or directly by the controller's node: just set /mobile_base_controller/enable_odom_tf to true.

*2. map -> base_link
It is computed by a localization component.
However, the localization component does not broadcast map->base_link.
Instead, it takes map->base_link and, combined with odom->base_link, derives map->odom.

**3. map -> odom
One might say that both map and odom should be attached to base_link, and indeed map->base_link and odom->base_link are both correct.
Composing map->base_link with the inverse of odom->base_link yields map->odom.
Dehaoz Tips:
With a "perfect" odometry sensor and a "perfect" laser, the map->odom tf would always be zero.

So a tree in which base_link has both map and odom as parents is what one might first draw,
but this is not allowed, because each frame can only have one parent.

PART III - the map -> odom code

(The code that computes the map->odom transform from the map->base_link and odom->base_link relationships.)

ROS's AMCL algorithm mainly works as follows:
using the subscribed map data together with the laser-scan features, it runs a particle filter and obtains the best pose estimate Mp (point on map), i.e. the coordinates of base_link in map (map->base_link).
Then, using the relative pose from odometry, odom->base_link, whose origin is the robot's position at startup,
it derives map->odom.

Code analysis:

See https://github.com/ros-planning/navigation/blob/kinetic-devel/amcl/src/amcl_node.cpp
around line 1299:

AmclNode::AmclNode()
{
... ...
private_nh_.param("odom_frame_id", odom_frame_id_, std::string("odom"));
private_nh_.param("base_frame_id", base_frame_id_, std::string("base_link"));
private_nh_.param("global_frame_id", global_frame_id_, std::string("map"));
... ...
odom_frame_id_ = config.odom_frame_id;
base_frame_id_ = config.base_frame_id;
global_frame_id_ = config.global_frame_id;
... ...
updatePoseFromServer();
... ...
void AmclNode::laserReceived(const sensor_msgs::LaserScanConstPtr& laser_scan)
{
... ...
// Where was the robot when this scan was taken?
pf_vector_t pose;
if(!getOdomPose(latest_odom_pose_, pose.v[0], pose.v[1], pose.v[2],
laser_scan->header.stamp, base_frame_id_))
{
ROS_ERROR("Couldn't determine robot's pose associated with laser scan");
return;
}
... ...
geometry_msgs::PoseWithCovarianceStamped p;
// Fill in the header
p.header.frame_id = global_frame_id_;
p.header.stamp = laser_scan->header.stamp;
// Copy in the pose
p.pose.pose.position.x = hyps[max_weight_hyp].pf_pose_mean.v[0];
p.pose.pose.position.y = hyps[max_weight_hyp].pf_pose_mean.v[1];
tf::quaternionTFToMsg(tf::createQuaternionFromYaw(hyps[max_weight_hyp].pf_pose_mean.v[2]), p.pose.pose.orientation);
// hyps[max_weight_hyp].pf_pose_mean.v[0], [1], [2] hold the particle filter's best estimate Mp,
// i.e. the coordinates of base_link in map (map->base_link).

... ...
pose_pub_.publish(p);
last_published_pose = p; // the pose of base_link in map.
ROS_DEBUG("New pose: %6.3f %6.3f %6.3f",
hyps[max_weight_hyp].pf_pose_mean.v[0],
hyps[max_weight_hyp].pf_pose_mean.v[1],
hyps[max_weight_hyp].pf_pose_mean.v[2]); // the map pose estimated by amcl

// Subtracting base_link->odom from map->base_link gives map->odom.

// This odom->map is the key quantity to solve for:
// the transform from the odom frame to the map frame, i.e. the pose of map in odom.
tf::Stamped<tf::Pose> odom_to_map;

try
{
// From the particle filter's best pose estimate in map,
// build tmp_tf = map->base_link, i.e. the pose of base_link in map
// (base_link as seen from the map origin).
tf::Transform tmp_tf(tf::createQuaternionFromYaw(hyps[max_weight_hyp].pf_pose_mean.v[2]),
tf::Vector3(hyps[max_weight_hyp].pf_pose_mean.v[0],
hyps[max_weight_hyp].pf_pose_mean.v[1],
0.0));

// Invert to get tmp_tf_stamped = base_link->map,
// i.e. the pose of map in the base_link frame.
tf::Stamped<tf::Pose> tmp_tf_stamped(tmp_tf.inverse(),
laser_scan->header.stamp,
base_frame_id_);

// transformPose("frame_c", b->a, ??) yields ?? = c->a:
// given the pose of a in b, it returns the pose of a in c.
// Here, from tmp_tf_stamped = base_link->map,
// it yields odom_to_map = odom->map, the transform from odom to map,
// i.e. the pose of map in odom (the map origin as seen from the odom origin).
this->tf_->transformPose(odom_frame_id_,
tmp_tf_stamped,
odom_to_map);
}
// Subtracting base_link->odom from map->base_link gives map->odom; map->odom is sent instead.
catch(tf::TransformException)
{
ROS_DEBUG("Failed to subtract base to odom transform");
return;
}

// latest_tf_ = odom->map
latest_tf_ = tf::Transform(tf::Quaternion(odom_to_map.getRotation()),
tf::Point(odom_to_map.getOrigin()));
latest_tf_valid_ = true;

if (tf_broadcast_ == true)
{
// We want to send a transform that is good until a tolerance time so that odom can be used
ros::Time transform_expiration = (laser_scan->header.stamp + transform_tolerance_);

// Invert to map->odom, i.e. the pose of odom in map
// (the odom origin as seen from the map origin).
tf::StampedTransform tmp_tf_stamped(latest_tf_.inverse(),
transform_expiration,
global_frame_id_, odom_frame_id_);
// Publish map->odom.
this->tfb_->sendTransform(tmp_tf_stamped);
sent_first_transform_ = true;
}
… …

odom->base_link is the raw transform from the wheel encoders;
map->base_link is obtained from laser scan matching.
T(map->odom) = T(map->base_link) * T(odom->base_link)^-1,
i.e. the transform from map to odom equals the transform from map to base_link, composed with the inverse of the transform from odom to base_link.
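In transform notation (writing T_{a->b} for the transform that expresses frame b in frame a), the relation the amcl code implements can be stated as:

```latex
% map -> base comes from scan matching, odom -> base from the encoders.
% Chaining map -> odom -> base must reproduce map -> base:
T_{map \to base} = T_{map \to odom} \, T_{odom \to base}
% so the correction that amcl publishes is
T_{map \to odom} = T_{map \to base} \, T_{odom \to base}^{-1}
```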

REF:
http://wiki.ros.org/amcl
https://github.com/ros-planning/navigation/tree/kinetic-devel/amcl#amcl_node.cpp
https://answers.ros.org/question/227425/why-gmapping-package-requires-the-base_link-odom-tf/
http://blog.csdn.net/crazyquhezheng/article/details/49154805

PART III - a summary from Hi58

ROS tf frames ultimately reduce to three standard frames, which between them solve many common robot problems:
1. a globally inaccurate but locally smooth frame (odom)
2. a globally accurate but locally discontinuous frame (map)
3. the robot's own body frame (base_link)

The odom pose is the /base_link frame pose relative to /odom frame.
IMU and encoders provide the odom pose which drifts over time and thus only right for a short time.
odom frame is a pseudo-global reference frame in which a dead reckoning position and orientation estimate is represented.

The absolute/global pose is the /base_link frame pose relative to /map frame.
GPS, amcl and indoor position system provide the absolute/global pose.

Having the odom pose means you have the tf odom->base_link;
having the absolute/global pose means you have map->base_link.

To correct the robot's location, you have to provide the tf map->odom,
which can be computed by composing map->base_link with the inverse of odom->base_link.

For example, in hector_slam_launch:

*1. it can publish the map->odom transform;

*2. if you do not require an odom frame (for example because your platform does not provide any usable odometry),
you can directly publish a transformation from map to base_link;

*3. it can also be used without broadcasting any tf transformation.
This can be useful if another node is responsible for publishing the map->odom or map->base_frame transform.

PART IV - what the map -> odom transform does

In order to run robot localization and model the drifts of the robot while it is navigating, a transform between the frame map and the frame odom was introduced.
At the beginning map and odom frame are overlapping.
While the robot navigates and localizes (using amcl for example) it will correct its drift but it will not do this correction by changing its pose in the odom frame (this pose published under the topic /odom).
It will update the transform between map and odom frames such that when you chain the transforms to get the pose of the robot in the map frame, you will get the corrected pose (thus the transform between map and odom frames represents the offset in the robot’s pose that is given in the odom frame).
Do not confuse the frame odom with the robot’s pose published under the topic /odom.
The frame odom is just a coordinate system where the robot has to be located. This location of the robot would then be the coordinates of base_link in the odom frame. These coordinates are published in the /odom topic.
So when you turn on the robot, base_link and odom are overlapping as the robot’s pose is zero at the beginning.
But once the robot’s wheels start rotating, base_link will get away from odom.
So base_link is a mobile frame and odom a fixed one.

1.
In the beginning, we assume that the origin of the world coordinate system and the origin of odometry are the same.
https://qiita-image-store.s3.amazonaws.com/0/110138/0e9bfbe3-8061-405c-c26b-191f2a87b0c4.png

2.
Then the robot moves:
https://qiita-image-store.s3.amazonaws.com/0/110138/f35f03da-a3b5-f959-363d-c9e1a26f4aed.png

3.
However, in fact, due to accumulation of various disturbances, the origin of the world coordinate system and odometry will deviate.
https://qiita-image-store.s3.amazonaws.com/0/110138/40a3eada-b7ed-a304-3f74-3e7cdeaea151.png
Therefore, by matching the sensor information against the map information, we arrive at the idea of correcting the deviation between the origin of the world coordinate system and the origin of odometry. The package for that is amcl.