Robot Applications

Intelligent Robots (71): ROS TF Transforms (01)

PART I ====== Transformation Matrices
PART II ====== Euler Angles
PART III ====== Quaternions
PART IV ====== Coordinate Frames: Spatial Description and Transformation

PART I ====== Transformation Matrices

1. 2D translation

In general, translation, rotation and scaling can all be represented by matrices. However, a 2x2 transformation matrix can represent 2D rotation and scaling but has no way to express a 2D translation; likewise, a 3x3 matrix cannot express a 3D translation.
So, to describe 2D translation, rotation and scaling in a unified way, 3x3 matrices are introduced; similarly, 4x4 matrices are needed to describe 3D transformations uniformly.
This is why points and vectors are described with homogeneous coordinates.

Consider a 2D translation, as shown in the figure below:

Point P is translated by tx along x and ty along y to P′, which gives:
x′ = x + tx
y′ = y + ty
With homogeneous coordinates a 2D point is written as (x, y, w) (usually w = 1).
The translation can then be written in matrix form:
[x′]   [1, 0, tx]   [x]
[y′] = [0, 1, ty] * [y]
[w′]   [0, 0, 1 ]   [w]
That is, the translation matrix is
1, 0, tx
0, 1, ty
0, 0, 1
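
As a quick check, here is a minimal numpy sketch (the function name translation_2d is my own, not from the original article) that builds this homogeneous translation matrix and applies it to a point:

import numpy as np

def translation_2d(tx, ty):
    # Homogeneous 3x3 matrix for a 2D translation by (tx, ty).
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

p = np.array([2.0, 3.0, 1.0])           # the point (2, 3) in homogeneous form (w = 1)
print(translation_2d(5.0, -1.0) @ p)    # -> [7. 2. 1.]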

2. 2D rotation about the origin

First note that in 2D a rotation is about a point, while in 3D a rotation is about an axis.

The simplest case is a 2D rotation about the coordinate origin. As shown in the figure below, point v is rotated about the origin by angle θ to point v′:

Suppose v has coordinates (x, y); we want the coordinates (x′, y′) of v′.
Introduce some intermediate variables: let r be the distance from the origin to v, and ϕ the angle between the vector from the origin to v and the x axis. Then:
x = r cosϕ
y = r sinϕ

x′ = r cos(θ+ϕ)
y′ = r sin(θ+ϕ)
Expanding with the angle-sum identities gives
x′ = r cosθ cosϕ − r sinθ sinϕ
y′ = r sinθ cosϕ + r cosθ sinϕ
Substituting the expressions for x and y yields:
x′ = x cosθ − y sinθ
y′ = x sinθ + y cosθ
In matrix form:
[x′]   [cosθ, -sinθ]   [x]
[y′] = [sinθ,  cosθ] * [y]
With homogeneous coordinates, extended to 3x3 form, the matrix for a rotation about the origin is:
cosθ, -sinθ, 0
sinθ,  cosθ, 0
0,     0,    1

3. 2D rotation about an arbitrary point

Rotation about the origin is the most basic case of 2D rotation. A rotation about an arbitrary point can be reduced to that basic case as follows:
1. First translate the arbitrary rotation center to the coordinate origin;
2. Perform the simple rotation about the origin;
3. Finally translate the rotation center back to its original position.
In other words, rotating about an arbitrary point needs two extra translations, one at the beginning and one at the end, with the actual rotation in between (a numpy sketch of this composition follows below).

Let the translation matrix be T(x, y); the coordinates we want are then v′ = T(x,y) * R * T(-x,-y) * v.
(Points are described as column vectors, so the matrices act by left-multiplication: T(-x,-y) is applied first, then the factors to its left in turn.)
The matrix for a 2D rotation about an arbitrary point is therefore simply the product of the three matrices:
M =
[1, 0, tx]   [cosθ, -sinθ, 0]   [1, 0, -tx]   [cosθ, -sinθ, (1-cosθ)tx + ty*sinθ]
[0, 1, ty] * [sinθ,  cosθ, 0] * [0, 1, -ty] = [sinθ,  cosθ, (1-cosθ)ty - tx*sinθ]
[0, 0, 1 ]   [0,     0,    1]   [0, 0,  1 ]   [0,     0,    1                  ]

Comparing with the translation and rotation matrices above:
the upper-left 2x2 block of this 3x3 matrix carries the rotation,
and the third column carries the translation.
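
The same composition as a minimal numpy sketch (the function name rot_about_point is mine, for illustration only):

import numpy as np

def rot_about_point(theta, cx, cy):
    # 3x3 homogeneous matrix: rotate by theta about the point (cx, cy).
    c, s = np.cos(theta), np.sin(theta)
    T_fwd  = np.array([[1, 0,  cx], [0, 1,  cy], [0, 0, 1]], dtype=float)
    R      = np.array([[c, -s,  0], [s,  c,  0], [0, 0, 1]], dtype=float)
    T_back = np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1]], dtype=float)
    return T_fwd @ R @ T_back      # applied right-to-left to a column vector

# Rotating the center itself must leave it fixed:
M = rot_about_point(np.pi / 3, 2.0, 1.0)
print(M @ np.array([2.0, 1.0, 1.0]))   # -> [2. 1. 1.]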

4. Basic 3D rotations

Again, in 2D a rotation is about a point, while in 3D it is about an axis.
The 3D axes follow the right-hand rule, and the sign of the rotation angle also follows the right-hand rule, as shown in the figure below:

Any 3D rotation can be decomposed into rotations about the basic coordinate axes.
So we first discuss rotations about the three coordinate axes x, y and z.

4.1. Rotation about the X axis
In 3D, a point P(x,y,z) is rotated about the x axis by angle θ to the point P′(x′,y′,z′).
Since the rotation is about the x axis, the x coordinate stays unchanged and a 2D rotation takes place in the yoz plane spanned by y and z (y plays the role of the 2D x axis, and z plays the role of the 2D y axis). After this reduction:
x′ = x
y′ = y cosθ − z sinθ
z′ = y sinθ + z cosθ

4.2. Rotation about the Y axis
Rotation about the Y axis is analogous: the Y coordinate stays unchanged and a 2D rotation takes place in the zox plane (z plays the role of the 2D x axis, and x plays the role of the 2D y axis), giving:
x′ = z sinθ + x cosθ
y′ = y
z′ = z cosθ − x sinθ
Note that the plane is zox, not xoz; this follows from the right-handed coordinate system in the figure.

4.3. Rotation about the Z axis
Similarly, for a rotation about the Z axis the Z coordinate stays unchanged and an ordinary 2D rotation takes place in the xoy plane (exactly the 2D rotation discussed earlier).

4.4. Summary of 3D rotations
Write the rotation matrices about the X, Y and Z axes as Rx(α), Ry(β), Rz(θ) respectively; they follow directly from the component equations in 4.1-4.3 (see the sketch below).
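
The original post shows these three matrices as an image; as a hedged reconstruction from the component equations above, here is a small numpy sketch:

import numpy as np

def Rx(a):   # rotation about the x axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0,  0],
                     [0, c, -s],
                     [0, s,  c]])

def Ry(b):   # rotation about the y axis
    c, s = np.cos(b), np.sin(b)
    return np.array([[ c, 0, s],
                     [ 0, 1, 0],
                     [-s, 0, c]])

def Rz(t):   # rotation about the z axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])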

5. 3D rotation about an arbitrary axis

A 3D rotation about an arbitrary axis can be decomposed into a series of basic rotations, just like the 2D rotation about an arbitrary point.
Given a point P, an arbitrary axis vector u and a rotation angle θ, rotating P about u by θ gives the point Q. Knowing P and u, how do we find the coordinates of Q? As shown in the figure below:

The idea is to rotate the vector u until it coincides with the z axis; rotating P to Q is then a single basic 3D rotation about the Z axis; afterwards the inverse rotations bring u back to its original direction. The required steps are:
1. Rotate the axis u about the x axis into the xoz plane
2. Rotate the axis u about the y axis until it coincides with the z axis
3. Rotate about the z axis by θ
4. Undo step 2
5. Undo step 1

5.0. The original rotation axis u is shown in the figure below:

5.1. Steps 1, 2 and 3 are shown in the figures below:

5.2. Step-by-step analysis
Step 1, bringing u into the xoz plane, is a rotation about the x axis; step 2 then rotates u until it coincides with the z axis.
Steps 1 and 2 are illustrated below:

Project the endpoint of u onto the yoz plane to get the point q with coordinates (0, b, c). The angle α between the line oq (from the origin o to q) and the z axis is the angle by which u must be rotated about the x axis; this rotation brings u into the xoz plane (the vector or in the figure).
[Step 1]
Drop a perpendicular from r onto the z axis; the angle β between or and the z axis is the angle of rotation about the Y axis; this rotation makes u coincide with the z axis.
[Step 2]

The rotation in step 1 is a basic 3D rotation about the x axis (note that α is measured in the positive sense about x);
its rotation matrix is Rx(α).
After step 1 the vector u has been moved to the position of r. Step 2 then rotates about the y axis by the negative angle −β; after this transformation u coincides exactly with the z axis. This is again a basic rotation about the Y axis,
with rotation matrix Ry(−β).

After these two steps u coincides with the z axis, so the rotation by θ is a basic 3D rotation about the z axis, Rz(θ).

The last two steps are the inverses of steps 1 and 2: a rotation about the Y axis by β and a rotation about the X axis by −α, with matrices Ry(β) and Rx(−α) respectively.

The final rotation matrix about the arbitrary axis u is therefore:
M = Rx(−α) Ry(β) Rz(θ) Ry(−β) Rx(α)
(Since column vectors are used, the matrices act by left-multiplication, i.e. from right to left.)

(Note: the (u, v, w) used in some references corresponds to the axis vector (a, b, c) here.)

If the axis vector is normalized (a unit vector), then a² + b² + c² = 1 and the formula above simplifies to the standard axis-angle (Rodrigues) form; a numerical sketch follows below.
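
A minimal numpy sketch (my own illustration, not code from the article) that derives α and β from a unit axis u = (a, b, c), composes M = Rx(−α)·Ry(β)·Rz(θ)·Ry(−β)·Rx(α), and checks it against the standard axis-angle (Rodrigues) form R = cosθ·I + sinθ·[u]x + (1−cosθ)·u·uᵀ:

import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_about_axis(u, theta):
    # Compose the five basic rotations described above, for a unit axis u = (a, b, c).
    a, b, c = u
    d = np.hypot(b, c)            # length of the projection of u onto the yoz plane
    alpha = np.arctan2(b, c)      # step 1: rotate about x to bring u into the xoz plane
    beta = np.arctan2(a, d)       # step 2: rotate about y to bring u onto the z axis
    return Rx(-alpha) @ Ry(beta) @ Rz(theta) @ Ry(-beta) @ Rx(alpha)

def rodrigues(u, theta):
    # Standard axis-angle form: cosθ·I + sinθ·[u]x + (1-cosθ)·u·uᵀ
    u = np.asarray(u, dtype=float)
    K = np.array([[0, -u[2], u[1]], [u[2], 0, -u[0]], [-u[1], u[0], 0]])  # cross-product matrix [u]x
    return np.cos(theta) * np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * np.outer(u, u)

u = np.array([1.0, 2.0, 2.0]) / 3.0                               # a unit axis
print(np.allclose(rot_about_axis(u, 0.7), rodrigues(u, 0.7)))     # -> True
print(np.allclose(rot_about_axis(u, 0.7) @ u, u))                 # -> True: the axis itself is unchanged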

PART II ====== Euler Angles

6. Euler angles

The previous sections derived the rotation matrices about the three coordinate axes. A general rotation matrix C has the form:
[c11, c12, c13]
[c21, c22, c23]
[c31, c32, c33]
(homogeneous coordinates are not used here)
(The unit vectors along the three axes of a Cartesian frame form an orthonormal basis, so the rotation matrix C is an orthogonal matrix.
The rotation matrix therefore appears to have 9 parameters, but only three of them are independent; the other six are fixed by constraints.)

The three column vectors of the rotation matrix C are, in fact, the coordinates of the unit vectors along the three axes of the original frame expressed in the rotated (new) frame.

To expose these three independent parameters directly, Euler proved the following fact:
any rotation can be realized by three successive rotations about coordinate axes, and the three rotation angles are the three independent parameters, called the Euler angles.
The reason Euler angles can describe rotations is Euler's rotation theorem: any rotation can be represented by three axis-rotation parameters.

To define a set of Euler angles, the following must be specified:
1. the order in which the three axis rotations are combined (xyz, yzx, zxy, ...);
2. the reference frame of the rotations (about the fixed frame or about the body's own frame);
3. whether the angles follow a left-handed or right-handed convention;
4. the notation used for the three angles.
Different authors may use different rotation axes and different rotation orders.
When using someone else's Euler-angle implementation, first find out which convention it uses.
The Euler-angle representation differs depending on the order of the axis rotations.

There are two ways to describe the orientation of a frame {B} relative to a reference frame {A}.
* The first is rotation about the fixed (reference) axes:
start with the two frames coincident, rotate {B} about {A}'s X axis by γ, then about {A}'s Y axis by β, and finally about {A}'s Z axis by α to reach the current orientation.
This is called X-Y-Z fixed angles, or RPY (Roll, Pitch, Yaw).
* The other way is rotation about the body's own axes:
start with the two frames coincident, rotate {B} about its own Z axis by α, then about its own Y axis by β, and finally about its own X axis by γ to reach the current orientation.
This is called Z-Y-X Euler angles, since the rotations are about the body's own axes.

ROS's TF uses the former.

6.1 A common Euler-angle convention is Yaw-Pitch-Roll (applied in Y-X-Z order); the picture below illustrates it.
Yaw: about the y axis of the Euler-angle frame
Pitch: about the x axis of the Euler-angle frame
Roll: about the z axis of the Euler-angle frame

6.2 In ROS TF frames,
the Euler angles used are RPY, which means rotations about the fixed x, y, z axes.
Roll, Pitch and Yaw correspond to rotations about the X, Y and Z axes respectively.

Roll: rolling

Pitch: pitching

Yaw: yawing

Let the Roll, Yaw and Pitch angles be φ, ψ and θ respectively. The rotation matrix for this Euler-angle rotation (the composition of the fixed-axis rotations, roll first, then pitch, then yaw) is:
[cosψ cosθ,  cosψ sinθ sinφ − sinψ cosφ,  cosψ sinθ cosφ + sinψ sinφ]
[sinψ cosθ,  sinψ sinθ sinφ + cosψ cosφ,  sinψ sinθ cosφ − cosψ sinφ]
[−sinθ,      cosθ sinφ,                   cosθ cosφ                 ]
In fact the Roll/Pitch/Yaw rotations correspond exactly to the rotation matrices Rx(), Ry(), Rz() given earlier, and the matrix above is simply their composition R = Rz(ψ) · Ry(θ) · Rx(φ).
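
A quick numpy check of this composition (my own sketch; roll, pitch and yaw are applied about the fixed x, y and z axes in that order, and the angle values are arbitrary examples):

import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

roll, pitch, yaw = 0.1, 0.2, 0.3
R = Rz(yaw) @ Ry(pitch) @ Rx(roll)                     # fixed-axis X-Y-Z (RPY) composition
print(np.allclose(R, Rx(roll) @ Ry(pitch) @ Rz(yaw)))  # -> False: the order of composition matters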

6.3
Notation for the rotation angles:

Order  | Aircraft | Telescope | Symbol | Rate
First  | heading  | azimuth   | θ      | yaw
Second | attitude | elevation | ϕ      | pitch
Third  | bank     | tilt      | ψ      | roll

6.4
The advantage of Euler angles is that they are simple and easy to understand, but as a tool for representing rotations they have a serious defect: gimbal lock. Gimbal lock occurs when two of the rotation axes end up pointing in the same direction.
More precisely, when two rotation axes become parallel, gimbal lock occurs: a rotation about one axis duplicates the rotation about the other, and one degree of freedom is lost.

PART III ====== Quaternions

7. Composing successive rotations

Suppose an object is rotated once, with Euler angles (a1, a2, a3), and then rotated again, with Euler angles (b1, b2, b3). Can the combined effect be obtained with a single rotation whose Euler angles are (c1, c2, c3)?

Finding those angles is not straightforward: you cannot simply use (a1+b1, a2+b2, a3+b3).
In general the Euler angles have to be converted into the rotation matrices above or the quaternions below, the successive rotations are composed in that representation, and the result is converted back to Euler angles (a small sketch of this round trip follows below).
Doing this conversion back and forth many times, however, can accumulate large errors and corrupt the result. The better approach is to work directly with rotation matrices or quaternions for this kind of problem.
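
A minimal sketch of that round trip, assuming the tf.transformations module that ships with ROS (or the standalone transformations.py) is available; the angle values are just examples:

from tf.transformations import euler_matrix, euler_from_matrix

# Two successive rotations, each given as fixed-axis XYZ (roll, pitch, yaw) Euler angles.
a = (0.1, 0.2, 0.3)
b = (0.4, -0.1, 0.2)

Ra = euler_matrix(*a, axes='sxyz')
Rb = euler_matrix(*b, axes='sxyz')

# Compose in matrix form (second rotation applied after the first), then convert back.
c = euler_from_matrix(Rb.dot(Ra), axes='sxyz')
print(c)   # generally NOT equal to (a1+b1, a2+b2, a3+b3)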

One important application of quaternions is describing 3D rotations. In some sense a quaternion is a rotation in four-dimensional space, which is hard to visualize; knowing the results and the situations where they apply matters more.

7.1
Quaternions are closely related to complex numbers, so we first recall a few facts about them. An important notion is the complex conjugate, obtained by negating the imaginary part:
z = a + bi,  z* = a − bi

When a complex number is multiplied by i and the result is plotted in the complex plane, the new position is exactly the old one rotated by 90 degrees about the origin.

So we may guess that complex multiplication and rotation are related. For example,
define a complex number q to be used as a rotation factor:
q = cosθ + i sinθ

Multiplying (a + bi) by q and writing the result in matrix form gives:
[a′]   [cosθ, -sinθ]   [a]
[b′] = [sinθ,  cosθ] * [b]
This is exactly the 2D rotation formula: plotting the resulting (a′ + b′i) in the complex plane gives exactly the original point (a + bi) rotated by angle θ.

7.2
Since complex multiplication can describe 2D rotation, can adding one more dimension describe 3D rotation? That was exactly the original idea of William Hamilton, the inventor of quaternions, i.e. to use
z = a + ib + jc
i² = j² = −1
Unfortunately, multiplication of such three-component numbers is not closed: the product of two of them is not necessarily of the same form.
Hamilton eventually realized that the operation he needed is impossible in three dimensions, but possible in four.

A quaternion is thus another way to describe a 3D rotation, using 4 components. A quaternion can be written as:
q = xi + yj + zk + w
or, more conveniently,
q = (x, y, z, w) = (v, w)
where v is a vector and w is a real number.
A quaternion with unit norm is called a unit quaternion.

Note:
This notation looks like homogeneous coordinates, but the two have nothing to do with each other.
Quaternions are often used to represent rotations, but reading them as "w is the rotation angle, v is the rotation axis" is wrong; the correct statement is "w is related to the rotation angle, v is related to the rotation axis".

7.3
Adding two quaternions:
(a + bi + cj + dk) + (e + fi + gj + hk) = (a+e) + (b+f)i + (c+g)j + (d+h)k
Subtracting two quaternions:
(sa, va) − (sb, vb) = (sa − sb, va − vb)

7.4 Constructing quaternions
Euler's theorem tells us that any 3D rotation can be described by a rotation axis and a rotation angle, so quaternions are usually constructed from an axis and an angle:

a) Quaternion for a rotation by angle θ about a vector u
u = (ux, uy, uz) = ux i + uy j + uz k
q = exp[(θ/2)(ux i + uy j + uz k)]
  = cos(θ/2) + (ux i + uy j + uz k) sin(θ/2)
So a rotation can be carried out with the quaternion q = ((x, y, z) sin(θ/2), cos(θ/2)).

Stated the other way around:
a unit quaternion of the form q = (u · sin(θ/2), cos(θ/2)) represents a rotation by angle θ about the axis u.
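
A minimal sketch of this construction, assuming the tf.transformations module from ROS is available (it stores quaternions as (x, y, z, w)); the axis and angle below are arbitrary examples, and the vector is rotated via q · (v, 0) · q*:

import numpy as np
from tf.transformations import quaternion_about_axis, quaternion_multiply, quaternion_conjugate

axis = np.array([0.0, 0.0, 1.0])            # rotate about the z axis
theta = np.pi / 2

q = quaternion_about_axis(theta, axis)      # == (axis * sin(theta/2), cos(theta/2))
print(q)                                    # -> [0, 0, 0.7071, 0.7071]

v = np.array([1.0, 0.0, 0.0, 0.0])          # the vector (1, 0, 0) as a pure quaternion (w = 0)
v_rot = quaternion_multiply(quaternion_multiply(q, v), quaternion_conjugate(q))
print(v_rot[:3])                            # -> approximately [0, 1, 0]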

b) Constructing a quaternion that rotates one given vector onto another

c) Recovering the rotation matrix and rotation angle from a quaternion
Let the quaternion be q = xi + yj + zk + w. The rotation angle and the rotation axis (a, b, c) are then:
angle = 2 * acos(w)
a = x / sqrt(1 - w*w)
b = y / sqrt(1 - w*w)
c = z / sqrt(1 - w*w)
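
The same extraction as a small Python function (my own sketch; the guard for w close to ±1 handles the near-identity rotation, where the axis is undefined):

import math

def quaternion_to_axis_angle(x, y, z, w):
    # Recover (axis, angle) from a unit quaternion q = xi + yj + zk + w.
    angle = 2.0 * math.acos(max(-1.0, min(1.0, w)))    # clamp w against rounding errors
    s = math.sqrt(max(0.0, 1.0 - w * w))
    if s < 1e-9:
        return (1.0, 0.0, 0.0), 0.0                    # near-identity rotation: any axis will do
    return (x / s, y / s, z / s), angle

axis, angle = quaternion_to_axis_angle(0.0, 0.0, math.sin(0.25), math.cos(0.25))
print(axis, angle)   # -> (0.0, 0.0, 1.0) 0.5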

PART IV ====== Coordinate Frames: Spatial Description and Transformation

8. Describing the pose of a rigid body

A rigid body has 6 degrees of freedom in space, so its pose can be described by 6 variables:
(x, y, z, alpha, beta, gamma)

Two coordinate frames are involved here: the global frame, and the local frame fixed to the rigid body.
The corresponding coordinate transformation is the relation between the coordinates of one and the same spatial vector expressed in these two frames.

Note that a rotation matrix by itself is not the same thing as the coordinate transformation between the two frames.
What the rotation matrix describes is the rotational motion of the frame, i.e. how the global frame is rotated onto the local frame,
or in other words, how a vector (a frame is really just three vectors) is rotated to a new position.
Note that this vector rotation takes place within a single coordinate frame.

Rotating the coordinate frame by an angle θ is equivalent to rotating the target point about the origin by the same angle θ in the opposite direction.

8.1 From this we can derive the coordinate transformation between two frames.

For a spatial vector P with coordinates (x0, y0, z0) in the global frame and (x1, y1, z1) in the local frame:
P = (i, j, k)(x0, y0, z0)ᵀ = (i′, j′, k′)(x1, y1, z1)ᵀ = (i, j, k) R (x1, y1, z1)ᵀ
where (i, j, k) and (i′, j′, k′) are the basis vectors of the global and local frames, and (i′, j′, k′) = (i, j, k) R.

Comparing the two sides gives:
(x0, y0, z0)ᵀ = R (x1, y1, z1)ᵀ
POS_global = R_g->l * POS_local

That is, the coordinates of a point in the global frame equal the transformation matrix from the global frame to the local frame multiplied by the coordinates of the point in the local frame.

8.2 General spatial description and transformation
More generally, given two frames A and B and a point P expressed in frame B, how do we express P in frame A? This is the problem of spatial description and transformation.
Ap denotes p expressed in frame A, and Bp denotes p expressed in frame B. A_Borg denotes the origin of frame B expressed in frame A; it is the translation part.
To convert Bp into the frame-A expression Ap we also need to multiply by the rotation matrix R_A->B, the rotation part.
Expressing the point p of frame B in frame A:
Ap = R_A->B * Bp + A_Borg
The R_A->B in the middle of this expression can be called a rotation operator.
Using homogeneous coordinates, the rotation and translation can be written together as a single matrix:
Ap = M * Bp

That is, the coordinates of the point in frame A equal the transformation matrix from frame A to frame B multiplied by the coordinates of the point in frame B.
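
A small numpy sketch of this relation (the rotation R_AB and the origin A_Borg below are made-up example values):

import numpy as np

# Example only: frame B is rotated 90 degrees about z relative to frame A,
# and B's origin sits at (1, 2, 0) in frame A.
R_AB = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
A_Borg = np.array([1.0, 2.0, 0.0])

# 4x4 homogeneous transform M such that Ap = M * Bp
M = np.eye(4)
M[:3, :3] = R_AB
M[:3, 3] = A_Borg

Bp = np.array([1.0, 0.0, 0.0, 1.0])   # a point expressed in B (homogeneous coordinates)
print(M @ Bp)                          # -> [1. 3. 0. 1.], the same point expressed in A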

8.3 Applied to ROS tf
Suppose there are three frames: a world frame world, a known frame known, and an unknown frame query.

To obtain the expression of the unknown frame in the world frame, the transforms are chained:

that is: transform of the unknown frame in the world frame = transform of the known frame in the world frame * transform of the unknown frame in the known frame (with the column-vector convention used above, the matrices compose from right to left; a small sketch follows below).
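
A minimal numpy sketch of the chaining (all values are made-up 4x4 homogeneous transforms; T_world_known is my shorthand for "the known frame expressed in the world frame"):

import numpy as np

def make_T(R, t):
    # Build a 4x4 homogeneous transform from a 3x3 rotation and a translation.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Example values only.
T_world_known = make_T(np.eye(3), [1.0, 0.0, 0.0])   # known frame, 1 m ahead in world
T_known_query = make_T(np.eye(3), [0.0, 2.0, 0.0])   # query frame, 2 m to the left of known

# Chaining: the query frame expressed in the world frame.
T_world_query = T_world_known @ T_known_query
print(T_world_query[:3, 3])                           # -> [1. 2. 0.]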

ROS on the TK1 - 31

  • 1. Installing ROS on ARM

1.1 Custom configuration
1) Start the VPN on the server
2) Set up the certificates on the client
3) Network setup

  • 2. Installing ROS on the TK1

opencv4tegra vs opencv4
2.1 HACK 1
2.2 HACK 2
2.3 HACK 3
2.4 HACK 4

  • 3. Deep learning robot

3.1 WiFi
3.2 SD card

1. Installing ROS on ARM

1.1 Set Locale

Boost and some of the ROS tools all require that the system locale be set.
On Linux, locale controls the language environment that programs run in (supported by ANSI C). A locale name has the form <language>_<region>.<charset>,
e.g. zh_CN.UTF-8: zh means Chinese, CN means mainland China, and UTF-8 is the character set.

You can set it with:
$ sudo update-locale LANG=C LANGUAGE=C LC_ALL=C LC_MESSAGES=POSIX
LC_ALL is a master switch: if it is set, its value overrides all of the LC_* settings. Note that LANG is not affected by it.
"C" is the system's default locale, and "POSIX" is an alias for "C", so on a freshly installed system the default locale is C or POSIX.
LC_ALL=C removes all localization so that commands behave predictably.
LC_MESSAGES controls the language of the messages printed.

If there is a problem, then try (other languages could be added):
$ export LANGUAGE=en_US.UTF-8
$ export LANG=en_US.UTF-8
$ export LC_ALL=en_US.UTF-8
$ locale-gen en_US.UTF-8
$ dpkg-reconfigure locales

1.2 Setup sources.list

Set up your computer to accept software from the ARM mirror on packages.ros.org.
Due to limited resources, there are only active builds for Trusty armhf (14.04), since this is the stable, long-term Ubuntu release and is the most-requested distribution in conjunction with ROS Indigo.

$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list'

1.3 Set up keys

$ sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net --recv-key 0xB01FA116
or,
$ sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116

If you get the error
gpg: keyserver timed out (for example because of a firewall)
try the same command with :80 appended to the keyserver:
$ sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116

If you get the error
GPG error: Clearsigned file isn't valid, got 'NODATA' (does the network require authentication?)
and things like apt-get clean followed by apt-get update have no effect, what you need is a proxy (VPN).

1.4 make sure Debian package index is up-to-date:
$ sudo apt-get update

1.5
Desktop install: includes ROS, rqt, rviz, and robot-generic libraries
$ sudo apt-get install ros-indigo-desktop

Not desktop-full, which includes ROS, rqt, rviz, robot-generic libraries, 2D/3D simulators and 2D/3D perception:
$ sudo apt-get install ros-indigo-desktop-full

1.6 Initialize rosdep
Before you can use ROS, you will need to install and initialize rosdep. rosdep enables you to easily install system dependencies for source you want to compile and is required to run some core components in ROS.
$ sudo apt-get install python-rosdep
$ sudo rosdep init
$ rosdep update

1.7 Environment setup
It’s convenient if the ROS environment variables are automatically added to your bash session every time a new shell is launched:
$ echo "" >> ~/.bashrc
$ echo "# Source ROS indigo setup environment:" >> ~/.bashrc
$ echo "source /opt/ros/indigo/setup.bash" >> ~/.bashrc
$ source ~/.bashrc

1.8 Getting rosinstall
rosinstall is a frequently used command-line tool in ROS that is distributed separately. It enables you to easily download many source trees for ROS packages with one command. To install this tool on Ubuntu, run:
$ sudo apt-get install python-rosinstall

1.x Check the environment
Check whether ROS_ROOT and ROS_PACKAGE_PATH are set:
$ printenv | grep ROS
If they are not set, source the environment again:
$ source /opt/ros/indigo/setup.bash

1.9 Verifying OS name

Make sure your OS name defined at /etc/lsb-release is as the following.
Since ros does not recognize Linaro as an OS, this is necessary.
The following is for Ubuntu 14.04, trusty. Modify the release number and name as per your target.
$ vi /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04"

1.10 Custom configuration
1) Start the VPN on the server
Set up the VPN network:
cp client.conf
cp ca.crt, ta.key
cp Client012.crt, Client012.key
chmod 600 ta.key Client012.key

2) Set up the certificates on the client
Import the cert file for Firefox and, especially, for Chrome.
The https certificate requires /etc/hosts to match the domain name in the certificate's "Issued to" field:
$ vi /etc/hosts
10.10.0.1 dehaou14-n501jw
so that the server can be visited by name.

3) Network setup
On tegra-ubuntu, configure /etc/hosts appropriately and run roscore.
On the PC and the server, configure /etc/hosts appropriately and set:
export ROS_MASTER_URI=http://tegra-ubuntu:11311
export ROS_HOSTNAME=dehaou14-n501jw
Afterwards you can test with:
$ rosnode ping /somenode

1.11 Notes on using RVIZ:
It is not recommended to run rviz on most ARM-based CPUs.
They're generally too slow, and the version of OpenGL provided by the software (mesa) libraries is not new enough to start rviz.

IF you have a powerful board with a GPU and vendor-supplied OpenGL libraries, it might be possible to run rviz.
The IFC6410 and the NVIDIA Jetson TK1 are two such boards where rviz will run, although neither is fast enough for graphics-heavy tasks such as displaying pointclouds.

NOTES:
rviz will segfault if you have the GTK_IM_MODULE environment variable set, so it's best to unset it in your ~/.bashrc:
unset GTK_IM_MODULE

REF: http://wiki.ros.org/indigo/Installation/UbuntuARM

2. Installing ROS on the TK1

opencv4tegra vs opencv4

2.1 HACK 1 - cv_bridge

With the latest opencv4tegra 21.2 released by NVIDIA, the compatibility problems with the cv_bridge and image_geometry packages are said to have been solved, so installing the OpenCV ROS packages from the PPA should no longer force opencv4tegra to be uninstalled (verify this on your own system).

A few issues still remain, however: cv_bridge and image_geometry search for OpenCV 2.4.8 in "/usr/lib/arm-linux-gnueabihf", but opencv4tegra is based on OpenCV 2.4.12 and is installed in "/usr/lib/".
These differences prevent external packages based on OpenCV from compiling.

To solve the problem you can follow this guide:
http://myzharbot.robot-home.it/blog/software/ros-nvidia-jetson-tx1-jetson-tk1-opencv-ultimate-guide/
What we must "tell" cv_bridge and image_geometry is not to search for OpenCV in the default ARM path "/usr/lib/arm-linux-gnueabihf" but in "/usr/lib", and that the current version of OpenCV is 2.4.12 rather than 2.4.8. Finally, we must remove the references to the OpenCL module because NVIDIA does not provide it.

1) Files to be modified
/opt/ros//lib/pkgconfig/cv_bridge.pc
/opt/ros//lib/pkgconfig/image_geometry.pc
/opt/ros//share/cv_bridge/cmake/cv_bridgeConfig.cmake
/opt/ros//share/image_geometry/cmake/image_geometryConfig.cmake

2) You can backup and modify each file using the following commands (example for ROS Indigo):
sudo cp /opt/ros/indigo/lib/pkgconfig/cv_bridge.pc /opt/ros/indigo/lib/pkgconfig/cv_bridge.pc-bak
sudo cp /opt/ros/indigo/lib/pkgconfig/image_geometry.pc /opt/ros/indigo/lib/pkgconfig/image_geometry.pc-bak
sudo cp /opt/ros/indigo/share/cv_bridge/cmake/cv_bridgeConfig.cmake /opt/ros/indigo/share/cv_bridge/cmake/cv_bridgeConfig.cmake-bak
sudo cp /opt/ros/indigo/share/image_geometry/cmake/image_geometryConfig.cmake /opt/ros/indigo/share/image_geometry/cmake/image_geometryConfig.cmake-bak

sudo gedit /opt/ros/indigo/lib/pkgconfig/cv_bridge.pc &
sudo gedit /opt/ros/indigo/lib/pkgconfig/image_geometry.pc &
sudo gedit /opt/ros/indigo/share/cv_bridge/cmake/cv_bridgeConfig.cmake &
sudo gedit /opt/ros/indigo/share/image_geometry/cmake/image_geometryConfig.cmake &

3) Modifications for each file
remove each instance of "/usr/lib/arm-linux-gnueabihf/libopencv_ocl.so.2.4.8;"
replace each instance of "/usr/lib/arm-linux-gnueabihf/" with "/usr/lib/"
replace each instance of "2.4.8" with "2.4.12" (or the current version of OpenCV in the opencv4tegra package)

4) After the edits, the files look like this.

REF: http://myzharbot.robot-home.it/blog/software/ros-nvidia-jetson-tx1-jetson-tk1-opencv-ultimate-guide/

2.2 HACK 2 – opencv

Note about SIFT/SURF in the nonfree module: OpenCV4Tegra doesn’t include the opencv_nonfree package (containing SIFT & SURF feature detectors) since those algorithms are patented by other companies and therefore anyone using opencv_nonfree is at risk of liability.

Please note that opencv4tegra does not include “nonfree” module, so if your algorithms use SIFT or SURF and you want full CUDA support, the only solution is to compile OpenCV by yourself following this guide:
http://elinux.org/Jetson/Installing_OpenCV.

Remember that compiling OpenCV by yourself you will lose Nvidia optimizations on the code running on the CPU that give 3-4 FPS more on heavy algorithms not running on CUDA.

If you need something from the nonfree module, you have 2 options:
1) Analyze the public OpenCV source code then copy/paste the parts of the nonfree module that you want (eg: SURF feature detector) from OpenCV into your own project. You will have the CPU optimizations of OpenCV4Tegra for most of your code and will have the GPU module and will have the non-optimized patented code that you need from the nonfree package such as SURF. So this option gives full performance (for everything except the nonfree code) but is tedious.
2) Ignore OpenCV4Tegra, and instead, download & build public OpenCV (by following the instructions below: for natively compiling the OpenCV library from source). You will still have the GPU module but not any CPU optimizations, but you won’t need to spend time ripping out parts of the OpenCV non-free module code. So this option is easiest but produces slower code if you are running most of your code on CPU.

instructions: Natively compiling the OpenCV library from source onboard the device
Note: Compiling OpenCV from source will not give you NVIDIA’s CPU optimizations that are only available in the closed-source prebuilt OpenCV4Tegra packages.

1) If you haven’t added the “universal” repository to Ubuntu, then do it now:
sudo add-apt-repository universe
sudo apt-get update

2) Now you need to install many libraries:
# Some general development libraries
sudo apt-get -y install build-essential make cmake cmake-curses-gui g++
# libav video input/output development libraries
sudo apt-get -y install libavformat-dev libavutil-dev libswscale-dev
# Video4Linux camera development libraries
sudo apt-get -y install libv4l-dev
# Eigen3 math development libraries
sudo apt-get -y install libeigen3-dev
# OpenGL development libraries (to allow creating graphical windows)
sudo apt-get -y install libglew1.6-dev
# GTK development libraries (to allow creating graphical windows)
sudo apt-get -y install libgtk2.0-dev

3) Download the source code of OpenCV for Linux onto the device.
eg: Open a web-browser to “www.opencv.org” & click on “OpenCV for Linux/Mac”, or from the command-line you can run this on the device:
wget http://downloads.sourceforge.net/project/opencvlibrary/opencv-unix/2.4.10/opencv-2.4.10.zip

4) Unzip the OpenCV source code:
cd Downloads
unzip opencv-2.4.10.zip
mv opencv-2.4.10 ~

5) Configure OpenCV using CMake:
cd opencv-2.4.10/
mkdir build
cd build
cmake -DWITH_CUDA=ON -DCUDA_ARCH_BIN="3.2" -DCUDA_ARCH_PTX="" -DBUILD_TESTS=OFF -DBUILD_PERF_TESTS=OFF ..

6) If you want to customize any more of the build settings such as whether to support Firewire cameras or Qt GUI,
it is easiest to use the curses interactive version of CMake from here on:
ccmake ..
(Change any settings you want, then click Configure and Generate).

7) Now you should be ready to build OpenCV and then install it.
Unfortunately, OpenCV is currently experiencing a problem with CMake where installing the built libraries (that normally takes a few seconds) re-compiles the whole OpenCV (that normally takes close to an hour).
So to save time, instead of running “make -j4 ; make install”, we will build & install OpenCV using a single command.

8) To build & install the OpenCV library using all 4 Tegra CPU cores (takes around 40 minutes), including copying the OpenCV library to “/usr/local/include” and “/usr/local/lib”:
sudo make -j4 install

9)Finally, make sure your system searches the “/usr/local/lib” folder for libraries:
echo "# Use OpenCV and other custom-built libraries." >> ~/.bashrc
echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/" >> ~/.bashrc
source ~/.bashrc

REF: http://elinux.org/Jetson/Installing_OpenCV
REF: http://www.jetsonhacks.com/2015/06/14/ros-opencv-and-opencv4tegra-on-the-nvidia-jetson-tk1/

2.3 HACK3

background advantages that OpenCV4Tegra has versus regular OpenCV.

OpenCV4Tegra is a CPU and GPU accelerated version of the standard OpenCV library. OpenCV stands for “Open Computer Vision”, the de-facto standard Computer Vision library containing more than 2500 computer vision & image processing & machine learning algorithms.
this is for the current 2.4.10 release of OpenCV vs. 2.4.10 release of OpenCV4Tegra.

There are three versions of OpenCV that you can run on the Jetson:
“Regular” OpenCV
OpenCV with GPU enhancements
OpenCV4Tegra with both CPU and GPU enhancements

“Regular” OpenCV is OpenCV that is compiled from the OpenCV repository with no hardware acceleration. This is typically not used on the Jetson, as GPU enhancements are available for OpenCV.
OpenCV with GPU enhancements is designed for CUDA GPGPU acceleration. This is part of the standard OpenCV package.
OpenCV4Tegra is a free, closed source library available from NVIDIA which includes ARM NEON SIMD optimizations, multi-core CPU optimizations and some GLSL GPU optimizations.
So why wouldn’t you always use OpenCV4Tegra? The answer lies in the actual OpenCV library itself; there are two proprietary patented algorithms, SIFT and SURF, which exist in opencv-nonfree. Because these are patented, NVIDIA does not include them in their distribution of OpenCV4Tegra. Therefore if your code does not use SIFT or SURF, then you can use OpenCV4Tegra and get the best performance.
Why use SIFT and/or SURF? The quick answer is that when people are doing feature detection, SIFT/SURF are two of the most popular algorithms in use today. One application is simultaneous Localization And Mapping (SLAM) used mostly in the robotics/drone world. One of the most popular packages of which is Semi-Direct Monocular Visual Odometry (SVO)
Another application which uses SIFT/SURF is deep learning, such as the package Caffe Deep Learning Framework.

Alternatives?
The first alternative is that if you do not have any need of SIFT/SURF in your application, you can use OpenCV4Tegra and enjoy the best performance. There is a rub, but also a possible workaround.
If you need SIFT/SURF you can:
Use OpenCV4Tegra, analyze the public OpenCV source code, and then copy/paste the parts of the nonfree module that you want (eg: SURF feature detector) from OpenCV into your own project. You will have the CPU optimizations of OpenCV4Tegra for most of your code and will have the GPU module and will have the non-optimized patented code that you need from the nonfree package such as SURF. So this option gives full performance (for everything except the nonfree code) but is tedious (and difficult to maintain).
Ignore OpenCV4Tegra, and instead, download & build public OpenCV. You will still have the GPU module but not any CPU optimizations, but you won’t need to spend time ripping out parts of the OpenCV non-free module code. So this option is easiest but produces slower code if you are running most of your code on CPU.

Opinion
If you need SIFT/SURF, then you should just build OpenCV from source, otherwise use OpenCV4Tegra.

Note : OpenCV 3.0 handles SIFT/SURF in a separate repository, opencv_contrib repo.
This may make it easier in the future to combine OpenCV4Tegra with SIFT/SURF, but because OpenCV4Tegra is still at release 2.4.10, this remains to be seen.

2.4 HACK4

The cv_bridge bug in ROS's OpenCV integration

Installed the grinch kernel + Cuda4Tegra + OpenCV4Tegra + OpenCV4Tegra-dev.
Everything went smoothly until I installed a ros package called “ros-indigo-cv-bridge”, which is useful to translate ROS’ image messages to OpenCV’s matrix format. I broke my package system when trying to install it!

HACK1:
Ros-indigo-cv-bridge depends heavily on libopencv-dev and it seems that OpenCV4Tegra-dev is of no use when apt-get tries to install all dependencies.
I get the following error from apt-get, for every component included in libopencv-dev:
dpkg: error processing archive /var/cache/apt/archives/libopencv-core-dev_2.4.8+dfsg1-2ubuntu1_armhf.deb (--unpack):
trying to overwrite '/usr/include/opencv2/core/wimage.hpp', which is also in package libopencv4tegra-dev 2.4.8.2
So,
my guess is, there must be a way to make apt-get to look into Opencv4Tegra to solve all dependencies when trying to install Ros-Indigo-CV-Bridge, but I don’t know how to do it.
Or,
the apt-get result is completely misleading.
->Don’t know if you solved this…but I ran in to the same trouble here when trying ROS/CV with the Tegra version of OpenCV. I ended up creating a “fake” package (using equivs) that tells apt that libopencv + libopencv-dev is already installed. This worked nicely for me and now I am running the tegra-version of opencv under ROS. Very nice…but a little hackish solution to the problem!
Anyhow, this was the contents of the input file for “equivs-build”:
Section: misc
Priority: optional
Standards-Version: 3.9.2

Package: libopencv-dev-dummy
Version: 2.4.8
Maintainer: yourname <yourname@somemail>
Provides: libopencv-calib3d-dev, libopencv-calib3d2.4,
libopencv-contrib-dev, libopencv-contrib2.4,
libopencv-core-dev, libopencv-core2.4,
libopencv-dev,
libopencv-facedetect-dev, libopencv-facedetect2.4,
libopencv-features2d-dev, libopencv-features2d2.4,
libopencv-flann-dev, libopencv-flann2.4,
libopencv-gpu-dev, libopencv-gpu2.4,
libopencv-highgui-dev, libopencv-highgui2.4,
libopencv-imgproc-dev, libopencv-imgproc2.4,
libopencv-imuvstab-dev, libopencv-imuvstab2.4,
libopencv-legacy-dev, libopencv-legacy2.4,
libopencv-ml-dev, libopencv-ml2.4,
libopencv-objdetect-dev, libopencv-objdetect2.4,
libopencv-ocl-dev, libopencv-ocl2.4,
libopencv-photo-dev, libopencv-photo2.4,
libopencv-softcascade-dev, libopencv-softcascade2.4,
libopencv-stitching-dev, libopencv-stitching2.4,
libopencv-superres-dev, libopencv-superres2.4,
libopencv-video-dev, libopencv-video2.4,
libopencv-videostab-dev, libopencv-videostab2.4,
libopencv-vstab, libopencv-vstab2.4

Description: empty dummy package
no description

This will get you a “dummy-package” that you simply install using “sudo dpkg -i libopencv-dev-dummy_2.4.8_all.deb”. After this, all other packages that depend on opencv will install without trying to install the SW-version of opencv. Make sure you have installed the CUDA version before running this…

Note that the CUDA-version of OpenCV does not contain the nonfree package, i.e. SURF etc. Have not tried to solve that yet…
–> It solved the issue with cv-bridge, but not with each other package that relies on OpenCV.
I tried to install OpenNI2-camera and I went to the starting point. Each deb that has dependency on OpenCV must be modified using this method.

Try for example
sudo apt-get install ros-indigo-rgbd-launch ros-indigo-openni2-camera ros-indigo-openni2-launch

Actually the list of packages to be hacked is the following:
ros-indigo-cv-bridge
ros-indigo-depth-image-proc
ros-indigo-image-geometry
ros-indigo-image-proc
ros-indigo-rqt-image-view

—> Regarding my last comment, ros still thinks to look in usr/lib for the opencv libraries, but they aren’t there. Instead they are in /usr/lib/arm-linux-gnueabihf. I installed _L0g1x_’s fix, but the packages we are using are looking for the opencv libraries in usr/lib, thus giving me an error that they can’t find them. Not sure how to fix this. I thought that the opencv4tegra installed them in usr/lib?
aha, im not sure what packages you are using (please mention) and why it cant find the opencv libraries in /usr/lib since opencv4tegra actually does install into /usr/lib (which it shouldnt, it should install into /usr/lib/arm-linux-gnueabihf , just like the native ros arm opencv install does). My fix accounted for the incorrect path set by the opencv4tegra library, when it should actually be the other way around: The opencv4tegra deb should be modified to be install all the opencv4tegra libraries into /usr/lib/arm-linux-gnueabihf instead of /usr/lib. The issue of an update overwriting the tegra opencv libs will still then exists if you update opencv through a ros update.

—->
The problem currently is that opencv4tegra contain’s packages that install the modified nvidia opencv libs and names them differently then the native opencv libs.
For example:
Nvidia – libopencv4tegra, libopencv4tegra-dev
Native – libopencv, libopencv-dev
This causes a issue for users who use packages that specify in their DEBIAN/control that they depend on libopencv (one package example would be all the ROS computer vision related packages, well at least the ones that use OpenCV).

Inevitably, i know its difficult to not name the modified opencv4tegra libraries different from the native opencv libs to prevent from an upstream opencv update overwriting the opencv4tegra lib modifications. I have a few ideas about a way to possibly fix this that im currently trying out at this moment, but I would also like to hear the input of the opencv4tegra package maintainer on what his thoughts are on dealing with this issue.

!!! I spoke with the OpenCV4Tegra maintainer, and he said there is a temporary work-around you can use for now: !!!
Install ROS
Remove public OpenCV packages within ROS
Install OpenCV4Tegra
Install ROS again

ROS will then work and use the OpenCV4Tegra optimizations. We will try to fix the problem in the next release to enable smooth package replacement without ROS removal.
OpenCV4Tegra is an NVIDIA-only library and so it only gets updated with L4T releases or if you update it manually (downloading OpenCV4Tegra packages from the NVIDIA website). I haven’t tried the suggested ROS work-around above, but if you try it and it fails then let me know and I’ll get more details from them.

!!! My OS verion is Linux tegra-ubuntu 3.10.40 and ROS version is indigo. !!!
I have been using opencv without any issues, specially with CAFFE framework.
I have also followed some tutorials in the jetson (for sanity check) and everything is working allright
Right now I have tried to follow the image transports tutorials (http://wiki.ros.org/image_transport/Tutorials/PublishingImages) but when I run catkin_make I am obtaining this error:
” No rule to make target `/usr/lib/arm-linux-gnueabihf/libopencv_videostab.so.2.4.8′, needed by `/home/ubuntu/image_transport_ws/devel/lib/image_transport_tutorial/my_publisher’. Stop.”

***********************************
OpenCV4Tegra has GPU support and it is optimized for Tegra TK1 SoC, so I want to use its power for my algorithms 😉
— I forgot where I read this, but that dummy package works as is for 19.3, and that you have to slightly modify the dummy package for it to work with 21.2 opencv4tegra.
— Okay so i successfully installed cv-bridge, and can compile some sample code from the cv bridge tutorials, and run the node no problem. There still need to be some modifications, but i made a modified .deb for ros-indigo-cv-bridge and changed around a few things:
–First off, the default armhf deb for ros-indigo-cv-bridge sets different lib paths for setting the libraries in the cmake. For example, in the cv_bridgeConfig.cmake inside the .deb:
set(libraries "cv_bridge;/usr/lib/arm-linux-gnueabihf/libopencv_videostab.so;/usr/lib/arm-linux-gnueabihf/libopencv_video.so;…..
needs to instead be
set(libraries "cv_bridge;/usr/lib/libopencv_videostab.so;/usr/lib/libopencv_video.so;….
I took out the /arm-linux-gnueabihf/ out of the path because opencv4tegra libraries are installed in /usr/lib.
To get the cv-bridge debian so i could edit it, i did the following commands:
sudo apt-get install -d ros-indigo-cv-bridge ## -d just downloads, not install
cd /var/cache/apt/archives
sudo cp ros-indigo-cv-bridge_1.11.6-0trusty-20141201-2058-+0000_armhf.deb ~/Downloads
cd ~/Downloads
mkdir ros-indigo-cv-bridge-extracted
sudo dpkg-deb -R ros-indigo-cv-bridge_1.11.6-0trusty-20141201-2058-+0000_armhf.deb ros-indigo-cv-bridge-extracted
All dpkg-deb -Rdoes is extracts the .deb WITH the DEBIAN folder so that you can edit the DEBIAN/control file. In the control file, i deleted a few things: libopencv-dev, libopencv-python, libopencv-core2.4, libopencv-imgproc2.4, since these were all already installed by opencv4tegra debian.
Once i edited all those things, i think built the package like so:
sudo dpkg-deb -b ros-indigo-cv-bridge-extracted ros-indigo-cv-bridge-tegra_1.11.6-l0g1x-2.deb
and then finally just use sudo dpkg -i ros-indigo-cv-bridge-tegra_1.11.6-l0g1x-2.deb to install it.
I dont think i missed any steps, but attached is where the .deb file i made is. Just use dpkg -i to install it (after CUDA and opencv4tegra have been installed)
It would be nice if the ros arm buildfarm actually had a cv-bridge debian for the jetson.. maybe?
EDIT1: I wanted to clarify when i edit the DEBIAN/control file; when i say i “removed” libopencv-dev, libopencv-python, libopencv-core2.4, libopencv-imgproc2.4, all the re-movement does is remove what dependencies the package installer should check for;
EX) if there is a dependency listed in the control file, and the package manager sees that it is not installed on the system it will try to install that dependency (separately, like as its own .deb file). So since we know that libopencv-dev, libopencv-python, libopencv-core2.4, libopencv-imgproc2.4 is already installed from the opencv4tegra .deb , we can remove them from the ‘Depends:’ line in DEBIAN/control
EDIT 2: Steps to take if you just download the .deb i made:
sudo dpkg -i .deb
sudo apt-get update
sudo apt-get install libopencv4tegra libopencv4tegra-dev
EDIT 3: The modified cv-bridge.deb i made is only a quick fix/hack. I am currently working on making a permanent fix by modifying just the oepncv4tegra.deb, so that you wont have to use the quick fix cv-bridge hack and can update cv-bridge whenever with apt-get upgrade. Should have this done within the next day or two. For now i have rearranged things around so opencv4tegra libs actually in fact go in the /usr/lib/arm-linux-gnueabihf/ folder where they should be. Im trying to see if i can get this to work without a dummy package, but if there isnt another way, the dummy package will be included inside the opencv4tegra-modified.deb so that it will automatically install with everything.
REF: http://answers.ros.org/question/202643/opencv4tegra-and-ros-howto-jetson-tk1/

3. Deep learning robot

The deep learning robot is a deep-learning Jetson TK1 plus a Turtlebot mobile robot:

artificial robot

The Kobuki mobile base is by the Korean firm Yujin Robot. The mobile base has two wheels, IR range and cliff sensors, a factory-calibrated gyroscope, a built-in rechargeable battery and various ports for powering the rest of the robot and for communications.
The nVidia Jetson TK1 is a small embedded PC, rather like a souped-up Raspberry Pi.
The Kinect is a popular peripheral for those frittering away their time with the xBox.

blocks

3.1 WiFi
Check which interface provides the wireless connection:
$ iwconfig

First, check the WiFi is working with the Network Manager by typing:
$ nmcli dev
DEVICE TYPE STATE
eth2 802-3-ethernet connected
wlan2 802-11-wireless disconnected

Now connect to a local 2.4 GHz WiFi network (I couldn't get it to work with a 5 GHz one) by typing:
$ sudo nmcli dev wifi connect <SSID> password <password>

If your SSID has spaces in it, then enclose it in quotes, e.g. 'My network'. As usual with sudo commands you'll be asked to authenticate with the 'ubuntu' password.

Assuming you have no error message, all you need is the IP address of the WiFi interface:
$ ifconfig

Try logging in via the WiFi interface with the IP from the last step:
$ ssh ubuntu@10.0.1.2

Test the robot:
$ roslaunch turtlebot_bringup minimal.launch

$ roslaunch turtlebot_teleop keyboard_teleop.launch

Testing Caffe
Caffe is a tool for creating and running CUDA-accelerated neural networks. The most important thing it can potentially do for your robot is allow it to recognise objects in photographs or video streams.

3.2 SD card
Adding an SD card

The Deep Learning TK1 comes with 16 GB of built-in flash on the Jetson TK1 board. That's fine to begin with, but after downloading a few Caffe models, you'll be out of space.
Fortunately, the TK1 comes with an SD card slot for adding extra storage. This section describes how to add and configure an SD card to give yourself more room.

Choose the fastest, biggest SD card you can, e.g. a SanDisk 64 GB SD card for around $60. The card is class 10 / U3, designed for 4K video files, and claims a read/write speed of 90 MB/s.

Partition the block device and add a primary partition (interactive fdisk commands):
$ sudo fdisk /dev/mmcblk0
-a
-p
-1
-w
$ lsblk
— mmcblk1 179:32 0 29.7G 0 disk

Create the filesystem:
sudo mkfs.ext4 /dev/mmcblk1p1

Done

$ sudo blkid
/dev/mmcblk1: UUID="d417ef49-09d9-4fd2-9351-e0e1413a2f8f" TYPE="ext4"
/dev/mmcblk1p1: UUID="fd7a0700-efaf-47a5-a118-9202607b46e8" TYPE="ext4"

Create a mount point:
sudo mkdir /media/sdmount

Edit fstab:
The /etc/fstab file contains a list of devices that need mounting on each boot. We’re going to add the mount point for the card to this file, so it gets automatically mounted each time the robot is switched on.
sudo cp /etc/fstab /etc/fstab.orig
sudo vim /etc/fstab
showing only:
# UNCONFIGURED FSTAB FOR BASE SYSTEM
At the end of the file add a line with this format:
UUID=<uuid-of-the-partition> /media/sdmount ext4 defaults,users 0 0
Let’s unpack this. We’re telling the system that we want to mount a partition with the specified UUID at the mount point /media/sdmount (which we just created). ‘ext4’ specifies the filesystem type, which we formatted earlier. The options defaults,users sets the partition with read-write permissions for all users (see more options here under “Filesystem Independent Mount Options”). The final two parameters which are both zero specify whether we want to dump or auto check the filesystem (more details under “Editing Ubuntu’s filesystem table”).

Mount:
mount all devices specified in /etc/fstab:
sudo mount -a

Now you should be able to access the card at the mount point. Type:
ls /media/sdmount

Create a link:
Create a symbolic link to your user directory
cd ~
sudo mkdir /media/sdmount/sdcard

Then change the ownership of the target directory so the ubuntu user can read and write to it:
sudo chown ubuntu /media/sdmount/sdcard

sudo chown ubuntu ~/sdcard

Finally, link it:
ln -s /media/sdmount/sdcard sdcard

Reboot.
Finally, reboot and make sure the subdirectory is present and working.

In my own setup the steps were:
mkdir ~/sdcard
sudo mount /dev/mmcblk1p1 ~/sdcard
if needed:
sudo chown ubuntu:ubuntu ~/sdcard
and when no longer used:
sudo umount /dev/mmcblk1p1

REF: http://www.artificialhumancompanions.com/adding-sd-card-deep-learning-robot/

3.3 Kobuki
$ sudo apt-get install ros-indigo-turtlebot
//ros-indigo-turtlebot-apps ros-indigo-turtlebot-interactions ros-indigo-turtlebot-simulator ros-indigo-kobuki-ftdi ros-indigo-rocon-remocon ros-indigo-rocon-qt-library ros-indigo-ar-track-alvar-msgs

This uses 100M / 400M of space. YES

While setting up ros-indigo-librealsense it fails with: ERROR: Module uvcvideo not found.
So the setup errors out.
The dependency chain is: ros-indigo-turtlebot -> ros-indigo-librealsense -> ros-indigo-librealsense-camera -> uvcvideo

RealSense 3D is a perceptual computing solution that includes the world's smallest 3D camera. RealSense camera support under ROS is still relatively new;
you may need to recompile the Grinch kernel with the necessary UVCVideo patches.

A) First, librealsense needs to be installed on the Jetson TK1. This involves building a new kernel to support the video modes of the RealSense camera in the UVC module, and building the librealsense library.
1.1 First, operating system level files must be modified to recognize the camera video formats. When doing development on Linux based machines you will frequently hear the terms “kernel” and “modules”. The kernel is the code that is the base of the operating system, the interface between hardware and the application code.
A kernel module is code that can be loaded into the kernel image at will, without having to modify the kernel. These modules provide ancillary support for different types of devices and subsystems. The code for these modules is either in the kernel itself, in which case it is called ‘built-in’, or designated to built as a module. When built as a module the compiled code is stored separately from the kernel, typically with a .ko extension. The advantage of having a module is that it can be easily changed without having to rebuild the entire kernel.
We will be building a module called uvcvideo to help interface with the RealSense camera.
A convenience script has been created to help with this task in the installLibrealsense repository on the JetsonHacks Github account.
$ git clone https://github.com/jetsonhacks/installLibrealsense.git
$ cd installLibrealsense/UVCKernelPatches
$ ./getKernelSources.sh
Once the kernel sources have been downloaded and decompressed into the /usr/src directory, a configuration editor opens. In the configuration editor, set the local version of the kernel to that of the current configuration. The current local version number is available through the command:
$ uname -r
which displays:
3.10.40-gdacac96
The local version number consists of the digits following the 40 in this case, i.e. -gdacac96.
Remember the – sign, it is important! This identifier is used to ensure that the module matches the build of the kernel and should match exactly. Place the local version number in the field: General Setup -> Local version – append to kernel release:

Next, we will modify the USB Video Class (UVC) to understand RealSense video formats. The option to compile UVC as a module is located in:
Device Drivers -> Multimedia Support -> Media USB Adapters -> USB Video Class (UVC)

Once you find the entry, right-click on the entry until you see a small circle. The circle indicates that the option will be compiled as a module. Save the configuration file.

A patch file is provided to apply on the module source and a shell script is provided to apply the patch. Again, these are convenience files, you may have to modify them for your particular situation.
$ ./applyUVCPatch.sh

Next, compile the kernel and module files:
$ ./buildKernel.sh

This takes several minutes as the kernel and modules are built and the modules installed. Once the build is complete, you have a couple of options. The first option is to make a backup of the new kernel and modules to place them on a host system to flash a Jetson system with the new kernel. We will not be covering that here, but for reference:
The second option is to copy the kernel over to the boot directory. A convenience script is provided:
$ ./copyzImages.sh

In addition to copying the new kernel into the boot directory, the newly built module, uvcvideo, is added to the file /etc/modules to indicate that the module should be loaded at boot time.

The RealSense cameras require USB 3.0. The USB port is set for USB 2.0 from the factory. Also, the stock kernel uses what is called ‘autosuspend’ to minimize power usage for USB peripherals. This is incompatible with most USB video devices. If you have not changed these settings on your TK1, a convenience script has been provided:
$ ./setupTK1.sh

Now reboot the system.

Once the machine has finished rebooting, open a Terminal:
$ cd installLibrealsense
$ ./installLibrealsense.sh

This will build librealsense and install it on the system. This will also setup udev rules for the RealSense device so that the permissions will be set correctly and can be accessed from user space. Once installation is complete, you will be able to play with the examples. For example:
$ cd ~/librealsense/bin
$ ./cpp-config-ui

The example allows you to set the camera parameters. Hit the ‘Start Capture’ button to start the camera.
At the end of the RealSense camera installation, you should run some of the examples provided to make sure that the camera is installed and works properly.

Qt Creator
There are Qt Creator files in librealsense which may be used to build the examples and librealsense itself. A convenience script, ./installQtCreator.sh , is provided in the installLibrealsense directory to install Qt Creator.

Note: In an earlier article on installing ROS on the Jetson TK1, we used the Grinch kernel. The Grinch kernel provides access to a large number of peripherals on the TK1. Because we modified the stock kernel for the RealSense camera in our first step, the Grinch kernel is not used here. If you need the Grinch kernel, you will have to recompile the Grinch kernel with the RealSense camera changes. That is an exercise left to the reader.

2.2 the second part of getting the R200 to work is to build and install librealsense.

B) Second, we need to have ROS installed on the Jetson. Once the RealSense camera is working, from a new Terminal install ROS:

REF: http://www.jetsonhacks.com/2016/06/20/intel-realsense-camera-installation-nvidia-jetson-tk1/

C) Third, we download the realsense_camera package installer:
Open a new Terminal, which will source the new environment setup by ROS, and:
$ git clone https://github.com/jetsonhacks/installRealSenseCameraROS.git
$ cd installRealSenseCameraROS

We need is a Catkin Workspace for our base of operations. There is a convenience script to create a new Catkin Workspace.
$ ./setupCatkinWorkspace [workspace name]

With the prerequisites installed, we’re ready to install the realsense_camera package:
$ ./installRealSense.sh [workspace name]

If you do not have a swap file enabled on your Jetson, there may be issues compiling the package because the TK1 does not have enough memory to compile this in one pass. The installation script has been changed since the video was filmed to compile using only one core to relieve memory pressure, i.e.
$ catkin_make -j1

Intelligent Robots (57): Fusion Debugging

Configuration parameters of robot-pose-ekf

freq:
The output frequency of the filter. Note that a higher frequency only makes the combined odom output more often; it does not increase the accuracy of the pose estimate.

sensor_timeout:
The maximum time to wait for a message from a given sensor. If no new vo or imu message has arrived within this time, the filter stops waiting for it.

The ekf filter does not require all sensors to be online and synchronized all the time; it is fine for some to be missing. For example, if the filter produced an output at time t0, an odom message arrives at t1 and an imu message at t2; the filter then interpolates the imu data over t0~t1 and outputs the pose estimate.
( https://chidambaramsethu.wordpress.com/2013/07/15/a-beginners-guide-to-the-the-ros-robot_pose_ekf-package/)

output_frame:
The example on the wiki is problematic, or at least unclear:
by default it sets "output_frame" to "odom", which is confusing; this frame is better named "odom_combined", and that does not clash with the output topic "odom_combined".

The source code is clearer:
(http://docs.ros.org/kinetic/api/robot_pose_ekf/html/odom__estimation__node_8cpp_source.html)
// parameters
nh_private.param("output_frame", output_frame_, std::string("odom_combined"));
nh_private.param("base_footprint_frame", base_footprint_frame_, std::string("base_footprint"));
nh_private.param("sensor_timeout", timeout_, 1.0);
nh_private.param("odom_used", odom_used_, true);
nh_private.param("imu_used", imu_used_, true);
nh_private.param("vo_used", vo_used_, true);
nh_private.param("gps_used", gps_used_, false);
nh_private.param("debug", debug_, false);
nh_private.param("self_diagnose", self_diagnose_, false);
double freq;
nh_private.param("freq", freq, 30.0);

pose_pub_ = nh_private.advertise("odom_combined", 10);

odom_broadcaster_.sendTransform(StampedTransform(tmp, tmp.stamp_, output_frame_, base_footprint_frame_));

Note: robot_pose_ekf's "output_frame" corresponds to amcl's "odom_frame_id".

Intelligent Robots (55): Data Fusion

6. Data fusion - EKF - robot_pose_ekf

This package fuses odom/imu/vo data to obtain a more accurate pose estimate.
It does not require every sensor stream to be provided continuously; one stream, e.g. the imu data, may drop out without stopping the algorithm.
REF:
https://answers.ros.org/question/235228/how-is-the-orientation-of-frame-odom-initialized/

Subs:
* odom (nav_msgs/Odometry)
position and orientation of the robot in the ground plane
* imu_data (sensor_msgs/Imu)
RPY angles of the robot base frame relative to a world reference frame.
R and P are absolute angles (thanks to gravity), while Y is a relative angle.
* vo (nav_msgs/Odometry)
the full position and orientation of the robot
If a sensor provides 3D data but only the 2D part is used, simply assign a large covariance to the unused components.

Pubs:
* robot_pose_ekf/odom_combined (geometry_msgs/PoseWithCovarianceStamped)
The output of the filter (the estimated 3D robot pose).

TFs:
odom_combined -> base_footprint
A new frame named odom_combined is published,
together with the TF transform odom_combined -> base_footprint.

Launch files:

Several practical issues concerning uncertainty

6.1 Where the covariance parameters come from

ERROR: Covariance specified for measurement on topic xxx is zero
A:
Each measurement that is processed by the robot pose ekf needs to have a covariance associated with it.
The diagonal elements of the covariance matrix cannot be zero.
When one of the diagonal elements is zero, this error is shown.
Messages with an invalid covariance will not be used to update the filter.
So you should create the covariance matrices for the individual sensors:
*1. For IMU and odometry, the covariance matrix can be formed from the datasheet.
*2. For visual odometry, the covariance matrix may be obtained from the measurement equation that relates the measured variables to the pose coordinates.

6.2 IMU covariance

if (imu_covariance_(1,1) == 0.0){
  SymmetricMatrix measNoiseImu_Cov(3); measNoiseImu_Cov = 0;
  measNoiseImu_Cov(1,1) = pow(0.00017,2); // = 0.01 degrees / sec
  measNoiseImu_Cov(2,2) = pow(0.00017,2); // = 0.01 degrees / sec
  measNoiseImu_Cov(3,3) = pow(0.00017,2); // = 0.01 degrees / sec
  imu_covariance_ = measNoiseImu_Cov;
}

For example, 0.00017 means 0.01 deg/sec, which is a very good imu; increase these values if you get poor odometry.
For optimal values, perform an odometry calibration algorithm such as UMBMark.

If the IMU does not provide an orientation estimate:
self.imu_msg.orientation_covariance = [-1, 0, 0,
                                        0, -1, 0,
                                        0, 0, -1]  # sensor doesn't have orientation

The source code checks this as follows:
$ vi robot_pose_ekf/src/odom_estimation.cpp
void OdomEstimation::addMeasurement(const StampedTransform& meas, const MatrixWrapper::SymmetricMatrix& covar)
{
  // check covariance
  for (unsigned int i=0; i<covar.rows(); i++){
    if (covar(i+1,i+1) == 0){
      ROS_ERROR("Covariance specified for measurement on topic %s is zero", meas.child_frame_id_.c_str());
      return;
    }
  }
  // add measurements
  addMeasurement(meas);
  if (meas.child_frame_id_ == "wheelodom") odom_covariance_ = covar;
  else if (meas.child_frame_id_ == "imu")  imu_covariance_  = covar;
  else if (meas.child_frame_id_ == "vo")   vo_covariance_   = covar;
  else ROS_ERROR("Adding a measurement for an unknown sensor %s", meas.child_frame_id_.c_str());
};

6.3 Odometry covariance

The odometry will be updated, but its covariance is always published as 0 when the controller does not include covariance information. If you need values in the covariance for some reason, you can create your own (for example based on rosaria) and fill them in.
The odom covariance is a 6x6 matrix, because there are 6 DOF: position (x, y, z) and orientation; it is stored as a float[36]. The diagonal terms express how much you trust each DOF; estimate them from the sensor accuracy or from experiments. If you see that the data is good to 1 cm in translation and 0.1 radian in rotation, use a diagonal matrix with 0.01 for the translation terms and 0.1 for the rotation terms. If there is no information about one DOF, put a huge value there. For example segway_rmp:
this->odom_msg.pose.covariance[0] = 0.00001;
this->odom_msg.pose.covariance[7] = 0.00001;
this->odom_msg.pose.covariance[14] = 1000000000000.0;
this->odom_msg.pose.covariance[21] = 1000000000000.0;
this->odom_msg.pose.covariance[28] = 1000000000000.0;
this->odom_msg.pose.covariance[35] = 0.001;
The above means: the z position is not trusted, and the orientation about x and y is not trusted.

Setting the covariance:
nav_msgs::Odometry odom;
odom.header.stamp = current_time;
odom.header.frame_id = "odom";
//set the position
odom.pose.pose.position.x = od->position.positionx;
odom.pose.pose.position.y = od->position.positiony;
odom.pose.pose.position.z = 0.0;
odom.pose.pose.orientation = odom_quat;
//set the velocity
odom.child_frame_id = "base_link";
odom.twist.twist.linear.x = od->speed.speedx;
odom.twist.twist.linear.y = od->speed.speedy;
odom.twist.twist.angular.z = od->speed.speedr;

// set stddev
odom.pose.covariance[0] = pos_x_stddev;
odom.pose.covariance[7] = pos_y_stddev;
odom.pose.covariance[14] = pos_z_stddev;
odom.pose.covariance[21] = rot_x_stddev;
odom.pose.covariance[28] = rot_y_stddev;
odom.pose.covariance[35] = rot_z_stddev;

double pos_x_stddev;
double pos_y_stddev;
double pos_z_stddev;
double rot_x_stddev;
double rot_y_stddev;
double rot_z_stddev;

ros::NodeHandle private_node_handle("~");
private_node_handle.param("pos_x_stddev", pos_x_stddev, 0.11);
private_node_handle.param("pos_y_stddev", pos_y_stddev, 0.12);
private_node_handle.param("pos_z_stddev", pos_z_stddev, 1000000000000.0);
private_node_handle.param("rot_x_stddev", rot_x_stddev, 1000000000000.0);
private_node_handle.param("rot_y_stddev", rot_y_stddev, 1000000000000.0);
private_node_handle.param("rot_z_stddev", rot_z_stddev, 0.15);

REF:
Covariance matrices with a practical example