ROS on TK1 - 31

  • 1. Installing ROS on ARM

1.10 Custom configuration
1) Start the VPN on the server
2) Set up certificates on the client
3) Network settings

  • 2. Installing ROS on the TK1

opencv4tegra vs. stock OpenCV
2.1 HACK 1
2.2 HACK 2
2.3 HACK 3
2.4 HACK 4

  • 3. The deep learning robot

3.1 wifi
3.2 SD card
3.3 Kobuki

1. Installing ROS on ARM

1.1 Set Locale (set up your environment)

Boost and some of the ROS tools require that the system locale be set.
On Linux, locale (supported by ANSI C) configures the language environment a program runs in. Locale names follow the pattern <language>_<territory>.<charset>,
e.g. zh_CN.UTF-8: zh is Chinese, CN is mainland China, and UTF-8 is the character encoding.

You can set it with:
$ sudo update-locale LANG=C LANGUAGE=C LC_ALL=C LC_MESSAGES=POSIX
LC_ALL is a macro: if it is set, its value overrides all of the other LC_* settings. Note that LANG is not affected by it.
"C" is the system default locale, and "POSIX" is an alias for "C", so on a freshly installed system the default locale is C or POSIX.
LC_ALL=C strips all localization so that commands run predictably.
LC_MESSAGES controls the language of diagnostic messages.
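You can inspect the current settings at any point (a quick check, not required by the guide):
$ locale
$ locale -a   # list all locales available on the system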

If there is a problem, try the following (other languages can be added analogously):
$ export LANGUAGE=en_US.UTF-8
$ export LANG=en_US.UTF-8
$ export LC_ALL=en_US.UTF-8
$ sudo locale-gen en_US.UTF-8
$ sudo dpkg-reconfigure locales

1.2 Setup sources.list

Set up your computer to accept software from the ARM mirror on packages.ros.org.
Due to limited resources, there are only active builds for Trusty armhf (14.04), since this is the stable, long-term Ubuntu release and is the most-requested distribution in conjunction with ROS Indigo.

$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list'

1.3 Set up keys

$ sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net --recv-key 0xB01FA116
or,
$ sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116

If you get a "gpg: keyserver timed out" error (often caused by a firewall), try the same command with :80 appended to the keyserver address:
$ sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116

If you get the error
GPG error: Clearsigned file isn't valid, got 'NODATA' (does the network require authentication?)
and neither apt-get clean nor apt-get update helps, what you need is a proxy/VPN (a "ladder") to get around the firewall.

1.4 Make sure your Debian package index is up to date:
$ sudo apt-get update

1.5 Install ROS
Desktop install: includes ROS, rqt, rviz, and robot-generic libraries.
$ sudo apt-get install ros-indigo-desktop

NOT desktop-full, which additionally includes 2D/3D simulators and 2D/3D perception:
$ sudo apt-get install ros-indigo-desktop-full

1.6 Initialize rosdep
Before you can use ROS, you will need to install and initialize rosdep. rosdep enables you to easily install system dependencies for source you want to compile and is required to run some core components in ROS.
$ sudo apt-get install python-rosdep
$ sudo rosdep init
$ rosdep update

1.7 Environment setup
It's convenient if the ROS environment variables are automatically added to your bash session every time a new shell is launched:
$ echo "" >> ~/.bashrc
$ echo "# Source ROS indigo setup environment:" >> ~/.bashrc
$ echo "source /opt/ros/indigo/setup.bash" >> ~/.bashrc
$ source ~/.bashrc

1.8 Getting rosinstall
rosinstall is a frequently used command-line tool in ROS that is distributed separately. It enables you to easily download many source trees for ROS packages with one command. To install this tool on Ubuntu, run:
$ sudo apt-get install python-rosinstall

1.x Checking the environment
Check whether ROS_ROOT and ROS_PACKAGE_PATH are set:
$ printenv | grep ROS
If they are not set, source the environment again:
$ source /opt/ros/indigo/setup.bash

1.9 Verifying OS name

Make sure the OS name defined in /etc/lsb-release reads as follows.
This is necessary because ROS does not recognize Linaro as an OS.
The following is for Ubuntu 14.04 (trusty); modify the release number and name for your target.
$ vi /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04"

1.10 Custom configuration
1) Start the VPN on the server
Set up the VPN network:
cp client.conf
cp ca.crt, ta.key
cp Client012.crt, Client012.key
chmod 600 ta.key Client012.key

2) Set up certificates on the client
Import the certificate file into Firefox, and separately into Chrome.
The hostname used for HTTPS must match the "Issued to" domain in the certificate, so edit:
$ vi /etc/hosts
10.10.0.1 dehaou14-n501jw
so that the client can reach the server.

3) Network settings
On tegra-ubuntu, configure /etc/hosts appropriately and run roscore.
On the PC and the server, configure /etc/hosts appropriately (see the sketch below) and set:
export ROS_MASTER_URI=http://tegra-ubuntu:11311
export ROS_HOSTNAME=dehaou14-n501jw
When done, test with:
$ rosnode ping /somenode
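As a sketch, each machine's /etc/hosts needs to map every hostname involved. The entry for dehaou14-n501jw comes from the VPN setup above; the address for tegra-ubuntu is hypothetical, substitute your own:
10.10.0.1 dehaou14-n501jw
10.10.0.2 tegra-ubuntu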

1.11 A note on using rviz:
Running rviz is not recommended on most ARM-based CPUs.
They're generally too slow, and the version of OpenGL provided by the software (mesa) libraries is not new enough to start rviz.

IF you have a powerful board with a GPU and vendor-supplied OpenGL libraries, it might be possible to run rviz.
The IFC6410 and the NVIDIA Jetson TK1 are two such boards where rviz will run, although neither is fast enough for graphics-heavy tasks such as displaying point clouds.

NOTES:
Note that rviz will segfault if you have the GTK_IM_MODULE environment variable set, so it's best to unset it in your ~/.bashrc:
unset GTK_IM_MODULE

REF: http://wiki.ros.org/indigo/Installation/UbuntuARM

2. Installing ROS on the TK1

opencv4tegra vs. stock OpenCV

2.1 HACK 1 – cv_bridge

With the latest opencv4tegra 21.2 released by NVIDIA, the compatibility problems with the cv_bridge and image_geometry packages have reportedly been solved, so installing OpenCV ROS packages from the PPA no longer forces opencv4tegra to be uninstalled.
(Really?)

A few incompatibilities remain, though: cv_bridge and image_geometry search for OpenCV 2.4.8 in "/usr/lib/arm-linux-gnueabihf", but opencv4tegra is based on OpenCV 2.4.12 and installs into "/usr/lib".
These mismatches prevent external packages based on OpenCV from compiling.

To solve the problem you can follow this guide:
http://myzharbot.robot-home.it/blog/software/ros-nvidia-jetson-tx1-jetson-tk1-opencv-ultimate-guide/
What we must "tell" cv_bridge and image_geometry is to search for OpenCV not in the default ARM path "/usr/lib/arm-linux-gnueabihf" but in "/usr/lib", and that the current version of OpenCV is 2.4.12, not 2.4.8. Finally, we must remove the references to the OpenCL module, because NVIDIA does not provide it.

1) Files to be modified
/opt/ros/<distro>/lib/pkgconfig/cv_bridge.pc
/opt/ros/<distro>/lib/pkgconfig/image_geometry.pc
/opt/ros/<distro>/share/cv_bridge/cmake/cv_bridgeConfig.cmake
/opt/ros/<distro>/share/image_geometry/cmake/image_geometryConfig.cmake

2) You can back up and modify each file using the following commands (example for ROS Indigo):
sudo cp /opt/ros/indigo/lib/pkgconfig/cv_bridge.pc /opt/ros/indigo/lib/pkgconfig/cv_bridge.pc-bak
sudo cp /opt/ros/indigo/lib/pkgconfig/image_geometry.pc /opt/ros/indigo/lib/pkgconfig/image_geometry.pc-bak
sudo cp /opt/ros/indigo/share/cv_bridge/cmake/cv_bridgeConfig.cmake /opt/ros/indigo/share/cv_bridge/cmake/cv_bridgeConfig.cmake-bak
sudo cp /opt/ros/indigo/share/image_geometry/cmake/image_geometryConfig.cmake /opt/ros/indigo/share/image_geometry/cmake/image_geometryConfig.cmake-bak

sudo gedit /opt/ros/indigo/lib/pkgconfig/cv_bridge.pc &
sudo gedit /opt/ros/indigo/lib/pkgconfig/image_geometry.pc &
sudo gedit /opt/ros/indigo/share/cv_bridge/cmake/cv_bridgeConfig.cmake &
sudo gedit /opt/ros/indigo/share/image_geometry/cmake/image_geometryConfig.cmake &

3) Modifications for each file
remove each instance of "/usr/lib/arm-linux-gnueabihf/libopencv_ocl.so.2.4.8;"
replace each instance of "/usr/lib/arm-linux-gnueabihf/" with "/usr/lib/"
replace each instance of "2.4.8" with "2.4.12" (or the current version of OpenCV in the opencv4tegra package)

4) After these edits, the files should reference /usr/lib and version 2.4.12 throughout. A scripted version of the edits is sketched below.
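As a sketch, the three modifications above can be applied with sed (make the backups from step 2 first; the paths and version numbers are the ones this guide assumes):
$ for f in /opt/ros/indigo/lib/pkgconfig/cv_bridge.pc \
           /opt/ros/indigo/lib/pkgconfig/image_geometry.pc \
           /opt/ros/indigo/share/cv_bridge/cmake/cv_bridgeConfig.cmake \
           /opt/ros/indigo/share/image_geometry/cmake/image_geometryConfig.cmake; do
      # order matters: drop the OpenCL entry before rewriting paths and versions
      sudo sed -i -e 's|/usr/lib/arm-linux-gnueabihf/libopencv_ocl.so.2.4.8;||g' \
                  -e 's|/usr/lib/arm-linux-gnueabihf/|/usr/lib/|g' \
                  -e 's|2\.4\.8|2.4.12|g' "$f"
  done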

REF: http://myzharbot.robot-home.it/blog/software/ros-nvidia-jetson-tx1-jetson-tk1-opencv-ultimate-guide/

2.2 HACK 2 – opencv

Note about SIFT/SURF in the nonfree module: OpenCV4Tegra doesn't include the opencv_nonfree package (containing the SIFT & SURF feature detectors), since those algorithms are patented by other companies and anyone using opencv_nonfree is at risk of liability.

Note that opencv4tegra does not include the "nonfree" module, so if your algorithms use SIFT or SURF and you want full CUDA support, the only solution is to compile OpenCV yourself, following this guide:
http://elinux.org/Jetson/Installing_OpenCV.

Remember that by compiling OpenCV yourself you lose NVIDIA's CPU optimizations, which give an extra 3-4 FPS on heavy algorithms not running on CUDA.

If you need something from the nonfree module, you have 2 options:
1) Analyze the public OpenCV source code, then copy/paste the parts of the nonfree module that you want (e.g. the SURF feature detector) from OpenCV into your own project. You keep the CPU optimizations of OpenCV4Tegra for most of your code, keep the GPU module, and get the non-optimized patented code you need from the nonfree package, such as SURF. This option gives full performance (for everything except the nonfree code) but is tedious.
2) Ignore OpenCV4Tegra and instead download & build public OpenCV (by following the instructions below for natively compiling the OpenCV library from source). You still get the GPU module but no CPU optimizations, and you won't need to spend time ripping out parts of the OpenCV nonfree module code. This option is the easiest, but produces slower code if most of your code runs on the CPU.

Instructions: natively compiling the OpenCV library from source onboard the device.
Note: Compiling OpenCV from source will not give you NVIDIA's CPU optimizations, which are only available in the closed-source prebuilt OpenCV4Tegra packages.

1) If you haven't added the "universe" repository to Ubuntu, do so now:
sudo add-apt-repository universe
sudo apt-get update

2) Now you need to install many libraries:
# Some general development libraries
sudo apt-get -y install build-essential make cmake cmake-curses-gui g++
# libav video input/output development libraries
sudo apt-get -y install libavformat-dev libavutil-dev libswscale-dev
# Video4Linux camera development libraries
sudo apt-get -y install libv4l-dev
# Eigen3 math development libraries
sudo apt-get -y install libeigen3-dev
# OpenGL development libraries (to allow creating graphical windows)
sudo apt-get -y install libglew1.6-dev
# GTK development libraries (to allow creating graphical windows)
sudo apt-get -y install libgtk2.0-dev

3) Download the source code of OpenCV for Linux onto the device.
e.g. open a web browser at "www.opencv.org" & click on "OpenCV for Linux/Mac", or run this from the command line on the device:
wget http://downloads.sourceforge.net/project/opencvlibrary/opencv-unix/2.4.10/opencv-2.4.10.zip

4) Unzip the OpenCV source code:
cd Downloads
unzip opencv-2.4.10.zip
mv opencv-2.4.10 ~

5) Configure OpenCV using CMake:
cd opencv-2.4.10/
mkdir build
cd build
cmake -DWITH_CUDA=ON -DCUDA_ARCH_BIN="3.2" -DCUDA_ARCH_PTX="" -DBUILD_TESTS=OFF -DBUILD_PERF_TESTS=OFF ..

6) If you want to customize any more of the build settings, such as whether to support Firewire cameras or the Qt GUI,
it is easiest to use the interactive curses version of CMake from here on:
ccmake ..
(Change any settings you want, then Configure and Generate.)

7) Now you should be ready to build OpenCV and then install it.
Unfortunately, OpenCV is currently experiencing a problem with CMake where installing the built libraries (which normally takes a few seconds) recompiles the whole of OpenCV (which normally takes close to an hour).
So to save time, instead of running "make -j4 ; make install", we will build & install OpenCV with a single command.

8) To build & install the OpenCV library using all 4 Tegra CPU cores (takes around 40 minutes), including copying the OpenCV library to "/usr/local/include" and "/usr/local/lib":
sudo make -j4 install

9) Finally, make sure your system searches the "/usr/local/lib" folder for libraries:
echo "# Use OpenCV and other custom-built libraries." >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/' >> ~/.bashrc
source ~/.bashrc
(The single quotes on the export line keep $LD_LIBRARY_PATH from being expanded at echo time.)
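To verify the install, pkg-config should now report the version you built (assuming you kept the default /usr/local install prefix):
$ pkg-config --modversion opencv
2.4.10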

REF: http://elinux.org/Jetson/Installing_OpenCV
REF: http://www.jetsonhacks.com/2015/06/14/ros-opencv-and-opencv4tegra-on-the-nvidia-jetson-tk1/

2.3 HACK 3

Background: the advantages OpenCV4Tegra has over regular OpenCV.

OpenCV4Tegra is a CPU- and GPU-accelerated version of the standard OpenCV library. OpenCV stands for "Open Source Computer Vision", the de facto standard computer vision library, containing more than 2500 computer vision, image processing, and machine learning algorithms.
This comparison is between the 2.4.10 release of OpenCV and the 2.4.10 release of OpenCV4Tegra.

There are three versions of OpenCV that you can run on the Jetson:
"Regular" OpenCV
OpenCV with GPU enhancements
OpenCV4Tegra, with both CPU and GPU enhancements

"Regular" OpenCV is OpenCV compiled from the OpenCV repository with no hardware acceleration. This is typically not used on the Jetson, since GPU enhancements are available for OpenCV.
OpenCV with GPU enhancements is designed for CUDA GPGPU acceleration. This is part of the standard OpenCV package.
OpenCV4Tegra is a free, closed-source library available from NVIDIA which includes ARM NEON SIMD optimizations, multi-core CPU optimizations, and some GLSL GPU optimizations.
So why wouldn't you always use OpenCV4Tegra? The answer lies in the OpenCV library itself: there are two proprietary patented algorithms, SIFT and SURF, which live in opencv-nonfree. Because they are patented, NVIDIA does not include them in its distribution of OpenCV4Tegra. Therefore, if your code does not use SIFT or SURF, you can use OpenCV4Tegra and get the best performance.
Why use SIFT and/or SURF? The quick answer is that for feature detection, SIFT/SURF are two of the most popular algorithms in use today. One application is Simultaneous Localization And Mapping (SLAM), used mostly in the robotics/drone world; one of the most popular SLAM packages is Semi-Direct Monocular Visual Odometry (SVO).
Another application which uses SIFT/SURF is deep learning, e.g. the Caffe deep learning framework.

Alternatives?
The first alternative is that if you have no need of SIFT/SURF in your application, you can use OpenCV4Tegra and enjoy the best performance. There is a rub, but also a possible workaround.
If you need SIFT/SURF you can:
Use OpenCV4Tegra, analyze the public OpenCV source code, and copy/paste the parts of the nonfree module that you want (e.g. the SURF feature detector) from OpenCV into your own project. You keep the CPU optimizations of OpenCV4Tegra for most of your code, keep the GPU module, and get the non-optimized patented code you need from the nonfree package, such as SURF. This option gives full performance (for everything except the nonfree code) but is tedious (and difficult to maintain).
Ignore OpenCV4Tegra and instead download & build public OpenCV. You still get the GPU module but no CPU optimizations, and you won't need to spend time ripping out parts of the OpenCV nonfree module code. This option is the easiest, but produces slower code if most of your code runs on the CPU.

Opinion
If you need SIFT/SURF, then you should just build OpenCV from source; otherwise use OpenCV4Tegra.

Note: OpenCV 3.0 moves SIFT/SURF into a separate repository, opencv_contrib.
This may make it easier in the future to combine OpenCV4Tegra with SIFT/SURF, but since OpenCV4Tegra is still at release 2.4.10, this remains to be seen.

2.4 HACK 4

The bug in ROS's OpenCV bridge (cv_bridge)

I installed the Grinch kernel + CUDA4Tegra + OpenCV4Tegra + OpenCV4Tegra-dev.
Everything went smoothly until I installed a ROS package called "ros-indigo-cv-bridge", which is useful for translating ROS image messages to OpenCV's matrix format. I broke my package system trying to install it!

The problem:
ros-indigo-cv-bridge depends heavily on libopencv-dev, and it seems that OpenCV4Tegra-dev is of no use when apt-get tries to install all the dependencies.
I get the following error from apt-get, for every component included in libopencv-dev:
dpkg: error processing archive /var/cache/apt/archives/libopencv-core-dev_2.4.8+dfsg1-2ubuntu1_armhf.deb (--unpack):
trying to overwrite '/usr/include/opencv2/core/wimage.hpp', which is also in package libopencv4tegra-dev 2.4.8.2
So,
my guess is there must be a way to make apt-get look into OpenCV4Tegra to resolve all the dependencies when installing ros-indigo-cv-bridge, but I don't know how to do it.
Or,
the apt-get result is completely misleading.
-> Don't know if you solved this… but I ran into the same trouble here when trying ROS/CV with the Tegra version of OpenCV. I ended up creating a "fake" package (using equivs) that tells apt that libopencv + libopencv-dev are already installed. This worked nicely for me, and now I am running the Tegra version of OpenCV under ROS. Very nice… but a slightly hackish solution to the problem!
Anyhow, this was the content of the input file for "equivs-build":
Section: misc
Priority: optional
Standards-Version: 3.9.2

Package: libopencv-dev-dummy
Version: 2.4.8
Maintainer: yourname <yourname@somemail>
Provides: libopencv-calib3d-dev, libopencv-calib3d2.4,
libopencv-contrib-dev, libopencv-contrib2.4,
libopencv-core-dev, libopencv-core2.4,
libopencv-dev,
libopencv-facedetect-dev, libopencv-facedetect2.4,
libopencv-features2d-dev, libopencv-features2d2.4,
libopencv-flann-dev, libopencv-flann2.4,
libopencv-gpu-dev, libopencv-gpu2.4,
libopencv-highgui-dev, libopencv-highgui2.4,
libopencv-imgproc-dev, libopencv-imgproc2.4,
libopencv-imuvstab-dev, libopencv-imuvstab2.4,
libopencv-legacy-dev, libopencv-legacy2.4,
libopencv-ml-dev, libopencv-ml2.4,
libopencv-objdetect-dev, libopencv-objdetect2.4,
libopencv-ocl-dev, libopencv-ocl2.4,
libopencv-photo-dev, libopencv-photo2.4,
libopencv-softcascade-dev, libopencv-softcascade2.4,
libopencv-stitching-dev, libopencv-stitching2.4,
libopencv-superres-dev, libopencv-superres2.4,
libopencv-video-dev, libopencv-video2.4,
libopencv-videostab-dev, libopencv-videostab2.4,
libopencv-vstab, libopencv-vstab2.4

Description: empty dummy package
no description
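To build an installable .deb from this control file (assuming you saved it as libopencv-dev-dummy.control):
$ sudo apt-get install equivs
$ equivs-build libopencv-dev-dummy.control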

This gets you a "dummy package" that you simply install using "sudo dpkg -i libopencv-dev-dummy_2.4.8_all.deb". After this, all other packages that depend on OpenCV will install without pulling in the SW version of OpenCV. Make sure you have installed the CUDA version before running this…

Note that the CUDA version of OpenCV does not contain the nonfree package, i.e. SURF etc. Haven't tried to solve that yet…
--> It solved the issue with cv-bridge, but not with every other package that relies on OpenCV.
I tried to install OpenNI2-camera and was back at the starting point. Every deb that depends on OpenCV must be modified using this method.

Try, for example:
sudo apt-get install ros-indigo-rgbd-launch ros-indigo-openni2-camera ros-indigo-openni2-launch

Actually the list of packages to be hacked is the following:
ros-indigo-cv-bridge
ros-indigo-depth-image-proc
ros-indigo-image-geometry
ros-indigo-image-proc
ros-indigo-rqt-image-view

---> Regarding my last comment, ROS still looks in /usr/lib for the OpenCV libraries, but they aren't there; instead they are in /usr/lib/arm-linux-gnueabihf. I installed _L0g1x_'s fix, but the packages we are using look for the OpenCV libraries in /usr/lib, giving me an error that they can't be found. Not sure how to fix this. I thought opencv4tegra installed them in /usr/lib?
Aha, I'm not sure what packages you are using (please mention them) or why they can't find the OpenCV libraries in /usr/lib, since opencv4tegra actually does install into /usr/lib (which it shouldn't: it should install into /usr/lib/arm-linux-gnueabihf, just like the native ROS ARM OpenCV install does). My fix accounted for the incorrect path set by the opencv4tegra library, when it should actually be the other way around: the opencv4tegra deb should be modified to install all the opencv4tegra libraries into /usr/lib/arm-linux-gnueabihf instead of /usr/lib. The issue of an update overwriting the Tegra OpenCV libs will then still exist if you update OpenCV through a ROS update.

---->
The current problem is that opencv4tegra contains packages that install the modified NVIDIA OpenCV libs and name them differently from the native OpenCV libs.
For example:
NVIDIA - libopencv4tegra, libopencv4tegra-dev
Native - libopencv, libopencv-dev
This causes an issue for users of packages whose DEBIAN/control declares a dependency on libopencv (for example, all the ROS computer vision packages, at least the ones that use OpenCV).

Admittedly, I know it's difficult not to name the modified opencv4tegra libraries differently from the native OpenCV libs, since that prevents an upstream OpenCV update from overwriting the opencv4tegra modifications. I have a few ideas for a possible fix that I'm trying out right now, but I would also like to hear the opencv4tegra package maintainer's thoughts on dealing with this issue.

!!! I spoke with the OpenCV4Tegra maintainer, and he said there is a temporary work-around you can use for now: !!!
Install ROS
Remove public OpenCV packages within ROS
Install OpenCV4Tegra
Install ROS again
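In apt terms, that sequence might look something like this (a sketch; the exact set of public OpenCV packages to remove may vary on your system):
$ sudo apt-get install ros-indigo-desktop
$ sudo apt-get remove libopencv-dev libopencv-core2.4
$ sudo apt-get install libopencv4tegra libopencv4tegra-dev
$ sudo apt-get install ros-indigo-desktop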

ROS will then work and use the OpenCV4Tegra optimizations. We will try to fix the problem in the next release to enable smooth package replacement without removing ROS.
OpenCV4Tegra is an NVIDIA-only library, so it only gets updated with L4T releases or if you update it manually (downloading OpenCV4Tegra packages from the NVIDIA website). I haven't tried the suggested ROS workaround above, but if you try it and it fails, let me know and I'll get more details from them.

!!! My OS version is Linux tegra-ubuntu 3.10.40 and my ROS version is Indigo. !!!
I have been using OpenCV without any issues, especially with the Caffe framework.
I have also followed some tutorials on the Jetson (as a sanity check) and everything works all right.
Right now I have tried to follow the image_transport tutorials (http://wiki.ros.org/image_transport/Tutorials/PublishingImages), but when I run catkin_make I get this error:
"No rule to make target '/usr/lib/arm-linux-gnueabihf/libopencv_videostab.so.2.4.8', needed by '/home/ubuntu/image_transport_ws/devel/lib/image_transport_tutorial/my_publisher'. Stop."

***********************************
OpenCV4Tegra has GPU support and is optimized for the Tegra TK1 SoC, so I want to use its power for my algorithms 😉
-- I forget where I read this, but that dummy package works as-is for 19.3, and you have to slightly modify it for it to work with the 21.2 opencv4tegra.
-- Okay, so I successfully installed cv-bridge, can compile some sample code from the cv_bridge tutorials, and can run the node with no problem. Some modifications are still needed, but I made a modified .deb for ros-indigo-cv-bridge and changed a few things around:
-- First off, the default armhf deb for ros-indigo-cv-bridge sets different lib paths when listing the libraries in the cmake file. For example, in the cv_bridgeConfig.cmake inside the .deb:
set(libraries "cv_bridge;/usr/lib/arm-linux-gnueabihf/libopencv_videostab.so;/usr/lib/arm-linux-gnueabihf/libopencv_video.so;…
needs to instead be
set(libraries "cv_bridge;/usr/lib/libopencv_videostab.so;/usr/lib/libopencv_video.so;…
I took /arm-linux-gnueabihf/ out of the path because the opencv4tegra libraries are installed in /usr/lib.
To get the cv-bridge debian so I could edit it, I did the following:
sudo apt-get install -d ros-indigo-cv-bridge   ## -d just downloads, doesn't install
cd /var/cache/apt/archives
sudo cp ros-indigo-cv-bridge_1.11.6-0trusty-20141201-2058-+0000_armhf.deb ~/Downloads
cd ~/Downloads
mkdir ros-indigo-cv-bridge-extracted
sudo dpkg-deb -R ros-indigo-cv-bridge_1.11.6-0trusty-20141201-2058-+0000_armhf.deb ros-indigo-cv-bridge-extracted
All dpkg-deb -R does is extract the .deb WITH the DEBIAN folder so that you can edit the DEBIAN/control file. In the control file, I deleted a few things: libopencv-dev, libopencv-python, libopencv-core2.4, libopencv-imgproc2.4, since these were all already installed by the opencv4tegra debian.
Once I had edited all those things, I built the package like so:
sudo dpkg-deb -b ros-indigo-cv-bridge-extracted ros-indigo-cv-bridge-tegra_1.11.6-l0g1x-2.deb
and then finally just used sudo dpkg -i ros-indigo-cv-bridge-tegra_1.11.6-l0g1x-2.deb to install it.
I don't think I missed any steps, but attached is the .deb file I made. Just use dpkg -i to install it (after CUDA and opencv4tegra have been installed).
It would be nice if the ROS ARM buildfarm actually had a cv-bridge debian for the Jetson… maybe?
EDIT 1: I want to clarify what happens when I edit the DEBIAN/control file; when I say I "removed" libopencv-dev, libopencv-python, libopencv-core2.4, libopencv-imgproc2.4, all the removal does is change which dependencies the package installer checks for.
E.g. if a dependency is listed in the control file and the package manager sees that it is not installed on the system, it will try to install that dependency (separately, as its own .deb). Since we know that libopencv-dev, libopencv-python, libopencv-core2.4, and libopencv-imgproc2.4 are already provided by the opencv4tegra .deb, we can remove them from the 'Depends:' line in DEBIAN/control.
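To confirm what a .deb declares before and after editing, dpkg-deb can print the Depends field directly:
$ dpkg-deb -f ros-indigo-cv-bridge_1.11.6-0trusty-20141201-2058-+0000_armhf.deb Depends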
EDIT 2: Steps to take if you just download the .deb I made:
sudo dpkg -i ros-indigo-cv-bridge-tegra_1.11.6-l0g1x-2.deb
sudo apt-get update
sudo apt-get install libopencv4tegra libopencv4tegra-dev
EDIT 3: The modified cv-bridge .deb I made is only a quick fix/hack. I am currently working on a permanent fix by modifying just the opencv4tegra .deb, so that you won't have to use the quick-fix cv-bridge hack and can update cv-bridge whenever you like with apt-get upgrade. I should have this done within the next day or two. For now I have rearranged things so the opencv4tegra libs do in fact go in the /usr/lib/arm-linux-gnueabihf/ folder where they should be. I'm trying to see if I can get this to work without a dummy package, but if there isn't another way, the dummy package will be included inside the opencv4tegra-modified .deb so that it installs automatically with everything.
REF: http://answers.ros.org/question/202643/opencv4tegra-and-ros-howto-jetson-tk1/

3. The deep learning robot

The deep learning robot consists of a deep-learning TK1 plus a Turtlebot:


The Kobuki mobile base is made by the Korean firm Yujin Robot. The base has two wheels, IR range and cliff sensors, a factory-calibrated gyroscope, a built-in rechargeable battery, and various ports for powering the rest of the robot and for communications.
The NVIDIA Jetson TK1 is a small embedded PC, rather like a souped-up Raspberry Pi.
The Kinect is a popular peripheral for those frittering away their time with the Xbox.


3.1 wifi
Check which interface provides the wireless connection:
$ iwconfig

First, check that WiFi is working with the Network Manager by typing:
$ nmcli dev
DEVICE TYPE STATE
eth2 802-3-ethernet connected
wlan2 802-11-wireless disconnected

Now connect to a local 2.4 GHz WiFi network (I couldn't get it to work with a 5 GHz one) by typing, with your own SSID and password:
$ sudo nmcli dev wifi connect <SSID> password <password>

If your SSID has spaces in it, enclose it in quotes, e.g. 'My network'. As usual with sudo commands, you'll be asked to authenticate with the 'ubuntu' password. Assuming there is no error message,

all you then need is the IP address of the WiFi interface:
$ ifconfig

Try logging in via the WiFi interface with the IP from the last step:
$ ssh ubuntu@10.0.1.2

Test the robot:
$ roslaunch turtlebot_bringup minimal.launch

$ roslaunch turtlebot_teleop keyboard_teleop.launch

Testing Caffe
Caffe is a tool for creating and running CUDA-accelerated neural networks. The most important thing it can potentially do for your robot is allow it to recognise objects in photographs or video streams.
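A quick way to check that Caffe and CUDA play together is Caffe's built-in timing benchmark (a sketch; it assumes Caffe was built in ~/caffe and uses the stock AlexNet model definition shipped with it):
$ cd ~/caffe
$ ./build/tools/caffe time --model=models/bvlc_alexnet/deploy.prototxt --gpu=0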

3.2 SD card
Adding an SD card

The Deep Learning TK1 comes with 16 GB of built-in flash on the Jetson TK1 board. That's fine to begin with, but after downloading a few Caffe models you'll be out of space.
Fortunately, the TK1 comes with an SD Card slot for adding extra storage. This post describes how to add and configure an SD Card to give yourself more room.

Choose the fastest, biggest SD card you can, e.g. a SanDisk 64 GB card for around $60. That card is class 10 / U3, designed for 4K video files, and claims a read/write speed of 90 MB/s.

Partition the block device and create a primary partition (these are fdisk's interactive commands):
$ sudo fdisk /dev/mmcblk1
n    (new partition)
p    (primary)
1    (partition number; accept the default sectors)
w    (write the table and exit)
$ lsblk
— mmcblk1 179:32 0 29.7G 0 disk

Create the filesystem:
sudo mkfs.ext4 /dev/mmcblk1p1

Done

$ sudo blkid
/dev/mmcblk1: UUID="d417ef49-09d9-4fd2-9351-e0e1413a2f8f" TYPE="ext4"
/dev/mmcblk1p1: UUID="fd7a0700-efaf-47a5-a118-9202607b46e8" TYPE="ext4"

Create the mount point:
sudo mkdir /media/sdmount

Edit fstab:
The /etc/fstab file contains a list of devices that need mounting on each boot. We’re going to add the mount point for the card to this file, so it gets automatically mounted each time the robot is switched on.
sudo cp /etc/fstab /etc/fstab.orig
sudo vim /etc/fstab
showing only:
# UNCONFIGURED FSTAB FOR BASE SYSTEM
At the end of the file, add a line of this format, using the UUID that blkid reported for the partition (mmcblk1p1) above:
UUID=fd7a0700-efaf-47a5-a118-9202607b46e8 /media/sdmount ext4 defaults,users 0 0
Let's unpack this. We're telling the system to mount the partition with the specified UUID at the mount point /media/sdmount (which we just created). 'ext4' specifies the filesystem type we formatted earlier. The options defaults,users give the partition read-write permissions for all users (see more options under "Filesystem Independent Mount Options"). The final two parameters, both zero, specify whether we want to dump or auto-check the filesystem (more details under "Editing Ubuntu's filesystem table").

Mount:
mount all devices specified in /etc/fstab:
sudo mount -a

Now able to access the card at the mount point. Type:
ls /media/sdmount

Create the link:
Create a symbolic link to the SD card in your user directory:
cd ~
sudo mkdir /media/sdmount/sdcard

Then change the ownership of the target directory so the ubuntu user can read and write to it:
sudo chown ubuntu /media/sdmount/sdcard

Finally, link it:
ln -s /media/sdmount/sdcard sdcard

Finally, reboot and make sure the subdirectory is present and working.

In my test environment, the quick version was:
mkdir ~/sdcard
sudo mount /dev/mmcblk1p1 ~/sdcard
if needed:
sudo chown ubuntu:ubuntu ~/sdcard
and to unmount when it is no longer in use:
sudo umount /dev/mmcblk1p1
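Either way, you can confirm that the card is mounted and has the expected capacity:
$ df -h ~/sdcard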

REF: http://www.artificialhumancompanions.com/adding-sd-card-deep-learning-robot/

3.3 Kobuki
$ sudo apt-get install ros-indigo-turtlebot
//ros-indigo-turtlebot-apps ros-indigo-turtlebot-interactions ros-indigo-turtlebot-simulator ros-indigo-kobuki-ftdi ros-indigo-rocon-remocon ros-indigo-rocon-qt-library ros-indigo-ar-track-alvar-msgs

This uses about 100 MB of downloads / 400 MB of disk space; answer YES.

While setting up ros-indigo-librealsense, you may get "ERROR: Module uvcvideo not found", and setup fails.
The dependency chain is: ros-indigo-turtlebot --> ros-indigo-librealsense --> ros-indigo-librealsense-camera --> uvcvideo

RealSense 3D is a perceptual computing solution that includes the world's smallest 3D camera. RealSense camera support under ROS is still relatively new;
you may need to recompile the Grinch kernel with the necessary uvcvideo patches.
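A quick sanity check of whether the running kernel provides the module at all (modinfo prints the same "not found" error when it is missing):
$ modinfo uvcvideo
$ lsmod | grep uvcvideo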

A) First, librealsense needs to be installed on the Jetson TK1. This involves building a new kernel to support the video modes of the RealSense camera in the UVC module, and building the librealsense library.
1.1 First, operating-system-level files must be modified to recognize the camera's video formats. When doing development on Linux-based machines you will frequently hear the terms "kernel" and "modules". The kernel is the code at the base of the operating system, the interface between hardware and application code.
A kernel module is code that can be loaded into the kernel image at will, without modifying the kernel. Modules provide ancillary support for different types of devices and subsystems. The code for a module is either in the kernel itself, in which case it is called "built-in", or designated to be built as a module. When built as a module, the compiled code is stored separately from the kernel, typically with a .ko extension. The advantage of a module is that it can easily be changed without rebuilding the entire kernel.
We will be building a module called uvcvideo to help interface with the RealSense camera.
A convenience script has been created to help with this task in the installLibrealsense repository on the JetsonHacks Github account.
$ git clone https://github.com/jetsonhacks/installLibrealsense.git
$ cd installLibrealsense/UVCKernelPatches
$ ./getKernelSources.sh
Once the kernel sources have been downloaded and decompressed into the /usr/src directory, a configuration editor opens. In the configuration editor, set the local version of the kernel to that of the current configuration. The current local version number is available through the command:
$ uname -r
which displays:
3.10.40-gdacac96
The local version string consists of everything after the 40, in this case -gdacac96.
Remember the leading "-"; it is important! This identifier ensures that the module matches the build of the kernel and should match exactly. Place the local version number in the field: General Setup -> Local version - append to kernel release:

Next, we will modify the USB Video Class (UVC) driver to understand RealSense video formats. The option to compile UVC as a module is located in:
Device Drivers -> Multimedia Support -> Media USB Adapters -> USB Video Class (UVC)

Once you find the entry, right-click on the entry until you see a small circle. The circle indicates that the option will be compiled as a module. Save the configuration file.

A patch file is provided to apply to the module source, and a shell script is provided to apply the patch. Again, these are convenience files; you may have to modify them for your particular situation.
$ ./applyUVCPatch.sh

Next, compile the kernel and module files:
$ ./buildKernel.sh

This takes several minutes as the kernel and modules are built and the modules installed. Once the build is complete, you have a couple of options. The first is to make a backup of the new kernel and modules and place them on a host system for flashing a Jetson with the new kernel; we will not be covering that here.
The second option is to copy the kernel into the boot directory. A convenience script is provided:
$ ./copyzImages.sh

In addition to copying the new kernel into the boot directory, the newly built module, uvcvideo, is added to the file /etc/modules to indicate that the module should be loaded at boot time.

The RealSense cameras require USB 3.0, but the USB port is set to USB 2.0 from the factory. Also, the stock kernel uses 'autosuspend' to minimize power usage for USB peripherals, which is incompatible with most USB video devices. If you have not changed these settings on your TK1, a convenience script is provided:
$ ./setupTK1.sh

Now reboot the system.

Once the machine has finished rebooting, open a Terminal:
$ cd installLibrealsense
$ ./installLibrealsense.sh

This will build librealsense and install it on the system. This will also setup udev rules for the RealSense device so that the permissions will be set correctly and can be accessed from user space. Once installation is complete, you will be able to play with the examples. For example:
$ cd ~/librealsense/bin
$ ./cpp-config-ui

The example allows you to set the camera parameters. Hit the 'Start Capture' button to start the camera.
At the end of the RealSense camera installation, you should run some of the examples provided to make sure that the camera is installed and works properly.
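A simple sanity check that the camera has enumerated (device numbering varies from system to system):
$ lsusb
$ ls /dev/video*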

Qt Creator
There are Qt Creator files in librealsense which may be used to build the examples and librealsense itself. A convenience script, ./installQtCreator.sh, is provided in the installLibrealsense directory to install Qt Creator.

Note: In an earlier article on installing ROS on the Jetson TK1, we used the Grinch kernel. The Grinch kernel provides access to a large number of peripherals on the TK1. Because we modified the stock kernel for the RealSense camera in our first step, the Grinch kernel is not used here. If you need the Grinch kernel, you will have to recompile it with the RealSense camera changes. That is an exercise left to the reader.

2.2 The second part of getting the R200 to work is to build and install librealsense.

B) Second, we need ROS installed on the Jetson. Once the RealSense camera is working, install ROS from a new Terminal:

REF: http://www.jetsonhacks.com/2016/06/20/intel-realsense-camera-installation-nvidia-jetson-tk1/

C) Third, we download the realsense_camera package installer:
Open a new Terminal, which will source the new environment setup by ROS, and:
$ git clone https://github.com/jetsonhacks/installRealSenseCameraROS.git
$ cd installRealSenseCameraROS

We need a Catkin workspace as our base of operations. There is a convenience script to create a new Catkin workspace:
$ ./setupCatkinWorkspace [workspace name]

With the prerequisites installed, we’re ready to install the realsense_camera package:
$ ./installRealSense.sh [workspace name]

If you do not have a swap file enabled on your Jetson, there may be issues compiling the package, because the TK1 does not have enough memory to compile it in one pass. The installation script has been changed since the video was filmed to compile using only one core, to relieve memory pressure, i.e.:
$ catkin_make -j1
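If you would rather add a swap file than limit the build to one core (a generic Ubuntu recipe; size and path are arbitrary):
$ sudo fallocate -l 2G /swapfile
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile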
