Intelligent Robots (37): IMU Sensors

1. setup robot
3. od_node
5. laser_node
7. imu_node

1. about imu sensor
2. about imu usage
3. about imu code
3.1 odom-node.cpp
3.2 imu-node.cpp
4. about imu debug
REF:
https://answers.ros.org/question/200480/imu-message-definition/
https://github.com/jeskesen/i2c_imu/blob/master/src/i2c_imu_node.cpp
http://codegist.net/code/flora-9dof-accelerometer%2Bgyroscope%2Bmagnetometer—lsm9ds0/

(If running in an Oracle VM VirtualBox guest: install the extension tools, enable bridged networking for the network interface, map the guest's COM1 serial port to a host device, then install Ubuntu 14.04 AMD64 and ROS, set up the environment, initialize catkin_ws, and start the master.)

1. setup robot
$ roslaunch funbot r_rbt.launch


This launch file loads the serial-port driver for odom, the Ethernet driver for the laser, and the USB driver for the IMU.
It also statically publishes a tf transform from base_link to base_laser,
and a static tf transform from base_link to imu_link.
If a robot model URDF has already been created, these last two transforms come from there and must not be published again.
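A minimal sketch of what the two static publisher entries might look like inside r_rbt.launch (the node names and numeric offsets are placeholders; use the real mounting positions of your laser and IMU):

<node pkg="tf" type="static_transform_publisher" name="base_to_laser"
      args="0.1 0 0.2 0 0 0 base_link base_laser 100" />
<node pkg="tf" type="static_transform_publisher" name="base_to_imu"
      args="0 0 0.1 0 0 0 base_link imu_link 100" />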

3. od_node

5. laser_node

7. imu_node

1. about imu sensor

A major source of confusion, for the ROS maintainers and developers alike, is the message format of imu versus odom.
The odom message, nav_msgs/Odometry, names two frames, so its tf is unambiguous.
The imu message, sensor_msgs/Imu, carries only a single coordinate frame, which is ambiguous:
Q:
I am confused about the IMU though: its heading estimate should be in the map frame as it is absolute, but its motion estimates should be in the base_link frame because they are relative to the robot? The IMU message only has a single frame_id, so which should I set it to?
Q:
It seems clear that nav_msgs::Odometry.pose.pose.orientation goes from header.frame_id to child_frame_id (or parent to child), but there is no child_frame_id in the sensor_msgs::Imu message. The two obvious choices are map and odom, but I don't know if one is preferred over the other.

The key is to understand that two frame_ids are involved here as well:
the sensor's linear acceleration and angular velocity are expressed in the sensor's own frame, e.g. imu_link;
the sensor's orientation, however, is expressed relative to its fixed parent frame, i.e. a world frame such as odom or map.

For example, when writing an IMU driver, define three frames: imu_frame_id, tf_parent_frame_id, and tf_frame_id.

Typically, then:
**1. imu_frame_id would be something like imu_link; the accelerations and angular velocities are measured in this frame.
**2. the orientation goes from a fixed inertial frame to the sensor; the "fixed inertial frame" is implicitly assumed to be oriented the same way as odom, map, or whatever top-level fixed frame your application uses.
**3.1. orientation is calculated by the user's code from both the accelerometer (pitch and roll) and the magnetometer (yaw), among other data ...
**3.2. packages like robot_pose_ekf do not use the orientation vector in an absolute style; they actually calculate changes in heading.

So:
You should populate the frame_id of the sensor_msgs/Imu message with the frame in which the sensor is mounted.
If the IMU is oriented the same way as the base link, you can just fill the frame with base_link, or publish a static tf from base_link to imu_link.
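For example, with the classic tf tool (an identity transform is assumed here; adjust the offsets to your actual mounting):

$ rosrun tf static_transform_publisher 0 0 0 0 0 0 base_link imu_link 100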

2. about imu usage

Method 1 below requires setting -1 to disable the sensor's orientation data. Method 2 is the safe approach: robots like turtlebot, and algorithms like robot_pose_ekf, use the IMU orientation only as a relative value to be differentiated, never as an absolute value to be integrated. Method 3, the most common approach, still has a problem to solve even when a magnetometer is added, because the demanding requirement that the frame stay strictly level cannot be met.

*1. Orientation not provided
This is just gyros and accelerometers, no fusion yet.
In this case, 'yaw' or 'heading' still drifts since the gravity component for yaw is 0,
so you cannot determine absolute heading with only those 6 measurements.
We can set msg.orientation_covariance[0] = -1,
which tells other software not to use the orientation.
For example, turtlebot and robot_pose_ekf both use the IMU only for the change in orientation, not the absolute orientation.
This is ultimately safest, as any orientation can be fused in this manner.
Unfortunately we lose the ability to remove drift, so more complicated approaches exist.
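A minimal sketch of populating such a message (a fragment, not code from this post; gx..az stand for already-converted rad/s and m/s^2 readings):

sensor_msgs::Imu msg;
msg.header.stamp = ros::Time::now();
msg.header.frame_id = "imu_link";      // frame the sensor is mounted in
msg.orientation_covariance[0] = -1.0;  // flag: do not use the orientation
msg.angular_velocity.x = gx;
msg.angular_velocity.y = gy;
msg.angular_velocity.z = gz;
msg.linear_acceleration.x = ax;
msg.linear_acceleration.y = ay;
msg.linear_acceleration.z = az;
imu_pub.publish(msg);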

*2. Orientation integrated, with drift, from where the IMU started. This IMU does not guarantee a known reference position. It is possible for the ROS software to start up after the IMU calibrated, so the reference position is completely unknown due to drift. These IMUs are uncommon: essentially they only integrate the gyros for orientation and do not use a gravity reference. This is a purely inertial output and the best you could do in empty space. The issue with using this type in ROS is that the message itself does not let you know that the orientation is relative rather than absolute.
This is the case that ROS handles today and is a form of dead reckoning/odometry. Turtlebot and robot_pose_ekf both use the IMU only for the change in orientation, not the absolute orientation. This is ultimately safest, as any orientation can be fused in this manner. Unfortunately you lose the ability to remove drift, so more complicated approaches exist.

*3. Orientation integrated with drift, but pitch and roll referenced to gravity. This IMU is the most common. It uses the gyros to determine the angle from startup, but references pitch and roll to gravity. The assumption here is that you will have to re-reference before you travel far enough for the Earth's gravity vector to no longer point straight down in your Cartesian frame. In this case, 'yaw' or 'heading' still drifts since the gravity component for yaw is 0, so you cannot determine absolute heading with only those 6 measurements. Where this becomes a problem depends on the specific application.

*4. Orientation referenced to gravity and magnetic north
It is very common to add magnetometers to reference the heading to magnetic north.
The problem with this method is that the frame must still be tangential to the Earth's surface,
and that the magnetic field has a 'declination' that varies with where on the Earth's surface the IMU is located.

*5. Orientation fused from a more complicated system (GPS/INS).
This can be the output of a full sensor fusion system, but we highly recommend NOT using this method.
If GPS, odometry, INS, and more are being fused into a globally corrected x, y, z and orientation,
the output should primarily be a nav_msgs/Odometry.
However, if acceleration is required, the IMU message can be output, but we would recommend it carry the 'raw' sensor values.

What can we do to clarify this?
** Add an orientation_reference_frame_id. This is not a good idea, since the reference is unknown and undefined for (1), (2), and (3); these do not specify a repeatable reference. In the cases where you could provide it, (4) and (5), it still does not really solve the problem, since the gravity and magnetometer reference frames are not Cartesian. So in order to accommodate ALL use cases we would have to reference something like this for gravity, but also factor in the magnetic field. Oh, and did I mention this varies over time?
** Let the sensor fusion stage handle it. This is ultimately the best solution. Since the sensor fusion stage has to handle custom orientation parameters anyway, you might as well also define an imu_type parameter that can be changed to determine how best to use the IMU.
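A sketch of reading such a parameter in a fusion node (imu_type and its values are the hypothetical parameter proposed above, not an existing robot_pose_ekf option):

ros::NodeHandle pnh("~");
std::string imu_type;
// e.g. "no_orientation", "inertial_only", "gravity_referenced", "magnetic"
pnh.param<std::string>("imu_type", imu_type, "gravity_referenced");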

3. about imu code

3.1 In the odom-node.cpp implementation, the pose is in the odom frame, the twist is in the base_link frame, and the tf is odom->base_link.

geometry_msgs::TransformStamped odom_trans;
double angle_radi = od->position.positionr * 3.1415926 / 180.0;
geometry_msgs::Quaternion odom_quat = tf::createQuaternionMsgFromYaw(angle_radi);

odom_trans.header.stamp = current_time;
odom_trans.header.frame_id = "odom";
odom_trans.child_frame_id = "base_link";
odom_trans.transform.translation.x = od->position.positionx;
odom_trans.transform.translation.y = od->position.positiony;
odom_trans.transform.translation.z = 0;
odom_trans.transform.rotation = odom_quat;
//send the transform
if (tf_broadcast)
odom_broadcaster->sendTransform(odom_trans);
----------------------------------------
nav_msgs::Odometry odom;
odom.header.stamp = current_time;
odom.header.frame_id = "odom";
//set the position
odom.pose.pose.position.x = od->position.positionx;
odom.pose.pose.position.y = od->position.positiony;
odom.pose.pose.position.z = 0.0;
odom.pose.pose.orientation = odom_quat;
//set the velocity
odom.child_frame_id = "base_link";
odom.twist.twist.linear.x = od->speed.speedx;
odom.twist.twist.linear.y = od->speed.speedy;
odom.twist.twist.angular.z = od->speed.speedr;
//publish the message
odom_pub->publish(odom);

3.2 In the imu-node.cpp implementation, angular_velocity and linear_acceleration are in the imu_link frame, orientation should be in the odom or map frame, and the tf is odom/map->base_link.
tf::Quaternion orientation(xf, yf, zf, wf);
// rotation relative to the stored reference (set-zero) orientation
delta_rotation = matrix_orientation.inverse() * orientation;
imu.header.stamp = measurement_time;
imu.header.frame_id = frame_id; // imu_link
quaternionTFToMsg(delta_rotation, imu.orientation);
imu.angular_velocity.x = gxf;
imu.angular_velocity.y = gyf;
imu.angular_velocity.z = gzf;
imu.linear_acceleration.x = axf;
imu.linear_acceleration.y = ayf;
imu.linear_acceleration.z = azf;
// publish imu message
imu_pub.publish(imu);
----------------------------------------
// publish tf transform
if (tf_broadcast) {
transform.setOrigin(tf::Vector3(0,0,0));
transform.setRotation(delta_rotation);
tf_br.sendTransform(tf::StampedTransform(transform, measurement_time, tf_parent_frame_id, tf_frame_id));
}

4. about imu debug

4.1 ERROR: Covariance specified for measurement on topic xxx is zero
Each measurement that is processed by robot_pose_ekf needs to have a covariance associated with it. The diagonal elements of the covariance matrix cannot be zero. This error is shown when one of the diagonal elements is zero. Messages with an invalid covariance will not be used to update the filter.
Creating the covariance matrices for individual sensors:
For IMU and odometry, the covariance matrix can be formed from the datasheet.
For visual odometry, the covariance matrix may be obtained from the measurement equation that relates the measured variables to the pose coordinates [1].
*None of the covariance matrices should be zero; they should all have some value, but I am not sure how that gets passed in.

4.2
The odometry covariance is a float[36], i.e. a 6x6 matrix. The diagonal terms are the trust you have in your sensor for each DoF. You have 6 DoF: position (x, y, z) and orientation (rotation about x, y, z).

Estimate your sensor's or algorithm's accuracy by experiment. If you find your data are good to 1 cm in translation and 0.1 rad in rotation, you can use this matrix:

[0.01 0.0 0.0 0.0 0.0 0.0,
0.0 0.01 0.0 0.0 0.0 0.0,
0.0 0.0 0.01 0.0 0.0 0.0,
0.0 0.0 0.0 0.1 0.0 0.0,
0.0 0.0 0.0 0.0 0.1 0.0,
0.0 0.0 0.0 0.0 0.0 0.1]

If you have no information for one DoF, you can put a huge value.

For example, segway_rmp:
this->odom_msg.pose.covariance[0] = 0.00001;
this->odom_msg.pose.covariance[7] = 0.00001;
this->odom_msg.pose.covariance[14] = 1000000000000.0;
this->odom_msg.pose.covariance[21] = 1000000000000.0;
this->odom_msg.pose.covariance[28] = 1000000000000.0;
this->odom_msg.pose.covariance[35] = 0.001;
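The same convention applies to the 3x3 (row-major) covariances of the IMU message. A minimal sketch, assuming placeholder variances (derive real values from your sensor's datasheet; these numbers are not from any specific device):

sensor_msgs::Imu imu;
imu.orientation_covariance[0] = 0.0025;        // roll variance (rad^2)
imu.orientation_covariance[4] = 0.0025;        // pitch variance (rad^2)
imu.orientation_covariance[8] = 0.01;          // yaw variance (rad^2), usually worst
imu.angular_velocity_covariance[0] = 0.0001;   // (rad/s)^2
imu.angular_velocity_covariance[4] = 0.0001;
imu.angular_velocity_covariance[8] = 0.0001;
imu.linear_acceleration_covariance[0] = 0.01;  // (m/s^2)^2
imu.linear_acceleration_covariance[4] = 0.01;
imu.linear_acceleration_covariance[8] = 0.01;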

Intelligent Robots (35): IMU Driver - MPU

1. Hardware
2. Software
3. Software - ROS
3.1 arduino uno 部分
3.2 pc ros 部分
3.3 usage

MPU 6050 is a sensor based on MEMS technology with I2C protocol communication.
The accelerometer works on the principle of the piezoelectric effect.
The gyroscopes work on the principle of Coriolis acceleration.
REF: http://blog.csdn.net/eaibot/article/details/51398074/

1. Hardware

Arduino-UNO + MPU-6050

XCL, XDA: these 2 pins are used by the MPU6050 to control a slave when it acts as an I2C master itself. Leave them unconnected.
SDA, SCL: these 2 pins are the I2C interface. Connect them to the UNO's A4 (SDA) and A5 (SCL).
AD0: selects the I2C address: 0x68 when tied to ground, 0x69 when tied to 3.3 V. Tie it to ground.
INT: interrupt signal. The MPU6050 has a built-in 1024-byte FIFO buffer and can raise an interrupt to notify the Arduino when the buffer holds data. Connect it to the UNO's pin 2 (interrupt 0).
The shared example code all sets a 400 kHz I2C bus speed; 100 kHz should also work.
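A minimal bus-setup sketch (Arduino C++; this mirrors what the I2Cdev examples do, it is not code from this post):

#include <Wire.h>

void setup() {
  Wire.begin();   // join the I2C bus as master (SDA = A4, SCL = A5 on the UNO)
  TWBR = 24;      // 400 kHz I2C clock on a 16 MHz UNO
  // on newer Arduino cores: Wire.setClock(400000);
}

void loop() {}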

2. Software

Download the I2Cdev library: http://diyhacking.com/projects/I2Cdev.zip
Download the MPU6050 library: http://diyhacking.com/projects/MPU6050.zip
Unzip the two libraries and copy them into the Arduino libraries directory.
File -> Examples -> MPU6050 -> Examples -> MPU6050_DMP6
Compile, upload, open the serial monitor, and set the baud rate to 115200:
Send any character to begin DMP programming and demo:
ypr 28.37 -30.82 10.95
ypr 28.46 -30.82 10.95
ypr 28.56 -30.82 10.96
ypr 28.66 -30.82 10.96

Note: the first readings are not accurate.
Note: this is not raw data; it is DMP output.
DMP means Digital Motion Processing, a built-in motion processor. It processes the values from the accelerometer and gyroscope to give accurate 3D values.

3. Software - ROS

REF: https://github.com/fsteinhardt/mpu6050_serial_to_imu/blob/master/README.md

3.1 arduino uno part

Copy the previous ino code into a new project.
Changes:

*1 comment out the define OUTPUT_READABLE_YAWPITCHROLL:
#define OUTPUT_READABLE_YAWPITCHROLL ->
//#define OUTPUT_READABLE_YAWPITCHROLL
*2 uncomment OUTPUT_TEAPOT:
//#define OUTPUT_TEAPOT ->
#define OUTPUT_TEAPOT
*3 change code:
//uint8_t teapotPacket[14] = { '$', 0x02, 0,0, 0,0, 0,0, 0,0, 0x00, 0x00, '\r', '\n' };
----------->>>>>>>>>>>>>>>
uint8_t teapotPacket[28] = { '$', 0x03, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0x00, 0x00, '\r', '\n' };
*4 change code:
teapotPacket[2] = fifoBuffer[0];
teapotPacket[3] = fifoBuffer[1];
teapotPacket[4] = fifoBuffer[4];
teapotPacket[5] = fifoBuffer[5];
teapotPacket[6] = fifoBuffer[8];
teapotPacket[7] = fifoBuffer[9];
teapotPacket[8] = fifoBuffer[12];
teapotPacket[9] = fifoBuffer[13];
Serial.write(teapotPacket, 14);
teapotPacket[11]++; // packetCount, loops at 0xFF on purpose
------------>>>>>>>>>>>>>>>>>>>>>>>>
teapotPacket[2] = fifoBuffer[0];
teapotPacket[3] = fifoBuffer[1];
teapotPacket[4] = fifoBuffer[4];
teapotPacket[5] = fifoBuffer[5];
teapotPacket[6] = fifoBuffer[8];
teapotPacket[7] = fifoBuffer[9];
teapotPacket[8] = fifoBuffer[12];
teapotPacket[9] = fifoBuffer[13];
// gyro values
teapotPacket[10] = fifoBuffer[16];
teapotPacket[11] = fifoBuffer[17];
teapotPacket[12] = fifoBuffer[20];
teapotPacket[13] = fifoBuffer[21];
teapotPacket[14] = fifoBuffer[24];
teapotPacket[15] = fifoBuffer[25];
// accelerometer values
teapotPacket[16] = fifoBuffer[28];
teapotPacket[17] = fifoBuffer[29];
teapotPacket[18] = fifoBuffer[32];
teapotPacket[19] = fifoBuffer[33];
teapotPacket[20] = fifoBuffer[36];
teapotPacket[21] = fifoBuffer[37];
//temperature
int16_t temperature = mpu.getTemperature();
teapotPacket[22] = temperature >> 8;
teapotPacket[23] = temperature & 0xFF;
Serial.write(teapotPacket, 28);
teapotPacket[25]++; // packetCount, loops at 0xFF on purpose
Compile and upload to the Arduino UNO.
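On the PC side, the mpu6050_serial_to_imu node unpacks this packet. A rough sketch of the decoding idea (the 1/16384 scale is the DMP's Q14 quaternion fixed-point format; the exact scaling used by the package may differ):

// bytes 2..9 of the packet hold the quaternion w,x,y,z as big-endian int16
int16_t wRaw = (int16_t)((packet[2] << 8) | packet[3]);
int16_t xRaw = (int16_t)((packet[4] << 8) | packet[5]);
int16_t yRaw = (int16_t)((packet[6] << 8) | packet[7]);
int16_t zRaw = (int16_t)((packet[8] << 8) | packet[9]);
double w = wRaw / 16384.0;  // Q14 fixed point -> float
double x = xRaw / 16384.0;
double y = yRaw / 16384.0;
double z = zRaw / 16384.0;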

Alternatively,
download the code directly: https://github.com/fsteinhardt/mpu6050_serial_to_imu

Unzip it and open the MPU6050.ino project with the Arduino IDE.
However, the Wire library version shipped with the Arduino IDE is incompatible with this code; several Wire functions were renamed:
error: 'class TwoWire' has no member named 'setClock'
Two places need changing:
// Wire.setClock(400000); // 400kHz I2C clock. Comment this line if having compilation difficulties
TWBR = 24; // 400kHz I2C clock (200kHz if CPU is 8MHz)

// enable Arduino interrupt detection
Serial.println(F("Enabling interrupt detection (Arduino external interrupt 0)..."));
//attachInterrupt(digitalPinToInterrupt(INTERRUPT_PIN), dmpDataReady, RISING);
attachInterrupt(0, dmpDataReady, RISING);
This latter version's serial output became unstable after running for a long time.

3.2 pc ros part

Copy the remaining unzipped code into ~/catkin_ws/src,
or:
$ cd ~/catkin_ws/src/
$ git clone https://github.com/fsteinhardt/mpu6050_serial_to_imu
$ cd ~/catkin_ws/
$ catkin_make

If needed:
$ sudo apt-get install ros-indigo-serial

Serial is a cross-platform, simple-to-use library for using serial ports on computers. It provides a C++, object-oriented interface for interacting with RS-232-like devices on Linux and Windows.
It is not the same thing as ros-indigo-rosserial.

$ source ~/.bashrc
$ roslaunch mpu6050_serial_to_imu demo.launch
$ rostopic echo /imu/data

You can see the mpu6050 values converted into a ROS /imu message:
header:
  seq: 36872
  stamp:
    secs: 1508381861
    nsecs: 177936189
  frame_id: imu_link
orientation:
  x: -0.0668025501072
  y: -0.0957503132522
  z: -0.175129953772
  w: 0.977523017675
orientation_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
angular_velocity:
  x: 0.0
  y: -0.0266316109008
  z: -0.0266316109008
angular_velocity_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
linear_acceleration:
  x: 0.185614013672
  y: 2.42735229492
  z: 11.7942736816
linear_acceleration_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

Published Topics
* imu/data ([sensor_msgs::Imu])
The measured accelerometer, gyro and orientation values.
* imu/temperature ([sensor_msgs::Temperature])
The measured temperature in degrees Celsius.

Services:
* set_zero_orientation ([std_srvs/Empty])
This service sets the current orientation as the new zero orientation
so that from now on only the difference to this orientation is sent.

TF Published:
The resulting orientation can be published as a tf transform,
the frame names can be set using parameters.

Parameters:
* broadcast_tf (bool, default: true)
If true: publish the orientation of the IMU as a tf transform.
* tf_parent_frame_id (string, default: "imu_base")
Sets the name of the parent frame in the tf transform.
(We change this to base_link.)
* tf_frame_id (string, default: "imu_link")
Sets the name of the own frame in the tf transform.
* imu_frame_id (string, default: "imu_link")
Sets the name of the frame for imu messages.

3.3 usage

A tf from base_link to imu_link is needed (unless the robot URDF already provides it).

Sample launch:
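A hedged sample (package and parameter names follow the mpu6050_serial_to_imu README; the node type and serial port are assumptions, check them against the package):

<launch>
  <node pkg="mpu6050_serial_to_imu" type="mpu6050_serial_to_imu_node" name="mpu6050">
    <param name="port" value="/dev/ttyACM0"/>
    <param name="tf_parent_frame_id" value="base_link"/>
    <param name="tf_frame_id" value="imu_link"/>
    <param name="imu_frame_id" value="imu_link"/>
  </node>
</launch>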


4. Calibration

Put the MPU6050 down as horizontally as possible.
- Write down the offsets for that particular MPU6050 and use them with the library's "...setoffset..." functions in your projects.

Quaternion to angle: the node publishes a quaternion, angular velocities, and linear accelerations. To directly watch the left/right turning angle:

#! /usr/bin/python

import PyKDL
import rospy
from sensor_msgs.msg import Imu
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Twist
from math import *
import threading
import os
import subprocess
import yaml

def quat_to_angle(quat):
    rot = PyKDL.Rotation.Quaternion(quat.x, quat.y, quat.z, quat.w)
    return rot.GetRPY()[2]

def normalize_angle(angle):
    res = angle
    while res > pi:
        res -= 2.0*pi
    while res < -pi:
        res += 2.0*pi
    return res

class CalibrateRobot:
    def __init__(self):
        self.lock = threading.Lock()
        self.sub_imu = rospy.Subscriber('/imu/data', Imu, self.imu_cb)
        self.last_imu_angle = 0
        self.imu_angle = 0
        while not rospy.is_shutdown():
            rospy.sleep(0.1)
            rospy.loginfo("imu_angle:" + str(self.imu_angle*180/3.1415926))

    def imu_cb(self, msg):
        with self.lock:
            angle = quat_to_angle(msg.orientation)
            self.imu_angle = angle
            self.imu_time = msg.header.stamp

def main():
    rospy.init_node('scan_to_angle')
    CalibrateRobot()

if __name__ == '__main__':
    main()

---
[INFO] [WallTime: 1508384569.829275] imu_angle:-12.3451541995
[INFO] [WallTime: 1508384569.930287] imu_angle:-12.3600526937
[INFO] [WallTime: 1508384570.031151] imu_angle:-12.3667251945
[INFO] [WallTime: 1508384570.132128] imu_angle:-12.3829488555

As these logs show, the angle drifts even when the sensor is not moving; this offset can be compensated for.

Intelligent Robots (33): IMU Driver - lps

LpSensor-1.3.5 library (Ubuntu Linux 32-bit binary)
LpSensor-1.3.5 library (Ubuntu Linux 64-bit binary)
LpSensor-1.3.5 library (Ubuntu Linux armv7hf binary, suits Raspberry Pi)
LpSensor-1.3.5 library (Ubuntu Linux armv7hf binary, suits Jetson TX1)

$ tar xvzf ~/Downloads/LpSensor-1.3.5-Linux-x86-64.tar.gz
$ sudo apt-get install libbluetooth-dev   # note: not actually needed
$ sudo dpkg -i LpSensor-1.3.5-Linux-x86-64/liblpsensor-1.3.5-Linux.deb
$ dpkg -L liblpsensor

$ cd ~/catkin_ws/src
$ git clone https://github.com/larics/timesync_ros.git
$ git clone https://github.com/larics/lpms_imu.git

ERROR: unrecognized command line option '-std=c++14'

The package uses C++14 and needs g++ >= 4.9.

$ sudo add-apt-repository ppa:ubuntu-toolchain-r/test
$ sudo apt-get update
$ sudo apt-get install gcc-4.9 g++-4.9 cpp-4.9
GCC 4.9 is now installed, but it is not yet the default compiler, so it has to be set as the default.
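One standard way to do that (the original post referenced a screenshot here; these update-alternatives commands are a common equivalent, not taken from the post):

$ sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.9 60 --slave /usr/bin/g++ g++ /usr/bin/g++-4.9
$ sudo update-alternatives --config gcc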

Or we can fix it with simple symbolic links:
$ cd /usr/bin
$ sudo rm gcc g++ cpp
$ sudo ln -s gcc-4.9 gcc
$ sudo ln -s g++-4.9 g++
$ sudo ln -s cpp-4.9 cpp

Without this fix, checking the gcc version still reports the old compiler:
$ gcc --version
gcc 4.8.4

REF:
https://www.lp-research.com/support/
https://www.lp-research.com/ros-and-lp-research-imus-simple/
https://www.lp-research.com/site/wp-content/uploads/2017/02/screencast_cropped.gif

Intelligent Robots (24): SICK LMS511 Lidar

The type code has the form LMS a b c - d e f gg; see the operating instructions:
https://www.sick.com/media/dox/4/14/514/Operating_instructions_Laser_Measurement_Sensors_of_the_LMS5xx_Product_Family_en_IM0037514.PDF

Using the LMS511 lidar (12 mm measurement accuracy) with the open-source ROS gmapping algorithm: with the map resolution set to 5 cm the result is good; at 3 cm, parts of the map come out skewed. Does this mean that roughly 5 cm and above is the practical map resolution for SLAM_GMAPPING?

Under Windows there are generally two ways to connect the LMS511 to a computer: Ethernet and USB.
One piece of software must be installed: SOPAS ET.

The correct way to bring up the SICK is one of the following:
(1) If the LMS511 is set to obtain an IP automatically (DHCP) and sits behind a device with routing capability (e.g. a router), the router assigns the lidar a dynamic address; a Windows PC running SOPAS can then auto-discover the device and connect to it.
(2) If you have no router, set the LMS511 to a static IP, change the computer's IP to the same subnet, and plug the LMS511's Ethernet cable directly into the computer. You can then auto-discover it with SOPAS under Windows, or fetch scan data with the sicktoolbox driver under Ubuntu.

For the USB connection, a USB driver must be installed before SOPAS ET can recognize the LMS511. Power up the LMS511, plug in the data cable, and in Computer -> Manage -> Device Manager find the USB device marked with an exclamation mark (its name contains LMS).
Right-click it, choose Update Driver Software, and set the path to the driver folder USB_driver_en_DC0004399.
If installation fails with a "the hash for the file is not present in the specified catalog" error, reboot into the startup settings, choose "Disable driver signature enforcement", and install the unsigned driver.

The factory setting for the Ethernet interface is as follows:
IP address: 192.168.0.1
subnet mask: 255.255.255.0
TCP port: 2111
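To reach it with the factory settings, put the PC's interface on the same subnet and test the link (eth0 and the .2 host address are assumptions; pick any free address in 192.168.0.x):

$ sudo ifconfig eth0 192.168.0.2 netmask 255.255.255.0
$ ping 192.168.0.1
$ telnet 192.168.0.1 2111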

Outdoor
Infrared (905 nm)
Aperture angle: 190°
Operating range: 0 m … 80 m
Response time: ≥ 10 ms
Scanning frequency: 25 Hz / 35 Hz / 50 Hz / 75 Hz / 100 Hz
Angular resolution: 0.167° / 0.25° / 0.333° / 0.5° / 0.667°

Statistical error:
6 mm (1 m … 10 m)
8 mm (10 m … 20 m)
14 mm (20 m … 30 m)

Operating voltage: 24 V DC
Power consumption: 22 W, plus 55 W heating (typical)

FAULT (this approach did not work):
For the LMS1xx series, I thought the driver would be here:
$ sudo apt-get install ros-kinetic-lms1xx
Published topics:
scan (sensor_msgs/LaserScan)
Parameters:
~host (string, default: 192.168.1.2)
~frame_id (string, default: laser)
The command for running the driver after connecting it via Ethernet is:
$ rosrun LMS1xx LMS100 _host:=192.168.0.1

RIGHT:
After trying a number of drivers and packages for our SICK LMS511, the best one I found to work with ROS Indigo was:
https://github.com/pangfumin/lms511_laser
All I had to do was initialize a workspace, copy the folder laser_node into the src folder and then catkin_make. Hope this helps someone else.

$ catkin_make
$ rosrun laser_node laser_node
$ rosrun laser_node laser_node _ip_add:=172.16.100.203
