# Category Archives: Robotics

## Machine Learning | Introductory Knowledge of Machine Learning

Typical application areas of machine learning include:

– Economic model building
– Image processing and machine vision
– Biological DNA decoding
– Energy load, usage and price forecasting
– Automotive, aerospace and manufacturing
– Natural language processing
– …

In terms of the learning paradigm it adopts, machine learning falls into three major categories:
– Supervised learning: the training data include known outcomes (regression and classification problems).
– Unsupervised learning: the training data do not include known outcomes (clustering problems).
– Reinforcement learning: the training data do not include known outcomes, but actions can be evaluated with a reward function.
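The first two paradigms can be illustrated with a tiny NumPy sketch (a toy example of my own, not taken from any library tutorial): fitting a line to labelled data is supervised learning, while grouping unlabelled points is unsupervised learning.

```python
import numpy as np

# --- Supervised learning: the training data include known outcomes (y) ---
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                                  # known results for training
A = np.stack([x, np.ones_like(x)], axis=1)
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
print(round(slope, 3), round(intercept, 3))        # recovers slope 2, intercept 1

# --- Unsupervised learning: no outcomes, just structure in the data ---
points = np.array([0.1, 0.2, 0.15, 5.0, 5.2, 4.9])
centers = np.array([points.min(), points.max()])   # initial cluster guesses
for _ in range(10):                                # a few k-means iterations
    labels = np.abs(points[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([points[labels == k].mean() for k in (0, 1)])
print(sorted(labels.tolist()))                     # two clusters of three points
```

The regression recovers the line because the outcomes were given; the clustering finds the two groups purely from the distances between the unlabelled points.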

▲ Figure. A taxonomy of machine learning (reinforcement learning is not shown in the figure; it is generally regarded as semi-supervised) [1]

▲ Figure. A mind map of common machine-learning algorithms, click to enlarge (picture from http://machinelearningmastery.com/)

▲ Figure. How to choose a machine-learning algorithm [1]

## Reference

[1] MathWorks, Introducing Machine Learning

## An Introduction to ROS Development for the AR.Drone Quadcopter

The AR.Drone is a high-performance quadcopter platform made by the French company Parrot. Thanks to its excellent price-performance ratio and rich set of on-board sensors, it is widely used in robotics research. The platform was first unveiled at CES in 2010, originally positioned as a high-tech quadcopter for augmented-reality games. The AR.Drone is built from very light polypropylene and carbon-fibre materials, and the total weight without the hull is only 380 g. It can fly indoors or outdoors, with a different hull fitted for each mode. The second generation, AR.Drone 2.0, was released in 2012; compared with the first generation it has a higher-resolution camera and a faster processor. The newer AR.Drone 2.0 (Power Edition) also supports a GPS module, and the flight time has been extended from the original 18 minutes to 40 minutes. The rich sensor suite and excellent stability make the AR.Drone well suited to robotics research; compared with fixed-wing platforms, a quadcopter needs far less experimental space and is therefore better for indoor flight experiments.

After the installation of Ubuntu and ROS is complete, enter the following command in a terminal to install ardrone_autonomy:

sudo apt-get install ros-hydro-ardrone-autonomy


Once installed, the driver can be started with:

rosrun ardrone_autonomy ardrone_driver


# Default setting - 50Hz non-realtime update, the drone transmission rate is 200Hz
$ rosrun ardrone_autonomy ardrone_driver _realtime_navdata:=False _navdata_demo:=0

# 200Hz real-time update
$ rosrun ardrone_autonomy ardrone_driver _realtime_navdata:=True _navdata_demo:=0

# 15Hz real-time update
$ rosrun ardrone_autonomy ardrone_driver _realtime_navdata:=True _navdata_demo:=1

Here, the _realtime_navdata parameter decides whether the data are buffered before being sent, and the _navdata_demo parameter decides whether the data are sent at 15 Hz or 200 Hz.

Once the driver is running successfully, it acts as a node that publishes and subscribes topics. The topics fall into three groups; the first two are data outputs and the last one is the command input:

1) Legacy navigation data, on ardrone/navdata, including the current state, rotation angles, velocities and accelerations:
– header: ROS message header
– batteryPercent: the remaining charge of the drone's battery (%)
– state: the drone's current state:
  – 0: Unknown
  – 1: Inited
  – 2: Landed
  – 3, 7: Flying
  – 4: Hovering
  – 5: Test (?)
  – 6: Taking off
  – 8: Landing
  – 9: Looping (?)
– rotX: left/right tilt in degrees (rotation about the X axis), i.e. roll
– rotY: forward/backward tilt in degrees (rotation about the Y axis), i.e. pitch
– rotZ: orientation in degrees (rotation about the Z axis), i.e. yaw
– magX, magY, magZ: magnetometer readings (AR.Drone 2.0 only) (TBA: convention)
– pressure: pressure sensed by the drone's barometer (AR.Drone 2.0 only) (Pa)
– temp: temperature sensed by the drone's sensor (AR.Drone 2.0 only) (TBA: unit)
– wind_speed: estimated wind speed (AR.Drone 2.0 only) (TBA: unit)
– wind_angle: estimated wind angle (AR.Drone 2.0 only) (TBA: unit)
– wind_comp_angle: estimated wind-angle compensation (AR.Drone 2.0 only) (TBA: unit)
– altd: estimated altitude (mm)
– motor1..4: motor PWM values
– vx, vy, vz: linear velocity (mm/s) (TBA: convention)
– ax, ay, az: linear acceleration (g) (TBA: convention)
– tm: timestamp of the data returned by the drone, as the number of microseconds since the drone booted up.
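As a quick illustration of the state field, the integer codes above can be turned into readable names with a small lookup table (a hypothetical helper of mine, not part of the driver):

```python
# Hypothetical helper: map the ardrone/navdata `state` codes to readable names.
AR_DRONE_STATES = {
    0: "Unknown",
    1: "Inited",
    2: "Landed",
    3: "Flying",
    4: "Hovering",
    5: "Test",
    6: "Taking off",
    7: "Flying",      # 3 and 7 both mean Flying
    8: "Landing",
    9: "Looping",
}

def state_name(code):
    """Return a readable name for a navdata state code."""
    return AR_DRONE_STATES.get(code, "Invalid")

print(state_name(4))   # Hovering
print(state_name(6))   # Taking off
```

A table like this is handy when logging navdata, since the raw integers are hard to read back later.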
2) Cameras: the video streams of the front camera and the bottom camera are on ardrone/front/image_raw and ardrone/bottom/image_raw respectively, transmitted through the standard ROS camera interface. The cameras can also be calibrated via the two configuration files ardrone_front.yaml and ardrone_bottom.yaml;

3) Flight control (input), on cmd_vel. This topic accepts geometry_msgs::Twist messages as input, which control the drone's velocity along x, y and z as well as its yaw rate:
– -linear.x: move backward; +linear.x: move forward
– -linear.y: move right; +linear.y: move left
– -linear.z: move down; +linear.z: move up
– -angular.z: turn left; +angular.z: turn right

When developing your own node, you can communicate with the AR.Drone in both directions simply by publishing or subscribing to the topics above. I will not repeat the general ROS programming workflow here, but I recommend the C++ API (the alternative is the Python interface, which I feel is not yet well supported). That is all for now; if I receive further development questions, I will extend this post. Finally, here is the address of the ardrone_autonomy online manual.

## Two-Wheel Self-Balancing Robot | Research Plan

I have long been interested in self-balancing vehicles; I first wanted to play with balancing algorithms back when I was following Segway (around 2010). At the time, however, I had neither spare time nor design skills, so the idea was shelved until now. When I returned to China this May, the idea resurfaced, so I picked a well-reviewed balancing car on Taobao for less than 400 RMB in total. But back in England I slacked off again: apart from occasionally playing with it as a toy, I never studied it in depth. That clearly will not do, yet a serious study takes real effort, so here I resolve to work out the whole picture, causes and effects alike.

Although I have never formally built a balancing car, I do have some understanding of the system. The research problems involved are roughly as follows:

● Hardware design: motor selection, motor drivers, speed-feedback sensors, MEMS inertial sensors, power supply, battery, MCU and communication. Hardware design is tedious: the principles are not complicated, but debugging takes considerable effort, so I will simply buy an off-the-shelf platform;
● System modelling: the model of a two-wheel self-balancing robot should be the same as that of a first-order inverted pendulum. A system model is not strictly necessary, but it reveals the system's characteristics and enables software simulation, which makes controller tuning easier. The final model can be either an s-domain transfer function or a time-domain state-space model;
● Sensor data processing: mainly filtering and analysing the inertial-sensor data. The filtering aims to reject the system's dynamic disturbances and the sensors' dynamic noise; the main algorithms should be the complementary filter and the Kalman filter. The complementary filter is simple to implement, while the Kalman filter, though more complex, can be combined with the state-space model to design a better-performing LQG controller;
● Controller design: the usual choices are a PID controller, or an LQR (Linear Quadratic Regulator) or LQG (Linear-Quadratic-Gaussian) controller in state space;
● Task design and real-time guarantees: the software has real-time requirements, so I lean towards the free FreeRTOS real-time operating system. The operating system uses static-priority scheduling, with priorities based on task periods. The main tasks in the system are: sensor sampling, sensor filtering and analysis, control, motor-speed modulation and communication.

This study has three goals for me:
1) to study the filtering and processing of motion-sensor data;
2) to study system modelling and controller design;
3) to study how to use FreeRTOS and to verify the related real-time theory.

Since I have not yet read the relevant literature, the plan above may contain omissions and mistakes; I will correct them as I find them.

## Configure OpenCV-Python development environment in Windows

In the old days, it was relatively complex to configure an OpenCV development environment: you had to install an IDE, e.g.
Microsoft Visual Studio, and set many project configurations, such as the include folders and where the IDE could find the third-party OpenCV library. But after Python became one of the mainstream choices, things changed. Python has proven to be an easier way to program and test computer vision programs than C++. In this article, I will show you the simple steps to configure an OpenCV-Python environment in Windows.

1. First, download and install Python 2.7 and NumPy. Install them to their default locations to avoid any further issues. The reason to use Python 2.7 instead of Python 3.x is that Python 2.7 is the best supported by most third-party libraries, while Python 3 is still too new for all the scientific libraries to support it fully.
2. Download the latest OpenCV release from SourceForge and extract it into a system folder, e.g., C:\Program Files\opencv.
3. Go to $(opencv folder)\build\python\2.7\x86 or \x64, depending on your machine type, and copy cv2.pyd to $(python folder)\lib\site-packages.

That’s all you have to do: really easy, and it will only cost you about 10 minutes. Now let’s open the Python IDLE and test whether the environment is working:

import numpy
import cv2

print cv2.__version__


Run the code, and if the version is printed with no error message, then congratulations: you have succeeded. If you did get an error message, check that the versions of both Python and OpenCV are correct. If you have multiple versions of Python installed, check that you launched the right IDLE.

Before we try more complicated vision algorithms, let’s simply connect a USB camera first and try to get the video stream from the camera and display it on the screen:

import cv2

# Open the first USB camera
cap = cv2.VideoCapture(0)

while True:
    # Grab one frame from the camera
    ret, frameInput = cap.read()
    if not ret:
        break

    cv2.imshow('camera', frameInput)

    # Exit when the ESC key is pressed
    if cv2.waitKey(33) == 27:
        break

cap.release()
cv2.destroyAllWindows()


What you should get is a pop-up window displaying the live image captured from the camera. You don’t have to understand this code for now; the point is that once you have the live video stream from the USB camera, further steps can process, modify and save it. More details and examples will be discussed in my future blog posts.
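In OpenCV-Python a frame is just a NumPy array, so processing can start with plain array operations. As a small illustration (my own sketch using the standard BT.601 luma weights, not code from this article), a BGR frame can be converted to grayscale by hand:

```python
import numpy as np

def bgr_to_gray(frame):
    """Convert a BGR image array to 8-bit grayscale using BT.601 weights."""
    b = frame[:, :, 0].astype(np.float64)
    g = frame[:, :, 1].astype(np.float64)
    r = frame[:, :, 2].astype(np.float64)
    gray = 0.114 * b + 0.587 * g + 0.299 * r   # standard luma weighting
    return gray.astype(np.uint8)

# A tiny synthetic 1x2 "frame": one pure-blue pixel and one white pixel
frame = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.uint8)
print(bgr_to_gray(frame))   # the white pixel maps to 255
```

In practice you would call cv2.cvtColor for this, but doing it once by hand makes it clear that a frame is ordinary array data.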

## Reference
[1] OpenCV-Python Tutorials, Install OpenCV-Python in Windows, available at: http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_setup/py_setup_in_windows/py_setup_in_windows.html

## Dyson Released its New Vacuum Cleaner Robot: 360 Eye

This state-of-the-art vacuum cleaner robot, 360 Eye, was released by Dyson a few months ago. From the official website, it can be seen that the robot uses the V-SLAM technique, which dramatically increases the computational overhead. To handle the computational load, I think the processor of this robot should be at least a Cortex-A8 running at 1 GHz, or an ARM-DSP SoC such as the TI DaVinci DM64xx. I think the innovative part that makes this robot unique is the use of a tank-like structure and a 360-degree camera for visual navigation:

Figure. 360 Eye Vacuum Robot by Dyson (picture from IEEE Spectrum)

The tank structure makes this robot robust to small-scale obstacles such as the carpet on your floor and small rising edges. As for the idea of using a 360-degree camera: well, I have to say it is a genius concept. A full-view camera promises more contrast features and corners for the robot to navigate by, while it also reduces the possibility that the robot's view is blocked by a large obstacle such as a human.

There is no doubt that this is the most advanced vacuum cleaner robot so far, and only the newest micro-processor technology can support such an aggressive design. The only problem is that the cost of this robot is also considerable; its selling price is more than 900 dollars.

Last thing that I am wondering about is, what if this robot encounters a mirror?

(For Chinese users, please visit Youku to see the video: http://v.youku.com/v_show/id_XNzc0MTMzNTA0.html)

## AR.Drone Position Servoing and Visual Tracking

A demonstration of my Master’s Thesis: Visual-Based Localization and Tracking of a UGV with a Quadcopter. In this project, a visual tracking framework is designed to track the UGV with an AR.Drone quadcopter from Parrot. The system utilizes a centralized control by a ground station which is running ROS and Ubuntu 12.04 LTS.

The first two experiments were conducted with the support of a global vision system built from a low-cost web camera, while in the last experiment the quadcopter relies on IMU data alone for navigation. The images were captured from the bottom camera of the AR.Drone and processed with OpenCV. Four PID controllers were designed to control the motion of the quadcopter so that it can hold a position or track a trajectory.

The next step is to use such a robot system for factory and infrastructure inspection. But since I had to return my quadcopter to the department, it is difficult for me to implement this idea at the moment. I hope I can find the chance to get another AR.Drone soon.

(For Chinese users, please visit Youku: http://v.youku.com/v_show/id_XNzczOTg0MDY0.html)

## Message Callbacks in ROS: ros::spin() and ros::spinOnce()

I. For high-rate messages, take care to size the message queue and time spinOnce() appropriately. For example, if messages arrive at 100 Hz while spinOnce() runs at 10 Hz, the message queue must reserve space for at least 10 messages.

II. A user's own periodic task is best run alongside spinOnce() in the main loop. Even if the task periodically processes incoming data, e.g. Kalman-filtering the received IMU data, it is not advisable to place it directly in the callback: because of the uncertainty of message reception, the timing of callback execution cannot be guaranteed to be stable.

// Example code
ros::Rate r(100);

while (ros::ok())
{
    libusb_handle_events_timeout(...); // Handle USB events
    ros::spinOnce();                   // Handle ROS message callbacks
    r.sleep();
}


III. Finally, a note on integrating ROS into other program frameworks. Some graphics frameworks wrap main() themselves, so you need to find a reasonable place to call ros::spinOnce(). With OpenGL (using GLUT), for example, one approach is to call it periodically from a timer callback:

// Example code
void timerCb(int value) {
    ros::spinOnce();
    glutTimerFunc(10, timerCb, 0); // GLUT timers fire only once, so re-arm here
}

glutTimerFunc(10, timerCb, 0);
glutMainLoop(); // Never returns
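The buffer-size arithmetic in point I can be checked with a quick thought experiment in plain Python (not ROS: a hypothetical 100 Hz producer whose queue is fully drained by each 10 Hz spinOnce() call):

```python
from collections import deque

def simulate(arrival_hz, spin_hz, queue_size, duration_s=1.0):
    """Count messages dropped when a bounded queue is drained periodically."""
    queue = deque()
    dropped = 0
    ticks = int(duration_s * arrival_hz)
    spin_every = arrival_hz // spin_hz           # arrivals between spinOnce() calls
    for t in range(ticks):
        if len(queue) >= queue_size:             # queue full: message is lost
            dropped += 1
        else:
            queue.append(t)
        if (t + 1) % spin_every == 0:
            queue.clear()                        # spinOnce() handles everything queued
    return dropped

print(simulate(100, 10, queue_size=10))   # queue holds one spin period: no drops
print(simulate(100, 10, queue_size=5))    # too small: half the messages are lost
```

With a queue of 10 nothing is lost, while a queue of 5 silently drops half of the 100 messages per second, which is exactly the sizing rule stated above.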


## ROS Hydro Installation Tutorial

ROS (Robot Operating System) is currently the leading robot operating system and is widely used for robot control and simulation. I had known about it for a long time, but only recently started using it in earnest because my research requires it. ROS is currently maintained by Willow Garage; the latest release is ROS Hydro, and the best-supported platform is Ubuntu 12.04.

Installing ROS Hydro is not complicated; following the steps below you should not run into any problems, and the installation takes about 20 to 30 minutes. The complete steps are as follows:

## 1. Configure the installation environment

### 1.3 Modify the sources.list file

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu precise main" > /etc/apt/sources.list.d/ros-latest.list'


### 1.4 Set up the keys

wget https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -O - | sudo apt-key add -


## 2. Install ROS Hydro

### 2.1 Update the package sources with apt-get update

sudo apt-get update


### 2.2 Install the full ROS Hydro desktop

sudo apt-get install ros-hydro-desktop-full


## 3. Install dependencies and tools

### 3.1 Initialize rosdep

rosdep is the ROS tool for resolving package dependencies:

sudo rosdep init
rosdep update


### 3.2 Set up environment variables

echo "source /opt/ros/hydro/setup.bash" >> ~/.bashrc


To make the settings take effect in the current terminal immediately:

source /opt/ros/hydro/setup.bash


### 3.3 Install rosinstall

rosinstall is the ROS tool for downloading collections of packages; install it with:

sudo apt-get install python-rosinstall


Finally, you can verify the installation by starting the ROS master:

roscore


## Reference

[1] ROS.org, Ubuntu install of ROS Hydro, http://wiki.ros.org/hydro/Installation/Ubuntu