Category Archives: Robotics

The Limitations of Classical PID Controller and Its Advanced Derivations

Since Norbert Wiener founded cybernetics in the late 1940s, control theory has evolved for more than 60 years and is still full of challenges and opportunities. The most important principle of control theory, in my opinion, is the feedback mechanism. Without feedback and a closed loop, almost no control algorithm or technique could be implemented. The idea of feedback is that by comparing the reference input with the actual output, an error signal can be obtained, which the controller then uses to track and eliminate the difference between input and output. Apart from Watt's steam-engine governor, one could say that the first formal application of (negative) feedback was the amplifier invented by H.S. Black. It was an ingenious idea when it came out in 1927 and proved to be an extremely useful way to solve electronic and control problems. The idea of output feedback has since been extended to state feedback and error feedback to achieve state control and estimation in more advanced control techniques.

Classical control is the foundation of control theory, and it concentrates on analysing the stability and performance of a controlled plant. However, classical control theory only covers linear SISO (single-input, single-output) systems. Although traditional techniques such as the PID controller are still widely used in industry, they cannot handle more complex engineering scenarios in fields such as aerospace, chemistry and biology. Another problem of classical control is that all parameters are designed and tuned for the nominal system model, which leaves the system vulnerable to disturbances and parameter variations.

To overcome these limitations of the classical PID controller, more advanced approaches have since been derived. To control a MIMO system with the classical approach, one would have to divide the system into separate loops and control each loop independently. However, if the system inputs and outputs are coupled with each other, the system cannot be decoupled and this method is no longer practical. This is where the state-space method comes in, which overcomes the limitation of classical control by using state variables. The advantage of state-space is that it can be represented by matrices and is therefore very computer-friendly. The state-space representation is defined in the time domain instead of the frequency domain, and every state can carry some physical meaning, which gives clues about what is happening inside the controlled plant. One milestone that made the state-space method far more practical is the invention of the Kalman filter. The Kalman filter uses a series of past measurements in the presence of noise to estimate the current state of the system. It can work as a state estimator, or simply as a special filter that uses the physical system model to remove process and measurement noise.
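To make the idea concrete, here is a minimal one-dimensional Kalman filter sketch of my own (a random-walk state model; the process and measurement noise values q and r below are assumed for illustration, not taken from any particular system):

# A minimal 1-D Kalman filter sketch: estimate a scalar state from
# noisy measurements. q and r are assumed noise covariances.
def kalman_1d(measurements, q=1e-4, r=0.1, x0=0.0, p0=1.0):
    x, p = x0, p0                  # state estimate and its covariance
    estimates = []
    for z in measurements:
        p = p + q                  # predict: covariance grows by process noise
        k = p / (p + r)            # Kalman gain: how much to trust the measurement
        x = x + k * (z - x)        # update the estimate towards the measurement
        p = (1.0 - k) * p          # shrink the covariance after the update
        estimates.append(x)
    return estimates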

Optimal control methods such as MPC and LQR are another descendant of classical control. In most circumstances there is more than one control input that can drive the system to work properly, but what we need is the optimal one. Optimal control transforms the control problem into an optimisation problem that minimises an objective function to get the best outcome. Another advantage of optimal control is that it can take constraints into consideration. One defect of the PID controller is that it cannot handle system constraints such as actuator saturation or output limits; in the optimal control setting, designing a controller subject to constraints is feasible.
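As an illustration of how compact this design procedure can be, here is a sketch of an LQR design in Python, assuming NumPy and SciPy are available; the double-integrator plant and the weights Q and R are made-up examples, not from any referenced system:

import numpy as np
from scipy.linalg import solve_continuous_are

# Toy double-integrator plant: state = [position, velocity], input = force.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weights in the objective function
R = np.array([[1.0]])      # control-effort weight

# Solve the continuous-time algebraic Riccati equation, then form the
# optimal state-feedback gain for u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T.dot(P))
print(K)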

It is also known that no system is constant: some parameters are likely to vary with time or with the working conditions. In classical control, the controller is designed only for the nominal system model and thus may lose performance, or even become unstable, under system changes and uncertainties. In that respect, adaptive control or robust control may be more applicable. Both are designed to cope with uncertainties. The difference is that adaptive control identifies the system model and changes the controller parameters in real time, whereas robust control fixes its parameters once deployed to the plant. Because an adaptive controller has to re-estimate the system model every few periods, it needs much more computational time; moreover, since its control parameters keep changing, it may be difficult to prove its stability. On the other hand, the gains of a robust controller are designed before it is applied to the system, so no additional calculation is needed during operation. Since a robust controller is optimised for the worst case rather than the nominal one, its nominal performance may not be as good as that of other controllers. But since real control problems are never ideal, it is meaningful to take uncertainties and disturbances into account in the system model.

More advanced control techniques such as neural-network control and expert control are being discussed today. In my opinion, these new approaches have the potential to become the next generation of control theory. With the development of computer science, it is now possible to model extremely complex networks. Such a controller could, in effect, store all the possible system states and their corresponding solutions in a database and, at each step, search for the best solution according to the current system data. New techniques such as machine learning can also be absorbed into the controller, making it flexible enough to handle different control problems with the same configuration.

However, no matter how powerful the control method is, there is rarely a situation where we do not need to make trade-offs. As human beings, we always need to make decisions and balance gains against costs. Being too greedy is like giving an infinite gain to a helicopter: it may work at the beginning, but it will crash as soon as there is any disturbance. So push yourself, while keeping in mind that you have limitations. Take it easy, be adaptive to the environment, and always try to find the optimal solution to your life.


Machine Learning | Machine Learning Basics

I have recently been trying to solve linear regression and trend prediction problems with machine learning methods, so I am writing down my initial understanding of machine learning here.

Machine Learning studies how to give computers thinking and analytical abilities comparable to those of humans. It draws mainly on cognitive science, computer science, probability and statistics, and information and decision theory. Typical machine learning applications include photo classification, spam detection and natural language processing. The recently famous Go AI AlphaGo used deep neural networks to learn from a large number of game records, and thereby reached the level of top Go players.

Application areas of machine learning include:
– Economic modelling
– Image processing and machine vision
– Biological DNA decoding
– Energy load, usage and price forecasting
– Automotive, aerospace and manufacturing
– Natural language processing
– … …

In terms of how they learn, Machine Learning methods fall into three broad classes:
– Supervised Learning: the training data contain known outcomes (regression and classification problems).
– Unsupervised Learning: the training data contain no known outcomes (clustering problems).
– Reinforcement Learning: the training data contain no known outcomes, but the outcomes can be evaluated with a reward function.

▲ Figure: Categories of machine learning (reinforcement learning is not shown in the figure; it is often considered semi-supervised) [1]

Supervised learning is used when the data already carry known labels; in other words, each training sample already comes with its corresponding output. For example, given a dataset pairing tumour size with whether the patient has cancer, the cancer outcome for every tumour-size value is known. Or suppose we need to train a neural network to judge whether the subject of a picture is a cat or a dog: in the training set, every picture carries a known label, 'this is a cat' or 'this is a dog'. By learning from and generalising over the known data, the model gains a certain predictive power on new data.
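As a tiny supervised-learning illustration (a sketch assuming scikit-learn is installed; the numbers are made up), fitting a linear regression to labelled data and predicting on a new input takes only a few lines:

import numpy as np
from sklearn.linear_model import LinearRegression

# Labelled training data: inputs X with their known outputs y.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 3.9, 6.2, 8.0])

model = LinearRegression()
model.fit(X, y)                  # learn from the labelled examples
print(model.predict([[5.0]]))    # predict the output for an unseen input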

Unsupervised learning, by contrast, works on samples with no pre-existing labels; instead, it searches for patterns and clusters similar elements together. Typical clustering problems include gene sequence analysis, market research and object recognition.
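Correspondingly, here is a minimal clustering sketch (again assuming scikit-learn; the points are made up) that groups unlabelled 2-D points with k-means:

import numpy as np
from sklearn.cluster import KMeans

# Unlabelled 2-D points forming two loose groups.
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [5.0, 5.2], [5.1, 4.8], [4.9, 5.0]])

kmeans = KMeans(n_clusters=2).fit(X)
print(kmeans.labels_)            # the cluster index assigned to each point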

Commonly used machine learning algorithms today include:

▲ Figure: Mind map of common Machine Learning algorithms (picture from http://machinelearningmastery.com/)

Besides the mainstream clustering, regression and Bayesian methods, Deep Learning based on neural networks is the hottest topic right now. There are many machine learning methods; to choose among them, you must first determine which kind of machine learning problem you are facing. Once the problem class is determined, a concrete method can be chosen from the figure below:


▲ Figure: Choosing a Machine Learning algorithm [1]

Reference

[1] MathWorks, Introducing Machine Learning.

An Introduction to ROS Development for the AR.Drone Quadcopter

Some people have asked how to develop on the AR.Drone platform, so here is a brief introduction to developing for the AR.Drone under Linux and ROS (Robot Operating System). I originally planned a more complete development guide, but the work was set aside for a while and I can no longer remember every detail, so I will only briefly describe the tools and methods. This workflow was put together during my master's thesis project, Vision-Based Localization and Tracking of a UGV with a Quadcopter; an introduction to that project and a demo video can be found here.

The AR.Drone is a high-performance quadcopter platform made by the French company Parrot. Thanks to its excellent price/performance ratio and rich set of on-board sensors, it has been widely used in robotics research. The platform was first unveiled at CES in 2010, initially positioned as a high-tech quadcopter for augmented-reality games. The AR.Drone is built from very light polypropylene and carbon fibre, weighing only 380 g without the hull. It can fly indoors or outdoors, with a different hull for each mode. The second generation, the AR.Drone 2.0, was released in 2012 with a higher-resolution camera and a faster processor. The newer AR.Drone 2.0 (Power Edition) also supports a GPS module, and its flight time has been extended from the original 18 minutes to 40 minutes. Its rich sensors and excellent stability make the AR.Drone very suitable for robotics research; compared with fixed-wing platforms, a quadcopter needs far less experimental space and is therefore better suited to indoor flight experiments.

Figure 1. The Parrot AR.Drone 2.0 quadcopter

The whole system uses a server-client architecture: the AR.Drone acts as the server and exposes a WiFi interface to which the user connects. The official SDK (the AR.Drone SDK) provides interfaces for flight attitude data, the video streams and user control commands; based on this SDK, further functionality such as path planning, navigation and image recognition can be added in C or C++. I have not studied this SDK in detail, but commands and data appear to be transmitted as AT commands over a socket connection; see the developer manual for details.

In my project, I did not use the AR.Drone SDK directly; instead I used ardrone_autonomy, a wrapper library built on top of the AR.Drone SDK. ardrone_autonomy is implemented on ROS, so it requires a Linux Ubuntu environment with ROS. The installation of ROS Hydro is described in another of my blog posts.

Once Ubuntu and ROS are installed, install ardrone_autonomy by entering the following command in a terminal:

sudo apt-get install ros-hydro-ardrone-autonomy

If you are using a ROS version other than Hydro, replace -hydro- with the corresponding version name: -indigo- or -groovy-. After installation, run the driver via rosrun:

rosrun ardrone_autonomy ardrone_driver

Before running it, make sure the WiFi connection to the drone is working properly. If the connection fails, a corresponding message will be printed on the command line. The driver also accepts configuration parameters when launched:

# Default Setting - 50Hz non-realtime update, the drone transmission rate is 200Hz
$ rosrun ardrone_autonomy ardrone_driver _realtime_navdata:=False  _navdata_demo:=0

# 200Hz real-time update
$ rosrun ardrone_autonomy ardrone_driver _realtime_navdata:=True _navdata_demo:=0

# 15Hz real-time update
$ rosrun ardrone_autonomy ardrone_driver _realtime_navdata:=True _navdata_demo:=1

Here the _realtime_navdata parameter determines whether the data is buffered before being sent, and the _navdata_demo parameter determines whether the transmission rate is 15Hz or 200Hz. Once running, the driver acts as a ROS node that publishes and subscribes to topics. The topics fall into three groups: the first two are data outputs, and the last one is the command input:

1) Legacy navigation data, published on ardrone/navdata, including the current state, rotation angles, velocities, accelerations, etc.:

  • header: ROS message header

  • batteryPercent: The remaining charge of the drone's battery (%)

  • state: The drone's current state:
    • 0: Unknown
    • 1: Inited
    • 2: Landed
    • 3, 7: Flying
    • 4: Hovering
    • 5: Test (?)
    • 6: Taking off
    • 8: Landing
    • 9: Looping (?)
  • rotX: Left/right tilt in degrees (rotation about the X axis), i.e. roll

  • rotY: Forward/backward tilt in degrees (rotation about the Y axis), i.e. pitch

  • rotZ: Orientation in degrees (rotation about the Z axis), i.e. yaw

  • magX, magY, magZ: Magnetometer readings (AR.Drone 2.0 only) (TBA: Convention)

  • pressure: Pressure sensed by the drone's barometer (AR.Drone 2.0 only) (Pa)

  • temp: Temperature sensed by the drone's sensor (AR.Drone 2.0 only) (TBA: Unit)

  • wind_speed: Estimated wind speed (AR.Drone 2.0 only) (TBA: Unit)

  • wind_angle: Estimated wind angle (AR.Drone 2.0 only) (TBA: Unit)

  • wind_comp_angle: Estimated wind angle compensation (AR.Drone 2.0 only) (TBA: Unit)

  • altd: Estimated altitude (mm)

  • motor1..4: Motor PWM values

  • vx, vy, vz: Linear velocity (mm/s) [TBA: Convention]

  • ax, ay, az: Linear acceleration (g) [TBA: Convention]

  • tm: Timestamp of the data, in microseconds since the drone booted up

2) Cameras: the video streams of the front camera and the bottom camera are published on ardrone/front/image_raw and ardrone/bottom/image_raw respectively, using the standard ROS camera interface. The cameras can also be calibrated through the two configuration files ardrone_front.yaml and ardrone_bottom.yaml;

3) Flight control (input), on cmd_vel: this topic accepts geometry_msgs::Twist messages as input, which control the drone's velocity along x, y and z as well as its yaw rate:

-linear.x: move backward
+linear.x: move forward
-linear.y: move right
+linear.y: move left
-linear.z: move down
+linear.z: move up

-angular.z: turn left
+angular.z: turn right

When developing your own node, you can communicate with the AR.Drone in both directions simply by publishing or subscribing to the topics listed above. I will not go into ROS programming in detail here, but I recommend the C++ API (the other option is the Python interface, whose support I find less mature).
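As a minimal sketch of this pattern (my own example in Python with rospy; the node name, forward speed and loop rate are arbitrary choices, and it assumes the drone is already airborne since take-off and landing are not covered above):

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist
from ardrone_autonomy.msg import Navdata

def navdata_callback(msg):
    # Log the battery level and flight state received on ardrone/navdata.
    rospy.loginfo('battery: %.0f%%, state: %d', msg.batteryPercent, msg.state)

rospy.init_node('ardrone_example')
rospy.Subscriber('ardrone/navdata', Navdata, navdata_callback)
vel_pub = rospy.Publisher('cmd_vel', Twist)

cmd = Twist()
cmd.linear.x = 0.1            # +linear.x: drift slowly forward
rate = rospy.Rate(10)         # publish the command at 10 Hz
while not rospy.is_shutdown():
    vel_pub.publish(cmd)
    rate.sleep()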

That is all for now; if I receive further development questions, I will add to this post. Finally, here is the link to the online ardrone_autonomy manual.

Two-Wheel Self-Balancing Robot | Research Plan

I have long been interested in self-balancing vehicles; I first wanted to play with self-balancing algorithms back when I was following the Segway (around 2010). At the time, however, I had neither spare time nor design skills, so the idea was shelved. When I went back to China in May this year, the idea resurfaced, and I picked a well-reviewed balancing robot on Taobao for less than 400 RMB in total. But after returning to the UK I slacked off again; apart from occasionally playing with it as a toy, I never studied it in depth. That clearly will not do, yet a serious study requires real effort, so I hereby resolve to work out all the whys and wherefores.

Although I have never formally built a balancing robot before, I do have some understanding of the system. The research problems involved are roughly as follows:

● Hardware design: motor selection, motor drivers, speed feedback sensors, MEMS inertial sensors, power supply, battery, MCU, and communication. Hardware design is tedious: the principles are not complicated, but the debugging takes a lot of effort, so I bought an off-the-shelf kit directly;

● System modelling: the two-wheel self-balancing model should be the same as that of a first-order inverted pendulum. A system model is not strictly necessary, but it reveals the system's characteristics and enables software simulation, which makes tuning the controller easier. The final model can be either an s-domain transfer function or a time-domain state-space model;

● Sensor data processing: mainly filtering and analysing the inertial sensor data. The purpose of filtering is to reject the system's dynamic disturbances and the sensors' dynamic noise; the main algorithms should be the Complementary Filter and the Kalman Filter. The complementary filter is simple to implement, while the Kalman filter, though more complex, can be combined with the state-space model to design a better-performing LQG controller (a minimal complementary-filter sketch follows this list);

● Controller design: commonly a PID controller, or an LQR (Linear Quadratic Regulator) or LQG (Linear-Quadratic-Gaussian) controller in state space;

● Task design and real-time guarantees: the software has real-time requirements, and I lean towards using the free FreeRTOS real-time operating system, with static-priority scheduling where priorities are assigned by task period. The main tasks in the system are: sensor sampling, sensor filtering and analysis, control, motor speed modulation, and communication.
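To make the sensor-fusion item above concrete, here is a minimal complementary filter sketch (my own illustration; the blend factor alpha = 0.98 is an assumed typical value, not a tuned one):

def complementary_filter(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    # The integrated gyro rate is accurate over short horizons but drifts;
    # the accelerometer angle is drift-free but noisy during motion.
    # Blending the two keeps the best of both.
    return alpha * (prev_angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle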

This project has three goals for me:
1) to study the filtering and processing of motion sensor data;
2) to study system modelling and controller design;
3) to learn how to use FreeRTOS and to verify the related real-time scheduling theory.

Since I have not yet read the relevant literature, the plan above may contain omissions and mistakes; I will revise it as I discover them.

Configure an OpenCV-Python Development Environment in Windows

In the old days, it was relatively complex to configure an OpenCV development environment. You had to install an IDE, e.g. Microsoft Visual Studio, and set many project configurations, such as the include folders, the library folders and where the IDE could find the third-party OpenCV library. But after Python became one of the mainstream languages, things changed: Python has proven to be an easier way to program and test computer vision code than C++. In this article, I will show you the simple steps to configure an OpenCV-Python-ready environment in Windows.

1. First, download and install Python 2.7 and NumPy. Install them to their default locations in order to avoid further issues. The reason to use Python 2.7 instead of Python 3.x is that Python 2.7 still has the best support from third-party libraries, while Python 3 is too new for all the scientific libraries to support it fully.

2. Download the latest OpenCV release from SourceForge and extract it into a system folder, e.g., C:\Program Files\opencv.

3. Go to $(opencv folder)\build\python\2.7\x86 or \x64, depending on your machine type, and copy cv2.pyd to $(python folder)\lib\site-packages.

That's all you have to do; it is really easy and will only cost you about 10 minutes. Now let's open the Python IDLE and test whether the environment is working:

import numpy
import cv2

print cv2.__version__   # prints the OpenCV version if the bindings are found

Run the code, and if the version is printed without any error message, then congratulations: you have succeeded. If you did get an error message, check that the versions of both Python and OpenCV are correct. If you have multiple versions of Python installed, also check that you launched the right version of IDLE.

Before we try more complicated vision algorithms, let's first simply connect a USB camera, grab its video stream and display it on the screen:

import cv2

cap = cv2.VideoCapture(0)  # open the first (default) camera

while True:
    _, frameInput = cap.read()        # grab one frame from the camera
    cv2.imshow('camera', frameInput)  # show it in a pop-up window

    if cv2.waitKey(33) == 27:         # wait ~33 ms (about 30 fps); ESC exits
        break

cap.release()
cv2.destroyAllWindows()

What you should get is a pop-up window displaying the live image captured by the camera. You don't have to understand this code right now; the point is that once you have the live video stream from the USB camera, further steps can process, modify and save it. More details and examples will be discussed in my future blog posts.
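For instance, here is a small sketch of my own (the window title, key bindings and output file name are arbitrary) that converts each frame to grayscale and saves a snapshot on demand:

import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:                                     # stop if no frame arrives
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # process: grayscale
    cv2.imshow('processed', gray)

    key = cv2.waitKey(33)
    if key == ord('s'):
        cv2.imwrite('snapshot.png', gray)           # save the current frame
    elif key == 27:                                 # ESC quits
        break

cap.release()
cv2.destroyAllWindows()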
