Category Archives: Work Reports

Hello, 2017!

Another new year. Time rushes on, never pausing to let us look back before sending us on our way again.

This year I published around ten blog posts:

  • The new "C Language in Depth" series grew out of the misuses of C that I kept encountering while working as a teaching assistant on an embedded-systems course. I consider C and C++ the two most important languages in robotics, so I hope to add more content in this area.
  • Several other posts covered the newly released Raspberry Pi 3. Besides the Pi 3, I also picked up a number of Raspberry Pi Zeros this year. Because the Zero is hard to get hold of, I stockpiled five or six at once. I plan to use them as nodes in my smart-home system, but the exact application is still unclear, so I have not written them up yet.

Plans for next year:

  • The smart-home system I promised is still not fully built; I will publish it once everything is finished. Completed so far: the central server deployment, the NAS, the media centre, and one sensor node (which has been reporting temperature and humidity data for half a year). The CO2 and PM2.5 sensors and the wireless networking modules have been purchased but not yet brought up. The other difficulty is sending control commands down to the nodes and presenting sensor data and system parameters; I want to build this on a browser/server architecture (Flask + socket.io, or Node.js + Ajax). I have no web-programming background, and real-time and asynchronous issues are involved, so I have not had time to solve this yet.
  • More topics on Robotics. My lab's theme is still robotics; last year I drifted a little (towards embedded systems), and this year I want to return to the main subject. The topics I plan to focus on: reinforcement learning, probabilistic decision-making, machine vision, ROS, and machine (deep) learning.
  • I have been living abroad for a long time now, and for a while I have considered switching the site to English. I am still torn: much of the content is useful to readers in China, so this year I will keep writing bilingually.
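To make the smart-home data path above concrete, here is a minimal sketch of the sensor payload a node might push to the central server, and the "latest reading per node" view a Flask + socket.io dashboard would broadcast to browsers. The field names are my own invention, not the actual protocol of the system described above.

```python
import json
import time

def make_reading(node_id, temperature_c, humidity_pct):
    """Package one sensor sample as a JSON payload.
    Field names are illustrative, not the real schema."""
    return json.dumps({
        "node": node_id,
        "temperature_c": temperature_c,
        "humidity_pct": humidity_pct,
        "timestamp": int(time.time()),
    })

def latest_by_node(readings):
    """Reduce a stream of raw JSON readings to the most recent
    sample per node -- the state a dashboard would display."""
    latest = {}
    for raw in readings:
        r = json.loads(raw)
        node = r["node"]
        if node not in latest or r["timestamp"] >= latest[node]["timestamp"]:
            latest[node] = r
    return latest
```

The actual server would push this dictionary to clients over socket.io whenever a node reports, which is where the real-time/asynchronous difficulty mentioned above comes in.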

Finally, I wish everyone a productive 2017 in work and study!

A Few Kickstarter Projects I Have Been Following Recently

Kickstarter is the best-known crowdfunding site abroad. Creators can announce a product while it is still just an idea, collect pledges from individual backers, and only start actual production once a funding goal is met, which greatly reduces the risk of launching a product. The trending tech projects on Kickstarter these days are mostly smart devices and wearables; here are a few that I have been following recently.

1. Sweep Scanning LiDAR

Project page: https://www.kickstarter.com/projects/scanse/sweep-scanning-lidar/description

sweep
▲ Figure 1. The Sweep low-cost LiDAR

LiDAR is a common robotic sensor used to rapidly scan the surroundings and detect obstacles. It sweeps a rotating laser beam and estimates the distance to nearby obstacles from the returning light. The resulting point-cloud data can be used for 3D modelling and ultimately to support robot navigation. The brand commonly used today is HOKUYO, whose scanners cost from tens of thousands to hundreds of thousands of RMB. A traditional LiDAR steers the beam with an optical galvanometer mirror and computes distance from the phase difference of the laser beam, whereas low-cost LiDARs control the emission angle with a motor and estimate distance using vision-based triangulation. By sacrificing scan speed, the cost drops dramatically. For the design principles behind low-cost LiDAR, see CSK's post: DIY low-cost 3D laser scanning rangefinder (3D LiDAR).
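The triangulation principle used by low-cost LiDARs can be sketched in a few lines: a laser dot imaged at some pixel offset from the camera centre maps to a range via the focal length and the laser-camera baseline. The numbers below are illustrative, not taken from any real sensor.

```python
def triangulate_distance(focal_px, baseline_m, pixel_offset):
    """Toy laser-triangulation range estimate:
    d = f * s / x, where f is the focal length in pixels,
    s is the laser-to-camera baseline in metres, and x is the
    pixel offset of the laser dot from the image centre."""
    if pixel_offset <= 0:
        raise ValueError("laser dot not detected, or target at infinity")
    return focal_px * baseline_m / pixel_offset
```

Note how the pixel offset shrinks as range grows, which is why triangulation sensors lose accuracy at long distances while phase-based professional units do not.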

sweep-demo
▲ Figure 2. Sweep mounted on a quadcopter (long-exposure photo)

Below is a comparison, compiled by IEEE Spectrum, of Sweep against other common LiDARs. Sweep costs only a fifth as much as the professional units, but its scan rate is also only about a quarter of theirs. It offers 1 cm range resolution and 1-2% accuracy, sufficient for typical robot mapping and navigation applications. Note that robopeak in the table has since been renamed SLAMTEC, and its RPLIDAR line recently gained a new model, the RPLIDAR A2, which improves greatly on the first generation (10 Hz scan rate) at a price similar to Sweep's. The Sweep campaign has now closed, having raised $272,990 from 1,010 backers. It can be pre-ordered from the official site for $255, roughly 1,680 RMB.

sweep-spec
▲ Figure 3. Sweep compared with other LiDARs


2. Pebble 2 Smartwatch

Project page: https://www.kickstarter.com/projects/597507018/pebble-2-time-2-and-core-an-entirely-new-3g-ultra

pebble_cover
▲ Figure 4. The second-generation Pebble smartwatch

Pebble, the celebrated ancestor of the smartwatch, is back on Kickstarter. This time they bring two new watches: the Pebble 2 and the Pebble Time 2. The former is the budget model, the latter the premium one. The main changes from the previous generation are:

1. A heart-rate sensor has been added. Heart-rate sensing is now standard on smartwatches, so Pebble naturally had to keep up;
2. A microphone has been added, supporting voice replies to messages and presumably voice control via Google Voice (Android) and Siri (iOS);
3. The core processor has been upgraded from a Cortex-M3 to a Cortex-M4, giving more signal-processing headroom;
4. With the extra sensors on board, water resistance drops from 50 m to 30 m, still plenty for daily use;
5. The Pebble Time 2 is effectively the previous generation's Pebble Time Steel, and the Pebble Steel line will probably not be continued.

pebble
▲ Figure 5. Pebble 2 and Pebble Time 2

pebble2_colors
▲ Figure 6. The Pebble 2 comes in five colours

The main differences between the two new models, the Pebble 2 and the Pebble Time 2, are:

1. The Pebble Time 2 has a larger screen and uses a colour e-ink display, while the Pebble 2 uses a black-and-white one (e-ink is the extremely low-power e-paper technology used in the Kindle);
2. The Pebble Time 2 is better in looks and materials, while the Pebble 2 looks rather cheap;
3. Naturally, the Pebble Time 2 costs about 70% more than the Pebble 2.


Two-Wheeled Self-Balancing Robot | Research Plan

I have long been fascinated by self-balancing vehicles; I first wanted to play with balancing algorithms back when I was following the Segway (around 2010). At the time I had neither spare time nor the design skills, so the idea was shelved until now. When I went back to China this May it resurfaced, and I picked out a well-reviewed balancing robot on Taobao for under 400 RMB in total. But back in England I slackened off again: apart from occasionally playing with it as a toy, I never studied it in depth. That clearly won't do, yet serious study does take real effort, so here I resolve to work out the whole story, causes and effects.

Although I have never formally built a balancing robot, I do understand the system to some extent. The research problems involved are roughly as follows:

● Hardware design: motor selection, motor drivers, speed-feedback sensors, MEMS inertial sensors, power supply, battery, MCU, and communications. Hardware design is tedious: the principles are not complicated, but debugging takes considerable effort, so I will simply buy off-the-shelf hardware;

● System modelling: the two-wheeled balancing system should share its model with the first-order inverted pendulum. A system model is not strictly required, but it reveals the system's characteristics and enables software simulation, which helps with controller parameter tuning. The final model can be either an s-domain transfer function or a time-domain state-space model;

● Sensor data processing: mainly filtering and analysing the inertial-sensor data. Filtering rejects the system's dynamic disturbances and the sensors' dynamic noise; the main candidate algorithms are the complementary filter and the Kalman filter. The complementary filter is simple to implement, while the Kalman filter, though more complex, combines with the state-space model to enable a higher-performance LQG controller;

● Controller design: the usual choices are a PID controller, or an LQR (Linear Quadratic Regulator) or LQG (Linear-Quadratic-Gaussian) controller in state space;
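A textbook discrete PID controller, the first option above, can be sketched as follows; the gains in the usage below are illustrative and not tuned for any particular balancing robot.

```python
class PID:
    """Discrete PID controller: u = kp*e + ki*integral(e) + kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        # No derivative term on the first sample (no previous error yet).
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

For example, a proportional-only controller driving a simple integrator plant (x += u*dt) steadily closes the gap to the setpoint; on the real robot the "measured" input would be the fused tilt angle from the filter and the output a motor command.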

● Task design and real-time guarantees: the software has real-time requirements, and I lean towards the free FreeRTOS real-time operating system. Scheduling will use static priorities assigned by task period. The main tasks in the system: sensor sampling, sensor filtering and analysis, control, motor speed modulation, and communication.
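Static priorities assigned by task period is exactly rate-monotonic scheduling, and its classic sufficient schedulability test (Liu & Layland) is easy to sketch. This is a design-time check one could run before committing to the FreeRTOS task set above; it is sufficient but not necessary, so a task set failing the bound may still be schedulable.

```python
def rm_schedulable(utilisations):
    """Liu & Layland sufficient test for rate-monotonic scheduling.
    utilisations: per-task C/T ratios (execution time / period).
    The set is guaranteed schedulable if the total utilisation does
    not exceed n * (2^(1/n) - 1), which tends to ln 2 ~ 0.693."""
    n = len(utilisations)
    bound = n * (2.0 ** (1.0 / n) - 1.0)
    return sum(utilisations) <= bound
```

For instance, three tasks each using 20% of the CPU pass the bound (0.6 <= 0.78), while a set totalling 120% obviously cannot be scheduled.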

This project has three goals for me:
1) to study the filtering and processing of motion-sensor data;
2) to study system modelling and controller design;
3) to learn how to use FreeRTOS and to validate the related real-time theory.

Since I have not yet read the relevant literature, the plan above may contain omissions and mistakes; I will revise it as I discover them.

Started to learn Ada and Real-Time Java

Having mainly used C/C++ in real-time systems for many years, this is the first time I have seriously considered other languages that have the capabilities and primitives to handle concurrency and meet the requirements of hard real-time systems.

While looking for a research position in the Real-Time Systems Group at the University of York, I first realised there is another language - Ada - which has been used in military, aerospace and industrial systems for more than three decades. It is a shame that I never even came across its name, even though I was working in the industrial and automotive fields. Ada has a rich ecosystem for real-time systems design and was originally commissioned by the US Department of Defense. It has advanced features such as run-time checking, parallel processing and OOP support.

To be honest, given my previous experience with the Java language, I never expected it to be used in real-time systems. However, things are changing, and there is a trend towards using the RTSJ (Real-Time Specification for Java) in control and military peripheral systems (though not core systems). I think it will take a long time before the RTSJ displaces C's dominant position in embedded systems, but it is worth learning about the language features that Sun and IBM are trying to bring to standard Java.

What the BEST Hardware Platform Means to Me

Last week I tried to make my new Intel Galileo run a basic 14 x 7 LED matrix program that had worked perfectly on my old Arduino UNO. This Intel-powered open-source platform has a 400 MHz CPU, while the Arduino runs at only 16 MHz. The interesting thing is that after I spent three hours porting the program to the Galileo and ran it full of expectation, it performed terribly: the LED matrix looked unstable and each pixel had a different luminance.

I expected the Galileo, with twenty times the computational power, to deliver brilliant performance and leave my Arduino far behind. But is it really appropriate to run a real-time program on a Linux-based platform? And is it fair to compare two platforms on one particular application?
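The LED matrix needs precisely timed refresh pulses, and a general-purpose OS scheduler gives no upper bound on how late a "sleep until next refresh" call wakes up. The original sketch was Arduino C, but the scheduling-jitter point can be demonstrated from any language on a Linux box; this small Python measurement (parameters are arbitrary) shows the idea.

```python
import time

def measure_sleep_jitter(period_s=0.001, iterations=200):
    """Request a fixed sleep period repeatedly and record the worst
    overshoot of the actual elapsed time beyond the request.
    On a non-real-time OS this overshoot is unbounded in principle,
    which is what wrecks software-timed LED multiplexing."""
    worst = 0.0
    for _ in range(iterations):
        start = time.perf_counter()
        time.sleep(period_s)
        overshoot = (time.perf_counter() - start) - period_s
        worst = max(worst, overshoot)
    return worst
```

On a bare-metal Arduino the equivalent delay loop overshoots by a few cycles at most; on a loaded Linux system the worst case can be many milliseconds, enough to make per-row LED brightness visibly uneven.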

The answer is NO, and that is exactly what motivated this article. Over the last year I collected and played with numerous platforms, either for use in my projects or simply to explore the features of something new. In the process I realised that tools are just tools, and that no BEST platform exists. To make this concrete, I will compare some of the platforms I have at hand, analyse their respective advantages and disadvantages, and finally draw a conclusion.

1. Linux Platforms

Well, apart from the aforementioned Intel Galileo, I have the popular Raspberry Pi and the lesser-known pcDuino:

DSC05191_out
Figure 1. Raspberry Pi Model B

DSC05195_out
Figure 2. pcDuino version 2

Generally speaking, these two platforms have similar performance and functions: both run at high clock frequencies, both run the Linux operating system, and both have some kind of GPIO extension and USB connectivity. The Raspberry Pi has a large community: whatever problem you encounter, from running a program in the desktop environment to compiling the Linux kernel, you can find an answer. The pcDuino, on the other hand, is neither well supported nor well documented. So why did I buy another Linux single-board computer when I already had one? The answer is the pcDuino's additional Arduino connectivity.

The pcDuino can seamlessly 'run' Arduino sketches locally, with no need to attach a separate Arduino board. Is that convenience really worth 60 pounds? Well, I think so. This design eliminates the overhead of communicating with extra hardware over serial, and makes it easier to develop more complex projects where hardware functions like GPIO / ADC / DAC are essential. Apart from this, I could not find any other difference between the two platforms, and it is for exactly this reason that I decided to buy one more Linux platform.

2. Visual Sensors

I am fairly new to computer vision and only started investigating different cameras at the beginning of this year. What I have so far: two ordinary web cameras, two PS Eye cameras for the PlayStation 3, and a Kinect from Microsoft:

DSC05200_out2
Figure 3. My Collection of Cameras and Visual Sensors

Among them, the Kinect is the star, used by hundreds of institutions and researchers in recent years. It is a depth camera that gives you a depth image of the scene. So far I have simply configured the SDK and run some examples in Processing, a Java-based programming environment for interactive interfaces. It is a superb sensor for robotics, usable in applications such as V-SLAM, 3D reconstruction and manipulation, as demonstrated by Boris, the newest robot from the University of Birmingham. The only thing stopping me from using it further is the complicated toolchain it comes with: first released only for the Xbox 360, it is not well supported on PCs, and you have to use open-source packages like OpenNI together with the PrimeSense driver to make it work.

Another way to play with stereo vision is to use a pair of monocular cameras, and I found the Sony PS3 Eye to be an ideal candidate. It is cheap (8 pounds each), fast (60 fps @ 640 x 480) and fairly easy to use: buy the SDK from Code Laboratories and you can use it from OpenCV just like any other web camera. Adjusting and configuring two separate cameras is tricky, but it is a good exercise for researchers who want a feel for the basic principles of stereo vision. What really interests me about this camera is that it is very easy to use yet achieves a frame rate competitive with professional industrial cameras. It is indeed a good platform for fast object tracking, but it is not general: you must buy the SDK before use, so turning it into a product for customers would raise software-licensing problems. Sometimes, then, we may prefer off-the-shelf web cameras.

If you try different cameras, you will find there are differences even among web cameras. I bought this HP HD2300 after my earlier Logitech C270. Why? Simply because it has a higher, crisper resolution and better white-balance performance, which is important for vision systems that detect colour features. So why do I still keep and use the Logitech C270? Well, on CPUs with limited computational power (such as the Raspberry Pi), higher resolution does not mean higher performance; it just results in lag and failures.

3. Microcontrollers

DSC05202_out
Figure 4. Microcontrollers in my personal lab (AVRs)

DSC05208_out
Figure 5. Microcontrollers in my personal lab (Cortex-M3 / M4)

(To be continued)

The 3rd University of Sheffield Search and Rescue Robot Competition

Earlier this month I took part in our university's search-and-rescue robot contest: the ACSE Robotic Search and Rescue Competition, run by the Department of Automatic Control and Systems Engineering. The goal is to build a mobile robot that can be driven through a simulated search-and-rescue arena purely by remote video control, reaching the finish in the shortest time. The arena simulates many obstacles that appear in real rescue scenarios: heavy objects, slopes, rough ground, a drawbridge, and narrow passages. The photo shows the arena on competition day:

DSC01711
Figure 1. The arena on competition day

The difficulty of this competition is that the robot's mechanics must cope with the complex terrain, while the teams cannot directly see the robot's surroundings and must drive it wirelessly over a remote video link. The architecture I used was an Arduino-compatible DAGU Wild Thumper robot controller board plus a Raspberry Pi: the Wild Thumper board handles motor control and sensor sampling, the Raspberry Pi provides remote control and video streaming, and the two systems communicate over a TTL serial link. On the PC side, a Processing application controls the robot and displays the current operating parameters and sensor data.
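The TTL serial link between the Raspberry Pi and the Wild Thumper board needs some framing so corrupted motor commands are rejected rather than executed. Here is one way such a frame could look: a start byte, a sequence number, two signed wheel speeds, and an XOR checksum. This layout is hypothetical, not the protocol actually used in the competition.

```python
import struct

START = 0xAA  # arbitrary frame-start marker (hypothetical protocol)

def pack_drive_command(left, right, seq):
    """Frame a motor command: start byte, 1-byte sequence number,
    two big-endian signed 16-bit wheel speeds, XOR checksum."""
    body = struct.pack(">Bhh", seq & 0xFF, left, right)
    checksum = 0
    for b in body:
        checksum ^= b
    return bytes([START]) + body + bytes([checksum])

def unpack_drive_command(frame):
    """Validate and decode a frame produced by pack_drive_command."""
    if len(frame) != 7 or frame[0] != START:
        raise ValueError("bad frame")
    body, checksum = frame[1:6], frame[6]
    calc = 0
    for b in body:
        calc ^= b
    if calc != checksum:
        raise ValueError("checksum mismatch")
    seq, left, right = struct.unpack(">Bhh", body)
    return seq, left, right
```

On the real robot the Arduino side would implement the unpacking in C, but the framing logic is identical; a bad checksum simply drops the command and the next frame takes over.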

DSC01774
Figure 2. The Wild Thumper (left) + Raspberry Pi (right) configuration

DSC01783
Figure 3. The assembled robot (front)

DSC01785
Figure 4. The assembled robot (side)

The camera is the RaspiCam, the recently released official Raspberry Pi camera module, built around the OV5647 sensor with 3-megapixel stills and 1080p@30fps video. The module connects to the Raspberry Pi's CSI port via a ribbon cable, and a pan-tilt mount adds a tilt degree of freedom. Video capture and transmission use Raspivid + Netcat + mplayer; to raise the transmission rate I used UDP and reduced the capture resolution to 600 x 480. The camera is fixed to the servo mount with hot glue; the photo shows where it is installed:

DSC01775
Figure 5. Close-up of the robot's camera

DSC01700
Figure 6. Debugging the wireless video link

无标题2_副本
Figure 7. At the competition, part 1

无标题1_副本
Figure 8. At the competition, part 2

Robot hardware configuration

Main controller: Raspberry Pi Model B
Auxiliary controller: Wild Thumper board (Arduino-compatible)
Chassis: DAGU 4WD Wild Thumper chassis
Camera: RaspiCam + single-axis pan-tilt mount
Sensors: 3-axis accelerometer/gyroscope + 3-axis magnetometer
Power: 20C/5000 mAh 2S LiPo battery + 5 V 6000 mAh USB backup supply

Review Preview: TI EZ430-CHRONOS-433 Wireless Watch Development Kit

2013 was the year wearable devices first emerged before the public. From the early Google Glass to the already-commercialised smartwatches from Sony and Samsung, both users and developers have glimpsed the possibilities of combining electronics with the human body. If 3D printing set the tone for 2012, then wearables were the hottest term in open-source hardware in 2013.

In fact, as early as 2009 TI released an MSP430-based smartwatch kit. It arrived before its time: the marketing then focused on the wireless chip and the MSP430's low power consumption, so it never received much attention. Looking at it again now, this kit is simply one of the best entry points into wearables. First, it is physically a real watch, with quite a futuristic look, yet costs under 500 RMB. Second, its built-in barometric and MEMS sensors can be used to recognise body motion and the surrounding environment. Third, being based on the low-power MSP430, it can run fairly complex motion-sensing algorithms while keeping power consumption relatively low. Finally, it includes a radio chip, supports networking with other wireless sensor nodes, and can pair with a PC to build more complex smart systems.

ez430_pic

This review will fully tear down and analyse the kit, try some basic programming, and attempt to develop a sleep-quality monitoring system. To be sure of getting genuine hardware, the development kit used in this review was ordered from Farnell (http://uk.farnell.com/), a well-known electronics distributor in the UK. Readers in China who cannot order directly from the UK can buy from Farnell's Chinese subsidiary, element14 (http://cn.element14.com).

Appendix 1. TI - EZ430-CHRONOS-433 - CC430, RF WATCH, 433MHZ, DEV KIT (Farnell UK)
http://uk.farnell.com/texas-instruments/ez430-chronos-433/cc430-rf-watch-433mhz-dev-kit/dp/1779156

Appendix 2. TI EZ430-CHRONOS-433 smartwatch development kit (element14 China)
http://cn.element14.com/texas-instruments/ez430-chronos-433/%E5%BC%80%E5%8F%91%E5%A5%97%E4%BB%B6-ez430-chronos-%E6%97%A0%E7%BA%BF%E6%89%8B%E8%A1%A8%E5%9E%8B/dp/1779156

SCentRo: The Sheffield Centre for Robotics

SCentRo (Sheffield Centre for Robotics) is the robotics research centre in Sheffield, UK. Its main research areas are:
1. Self-assembling / self-repairing robots
2. UAVs (unmanned aerial vehicles)
3. Bio-inspired robots
4. Swarm robotics

Lab website: http://www.scentro.ac.uk/ (now added to this site's link list).

scentroCapture