Category Archives: Reports

Hello, 2017!

• The new "C In Depth" topic series grew out of the C misuse I kept seeing while working as a teaching assistant for an embedded systems course. I consider C and C++ the two most important languages in robotics, so I want to add more content in this area.
• Several other posts cover the newly released Raspberry Pi 3. Besides the Pi 3, I also picked up a number of Raspberry Pi Zeros this year. Because the Zero is hard to buy, I stocked up on five or six at once. I plan to use them as nodes in my smart home system, but since the application scenarios are not yet clear, I haven't written a dedicated feature on them.

• The smart home system I promised is still not fully built; I will publish it once everything is done. Parts finished so far: deployment of the central server, the NAS, the media center, and one sensor node (which has been reporting temperature and humidity data for half a year). I have bought CO2 and PM2.5 sensors and wireless networking modules, but haven't had time to bring them up. Another difficulty is sending control commands down and presenting the sensor data and system parameters; I am thinking of a browser/server architecture (Flask + socket.io, or Node.js + Ajax). I have no background in web programming, and real-time and asynchronous behavior are involved, so this remains unsolved.
• More topics on Robotics. This lab is supposed to be about robotics, but last year it drifted somewhat (towards embedded systems); this year I am returning to the main theme. The topics I want to focus on: reinforcement learning, probabilistic decision making, machine vision, ROS, and machine (deep) learning.
• I now live abroad long-term, and I have been considering switching the site to English-only for quite a while. I keep struggling with the decision; since much of the content is useful to readers in China, I will keep writing bilingually this year.

A Few KickStarter Projects I Have Been Following

KickStarter is the best-known crowdfunding site abroad. Project creators can publish product information while they have little more than a basic idea, collect funding from individual backers, and only start actual production once the funding goal is reached, which reduces the risk of launching a product. The trending tech projects on KS these days are all smart devices and wearables; here are a few I have been following recently.

1. Sweep LiDAR

▲ Figure 1. The Sweep low-cost LiDAR

▲ Figure 2. Sweep working on a quadcopter (slow-shutter photography)

▲ Figure 3. Sweep's specifications compared with other LiDARs

2. Pebble 2 Smartwatch

▲ Figure 4. The second-generation Pebble smartwatch

1. A heart-rate sensor has been added. Heart-rate sensors are now standard on smartwatches, so Pebble naturally keeps up with the times;
2. A microphone has been added, supporting voice replies to messages; it should also allow voice control through Google Voice (Android) and Siri (iOS);
3. The core processor is upgraded from a Cortex-M3 to an M4, giving more signal-processing power;
4. With the extra sensors, water resistance drops from 50 m to 30 m, which is still plenty for daily use;
5. The Pebble Time 2 is effectively the previous generation's Pebble Time Steel; the Pebble Steel model will probably not be released again.

▲ Figure 5. Pebble 2 and Pebble Time 2

▲ Figure 6. Pebble 2 comes in five colors

1. The Pebble Time 2 has a larger screen and uses a color e-ink display, while the Pebble 2 uses a black-and-white (grayscale) e-ink display (e-ink is the extremely low-power e-paper technology used by the Kindle);
2. In appearance and materials the Pebble Time 2 is nicer, while the Pebble 2 looks rather cheap;
3. Of course, the Pebble Time 2 also costs about 70% more than the Pebble 2.

Two-Wheel Self-Balancing Robot | Research Plan

● Hardware design: motor selection, motor drivers, speed-feedback sensors, MEMS inertial sensors, power supply, battery, MCU, and communication. Hardware design is genuinely tedious: the principles are not complicated, but the debugging takes real effort, so I will simply buy off-the-shelf hardware;

● System modeling: the model of a two-wheel self-balancing robot should be the same as a first-order inverted pendulum. A system model is not strictly necessary, but it reveals the system's characteristics and enables software simulation, which makes controller tuning much easier. The final model can be either an s-domain transfer function or a time-domain state-space model;
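As a sketch of what those two forms might look like (assuming the simplest point-mass pendulum with torque input, mass m and length l, ignoring wheel and motor dynamics), the small-angle linearization about the upright equilibrium, with state x = [θ, θ̇]ᵀ, is:

```
\dot{x} =
\begin{bmatrix} 0 & 1 \\ g/l & 0 \end{bmatrix} x
+
\begin{bmatrix} 0 \\ 1/(m l^2) \end{bmatrix} u,
\qquad
y = \begin{bmatrix} 1 & 0 \end{bmatrix} x
```

or, equivalently, the transfer function Θ(s)/U(s) = (1/(m l²)) / (s² − g/l). The right-half-plane pole at s = √(g/l) is exactly what makes the plant open-loop unstable and in need of feedback control.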

● Sensor data processing: mainly filtering and analyzing the inertial sensor data. The goal of filtering is to reject the system's dynamic disturbances and the sensors' dynamic noise; the main candidate algorithms are the complementary filter and the Kalman filter. The complementary filter is simple to implement, while the Kalman filter, though more complex, can be combined with the state-space model to design a better-performing LQG controller;

● Controller design: common choices are a PID controller, or a state-space LQR (Linear Quadratic Regulator) or LQG (Linear-Quadratic-Gaussian) controller;

● Task design and real-time guarantees: the software as a whole has real-time requirements, so I lean towards the free FreeRTOS real-time operating system. The scheduler will use static priorities, assigned according to task period. The main tasks in the system: sensor sampling, sensor filtering and analysis, control, motor speed modulation, and communication.

1) Study motion-sensor data filtering and processing;
2) Study system modeling and controller design;
3) Study how to use FreeRTOS, and verify the related real-time theory.

Started to learn Ada and Real-Time Java

After mainly using C/C++ in real-time systems for many years, this is the first time I have seriously considered other languages that have the capability and the primitives to handle concurrency and meet the requirements of hard real-time systems.

While looking for a research position in the Real-Time Systems Group at the University of York, I first realized there is another language – Ada – which has been used in military, aerospace, and industrial systems for more than three decades. It is a shame that I never even came across its name, even though I was working in the industrial and automotive field. Ada, first introduced by the US Department of Defense, has a rich ecosystem for real-time systems design and advanced features such as run-time checking, parallel processing, and OOP support.

To be honest, based on my previous experience with the Java language, I never expected it to be used in real-time systems. However, things are changing, and there is a trend to use the RTSJ (Real-Time Specification for Java) in control and military peripheral systems (though not core systems). I think it will take a long time before the RTSJ displaces C's dominant position in embedded systems, but it is worth learning about the language features that Sun and IBM are trying to bring to Java.

What the BEST Hardware Platform Means to Me

Last week I tried to get my new Intel Galileo to run a basic 14 x 7 LED matrix program that had been working perfectly on my old Arduino UNO. This Intel-powered open-source platform has a CPU frequency of 400 MHz, while the Arduino runs at only 16 MHz. The interesting thing is that after I spent three hours porting the program to the Galileo and ran it full of expectation, it performed terribly: the LED matrix looked unstable and every pixel had a different luminance.

I had expected the Galileo, with over 20 times the computational power, to deliver brilliant performance and blow my Arduino away. But is it really appropriate to run a real-time program on a Linux-based platform? And is it fair to compare two platforms on one particular application?

The answer is NO, and that is exactly what motivated this article. Over the last year I collected and played with numerous platforms, either for use in my projects or simply to explore the features of something new. In the process I realized that tools are just tools and there is no single BEST platform. To make this clear, I will compare some of the platforms I have on hand, analyze their respective advantages and disadvantages, and then draw a conclusion.

1. Linux Platforms

Well, apart from the aforementioned Intel Galileo, I have the popular Raspberry Pi and the less-known pcDuino:

Figure 1. Raspberry Pi Model B

Figure 2. pcDuino version 2

Generally speaking, these two platforms have similar performance and functions: both run at high clock frequencies, both run the Linux operating system, and both have some kind of GPIO extension and USB connectivity. The Raspberry Pi has a large community, and no matter what problem you encounter, from running a program in the desktop environment to compiling the Linux kernel, you can find an answer. The pcDuino, on the other hand, is not well supported and not even well documented. So why did I buy another Linux single-board computer when I already had one? The answer is the pcDuino's additional Arduino connectivity.

The pcDuino can seamlessly 'run' an Arduino sketch locally without attaching a separate Arduino board. Is this convenience really worth 60 pounds? I think so. The design eliminates the overhead of communicating with additional hardware over serial, and it makes my life easier when developing more complex projects where hardware functions like GPIO / ADC / DAC are essential. Apart from this, I couldn't find any other significant difference between the two platforms, and it was for this reason alone that I decided to buy one more Linux platform.

2. Visual Sensors

I am pretty new to computer vision and only started to investigate different cameras at the beginning of this year. What I have so far are two ordinary webcams, two PS Eye cameras for the PlayStation 3, and a Kinect from Microsoft:

Figure 3. My Collection of Cameras and Visual Sensors

Among them, the Kinect is a star that has been used by hundreds of institutions and researchers in recent years. It is a depth camera that gives you a depth image of the scene. What I have done so far is simply configure the SDK and run some examples in Processing, a Java-based programming environment for interactive interfaces. It is a great sensor for robotics and can be used in applications such as V-SLAM, 3D reconstruction, and manipulation, as demonstrated by Boris, the newest robot from the University of Birmingham. The only thing stopping me from using it further is the complex toolchain it comes with. Since it was first released only for the Xbox 360, it is not well supported on PCs, and you have to use open-source packages like OpenNI together with the PrimeSense driver to make it work.

Another way to play with stereo vision is to use a pair of monocular cameras, and I found the Sony PS3 Eye an ideal platform for that. It is cheap (8 pounds each), fast (60 fps at 640 x 480), and fairly easy to use. You simply buy an SDK from Code Laboratories and then use OpenCV just as you would with any other webcam. Aligning and configuring two separate cameras is tricky, but it is a good way for researchers to get a feel for the basic principles of stereo vision. What really interests me about this camera is that it is easy to use while achieving a frame rate competitive with professional industrial cameras. It is indeed a good platform for fast object tracking; the catch is that it is not general-purpose and you must buy an SDK before use. If you wanted to turn it into a product for customers, the software license would be problematic. So sometimes we may prefer off-the-shelf webcams.

If you have tried different cameras, you may have found that even webcams differ from one another. I bought this HP HD2300 after my earlier Logitech C270. Why? Simply because it has a higher, crisper resolution and better white balance, which is important for vision systems that detect color features. But why do I still keep and use the Logitech C270? Because on a CPU with limited computational power (such as the Raspberry Pi), higher resolution does not mean higher performance; it just results in lag and failures.

3. Microcontrollers

Figure 4. Microcontrollers in my personal lab (AVRs)

Figure 5. Microcontrollers in my personal lab (Cortex-M3 / M4)

(To be continued)

The 3rd University of Sheffield Search and Rescue Robot Competition

Figure 1. The competition arena on the day

Figure 2. The Wild Thumper (left) + Raspberry Pi (right) configuration

Figure 3. The robot (front view)

Figure 4. The robot (side view)

Figure 5. Close-up of the robot's camera

Figure 6. Debugging the wireless video link

Figure 7. At the competition, part 1

Figure 8. At the competition, part 2

Robot Hardware Configuration List

Main controller: Raspberry Pi Model B
Auxiliary controller: Wild Thumper controller board (Arduino-compatible)
Chassis: Dagu 4WD Wild Thumper chassis
Camera: RaspiCam + single-axis gimbal
Sensors: 3-axis accelerometer/gyroscope + 3-axis magnetometer
Power: 20C / 5000 mAh 2S LiPo battery + 5 V / 6000 mAh USB backup battery

Review Preview: TI EZ430-Chronos-433 Wireless Watch Development Kit

http://uk.farnell.com/texas-instruments/ez430-chronos-433/cc430-rf-watch-433mhz-dev-kit/dp/1779156

http://cn.element14.com/texas-instruments/ez430-chronos-433/%E5%BC%80%E5%8F%91%E5%A5%97%E4%BB%B6-ez430-chronos-%E6%97%A0%E7%BA%BF%E6%89%8B%E8%A1%A8%E5%9E%8B/dp/1779156

SCentRo: Sheffield Centre for Robotics

SCentRo (Sheffield Centre for Robotics) is a robotics research centre in Sheffield, UK. Its main research areas are:
1. Self-assembling / self-repairing robots
2. UAVs (unmanned aerial vehicles)
3. Bio-inspired robots
4. Swarm robotics