Consumers quickly become acclimatized to the most advanced features of today’s smartphones and tablets. Modern life is increasingly reliant on instantaneous sensing of context and location, and in turn users are placing increasing demands on smart devices used in diverse fields such as industrial processes and healthcare. Clever design of hardware and software is essential to ensure the required speed and accuracy while meeting tight constraints on system size and power consumption.
Sophisticated capabilities such as motion tracking and location awareness provide the key to new features and applications in consumer equipment, and are helping enable new market opportunities such as the burgeoning wearables scene. Advanced sensing is also raising the expectations placed on equipment such as industrial automation, medical patient monitoring and Internet of Things (IoT) applications, as users demand smart features and autonomous, context-sensitive behavior.
Tiny MEMS sensors, such as accelerometers, gyroscopes and magnetometers, represent the key enabling technology for motion tracking and location awareness in portable devices at consumer price points.
End-user demands for greater accuracy and better performance are served by using these sensors in combination, since each individual sensor type is subject to limitations. An accelerometer, for example, can provide basic orientation and tilt detection, and can track pitch and roll if the device is not moving. With the addition of a gyroscope, more complex motion such as pitch and roll while the device is moving, or high-speed rotation, can be measured accurately. A magnetometer, in turn, can correct the gyroscope's rotational drift by providing an absolute heading reference relative to magnetic north, but it has limited bandwidth and is vulnerable to external electromagnetic interference.
Sensor fusion combines the outputs of multiple sensors in a system to monitor complex or rapid movements accurately for purposes such as gesture controls or body-motion capture for gaming or research purposes. Depending on the application, sensor fusion may be best performed in the main processor, or an external sensor hub, or in the sensor itself. Factors such as power consumption, size constraints, battery lifetime and processing resources have the most important influences on the decision.
Microcontroller as sensor hub
Sensor fusion algorithms can be run in a microcontroller acting as a sensor hub. Atmel has worked with sensor partners such as Kionix to develop sensor fusion solutions for its microcontrollers, such as the SAM D20 ARM® Cortex®-M0+ microcontroller or the SAM G53 featuring the ARM Cortex-M4 core. This simplifies integration of sensors such as the Kionix KXCJ9 or Memsic MXC62320 MEMS accelerometers. These microcontrollers support SleepWalking and the Atmel Event System, which enable power savings when acting as a sensor hub. SleepWalking allows all functions and clocks to be stopped, while peripherals retain the ability to wake parts of the system asynchronously. The Event System allows peripherals to respond to events, such as receipt of a sensor signal, without processor intervention, which helps make best use of the microcontroller's sleep mode.
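The routing idea behind such an event system can be modelled in plain C. The sketch below is a host-runnable illustration of the concept only: the names are hypothetical and it does not reproduce Atmel's actual Event System registers or API. A route is configured once at initialisation, after which a peripheral event moves data straight to its consumer without the main application loop ever being involved.

```c
#include <stdint.h>

/* Illustrative model of an event-system route (names are hypothetical):
 * a generator (e.g. an ADC "result ready" event) is wired at init time
 * to a user (e.g. a DMA-like transfer), so data moves with no CPU wake-up. */
typedef void (*event_user_fn)(uint16_t data);

#define MAX_ROUTES 4

static event_user_fn routes[MAX_ROUTES];

/* Configure a route once, at initialisation. */
void event_route(int channel, event_user_fn user)
{
    routes[channel] = user;
}

/* Called from the peripheral's event context; the main application
 * loop never sees the sample and can stay asleep. */
void event_fire(int channel, uint16_t data)
{
    if (routes[channel])
        routes[channel](data);
}

/* Example user: a buffer fill standing in for a DMA transfer. */
static uint16_t sample_buf[16];
static int sample_count;

void store_sample(uint16_t data)
{
    if (sample_count < 16)
        sample_buf[sample_count++] = data;
}
```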
Cut the Power
Tackling power consumption is increasingly important to engineers developing motion-sensing systems. Today’s pervasive motion-based apps, particularly in wearable devices such as smart watches or glasses, require always-on sensing, despite the extra demands this can place on the battery. The latest developments in mobile operating systems provide an illustration. Google has significantly revised the sensor-handling features in Android version 4.4 to allow greater use of real-time location and context awareness without imposing excessive drain on the battery. Pedometer functions such as step detection and counting are required to run in the background, and APIs have been revised to improve sensor management and prevent false triggers from waking the main application processor.
Android version 4.4 shows how responsibility for handling sensors can be shifted into the sensor hub, or even the sensor itself. Sensor fusion calculations on accelerometer and gyroscope outputs are required to happen in between interrupts while the main application processor sleeps. In addition, a batching mode enables the sensor hub to buffer the fusion results and send these only when the application processor has been woken by a significant sensor event.
STMicroelectronics has taken this batching concept a stage further in its STM32F411 microcontroller, which is based on the ARM Cortex-M4 core. The microcontroller implements its own Batch Acquisition Mode (BAM), which saves energy when the device is used as a sensor hub by storing sensor data directly into on-chip SRAM while its own CPU core sleeps. The core wakes briefly to process this stored data before returning to power-saving mode. Additional energy-saving features such as Flash STOP mode, zero-wait execution and voltage scaling make this device attractive for use in applications such as industrial controls, medical monitors, building automation, and wearable technology, in addition to smartphones and tablets.
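The principle behind such a batch mode can be sketched in portable C. The buffer and function names below are illustrative assumptions; on the real device the buffer would be filled by DMA into SRAM while the core sleeps, with the CPU waking only to drain a full batch in one burst.

```c
#include <stdint.h>
#include <string.h>

/* Sketch of batch acquisition: sensor samples accumulate in a RAM
 * buffer (filled by DMA on a real part) and the CPU processes them
 * only when the batch is full or a significant event arrives. */
#define BATCH_LEN 32

typedef struct { int16_t ax, ay, az; } accel_sample_t;

static accel_sample_t batch[BATCH_LEN];
static int batch_fill;

/* Interrupt/DMA context: store a sample; return 1 when the CPU
 * should wake to drain the batch. */
int batch_push(accel_sample_t s)
{
    if (batch_fill < BATCH_LEN)
        batch[batch_fill++] = s;
    return batch_fill == BATCH_LEN;
}

/* The woken CPU drains the whole batch in one burst, then sleeps again. */
int batch_drain(accel_sample_t *out)
{
    int n = batch_fill;
    memcpy(out, batch, (size_t)n * sizeof *out);
    batch_fill = 0;
    return n;
}
```

Waking once per batch rather than once per sample is where the energy saving comes from: the fixed cost of each wake-up is amortized over BATCH_LEN samples.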
Save board space
Sensor modules can also save board space. The 6-axis Freescale FXOS8700CQR1, for example, combines a 3-axis accelerometer and a 3-axis magnetometer, integrating the functions of two motion sensors in a single space-saving device. The FXOS8700CQR1 has built-in digital signal processing that supports embedded programmable event functions such as freefall detection, pulse/tap detection, orientation detection and magnetic sensing to aid applications such as indoor navigation, user-interface control, or shock and vibration monitoring in industrial equipment. Figure 1 shows the main blocks and DSP functions of this device. Complementing Freescale’s portfolio of MEMS sensors, the Sensor Fusion Library for Kinetis Microcontrollers provides advanced functions for computing device orientation, linear acceleration, and magnetic interference on the microcontroller connected to the sensor module.
Figure 1: The DSP embedded in the FXOS8700CQR1 performs basic processing on magnetic and accelerometer sensor data.
InvenSense has implemented a Digital Motion Processor (DMP) unit alongside a 3-axis gyroscope and 3-axis accelerometer in its MPU-6500™ Motion Tracking Device, as shown in Figure 2. The DMP is able to run motion-processing algorithms with low latency, and provides a number of features, including gesture recognition using programmable interrupts, a low-power screen rotation algorithm that can calculate screen orientation without intervention from the main processor, and a pedometer capable of maintaining step count while the host processor sleeps.
Figure 2: The MPU-6500 integrates the first generation of the InvenSense DMP, which is capable of offloading sensor-fusion processing from a sensor hub.
The DMP integrated in the InvenSense MPU-6500 heralds the arrival of a new generation of sensors that are capable of performing more extensive sensor fusion with no sensor hub connected. Performing sensor-fusion processing in the sensor module can help to reduce system power consumption, reduce reaction times and simplify application design. Bill-of-materials costs and board space can also be saved. Both InvenSense and ST have announced 6-axis inertial modules that meet the requirements of Android 4.4, featuring built-in motion processing that offloads work from both the application processor and the sensor hub.
With growing demand for context-sensitive features across a broad range of applications and markets, approaches to connecting sensors are coming under close scrutiny and new techniques are emerging that help to save power and component count. The latest mobile operating systems illustrate how “always-on” sensing can be achieved at minimal additional power consumption.