In my next set of blog posts, I will explore the world of multi-sensor data fusion. Like many technologies, multi-sensor data fusion is an attempt to take a phenomenon that exists in nature and translate it into something usable by man-made systems. The best example of multi-sensor data fusion (and the closest to us) is our brain. Every second of every day, our brain continuously monitors sensors (our eyes, ears, nose, and so on) and processes the data they generate. With this data, we acquire information about our surroundings and make decisions.

Consider a bottle of water and a glass of vinegar. If you need to precisely identify both the container and the content, you absolutely need more than one sense. With the sense of sight, you can easily distinguish the glass from the bottle, but it is impossible to determine which contains the water and which contains the vinegar. Only by adding another sense, like smell or taste, can you confidently identify both the containers and their contents.

Multi-sensor data fusion tries to replicate the work performed by our brain: it takes information acquired by a number of different sensors and fuses it together, taking advantage of their different points of view.

The way I see it, a multi-sensor data fusion system has three main components: sensors, sensor data processing, and data fusion. Let’s examine each type of component by imagining an overly simplistic example, in this case a theoretical security system that identifies people based on a number of different attributes.

Figure 1: A multi-sensor data fusion system



Sensors

The sensors in a multi-sensor data fusion system depend heavily on the application for which the system is built. Our system, being designed to identify people, would use a camera to perform facial recognition, a microphone to perform voice recognition, and an imaging device for fingerprint capture.

Sensor data processing

The sensor data processing component makes sense of the data generated by a single sensor, without any knowledge of the data received by other sensors. The processing can take place in the same processing unit as the data fusion, on a remote processing unit, or even directly on the sensor itself (in the case of intelligent sensors).

In our example, let’s assume the output of each sensor data processing component is the ID or the name of a specific person.
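As a rough Python sketch (all names and lookup tables here are invented for illustration, not taken from any real library), each sensor data processing component can be modeled as a function that maps its own sensor's features to a person ID, or None when no identification is possible:

```python
from typing import Optional

# Fake reference databases standing in for real recognition models;
# each component only ever consults its own sensor's data.
FACE_DB = {"face_vec_42": "alice"}
VOICE_DB = {"voice_print_7": "alice"}
FINGER_DB = {"print_hash_9": "alice"}

def face_recognition(features: str) -> Optional[str]:
    """Facial-recognition component: returns a person ID or None."""
    return FACE_DB.get(features)

def voice_recognition(features: str) -> Optional[str]:
    """Voice-recognition component: returns a person ID or None."""
    return VOICE_DB.get(features)

def fingerprint_match(features: str) -> Optional[str]:
    """Fingerprint-matching component: returns a person ID or None."""
    return FINGER_DB.get(features)
```

Note that each function is completely independent of the others, which mirrors the architecture described above: no processing component sees another sensor's data.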

Data fusion

The data fusion component takes the outputs from sensor data processing components and uses them to make a decision. The method used to reach the decision must be carefully considered in order for our system to be as reliable as possible. Many factors can influence decision-making. In our example, we can easily imagine two very simple ways to make a decision about a person’s identity:

  1. If all three sensor components select the same ID for the person, the identity is confirmed.
  2. The identity of a person is confirmed if the fingerprint scan makes an identification and at least one of the other two components confirms it.
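These two decision rules can be sketched in minimal Python (an illustration under the assumption above that each processing component outputs a person ID or None):

```python
from typing import Optional

def rule_unanimous(face: Optional[str], voice: Optional[str],
                   finger: Optional[str]) -> Optional[str]:
    """Rule 1: confirm an identity only if all three components agree."""
    if face is not None and face == voice == finger:
        return face
    return None

def rule_fingerprint_plus_one(face: Optional[str], voice: Optional[str],
                              finger: Optional[str]) -> Optional[str]:
    """Rule 2: confirm if the fingerprint scan identifies someone and at
    least one of the other two components reports the same ID."""
    if finger is not None and finger in (face, voice):
        return finger
    return None
```

With rule 2, `rule_fingerprint_plus_one("alice", None, "alice")` confirms the identity even though voice recognition failed, while rule 1 would reject the same inputs.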

If our system uses the first method (all three sensor components need to select the same ID), obvious flaws quickly become apparent. What happens if the room is very dark and the camera can’t see all the features needed to perform an identification? Chances are, in this case, that the identity of the person will not be confirmed.

There are many ways around this limitation. You could add a new sensor data processing component that extracts the luminosity value from the camera sensor. By adding this parameter to the decision-making process at the data fusion level, the system could rule that if the image is too dark, the output from the facial recognition component is disregarded and only the two remaining sensors are required to confirm an identity.
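Continuing the sketch, a luminosity-aware fusion rule might look like this (the threshold value is an arbitrary assumption, as is the 0-to-1 luminosity scale):

```python
from typing import Optional

# Assumed cutoff below which the image is considered "too dark" for
# facial recognition; a real system would calibrate this value.
LUMINOSITY_THRESHOLD = 0.2

def rule_luminosity_aware(face: Optional[str], voice: Optional[str],
                          finger: Optional[str],
                          luminosity: float) -> Optional[str]:
    """If the image is too dark, ignore facial recognition and require the
    two remaining components to agree; otherwise require all three."""
    if luminosity < LUMINOSITY_THRESHOLD:
        if voice is not None and voice == finger:
            return voice
        return None
    if face is not None and face == voice == finger:
        return face
    return None
```

The key point is that the fallback logic lives entirely in the data fusion component; the individual sensor processing components are unchanged.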

A completely different approach would be to generate a notification instructing security personnel to compare the actual person against a picture on file, relying on human judgment in place of the camera.


Multi-sensor data fusion is a very interesting and vast field. In this blog post, I used a trivial example to introduce it in my own way. It’s easy to see the possibilities multi-sensor data fusion brings, and countless applications can use it to enhance decision-making in different situations.