How IoT Endpoints Measure an Environment

The Internet of Things encompasses a huge number and variety of endpoints. These can be passive, such as read-only devices and sensors, or active, such as those that report status, alarms, and alerts. Endpoints can also be used to control and change operations, and their use can increase efficiency, reduce costs, and improve safety.

I interviewed Andy Souders, SVP Products & Strategy at Savi, to learn about the company's deployment of sensors and its support of IoT devices. Savi offers sensor analytics solutions that create operational intelligence from the Internet of Things.

Andy, would you define what you mean by sensors?
We like to call sensors "data producers". In most cases, they are "things" that are put on another "thing" to measure "something" about that "thing."

The more traditional definition of a "sensor" is a device that collects information about the physical environment. In some cases that information might be logged, read, and/or transmitted in real time via a radio transmitter/receiver to a network. In addition to this traditional definition, there are also non-physical sensors such as social media. Facebook and Twitter are examples of "social sensors" since they measure or "sense" sentiment or mood. A sensor is less about the specific technology deployed and more about the data it produces.

What do sensors measure?
Anything and everything that can be measured. Sensors can measure the location and the state of "things," where state might be temperature, humidity, shock, vibration, tilt, altitude, weight, specific gravity, barometric pressure, etc. A non-physical "social" sensor like Waze is an example of measuring the context around traffic patterns.
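
To make that concrete, here is a minimal sketch, assuming a simple Python data model of my own (not Savi's), of a single reading that combines location with a few of the state fields mentioned above:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class SensorReading:
    """One reading from a 'data producer': where the thing is and what state it is in."""
    sensor_id: str                        # unique identifier for the sensor itself
    asset_id: str                         # the "thing" the sensor is attached to
    timestamp: datetime                   # when the measurement was taken (UTC)
    latitude: Optional[float] = None      # location, if the sensor reports it
    longitude: Optional[float] = None
    temperature_c: Optional[float] = None
    humidity_pct: Optional[float] = None
    shock_g: Optional[float] = None       # peak acceleration, e.g. from a drop
    tilt_deg: Optional[float] = None

# Example: a reading from a temperature/humidity logger on a shipment
reading = SensorReading(
    sensor_id="TH-0042",
    asset_id="SHIPMENT-19",
    timestamp=datetime.now(timezone.utc),
    latitude=-1.2921,
    longitude=36.8219,
    temperature_c=4.7,
    humidity_pct=61.0,
)
print(reading)
```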

What network technologies are used to access the sensors (Cellular, satellite, Wi-Fi...)?
Nearly every radio frequency protocol across the spectrum, from Low Frequency (LF) through Ultra High Frequency (UHF), has been used. This includes reading data from devices that communicate over active and passive Radio Frequency Identification (aRFID/pRFID), Zigbee, Bluetooth, cellular, Wi-Fi, satellite, and beyond.

Are there smart and dumb sensors? How do these affect the network?
Just as in analytics there is no such thing as a "dumb" number, there is no such thing as a "dumb" sensor. Regardless of the type of sensor and its specific technology, the biggest issue from a Wireless Sensor Network and Internet of Things (IoT) perspective is that both "smart" and "dumb" sensors need to communicate with a centralized network and need to be uniquely identifiable. This leads to significant challenges in ingesting the data, which can arrive in structured, unstructured, and semi-structured formats and can be highly unreliable.
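
As a rough illustration of that ingestion challenge, and assuming entirely hypothetical payload formats (a JSON-emitting device and a comma-separated logger), a collector typically has to map whatever each sensor sends onto a common record keyed by a unique device ID and discard what it cannot parse:

```python
import json
from typing import Any, Dict, Optional

def normalize(raw: bytes, source: str) -> Optional[Dict[str, Any]]:
    """Map heterogeneous sensor payloads onto one record keyed by a unique device ID.

    Formats here are hypothetical: 'json' for structured devices, 'csv' for
    semi-structured loggers. Unparseable ("unreliable") payloads are dropped.
    """
    try:
        if source == "json":
            msg = json.loads(raw)
            return {"device_id": msg["id"], "temperature_c": float(msg["temp"])}
        if source == "csv":
            device_id, temp = raw.decode().strip().split(",")
            return {"device_id": device_id, "temperature_c": float(temp)}
    except (ValueError, KeyError):
        return None   # malformed payloads are common; discard or route to a dead-letter queue
    return None       # unknown format

print(normalize(b'{"id": "TH-0042", "temp": "4.7"}', "json"))
print(normalize(b"TH-0043,5.1", "csv"))
print(normalize(b"not json at all", "json"))  # returns None
```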

Sensors are the biggest catalyst for the "big data" explosion we're currently experiencing as part of the "third wave of the Internet," with some analysts predicting 50 billion connected devices by 2020. Without real standards to speak of, greater volumes of data will become increasingly difficult to ingest, let alone analyze.

Could you contrast rules-based vs. streaming sensors and their impact on the network?
Rules-based sensors can be considered "event-based" data producers that send data based upon some change to the environment they are monitoring. An example would be a sensor that uses an accelerometer to "wake up" and start broadcasting its position once it has been moved; at that point it could also act as a streaming sensor, continuing to broadcast its current location and perhaps temperature and humidity. Streaming sensors do not interpret any of the data they collect; they simply broadcast it to the network.

The idea behind rules-based sensors is to have the intelligence as close to the network edge as possible to minimize the impact on the network by reducing the amount of data transmitted and subsequent event processing.
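
Here is a minimal sketch of that edge rule, with simulated accelerometer and GPS readings standing in for real device drivers; the thresholds and payload fields are assumptions for illustration only:

```python
MOTION_THRESHOLD_G = 0.5   # hypothetical rule: accelerations above this mean "we are moving"
REPORT_COUNT = 3           # how many position reports to stream once awake

# Simulated accelerometer samples (in g); a real device would read its driver instead.
ACCEL_SAMPLES = [0.02, 0.03, 0.9, 0.8]

def read_position():
    """Stand-in for a real GPS fix."""
    return (-1.2921, 36.8219)

def broadcast(payload: dict) -> None:
    """Stand-in for the radio; transmitting is the costly step the edge rule minimizes."""
    print("TX:", payload)

for sample in ACCEL_SAMPLES:
    # Rules-based behavior: stay quiet until the accelerometer rule fires...
    if sample < MOTION_THRESHOLD_G:
        continue                       # nothing transmitted, no load on the network
    # ...then behave like a streaming sensor for a few reports.
    for _ in range(REPORT_COUNT):
        lat, lon = read_position()
        broadcast({"event": "in_motion", "lat": lat, "lon": lon, "accel_g": sample})
    break                              # demo: stop after the first wake-up
```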

Should the collection and analysis be decentralized or centralized?
When discussing analytics, it is important that we first establish the baseline and how the analytics will be used. The following are different flavors of analytics:

Descriptive Analytics--What happened?
This requires a low level of machine-based processing and high human involvement to interpret the data.

Example: Rank the performance of all carriers, using risk-based KPIs, for a major shipper.

Predictive Analytics--What will happen?
This requires a higher level of machine processing to predict the possibility of something happening, based upon an event and the context of that event.

Example: Revise the estimated time of arrival based upon the specific driver, vehicle, route and real-time location and weather.

Prescriptive Analytics--What action needs to happen?
This is where the real machine learning and automated decision support comes in.

Example: Derive the optimal departure time for each day of week to minimize journey time.
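
To make the descriptive flavor above concrete, here is a minimal sketch of the carrier-ranking example, using made-up shipment figures and an assumed risk-based KPI (not Savi's actual metrics):

```python
# Descriptive analytics sketch: rank carriers by a risk-based KPI ("What happened?").
# Data and KPI weights are invented for illustration.
carriers = {
    "Carrier A": {"late_deliveries": 12, "tamper_events": 1, "shipments": 400},
    "Carrier B": {"late_deliveries": 30, "tamper_events": 4, "shipments": 380},
    "Carrier C": {"late_deliveries": 8,  "tamper_events": 0, "shipments": 150},
}

def risk_score(stats: dict) -> float:
    """Lower is better: late deliveries and tamper events per shipment, weighted."""
    return (stats["late_deliveries"] + 5 * stats["tamper_events"]) / stats["shipments"]

ranking = sorted(carriers, key=lambda name: risk_score(carriers[name]))
for rank, name in enumerate(ranking, start=1):
    print(f"{rank}. {name}: risk score {risk_score(carriers[name]):.3f}")
```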

Using the above framework, the collection of sensor data can currently be decentralized; a data logger that stores the temperature and humidity of a shipment of vaccines during its journey is one example. Today the analysis of that data must be centralized, especially as you move up the analytics maturity curve to gain operational intelligence from the sensor data. As sensor technology advances, we will start to see ecosystems where limited analytics can be decentralized. Even then, centralized processing will still be necessary for more advanced analytics.

Could you provide two examples of the benefits of sensors connected to a network?
One easily understood example that demonstrates the benefits of sensors in a connected world is Google Traffic, where location and speed data is anonymously collected from hundreds of thousands of sensors to display real-time views of traffic flows. In this example, let's say that the sensor is an Android smartphone or an iPhone running Google Maps. Anyone who uses this functionality while driving knows that its benefits can include reduced travel time, lower fuel costs, and perhaps even lower blood pressure, especially if you live in a major metro area.

Another similar example is what Savi does today monitoring high-value assets moving across East Africa. In this case there are no smartphones, but rather container security devices that measure not only location and speed but also humidity, temperature, shock, and tampering. Savi uses the data from these sensors to track potential threats to the cargo being transported in real time, and it also applies that data over time to show the routes with the highest and lowest risk and to accurately predict the estimated time of arrival of those goods.

In addition to the benefits seen in the Google example above, this approach also reduces loss and theft through a combination of risk analytics and performance analytics. Both examples use streaming sensor data and apply a model-driven machine-learning approach to derive value.
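
As a rough sketch of the streaming pattern both examples rely on, and not Savi's actual rules, a consumer of container-sensor readings might flag potential threats as each reading arrives; the thresholds and field names here are assumptions:

```python
from typing import Dict, Iterable, Iterator

# Hypothetical thresholds; a real deployment would tune these per cargo type.
TEMP_RANGE_C = (2.0, 8.0)   # e.g. cold-chain limits
SHOCK_LIMIT_G = 3.0

def detect_threats(readings: Iterable[Dict]) -> Iterator[str]:
    """Yield an alert for each streamed reading that looks like a threat to the cargo."""
    for r in readings:
        if r.get("tamper"):
            yield f"{r['asset_id']}: tamper event at {r['lat']:.2f},{r['lon']:.2f}"
        temp = r.get("temperature_c")
        if temp is not None and not (TEMP_RANGE_C[0] <= temp <= TEMP_RANGE_C[1]):
            yield f"{r['asset_id']}: temperature out of range ({temp} C)"
        if r.get("shock_g", 0.0) > SHOCK_LIMIT_G:
            yield f"{r['asset_id']}: shock of {r['shock_g']} g"

stream = [
    {"asset_id": "CNT-7", "lat": -1.29, "lon": 36.82, "temperature_c": 5.0, "shock_g": 0.2, "tamper": False},
    {"asset_id": "CNT-7", "lat": -1.31, "lon": 36.90, "temperature_c": 9.4, "shock_g": 4.1, "tamper": True},
]
for alert in detect_threats(stream):
    print(alert)
```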

Should IT or operations be in charge of the sensor implementation and analysis?
Sensor data is ugly; it takes the "three V's" of big data (volume, velocity, and variety) to the extreme. The complexity of collecting, storing, and analyzing sensor data, both streaming and historical, is typically far beyond the capabilities of most IT organizations, which are used to the normal "row and column" structure of corporate data.

While some tools exist today to help IT, some in the form of IoT platforms, these only solve part of the problem. As with any build-versus-buy decision, the best approach is to consider outsourcing those services that are not a core competency of your organization.