
Nvidia Democratizes Accelerated Computing

Nvidia last week held the European version of its GPU Technology Conference (GTC) in Munich, and, as is typical, used the event as a forum for major news and innovation. The highlight of GTC Europe was the launch of RAPIDS, a collaborative initiative aimed at bringing accelerated computing to data analytics, a field that has yet to realize the benefits of graphics processing units, or GPUs, as they're more commonly known.

GPUs have been around for decades, but until recently they were used mostly to improve graphics, primarily in video games. Their use has exploded over the past few years as GPUs' ability to handle complex data sets has made them ideal for use cases such as deep learning, real-time graphics rendering, and autonomous cars. A single GPU can perform these kinds of tasks orders of magnitude faster than hundreds of CPUs, hence the term "accelerated computing."

The rise of digital transformation has put an emphasis on data analytics, as being able to find insights quickly and act on them can make the difference between being a market leader and going out of business. For example, machine learning and analytics have become a core part of every contact center vendor's strategy. I believe that as more things get connected, customer service will become heavily reliant on faster analytics, and that will require entirely new platforms.

As companies place greater emphasis on analytics, the size of their data sets has grown. They analyze the data to help solve any number of complex business challenges, from predicting credit card fraud and calculating the risk of approving mortgages to planning logistics, forecasting retail inventory, and understanding customer buying behavior. But because CPU-based systems can take hours or even days to run models, they're holding companies back from being able to get quick insights from the data they're amassing.

During his keynote, Nvidia CEO Jensen Huang shared the graphic below, created by a data scientist to show the amount of time spent waiting for analytics models to run (shown in green) versus actual work (shown in red) when CPUs and GPUs are in use.

On the CPU side, you can clearly see that data scientists spend a lot of time waiting for models to run. Given the cost of data scientists, minimizing this time should be of great interest to companies. While Huang intended this as a lighthearted look at the life of a data scientist, it's also true to life. At the event, I spoke to a data scientist from the D.C. area who told me that once his model starts, he goes and does other things while he waits hours for it to complete.

RAPIDS is meant to change that. It's open-source software supported by a wide range of companies, including Databricks, Quansight, Anaconda, Hewlett Packard Enterprise (HPE), IBM, and Oracle, and designed to integrate with and accelerate processing on the open-source libraries and platforms commonly used by data scientists. Because RAPIDS works with existing tools, customers can replace CPU-based systems with this software and, with minimal changes, see a huge boost in the speed of analytics -- which, in turn, means data scientists can spend more time working and less time waiting. The graphic below shows how all the components fit together. This isn't quite plug and play, but it's pretty close.
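To illustrate the "minimal changes" point, here's a rough sketch -- not from Nvidia's announcement -- of how a familiar CPU-based step translates to its GPU equivalent; the file and column names are hypothetical placeholders.

```python
# Hypothetical sketch: the CPU-based call and its GPU-based equivalent look nearly identical.
import pandas as pd   # traditional CPU DataFrame library
import cudf           # RAPIDS GPU DataFrame library

csv_path = "sales.csv"  # hypothetical input file

cpu_df = pd.read_csv(csv_path)    # data loaded into host (CPU) memory
gpu_df = cudf.read_csv(csv_path)  # same call shape, data loaded into GPU memory

# Everyday analytics operations carry over with the same syntax.
cpu_totals = cpu_df.groupby("region")["revenue"].sum()
gpu_totals = gpu_df.groupby("region")["revenue"].sum()
```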

The primary component of RAPIDS is Nvidia's CUDA-based accelerated machine learning libraries. Nvidia is initially accelerating the five most commonly used libraries; this strategy is intended to optimize training and improve model accuracy. RAPIDS gives data scientists the ability to run their entire data pipelines on GPUs, obviating the need to move data back and forth between systems.

Nvidia based RAPIDS on the Python programming language familiar to most data scientists. It also has interfaces comparable to Scikit-learn and Pandas, two of the most popular data analysis libraries for Python, and its data processing is built on Apache Arrow, another widely used technology, taking the friction out of deployment. RAPIDS can run on a single GPU system and scale up to multiple nodes on products like Nvidia's DGX-1 and DGX-2 servers.
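As a rough sketch of what those Pandas- and Scikit-learn-like interfaces look like in practice, the snippet below keeps a toy pipeline -- loading, cleaning, training, and scoring -- entirely in GPU memory. The file name, column names, and model choice are my own illustrative assumptions, not details from the announcement.

```python
import cudf                                        # GPU DataFrame, Pandas-style API
from cuml.linear_model import LogisticRegression   # GPU ML, Scikit-learn-style API

# Load and clean the data on the GPU (hypothetical fraud-scoring data set).
df = cudf.read_csv("transactions.csv")
df = df.dropna()

features = df[["amount", "account_age_days", "purchases_last_30d"]]
labels = df["is_fraud"]

# Train and score without copying data back to the CPU; the whole pipeline
# stays in GPU memory, which is the data-movement point made above.
model = LogisticRegression()
model.fit(features, labels)
scores = model.predict_proba(features)
```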

Uptake of RAPIDS will come in data-dependent verticals such as retail, logistics, high tech, and financial services. During his keynote, Huang pointed out that Walmart, IBM, and HPE have been beta customers and instrumental in the development of RAPIDS. Walmart is currently the largest company in the world, and may be the most sophisticated with respect to data analytics. The fact that it's heavily invested in RAPIDS speaks volumes about the platform's potential.

I believe the use of RAPIDS to accelerate machine learning-based analytics will act as a catalyst to broader deep learning adoption. Success with analytics will incent companies to collect even more data, causing the benefits of traditional analytics to plateau. When this happens, they'll need to apply deep learning to discover the insights hidden in their massive volumes of data.

One of the interesting implications of RAPIDS for Nvidia is that it could drive significant uptake of the DGX Station, a desktop form-factor version of its popular DGX-1 server. Today, data scientists have no choice but to put the server in the data center, as the data sets they're working with are far too large for traditional computers. I've talked to some data scientists who have been early adopters of DGX Stations, and they like the desktop form factor because they can set up the workstation themselves and work on local machines. If a reset is required, there's no need to call IT, as the box is at the desk. DGX Station is ideally suited for use cases such as business intelligence, where the data set is localized.

Given the potential that accelerated computing can bring to machine learning and data analytics, you might wonder why Nvidia hasn't previously offered something like this. I believe a confluence of factors came together in a "perfect storm" to enable it. These include the development of NVLink, which allows multiple GPUs to be bound together in a fabric; GPU memory growing large enough to handle bigger data sets; and the standardization of data formats, which let the Arrow libraries become the foundation of RAPIDS' data processing. If any one of these hadn't happened, RAPIDS would still be just an idea.

As I mentioned previously, the use of GPUs has exploded, but adoption has been limited to a handful of verticals and use cases, such as graphics rendering and deep learning. RAPIDS creates a use case that can democratize accelerated computing and make it available to every company.

Follow Zeus Kerravala on Twitter!
@zkerravala