NetApp on Wednesday expanded its on-premises-to-cloud capabilities with the introduction of a new artificial intelligence architecture that combines its flagship all-NVMe flash storage array with Nvidia's DGX GPU technology.
The new NetApp Ontap AI proven architecture is an integrated offering that allows customers to deploy artificial intelligence, machine learning, and deep learning technology from the edge to the core to the cloud, said Octavian Tanase, senior vice president for Ontap at the Sunnyvale, Calif.-based storage vendor.
NetApp Ontap AI combines NetApp's new AFF A800 all-flash storage array with end-to-end NVMe performance, Nvidia's DGX GPU nodes, and 100-Gbit Ethernet switches from Cisco into a system that solution providers can integrate in the field based on a reference architecture the vendors provide, Tanase told CRN.
The new offering fits well with NetApp's Data Fabric architecture, which allows data to be easily migrated and managed between on-premises data centers, private clouds, public clouds, and hybrid clouds, he said.
"Data is the new gold of the 21st Century," he said. "A lot of companies are looking to leverage data to make faster and better decisions. And a lot of data is increasingly created at the edge or in the cloud. So they need new ways to move data to where it can be used. And our Data Fabric is perfect for edge to core to cloud deployment."
By bringing together NetApp storage, Nvidia GPU technology, and high-speed Cisco networking, the companies are bringing customers a simple, pre-designed system that takes the guesswork out of deployment, Tanase said. NetApp Ontap AI also offers seamless scalability for managing large data lakes stretching from data centers to the cloud, he said.
Integration of the NetApp Ontap AI is done in the field, with over 80 percent of deployments expected to be handled by NetApp's channel partners, Tanase said.
While NetApp is publishing reference architectures for NetApp Ontap AI, solution providers will have to send purchase orders to NetApp, Nvidia, and Cisco, he said. The offering includes scripts for configuration with customers' software.
"There are no special certifications needed," he said. "But channel partners will need expertise to deploy the whole data pipeline."
NetApp isn't the first storage vendor to work with Nvidia to develop integrated artificial intelligence offerings.
Rival all-flash storage vendor Pure Storage in March combined its FlashBlade all-flash storage array with Nvidia GPU nodes and Arista Networks networking to introduce AIRI, or Artificial Intelligence Ready Infrastructure.
Mountain View, Calif.-based Pure Storage followed up in May with AIRI Mini, a more entry-level version that comes with a choice of Arista or Cisco networking.
Tanase said that NetApp Ontap AI is better suited to edge to core to cloud deployments than Pure Storage's AIRI, given its ability to be deployed with NetApp's Data Fabric architecture.
The partnership between NetApp and Nvidia makes sense on a number of levels, said John Woodall, vice president of engineering at Integrated Archive Systems, a Palo Alto, Calif.-based solution provider and long-time NetApp partner that also works with Nvidia.
The Nvidia DGX GPU is a real supercomputer in a box, Woodall told CRN.
"If you are doing that level of compute, having fast storage is critical," he said. "NetApp's A800 is the first end-to-end NVMe array, so it makes sense."
Customers are looking to do things with artificial intelligence such as testing algorithms or data in the cloud before deploying them in production, and that is where NetApp really shines among its competitors, Woodall said.
"NetApp Cloud Volumes work on all three hyperscalers: Amazon, Azure, and Google," he said. "Also, customers' data often lives in the cloud, and it doesn't always make sense to haul it to somewhere else to use. NetApp's data lake takes care of that."
The NetApp Ontap AI really shows the power of NetApp's Data Fabric architecture, and in particular shows how future-proof it is, Woodall said.
"Data Fabric is the consistent part of the story," he said. "This new artificial intelligence architecture speaks to an important tenet of Data Fabric: It's future-proof. If we were talking about Data Fabric three or four years ago, we would not have been thinking about it with artificial intelligence. Yet here we are."
The fact that everything is tied together with 100-Gbit Ethernet is important when dealing with high-performance all-flash storage and GPU technologies, Woodall said.
"With 100-Gbit Ethernet, we finally find ourselves with the compute, the storage, and the networking performance to deploy infrastructures for artificial intelligence and machine learning," he said. "And with NetApp's focus on the cloud, customers get a choice of where to put the data to work."