NVIDIA Extends GPU Cloud Support For AI Researchers
By Bio-IT World Staff
December 4, 2017 | NVIDIA has extended NVIDIA GPU Cloud (NGC) support to NVIDIA TITAN, giving AI researchers using desktop GPUs access to NGC. The company has also added new software and other key updates to the NGC container registry, giving researchers a broader, more powerful set of tools to advance their AI and high-performance computing research and development efforts.
Customers using NVIDIA Pascal architecture-powered TITAN GPUs can sign up immediately for a no-charge NGC account and gain full access to a comprehensive catalog of GPU-optimized deep learning and HPC software and tools. Other supported computing platforms include NVIDIA DGX-1, DGX Station and NVIDIA Volta-enabled instances on Amazon EC2.
Software available through NGC’s rapidly expanding container registry includes NVIDIA-optimized deep learning frameworks such as TensorFlow and PyTorch, third-party managed HPC applications, NVIDIA HPC visualization tools, and NVIDIA’s programmable inference accelerator, NVIDIA TensorRT 3.0.
“We built NVIDIA GPU Cloud to give AI developers easy access to the software they need to do groundbreaking work,” said Jim McHugh, vice president and general manager of enterprise systems at NVIDIA, in a statement. “With GPU-optimized software now available to hundreds of thousands of researchers using NVIDIA desktop GPUs, NGC will be a catalyst for AI breakthroughs and a go-to resource for developers worldwide.”
New NGC Containers, Updates and Features
In addition to making NVIDIA TensorRT available on NGC’s container registry, NVIDIA announced the following NGC updates:
- Open Neural Network Exchange (ONNX) support for TensorRT
- Immediate support and availability for the first release of MXNet 1.0
- Availability of Baidu’s PaddlePaddle AI framework
ONNX is an open format, originally created by Facebook and Microsoft, that lets developers exchange models across different frameworks. For the TensorRT development container, NVIDIA created a converter that deploys ONNX models to the TensorRT inference engine, making it easier for application developers to deploy low-latency, high-throughput models with TensorRT.
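As a rough illustration of the first half of that workflow, the sketch below exports a small PyTorch model to an ONNX file using torch.onnx.export. It is a minimal example, not NVIDIA's converter: the network, input shape, and file name are hypothetical placeholders, and the resulting model.onnx is the kind of artifact a converter such as the one in the TensorRT container would then ingest to build an inference engine.

```python
# Minimal sketch: export a PyTorch model to ONNX so it can be handed to an
# ONNX-to-TensorRT converter. The model, input shape, and output path below
# are hypothetical placeholders for illustration only.
import torch
import torch.nn as nn

# Stand-in for a trained model.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# A dummy input fixes the graph's input shape for the ONNX exporter.
dummy_input = torch.randn(1, 128)

# Write the ONNX graph to disk; this file is what a TensorRT-side converter
# would consume to build a low-latency inference engine.
torch.onnx.export(model, dummy_input, "model.onnx")
```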
Together, these additions give developers a one-stop shop for software that supports a full spectrum of AI computing needs — from research and application development to training and deployment.
Launched in October, NGC is also available free of charge to users of NVIDIA Volta GPUs on Amazon Web Services and all NVIDIA DGX-1 and DGX Station customers. NVIDIA will continue to expand the reach of NGC over time.