AWS adopts Nvidia’s Tesla T4 chip for AI inference

Amazon Web Services (AWS) today announced new EC2 G4 instances powered by Nvidia Tesla T4 GPUs, which it says will be available to customers in the coming weeks. The T4 will also be available through the Amazon Elastic Container Service for Kubernetes.

“It will be featuring Nvidia T4 processors and really designed for machine learning and to help our customers shrink the time that it takes to do inference at the edge — where that response time really matters — but also reduce the cost,” AWS VP of compute Matt Garman said onstage today during the keynote address at San Jose State University.

The new instances will be able to harness up to eight T4 GPUs simultaneously in the cloud.

Nvidia debuted the T4 for datacenters last September. The Tesla T4 is based on Nvidia's Turing architecture and packs 2,560 CUDA cores and 320 Tensor cores, which Nvidia says give it the power to process queries nearly 40 times faster than a CPU.

Since its debut, the GPU has been incorporated into datacenter offerings from companies like Cisco, Dell EMC, and Hewlett Packard Enterprise.


Above: Nvidia T4

Nvidia also announced general availability of its Constellation simulation platform for autonomous driving, the Safety Force Field driving policy for autonomous vehicles, the Jetson Nano computer for embedded devices, and the reorganization of more than 40 Nvidia deep learning acceleration libraries under the new umbrella name CUDA-X AI.

CUDA-X AI libraries work with popular frameworks like MXNet, PyTorch, and TensorFlow.

Also today: Nvidia researchers introduced GauGAN, an AI system trained on 1 million Flickr photos that can create lifelike landscape images.