Supermicro SYS-6049GP-TRT GPU Server Featured on CRN’s 2018 Hottest Enterprise Servers List
2018 is almost over, and Computer Reseller News (CRN) has published its list of the "Hottest" servers manufacturers brought to market in 2018. Supermicro is featured on the list with its SYS-6049GP-TRT model, which uniquely supports up to 20 NVIDIA Tesla T4 Tensor Core GPUs. With this unprecedented GPU density, it is a great machine for Artificial Intelligence workloads and IoT applications.
The 4U beast supports a pair of Intel Xeon Scalable processors, up to 3TB of DDR4 memory, and 24 x 3.5″ hard drives for storing processed data. The system has 4 redundant hot-swap power supplies to feed the hunger of the GPUs.
Supermicro’s SuperServer SYS-6049GP-TRT provides the superior performance required to accelerate the diverse applications of modern AI. The performance-optimized 4U SuperServer 6049GP-TRT dramatically increases the density of GPU server platforms for data center deployments supporting deep learning and inference applications. As more and more industries deploy artificial intelligence, they are looking for high-density servers optimized for inference. The 6049GP-TRT is the optimal platform to lead the transition from training deep neural networks to deploying artificial intelligence in real-world applications such as facial recognition and language translation.
Supermicro has an entire family of 4U GPU systems that support the ultra-efficient Tesla T4, which is designed to accelerate inference workloads in any scale-out server. The hardware-accelerated transcode engine in the Tesla T4 delivers multiple HD video streams in real time and allows deep learning to be integrated into the video transcoding pipeline, enabling a new class of smart video applications. As deep learning shapes our world like no other computing model in history, deeper and more complex neural networks are being trained on exponentially larger volumes of data. To achieve responsiveness, these models are deployed on powerful Supermicro GPU servers to deliver maximum throughput for inference workloads.
See CRN’s review here