Category: Software, Business, Microsoft, Infrastructure, Apple, Artificial Intelligence

https://aws.amazon.com/?utm_content=inline-mention last month expanded its https://thenewstack.io/amazon-web-services-takes-the-silicon-wars-to-the-cloud/, highlighted by the introduction of the giant cloud provider's third-generation Arm-based https://thenewstack.io/aws-graviton-marks-the-emergence-of-arm-for-cloud-native-workloads/, which will power new cloud instances aimed at compute-intensive workloads like high-performance computing (HPC), scientific modeling, analytics and CPU-based machine learning inferencing.

At the same time, AWS CEO https://www.linkedin.com/in/adamselipsky/ announced new Trn1 instances, which run on the company's year-old Trainium chips and target machine learning training workloads. He also touted the price-performance capabilities of the Inf1 instances launched in 2019, which leverage the company's Inferentia chips for machine learning inferencing tasks.

The company also announced storage-optimized EC2 instances — Im4gn/Is4gen/I4i — based on its Nitro solid-state drives (SSDs), which promise improved storage performance for I/O-intensive workloads in the AWS cloud.

The new processors and EC2 instances are the latest step in AWS' years-long effort to build its own processors to run both in its cloud instances and in its Outposts infrastructure, which is designed to deliver AWS services and connectivity to on-premises data centers at a time when enterprise adoption of hybrid cloud models is growing rapidly. All this comes five years after AWS bought Israeli startup Annapurna Labs in 2016, making the company the foundation of its chip-making efforts.