June 14, 2023

AMD takes on Nvidia with MI300X AI GPU – could it land AWS as a client?

The new device is launching later this year. The US chipmaker hopes to grab a bigger slice of the expanding AI market.

By Matthew Gooding

US chipmaker AMD has revealed details of a new GPU for artificial intelligence workloads. It believes the MI300X can grab a slice of the AI chip market, which is currently dominated by rival Nvidia. And though the company did not reveal any big-name customers that will be using the new chips at launch, Amazon’s cloud unit AWS is reportedly considering whether to deploy the MI300X in its data centres.

AMD has unveiled its MI300X AI GPU. (Photo courtesy of AMD)

The new GPU was launched at AMD’s Data Centre and AI Technology event on Tuesday, along with a raft of products which it hopes will help attract customers building and running AI models. Demand for AI chips has rocketed since the launch of ChatGPT and the subsequent avalanche of generative AI services, and so far Nvidia has been the chipmaker that has cashed in.

AMD reveals details of the MI300X AI GPU

Available later this year, the MI300X will feature 192GB of HBM3 memory. By comparison, Nvidia’s H100 – the company’s newest AI accelerator – has 80GB of memory, though Nvidia does offer an option to link two H100s together for an aggregate 188GB. The size of the chip’s memory means customers can run an entire large language model on a single GPU, AMD says. It gives the example of Falcon-40B, an open-source LLM with 40bn parameters, which requires around 90GB of memory. That said, Falcon-40B is a relatively small model compared to the likes of GPT-4, whose parameter count OpenAI has not disclosed but is widely believed to be far larger.
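To see why the memory figures matter, a rough back-of-the-envelope sketch helps: a 40bn-parameter model stored in 16-bit precision needs about 80GB for its weights alone, before any runtime overhead. The numbers below are illustrative assumptions, not AMD or Nvidia specifications.

```python
# Rough estimate of the memory needed just to hold an LLM's weights.
# Assumption: 16-bit (fp16/bf16) weights, i.e. 2 bytes per parameter.

def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Memory required for the model weights alone, in gigabytes."""
    return num_params * bytes_per_param / 1e9

falcon_40b = weight_memory_gb(40e9)
print(f"Falcon-40B weights (fp16): ~{falcon_40b:.0f} GB")   # ~80 GB

# Add a modest margin for activations and the KV cache during inference;
# the ~90GB figure AMD cites is in this ballpark.
print(f"With ~12% runtime overhead: ~{falcon_40b * 1.125:.0f} GB")  # ~90 GB

# On these assumptions, a single 80GB H100 falls short of the ~90GB needed,
# while a 192GB MI300X would have headroom to spare.
```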

Elsewhere, AMD will be offering customers access to its Instinct platform, which brings together eight MI300X accelerators in an industry-standard design for AI inference and training. The company believes that making its GPUs simple to plug into existing data centre infrastructure will present a more flexible and affordable alternative to Nvidia, which does sell individual chips but is more focused on convincing cloud providers offering AI services to sign up to its DGX Cloud platform.

At the event, AMD also showcased its ROCm software ecosystem for data centre accelerators, highlighting collaborations with the PyTorch Foundation open-source AI group, and AI tool developer Hugging Face.
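In practice, the PyTorch collaboration means that ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda interface used on Nvidia hardware (backed by HIP), so most existing model code runs without changes. A minimal sketch, assuming a ROCm-enabled PyTorch install:

```python
import torch

# On a ROCm build of PyTorch, AMD accelerators are surfaced through the
# familiar torch.cuda API, so CUDA-style code needs no modification.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")
if device.type == "cuda":
    print(f"Device name: {torch.cuda.get_device_name(0)}")

# A tiny forward pass to confirm the accelerator is usable.
model = torch.nn.Linear(1024, 1024).to(device)
x = torch.randn(8, 1024, device=device)
with torch.no_grad():
    y = model(x)
print(y.shape)  # torch.Size([8, 1024])
```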

“AI is the defining technology shaping the next generation of computing and the largest strategic growth opportunity for AMD,” said AMD CEO Lisa Su. “We are laser-focused on accelerating the deployment of AMD AI platforms at scale in the data centre, led by the launch of our Instinct MI300 accelerators planned for later this year and the growing ecosystem of enterprise-ready AI software optimised for our hardware.”

Can AMD grab a slice of Nvidia’s AI pie?

The MI300X has yet to be fully benchmarked, so it remains to be seen whether it will match up to the performance of Nvidia’s GPUs. The accelerator is launching without the announcement of a high-profile customer, a factor which perhaps contributed to the company’s share price dipping following the announcement.


However, it has apparently piqued the interest of AWS, by far the largest player in the public cloud market. Dave Brown, who heads the company’s Elastic Compute Cloud (EC2) business, said at the AMD event that discussions were taking place between the two businesses.

“We’re still working together on where exactly that will land between AWS and AMD, but it’s something that our teams are working together on,” Brown said in an interview with Reuters. “That’s where we’ve benefited from some of the work that they’ve done around the design that plugs into existing systems.”

Brown added that AWS would not be using Nvidia’s DGX Cloud, and would instead continue building its own systems featuring the H100.


