Google puts Nvidia on high alert as it showcases Trillium, its rival AI chip, while promising to bring H200 Tensor Core GPUs within days
published 8 November 2024
Trillium offers substantial advancements over its TPU v5e predecessor
- Trillium offers 4x training boost, 3x inference improvement over TPU v5e
- Doubled HBM capacity and ICI bandwidth to support large language models
- Scales up to 256 chips per pod, ideal for extensive AI tasks
Google Cloud has unleashed its latest TPU, Trillium, the sixth-generation model in its custom AI chip lineup, designed to power advanced AI workloads.
First announced back in May 2024, Trillium is engineered to handle large-scale training, tuning, and inferencing with improved performance and cost efficiency.
The release forms part of Google Cloud’s AI Hypercomputer infrastructure, which integrates TPUs, GPUs, and CPUs alongside open software to meet the increasing demands of generative AI.
A3 Ultra VMs arriving soon
Trillium promises significant improvements over its predecessor, TPU v5e, with over a 4x boost in training performance and up to a 3x increase in inference throughput. It also delivers twice the HBM capacity and double the Interchip Interconnect (ICI) bandwidth, making it particularly well suited to large language models such as Gemma 2 and Llama, as well as compute-heavy inference workloads, including diffusion models such as Stable Diffusion XL.
Google is also keen to stress Trillium's energy efficiency, claiming a 67% improvement over the previous generation.
Google says its new TPU has demonstrated substantially improved performance in benchmark testing, delivering a 4x increase in training speeds for models such as Gemma 2-27b and Llama2-70B. For inference tasks, Trillium achieved 3x greater throughput than TPU v5e, particularly excelling in models that demand extensive computational resources.
Scaling is another strength of Trillium, according to Google. The TPU can link up to 256 chips in a single, high-bandwidth pod, expandable to thousands of chips within Google’s Jupiter data center network, providing near-linear scaling for extensive AI training tasks. With Multislice software, Trillium maintains consistent performance across hundreds of pods.
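For developers, a pod like this largely shows up as a bigger set of accelerators for frameworks such as JAX to partition work across. The snippet below is a minimal sketch of what that typically looks like; the mesh layout, axis name, and array shapes are illustrative assumptions, not settings from Google's announcement.

```python
# Minimal sketch: treating the chips in a TPU slice as a JAX device mesh.
# Axis names, batch sizes, and shapes here are illustrative assumptions.
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# On a multi-chip slice this lists every accelerator visible to the job.
devices = jax.devices()
print(f"Accelerators visible to JAX: {len(devices)}")

# Arrange the devices as a 1D "data" mesh for simple data parallelism;
# larger jobs would usually add a model-parallel axis as well.
mesh = Mesh(devices, axis_names=("data",))
sharding = NamedSharding(mesh, PartitionSpec("data"))

# Shard a (global_batch, features) array across chips along the batch axis.
x = jax.device_put(jnp.ones((len(devices) * 8, 1024)), sharding)

# jax.jit partitions the computation automatically from the input sharding.
y = jax.jit(lambda a: jnp.tanh(a) @ jnp.ones((1024, 1024)))(x)
print(y.shape)
```

The point of the sketch is simply that scaling from a handful of chips to a full pod is, in this programming model, mostly a matter of the device list growing while the sharded program stays the same.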
Tied in with the arrival of Trillium, Google also announced the A3 Ultra VMs featuring Nvidia H200 Tensor Core GPUs. Scheduled for preview this month, they will offer Google Cloud customers a high-performance GPU option within the tech giant's AI infrastructure.
You might also like
- Google Cloud: No-one can deliver business AI value like us
- Google’s TPU v5p chip is faster and has more memory and bandwidth
- Intel and Google Cloud team up to launch super-secure VMs
Wayne Williams is a freelancer writing news for TechRadar Pro. He has been writing about computers, technology, and the web for 30 years. In that time he wrote for most of the UK’s PC magazines, and launched, edited and published a number of them too.