The surge in machine-generated data has created overwhelming demand for scalable AI infrastructure, putting pressure on both compute and connectivity within data centres. As the power requirements of AI workloads rise, traditional monolithic integrated circuits (ICs) are running into reticle-size, yield, and cost limits. Chiplet architectures, which combine modular, custom components over low-latency, high-bandwidth interconnects, are proving essential for scaling AI efficiently.
Data generation is growing exponentially, driven in particular by autonomous sensors, video, and financial analytics. This has prompted a shift towards AI for data processing, challenging the limits of traditional compute infrastructure. The growing demand for power and the associated environmental impact, with individual AI training runs generating significant CO2 emissions, have highlighted the need for more energy-efficient hardware. Furthermore, hardware costs are rising sharply, particularly as large-scale AI deployments require multiple GPUs and massive server investments.
AI’s power consumption is accelerating, with data centres projected to consume a significant portion of global electricity in the coming years; by 2030, AI is expected to account for a large share of US electricity consumption. High-power devices such as Nvidia’s H100 GPU, with a TDP of up to 700 W and near-continuous utilisation in training clusters, sit at the centre of this challenge. This makes the push for low-power AI design crucial in reducing both costs and environmental impact.
To meet these demands, chiplet-based designs have emerged as a key solution. Unlike monolithic ICs, chiplets allow for smaller, modular components to be combined for enhanced performance and lower costs. By separating different functions into individual chiplets—such as memory and logic processing—AI chips can be designed and tested more efficiently. This modular approach also improves yield and reduces manufacturing costs by up to 40%, making it more feasible to scale AI infrastructure across millions of devices in data centres.
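The yield advantage of smaller dies can be illustrated with the classic Poisson yield model, in which the probability that a die is defect-free falls exponentially with its area. The die sizes and defect density below are hypothetical, chosen purely to show the effect; they are not figures from the article.

```python
import math

def die_yield(area_mm2: float, defect_density: float) -> float:
    """Poisson yield model: probability that a die of the given
    area contains zero fabrication defects."""
    return math.exp(-defect_density * area_mm2)

# Hypothetical numbers for illustration: an 800 mm^2 monolithic die
# vs. four 200 mm^2 chiplets, at an assumed defect density of
# 0.001 defects per mm^2.
D = 0.001
monolithic_yield = die_yield(800, D)  # ~0.45
chiplet_yield = die_yield(200, D)     # ~0.82 per chiplet

print(f"monolithic: {monolithic_yield:.2f}, per-chiplet: {chiplet_yield:.2f}")
```

Although several chiplets must be assembled into one package, each can be tested as a known-good die before packaging, so the higher per-chiplet yield, rather than the low yield of one large die, dominates silicon cost. This is the mechanism behind the manufacturing-cost reduction the article cites.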
The UCIe (Universal Chiplet Interconnect Express) standard is vital for enabling chiplets to communicate efficiently, ensuring high bandwidth and low power consumption while maintaining signal integrity across interconnected tiles. This interconnectivity supports both “scaling up” (adding more resources to individual servers) and “scaling out” (adding more servers to distribute workloads). As data processing requirements grow, this combination of scaling strategies drives significant improvements in both compute power and networking demands.
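The two scaling strategies can be sketched with a toy capacity model; the node counts and per-GPU figures below are assumptions for illustration only, not numbers from the article.

```python
def cluster_flops(nodes: int, gpus_per_node: int, flops_per_gpu: float) -> float:
    """Aggregate peak compute of a cluster (toy model that ignores
    interconnect and software overheads)."""
    return nodes * gpus_per_node * flops_per_gpu

# Assumed baseline: 4 nodes, 8 GPUs each, 1 PFLOP/s per GPU.
baseline = cluster_flops(4, 8, 1e15)

# Scaling up: more resources per individual server.
scaled_up = cluster_flops(4, 16, 1e15)

# Scaling out: more servers sharing the workload.
scaled_out = cluster_flops(8, 8, 1e15)
```

Both strategies double peak compute in this sketch, but they stress different parts of the system: scaling up leans on dense in-package links such as UCIe, while scaling out leans on the data-centre network, which is why the article treats compute and networking demands together.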
As AI workloads increase, so too do the challenges in data centre connectivity. Traditional data centre networks are evolving to meet these needs, with optical fibres replacing electrical connections to handle higher data rates. Chiplet-based designs are facilitating this transition by allowing for tighter integration of components, improving power efficiency and lowering costs. Co-packaged optics (CPO) are also playing a critical role in this transformation, offering direct optical connections to AI accelerators and switches.
AI is increasingly being distributed across geographically separated sites, creating new challenges in connectivity. The rise of regional data centres and distributed training methods, which preserve privacy by keeping sensitive data local, requires new broadband solutions. Coherent-Lite transceivers, which leverage chiplet designs, are enabling this distributed infrastructure by reducing power consumption while maintaining long-range optical connectivity.
Alphawave Semi is contributing to this shift by developing UCIe IP and offering high-performance chiplets designed for AI and high-performance computing (HPC) applications. Their multi-protocol chiplet integrates Ethernet, PCIe, CXL, and UCIe standards, providing scalable and efficient connectivity. These innovations support the growing demand for flexible, high-performance AI systems, ensuring that AI scaling can continue without compromising on power efficiency or cost.
The increasing reliance on chiplets in AI infrastructure underlines their central role in scaling both performance and connectivity. By enabling custom silicon optimised for specific workloads, chiplets reduce power consumption, deliver significant cost savings, and shorten hardware development cycles.
As data centres continue to expand and AI applications evolve, chiplet-based designs are set to underpin the innovation needed for future AI advancements.
Alphawave IP Group plc (LON:AWE) is a semiconductor IP company focused on providing DSP-based, multi-standard connectivity silicon IP solutions targeting both data processing in the data centre and data generation by IoT end devices.