Technologies Paving the Way for AI Applications

In our tech-dominated world, the word "AI" appears in discussions of just about every industry. Be it automotive, cloud, social media, healthcare, or insurance, AI has a major impact, and firms both large and small are investing in it.

What's talked about less, however, are the technologies making our current use of AI feasible and paving the way for future growth. After all, AI is hard, and it's taking increasingly large neural-network models and datasets to solve the latest problems, such as natural-language processing.

Between 2012 and 2019, AI training capability increased by a factor of 300,000 as more complex problems were tackled. That's a doubling of training capability every 3.4 months, an incredible growth rate that has demanded rapid innovation across many technologies. The sheer amount of digital data in the world is also rapidly increasing (doubling every two to three years, by some estimates), and oftentimes, AI is the best way to make sense of it all in a timely manner.
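
As a quick sanity check, the growth implied by a 3.4-month doubling time can be worked out directly. Below is a minimal sketch in Python, using only the figures cited above:

```python
import math

# A 300,000x increase corresponds to log2(300,000) successive doublings.
growth_factor = 300_000
doubling_months = 3.4

doublings = math.log2(growth_factor)      # ~18.2 doublings
years = doublings * doubling_months / 12  # ~5.2 years

print(f"{doublings:.1f} doublings over ~{years:.1f} years")  # consistent with the 2012-2019 window
```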

As the world grows ever more data-rich, and as infrastructure and services become more data-driven, storing and moving data is rapidly growing in importance. Behind the scenes, advances in memory technologies like DDR and HBM, and new interconnect technologies like Compute Express Link (CXL), are paving the way for broader use of AI in future computing systems.

These technologies will ultimately enable new opportunities, though each comes with its own set of challenges. With Moore's Law slowing, they are becoming even more important if the industry is to maintain the pace of advancement we've grown accustomed to.

DDR5

Though the JEDEC DDR5 specification was initially released in July 2020, the technology is only now beginning to ramp in the market. To address the needs of hyperscale data centers, DDR5 improves on its predecessor, DDR4, by doubling the data-transfer rate, increasing storage capacity by 4×, and lowering power consumption. DDR5 main memory will enable a new generation of server platforms essential to the growth of AI and general-purpose computing in data centers.
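
To make that doubling concrete, here is a back-of-the-envelope comparison of peak per-DIMM bandwidth (a sketch; DDR4-3200 and DDR5-6400 are assumed as representative top data rates, not figures from the text):

```python
# Peak bandwidth of a standard 64-bit DIMM: data rate (MT/s) x 8 bytes per transfer.
def dimm_bandwidth_gb_s(mega_transfers_per_s: int, bus_bytes: int = 8) -> float:
    return mega_transfers_per_s * bus_bytes / 1000

print(dimm_bandwidth_gb_s(3200))  # DDR4-3200: 25.6 GB/s
print(dimm_bandwidth_gb_s(6400))  # DDR5-6400: 51.2 GB/s, double the transfer rate
```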

To enable higher bandwidths and greater capacity while staying within the desired power and thermal envelope, DDR5 DIMMs must be "smarter," more capable memory modules. With the transition to DDR5, server RDIMMs integrate an expanded chipset that includes an SPD hub and temperature sensors.

HBM3

High-bandwidth memory (HBM), once a specialty memory technology, is becoming mainstream thanks to the intense demands of AI and other high-intensity compute applications. HBM can deliver the tremendous memory bandwidth needed to efficiently move the increasingly large amounts of data that AI requires, although it comes with added design and implementation complexity due to its 2.5D/3D architecture.

In January of this year, JEDEC published its HBM3 update to the HBM standard, ushering in a new level of performance. HBM3 can deliver 3.2 terabytes per second when using four DRAM stacks, and it offers better power and area efficiency than previous generations of HBM, as well as alternatives like DDR memory.
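
That headline number follows directly from HBM3's very wide interface (a minimal sketch; the 1,024-bit stack interface and 6.4-Gb/s per-pin rate are the standard HBM3 figures assumed here):

```python
# HBM3: each DRAM stack has a 1,024-bit interface running at 6.4 Gb/s per pin.
pins_per_stack = 1024
gb_s_per_pin = 6.4

stack_bw = pins_per_stack * gb_s_per_pin / 8  # ~819 GB/s per stack
print(f"one stack:   {stack_bw:.0f} GB/s")
print(f"four stacks: {4 * stack_bw / 1000:.1f} TB/s")  # ~3.3 TB/s raw, in line with the ~3.2 TB/s cited
```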

GDDR6

GDDR memory has been a mainstay of the graphics market for two decades, supplying the ever-increasing bandwidth needed by GPUs and game consoles for more photorealistic rendering. While it doesn't match the performance and power efficiency of HBM, GDDR is built on DRAM and packaging technologies similar to DDR's and follows a more familiar design and manufacturing flow, which reduces design complexity and makes it attractive for many types of AI applications.

The latest member of the GDDR family, GDDR6, can deliver 64 gigabytes per second of memory bandwidth from a single DRAM. The narrow 32-bit data bus allows multiple GDDR6 DRAMs to be connected to a processor, with eight or more DRAMs commonly linked to a processor and together delivering 512 GB/s or more of memory bandwidth.
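
The same arithmetic connects the per-device and per-system figures (a sketch assuming the common 16-Gb/s GDDR6 speed grade):

```python
# GDDR6: a 32-bit interface per DRAM at 16 Gb/s per pin.
bus_bits = 32
gb_s_per_pin = 16

dram_bw = bus_bits * gb_s_per_pin / 8  # 64 GB/s per DRAM
for n in (1, 8, 12):
    print(f"{n:2d} DRAMs: {n * dram_bw:.0f} GB/s")  # 8 DRAMs -> 512 GB/s
```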

COMPUTE EXPRESS LINK

CXL is a revolutionary step forward in interconnect technology that enables a host of new use cases for data centers, from memory expansion to memory pooling and, ultimately, fully disaggregated and composable computing architectures. With memory being a large portion of the server BOM, disaggregation and composability with CXL interconnects can enable better use of memory resources for improved TCO.

In addition, processor core counts continue to increase faster than memory systems can keep up, leading to a scenario in which the bandwidth and capacity available per core are in danger of falling over time. CXL memory expansion can provide additional bandwidth and capacity to keep processor cores fed with data.
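
A simple illustration of that squeeze, using hypothetical (not sourced) core and channel counts, is sketched below:

```python
# Illustrative only: per-core bandwidth shrinks when core counts outpace memory channels.
def bw_per_core_gb_s(channels: int, gb_s_per_channel: float, cores: int) -> float:
    return channels * gb_s_per_channel / cores

print(bw_per_core_gb_s(8, 51.2, 64))    # 6.4 GB/s per core
print(bw_per_core_gb_s(12, 51.2, 128))  # 4.8 GB/s per core: more channels, yet less per core
```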

The newest CXL specification, CXL 3.0, was released in August of this year. The specification introduces a number of improvements over the 2.0 spec, including fabric capabilities and management, improved memory sharing and pooling, enhanced coherency, and peer-to-peer communication. It also doubles the data rate to 64 gigatransfers per second, leveraging the PCI Express 6.0 physical layer with no added latency.
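
For a sense of scale, the raw throughput of a CXL 3.0 link can be estimated from the lane count and transfer rate (a sketch; the x16 link width is assumed for illustration, and FLIT/encoding overheads are ignored):

```python
# CXL 3.0 rides the PCIe 6.0 PHY: 64 GT/s per lane, i.e., ~64 Gb/s raw per lane.
def link_bw_gb_s(gt_per_s: int, lanes: int) -> float:
    return gt_per_s * lanes / 8  # GB/s per direction, ignoring protocol overhead

print(link_bw_gb_s(64, 16))  # x16 link: ~128 GB/s in each direction
print(link_bw_gb_s(32, 16))  # CXL 2.0-era rate for comparison: ~64 GB/s
```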

While this list is by no means exhaustive, each of these technologies promises to enable new advancements and use cases for AI by significantly improving computing performance and efficiency, and each will be critical to the growth of data centers in the coming years.