With the rapid development of generative AI, deep learning, and large-scale data computing, AI servers have become core equipment in building future technology infrastructure. Inside these servers, high-speed cables are among the indispensable components, playing a key role in signal transmission and data communication. The design and performance of these high-speed transmission lines directly affect communication efficiency between modules and overall computing performance, making them a foundation that high-end AI computing systems cannot afford to ignore.
Core functions and technological evolution of high-speed transmission lines
AI servers typically contain many high-speed modules, including CPUs, GPUs, memory, network interface cards, and storage, which must exchange data at high frequency and in high volume; high-speed transmission lines are the data channels connecting these modules. Compared with traditional signal lines, they offer wider bandwidth and lower latency, and can also effectively reduce signal interference and data error rates.
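To make the error-rate point concrete, here is a rough back-of-envelope sketch. The line rate and bit error rate (BER) below are assumed illustrative figures, not measurements from any specific cable:

```python
# Why BER matters at AI-server line rates: even a very small error
# probability translates into a steady stream of bit errors.
line_rate_gbps = 112   # assumed per-lane line rate in Gbps (illustrative)
ber = 1e-12            # assumed raw bit error rate (illustrative)

bits_per_second = line_rate_gbps * 1e9
errors_per_hour = bits_per_second * ber * 3600

print(f"~{errors_per_hour:.0f} raw bit errors per hour "
      f"at {line_rate_gbps} Gbps with BER {ber:g}")
```

At these assumed figures the link sees hundreds of raw bit errors every hour, which is why link-layer error correction and careful cable design both matter at these speeds.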
Based on different applications and specification requirements, a variety of interface standards have been developed for high-speed transmission lines in the market, including:
• MCIO (Mini Cool Edge IO): With its high-density design, it suits high-speed communication applications where space is limited.
• PCIe Riser: Connects the CPU to expansion modules such as GPUs and accelerator cards, and is one of the most common communication specifications inside AI servers.
• Gen-Z: A new-generation memory-semantic interconnect designed for high-bandwidth, low-latency transmission, expected to take over some of the roles of the PCIe and DDR buses in the future.
• SlimSAS: Known for its compact size and high performance, it is often used for high-speed storage module connections.
What these interfaces have in common is support for transmission rates above 25 Gbps, in some cases reaching 112 Gbps, providing strong support for the rapid training and inference of AI models.
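The per-lane rates above add up quickly across a server. The sketch below illustrates that aggregation; the lane count, per-lane rate, and cable count are all assumed placeholder values, not figures from the article:

```python
# Back-of-envelope aggregate bandwidth carried by high-speed cables
# inside one AI server (all parameters are illustrative assumptions).
lanes_per_cable = 8       # assumed, e.g. an x8 MCIO cable assembly
rate_per_lane_gbps = 32   # assumed PCIe Gen5-class signaling rate
cables_per_server = 48    # assumed cable count for a dense server

total_gbps = lanes_per_cable * rate_per_lane_gbps * cables_per_server
print(f"Aggregate cable bandwidth: {total_gbps} Gbps "
      f"({total_gbps / 8 / 1000:.1f} TB/s)")
```

Even with these modest assumptions the cables collectively carry terabits per second, which is why per-cable signal integrity has such leverage on overall system performance.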
Market demand is surging, and the supply chain becomes the key
AI applications are rapidly spreading across cloud services, speech recognition, autonomous driving, and medical image recognition, driving a sharp rise in AI server deployments in data centers and enterprises worldwide. Market research firms project that the global AI server market will maintain an average annual growth rate above 20% from 2024 to 2030. Because each AI server is typically fitted with dozens or even hundreds of high-speed transmission lines, the overall cable market is surging in step.
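To show what the cited growth rate implies, the snippet below compounds a placeholder index value at the 20% lower bound over the 2024 to 2030 window; the base value is arbitrary and not a real market figure:

```python
# Compounding the cited ">20% average annual growth" from 2024 to 2030.
base_index = 100.0   # placeholder index value for 2024 (not a real figure)
cagr = 0.20          # 20% per year, the lower bound cited above
years = 6            # 2024 -> 2030

index_2030 = base_index * (1 + cagr) ** years
print(f"2030 index: {index_2030:.1f} (vs. 100.0 in 2024)")
```

At just the 20% floor, the market roughly triples over the period, before even accounting for the rising cable count per server.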
In addition, to cope with the high-heat, high-density conditions of AI computing, high-speed transmission lines are evolving toward greater flexibility, better heat dissipation, and lower power consumption. Cables that combine high-speed transmission with the ability to bend and route through narrow spaces have become a priority in server design. This has prompted traditional wire makers and high-speed connector manufacturers to invest heavily in R&D, advancing new materials and structures such as enhanced shielding, improved insulation materials, and double-layer coating technology.
Taiwanese manufacturers and international giants vie for the same business opportunities
Currently, the high-speed transmission line market is dominated by international giants such as Amphenol, TE Connectivity, Molex, and Samtec, which offer one-stop solutions spanning connectors through integrated cable assemblies. However, a growing number of Taiwanese manufacturers are entering the field; SingAudio and LiDian Technology, for example, specialize in in-server high-speed lines and modular design, aiming to capture the new wave of demand brought by AI applications.
Looking ahead, as AI models continue to scale, the demands on computing power and transmission efficiency will only grow more extreme. Acting as the nervous system inside the AI server, high-speed transmission lines will keep playing an important role in raising overall system performance. Whoever first masters the technical thresholds of next-generation high-speed interfaces and cable manufacturing processes will stand out in the infrastructure race of the AI era.