
With machines being trained to mimic the cognitive functions of the human brain, semiconductor companies have been placed on a growth trajectory they could not reach in the past, even with all their innovations in chip design and next-generation fabrication-enabled devices. Most AI applications, such as virtual assistants, rely on hardware for a variety of functions.
Semiconductor companies could capture 40-50% of the value in the AI technology stack
Advanced machine learning algorithms allow AI systems to process huge data sets, and to learn and improve over time. Deep learning, a branch of ML, made a huge leap in the 2010s, when it began producing accurate results from a much wider range of data while requiring far less data preprocessing by humans. While improving training and inference, developers often face challenges in storage, memory, networking, and logic. If semiconductor companies provide next-generation accelerator architectures, they could substantially enhance computational efficiency.
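To make the training-versus-inference distinction concrete, here is a minimal sketch (not any vendor's implementation; the toy network, layer sizes, and data are all made up for illustration) showing why training, with its repeated forward and backward passes, stresses memory and compute far more than a single inference pass:

```python
# Minimal sketch of training vs. inference for a tiny fully connected
# network in plain NumPy. All sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 256 samples, 32 features, binary labels.
X = rng.normal(size=(256, 32))
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

# One hidden layer; these weights are what memory and logic must hold.
W1 = rng.normal(scale=0.1, size=(32, 64))
W2 = rng.normal(scale=0.1, size=(64, 1))

def forward(X, W1, W2):
    h = np.maximum(X @ W1, 0.0)                  # ReLU activation
    return h, 1.0 / (1.0 + np.exp(-(h @ W2)))    # sigmoid output

# Training: repeated forward AND backward passes. Activations and
# gradients must all be resident at once, which is why training
# stresses memory bandwidth.
lr = 0.1
for epoch in range(100):
    h, p = forward(X, W1, W2)
    grad_out = (p - y) / len(X)                  # gradient of log loss
    grad_h = (grad_out @ W2.T) * (h > 0)         # backprop through ReLU
    W2 -= lr * (h.T @ grad_out)
    W1 -= lr * (X.T @ grad_h)

# Inference: a single forward pass over frozen weights, a much lighter
# workload that specialized accelerators can target.
_, preds = forward(X, W1, W2)
print("training accuracy:", ((preds > 0.5) == y).mean())
```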
How AI could drive a big chunk of semiconductor revenues in data centers and at the edge
With hardware emerging as the differentiator in AI, demand for semiconductor companies' existing chips will grow, but they could gain even more by developing workload-specific AI accelerators that do not yet exist. According to the McKinsey report, "AI-related semiconductors will see growth of about 18 percent annually over the next few years—five times greater than the rate for semiconductors used in non-AI applications. By 2025, AI-related semiconductors could account for almost 20 percent of all demand, which would translate into about $67 billion in revenue. Opportunities will emerge at both data centers and the edge. If this growth materializes as expected, semiconductor companies will be positioned to capture more value from the AI technology stack than they have obtained with previous innovations—about 40 to 50 percent of the total".
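A quick back-of-envelope check of those quoted figures (the 2019 baseline below is an assumption, chosen so that 18 percent annual growth lands near the reported ~$67 billion in 2025):

```python
# Back-of-envelope check of the McKinsey growth figures quoted above.
# The 2019 baseline is an assumption implied by the 2025 target.
ai_growth = 0.18                 # annual growth, AI-related semiconductors
non_ai_growth = ai_growth / 5    # "five times greater" implies ~3.6% non-AI

base_2019 = 67e9 / (1 + ai_growth) ** 6   # implied 2019 AI-chip revenue

revenue = base_2019
for year in range(2019, 2026):
    print(f"{year}: ${revenue / 1e9:.1f}B")
    revenue *= 1 + ai_growth
# Prints roughly $24.8B in 2019 compounding to ~$67B by 2025.
```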
Data-center usage: Cloud-computing data centers use GPUs for almost all training applications. GPUs are poised to become more customized to meet the demands of deep learning, especially as ASICs enter the market. CPUs will lose ground to ASICs as DL-based applications come to the fore.
Edge applications: A major chunk of current edge training happens on PCs and laptops, but more types of devices may take on this role in the future. As most edge devices lean on CPUs or ASICs, ASICs are expected to account for 70 percent of the edge-inference market by 2025, with GPUs accounting for 20 percent.
Memory: Memory, especially dynamic random-access memory (DRAM), is needed to store input data, model weights, and intermediate results during both inference and training. AI will open new opportunities for the memory market: even a model as small as one trained to recognize the image of a flower must hold its data and parameters in memory while the algorithms run. AI chip leaders such as Google and Nvidia have adopted high-bandwidth memory (HBM) as the preferred memory solution, even though it costs roughly three times as much as traditional DRAM, which shows that customers are willing to pay for expensive AI hardware when it delivers performance gains.
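A rough, illustrative estimate of why training leans so heavily on memory (every size below is an assumption for a hypothetical small image model, not a figure from the report): the weights themselves are a small part of the footprint once gradients, optimizer state, and activations are counted.

```python
# Illustrative training-memory estimate; all sizes are assumed.
params = 25_000_000           # assumed parameter count, small image model
batch = 64                    # assumed training batch size
act_per_sample = 10_000_000   # assumed activation values kept per sample
bytes_fp32 = 4                # bytes per 32-bit float

weights     = params * bytes_fp32
gradients   = params * bytes_fp32              # one gradient per weight
optimizer   = 2 * params * bytes_fp32          # e.g. Adam's two moment buffers
activations = batch * act_per_sample * bytes_fp32  # kept for backpropagation

total = weights + gradients + optimizer + activations
print(f"weights:     {weights / 1e9:.2f} GB")
print(f"gradients:   {gradients / 1e9:.2f} GB")
print(f"optimizer:   {optimizer / 1e9:.2f} GB")
print(f"activations: {activations / 1e9:.2f} GB")
print(f"total:       {total / 1e9:.2f} GB (inference needs ~weights only)")
```

Under these assumptions, activations dominate the roughly 3 GB total, which is why high-bandwidth memory pays for itself during training even at three times the cost of DRAM.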
The McKinsey report identifies many opportunities but also concludes that "to capture the value they deserve, they'll need to focus on end-to-end solutions for specific industries (also called microvertical solutions), ecosystem development, and innovation that goes far beyond improving compute, memory, and networking technologies."