Agentic AI Boosts Server CPU TAM & More (0505-0506)

1. Agentic AI Is Reshaping the Server CPU Market — AMD, ARM, and Intel’s TAM Overhaul

• Core Source

“Why was the server CPU TAM raised 2x from $60B → $120B in just a few months? Confirmed surge in real demand through long-term demand planning discussions with cloud and enterprise customers. When scaling from inference to agentic AI, CPUs take over orchestration, data processing, and parallel execution. The CPU:GPU ratio is being restructured from 1:4~8 to 1:1 or higher.”

“Server CPU TAM outlook dramatically raised: the proliferation of inference and agentic AI is driving an explosion in CPU orchestration demand. Server CPU TAM annual growth rate raised from 18% to 35%+, with a projected market size of $120B+ by 2030.”

“EPYC server CPU — 4 consecutive quarters of record highs: both cloud and enterprise up 50%+ YoY. Agentic deployments generally require approximately 3~5x more CPU cores per user and per GPU/XPU.”

“Guided 2Q CPU revenue growth of more than 70%. A signal that server and AI demand remains robustly intact.”

• Expected Impact

Agentic AI refers to AI systems that go beyond answering simple queries — they plan autonomously, call multiple tools, and execute complex tasks end to end. This transition is fundamentally reshaping the semiconductor landscape.

Existing AI infrastructure was designed around the GPU: GPUs handled the large-scale computation while CPUs played a supporting role, at roughly 1 CPU per 4~8 GPUs. With the shift to agentic AI, however, this structure is inverting. Agents repeatedly call multiple tools in parallel, verify results, and correct errors, and all of that orchestration work falls on the CPU. UBS analysis finds that agentic deployments require 3~5x more CPU cores per GPU/XPU than before, and AMD itself has stated that the CPU:GPU ratio is being restructured to 1:1 or higher.

As a result, AMD raised its 2030 server CPU market size forecast from $60B to $120B, doubling it in just a few months, and lifted its projected annual growth rate from 18% to over 35%. In AMD’s Q1 results, EPYC server CPUs hit record highs for 4 consecutive quarters, with 2Q guided at over 70% YoY growth. On the competitive front, UBS named ARM Holdings the top beneficiary on low latency and high energy efficiency, followed by AMD for its multithreading strengths. The trend of agents offloading workloads to PCs could also trigger an edge-device upgrade cycle benefiting both Intel and AMD. The CPU market is being redefined: from a mere adjunct to the GPU into a critical bottleneck of AI infrastructure.
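The ratio shift described above reduces to simple arithmetic. A minimal sketch, assuming a hypothetical 8,192-GPU cluster (the cluster size is an illustrative assumption; only the ratios come from the text):

```python
# Illustrative arithmetic for the CPU:GPU ratio shift described above.
# Only the ratios (1 CPU per 4~8 GPUs -> 1:1 or higher) come from the
# source; the cluster size is a made-up example.

GPUS_IN_CLUSTER = 8_192  # hypothetical AI-factory cluster

# Old regime: roughly 1 CPU per 4~8 GPUs
old_cpus_low = GPUS_IN_CLUSTER / 8   # sparser old ratio (1:8)
old_cpus_high = GPUS_IN_CLUSTER / 4  # denser old ratio (1:4)

# Agentic regime: lower bound of "1:1 or higher"
new_cpus = GPUS_IN_CLUSTER * 1

multiplier_low = new_cpus / old_cpus_high   # vs the 1:4 case
multiplier_high = new_cpus / old_cpus_low   # vs the 1:8 case

print(f"Old: {old_cpus_low:.0f}-{old_cpus_high:.0f} CPUs, new: {new_cpus:.0f}")
print(f"Implied CPU multiplier: {multiplier_low:.0f}x-{multiplier_high:.0f}x")
```

The implied 4~8x jump in CPU count is consistent in direction with UBS’s 3~5x core estimate (cores per socket vary, so the two figures need not match exactly).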

2. Palantir Cements Its Status as an AI Platform Leader — Record Growth and Major Guidance Upgrade

• Core Source

“Q1 revenue of $1.633B, up 85% YoY and 16% QoQ. US revenue of $1.282B, up 104% YoY. Commercial segment revenue of $774M, up 95% YoY. 11 consecutive quarters of accelerating growth.”

“We recorded a momentum surge with 85% growth last quarter. This is the highest figure in our history. US business has more than doubled, and based on our confidence in the accelerating US market, we are raising our annual revenue growth guidance by 10pp from last quarter, to 71%.”

“Palantir’s Rule of 40 score has soared to 145%. We have achieved overwhelming results on this metric — a level reached only by peers such as NVIDIA, Micron, and SK hynix in the AI infrastructure space.”

“FY26 annual guidance raised by 6.5% to $7.65B. Growth outlook adjusted from 61% to 71%. US commercial revenue raised to ‘over 120% growth’ (previously 115%).”

• Expected Impact

Palantir’s Q1 results proved, in hard numbers, that it has evolved beyond a simple software company into core infrastructure for enterprise- and government-facing AI platforms.

The key metric, the Rule of 40 (growth rate + operating margin combined), hit 145%. In the software industry, exceeding 40% classifies a company as high-quality — Palantir has surpassed this by more than 3x, earning recognition alongside AI infrastructure hardware companies like NVIDIA, Micron, and SK hynix. The US commercial Remaining Deal Value (RDV) reached $4.92B, up 112% YoY, with Total Contract Value (TCV) of $2.41B (+61% YoY) and customer count of 1,007 (+31% YoY) showing broad-based strengthening across booking metrics.
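The Rule of 40 arithmetic can be reproduced directly from the quoted figures; note the ~60% margin component below is implied by the 145% score and 85% growth, not separately disclosed here:

```python
# Rule of 40 = revenue growth rate + profit margin (both in percentage points).
# Growth of 85% and a score of 145% together imply a ~60% margin component;
# that 60% is derived from the quoted figures, not stated in the source.

def rule_of_40(growth_pct: float, margin_pct: float) -> float:
    """Classic software health metric: growth plus margin, in percentage points."""
    return growth_pct + margin_pct

growth = 85.0   # Q1 YoY revenue growth from the source
score = 145.0   # Rule of 40 score from the source
implied_margin = score - growth

print(f"Implied margin component: {implied_margin:.0f}%")
print(f"Rule of 40: {rule_of_40(growth, implied_margin):.0f}% (>40% = high quality)")
```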

Palantir’s appeal lies not in the AI model itself, but in the scarcity of its Ontology and AIP platform, which connects AI to real enterprise and government decision-making. As generative AI proliferates, demand for error control, governance, and decision-making integration grows, and Palantir holds an unrivaled position in this space with no real substitute. US government segment revenue re-accelerating to 76% YoY growth is also notable, reflecting a structural expansion of AI platform share within defense and national security budgets. With annual adjusted FCF guidance of $4.2B~$4.4B and $8B in cash on hand, the balance sheet demonstrates that this growth is not being bought with excessive spending.

3. Optical Connectivity Emerges as AI Infrastructure’s New Bottleneck — NVIDIA-Corning Partnership and Explosive Data Center Interconnect Market Growth

• Core Source

“Corning plans to expand its US optical connectivity manufacturing capacity 10x. Corning also plans to expand US optical fiber production capacity by more than 50%. Corning plans to build 3 new advanced manufacturing facilities in North Carolina and Texas. This expansion is expected to create more than 3,000 high-paying US jobs.”

“The global data center interconnect market is projected to grow from $13.4B in 2025 to $172.7B in 2030 at a CAGR of 67%.”

“As AI factories get larger and more numerous, optical connectivity becomes a critical component of AI infrastructure. NVIDIA and Corning explained that the latest AI workloads require thousands of NVIDIA GPUs, which in turn require high-performance optical fiber, connectivity equipment, and photonics.”

“The supply-demand imbalance for EML lasers has worsened to over 30%, more severe than the 25~30% level mentioned last quarter.”

• Expected Impact

As GPU computational performance in AI infrastructure has grown explosively, the interconnect fabric that carries data has emerged as the new bottleneck. In AI factories composed of thousands of GPUs, data must be exchanged between GPUs and across data centers at extreme speeds — and conventional copper cables simply cannot keep up. The alternative is optical connectivity, which transmits data via light.

The global data center interconnect market is projected to grow from $13.4B in 2025 to $172.7B in 2030, a 67% CAGR. Reflecting this, NVIDIA has entered a multi-year strategic partnership with Corning, the inventor of low-loss optical fiber, under which Corning will expand optical connectivity capacity 10x and optical fiber capacity by over 50%. NVIDIA’s strategy of locking up supply ahead of demand has now extended from Lumentum and Coherent to Corning, moving toward securing the entire optical value chain through long-term contracts. Meanwhile EML lasers, critical for scale-up connections between GPUs within a rack, face a supply-demand imbalance exceeding 30%, pointing to sustained margin expansion for optical component makers like Lumentum. And with NVIDIA accelerating CPO (Co-Packaged Optics) commercialization toward its 2028 Feynman platform, optical connectivity demand is expanding structurally from today’s data center interconnects into the GPU package itself, a new phase of structural growth.
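The quoted market trajectory is internally consistent, which a quick Python check confirms:

```python
# Sanity-check the quoted DCI market CAGR: $13.4B (2025) -> $172.7B (2030).

start, end, years = 13.4, 172.7, 5  # $B, 2025 -> 2030

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~66.7%, matching the quoted ~67%

# Year-by-year trajectory at the quoted 67% CAGR
value = start
for year in range(2026, 2031):
    value *= 1.67
    print(f"{year}: ${value:,.1f}B")
```

Compounding at exactly 67% lands near $174B by 2030, within rounding of the quoted $172.7B figure.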

4. AI Factory Demand Surge Is Creating Structural Power Shortages — Data Center CAPA Delays and Power Market Tightening

• Core Source

“US data center load is expected to grow from approximately 75GW currently to over 135GW by 2030, adding approximately 60GW of incremental load, of which approximately 70% will be driven by AI.”

“CAPA growth rate < power demand growth rate → supply-demand pressure. Data center power demand has become the key variable determining market equilibrium. Power reserve margins declining → structural tightening underway.”

“Even after reflecting delays, CAPA grows sharply in 2026~2027. Despite cancellations and delays, total CAPA ultimately increases significantly. Demand is maintained but supply timing is simply pushed back. As a result, the possibility of deepening supply shortages at specific points in time increases.”

“Hyperscaler CapEx growth → approximately 9% flows to WFE. A $100B increase in data center investment leads to approximately $8~10B increase in WFE.”

• Expected Impact

AI data centers are extraordinarily power-intensive facilities. US data center power load is expected to grow from approximately 75GW today to over 135GW by 2030, with roughly 70% of that ~60GW increment driven by AI. The problem is that power supply growth cannot keep pace with data center demand growth. According to Goldman Sachs analysis, data center capacity keeps suffering delays and cancellations, yet total capacity still ramps sharply in 2026~2027: demand is intact and supply is merely pushed back, which raises the odds of acute supply shortages at specific points in time.

Grid connection delays are already a severe bottleneck. PJM, the US’s largest grid operator, is not reviewing new grid interconnections for the foreseeable future, and connection delays of 5~10 years have emerged as the single biggest bottleneck for data center construction. This is not merely a permitting issue; it is a structural constraint on the pace of AI infrastructure investment execution. The beneficiaries are power infrastructure companies: utilities and generators in tight regions gain from rising power prices, while transmission, distribution, and power equipment companies enjoy structural demand growth regardless of delays. Eaton’s Q1 results, with data center orders in the Electrical Americas (EA) segment up 240% YoY, offer real-demand confirmation of this trend. As AI factories scale, power infrastructure is being elevated to the same tier as semiconductors as a critical supply chain variable.
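The figures quoted in this section reduce to simple arithmetic; a sketch (the $100B CapEx step-up is the hypothetical used in the quote itself):

```python
# Arithmetic behind the power-shortage thesis, using figures from the source.

load_now_gw, load_2030_gw = 75, 135   # US data center load, today vs 2030
ai_share_of_increment = 0.70          # ~70% of the increment is AI-driven

increment_gw = load_2030_gw - load_now_gw
ai_driven_gw = increment_gw * ai_share_of_increment
print(f"Incremental load: {increment_gw}GW, of which AI-driven: ~{ai_driven_gw:.0f}GW")

# CapEx pass-through to wafer fab equipment (WFE), per the quoted ~9% rule:
capex_increase_bn = 100               # $100B hypothetical hyperscaler CapEx step-up
wfe_low = capex_increase_bn * 0.08
wfe_high = capex_increase_bn * 0.10
print(f"Implied WFE increase: ${wfe_low:.0f}B-${wfe_high:.0f}B")
```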

5. DeepSeek Raises First External Funding at $45B Valuation — China’s AI Commercialization Year and the Challenge to US AI Dominance

• Core Source

“DeepSeek’s valuation has soared from $20B at the start of negotiations just a few weeks ago to $45B. China’s largest national semiconductor investment institution, the ‘Big Fund (National Integrated Circuit Industry Investment Fund)’, is in negotiations to lead DeepSeek’s first external fundraising round.”

“China’s LLM token usage market share has surged from 5% to 32% in just one year, surpassing US models (19%) — a tectonic shift. China’s engineering efficiency enables inference costs at 15~20% of US levels, accelerating AI diffusion based on price competitiveness.”

“DeepSeek stated that its latest V4 model is optimized for Huawei’s Ascend 950PR chip, and Huawei’s AI chip sales are surging in the Chinese market, already overtaking NVIDIA this year amid US export controls banning NVIDIA products from entering China.”

“Doubao, which consumes 120 trillion tokens daily with 345 million monthly active users, is now passing computing costs to high-frequency users. Morgan Stanley estimates Doubao’s annual subscription revenue could reach $1.5B in an optimistic scenario.”

• Expected Impact

DeepSeek garnered global attention by developing competitive AI models with far fewer computing resources than US competitors, and is now raising its first external funding with its valuation surging from $20B to $45B in just a few weeks. The lead investor, China’s ‘Big Fund,’ is the central institution behind China’s national semiconductor self-sufficiency strategy — meaning this investment should be understood not merely as a startup investment, but as part of China’s strategy to internalize its AI semiconductor ecosystem.

The critical point is that DeepSeek’s latest V4 model is optimized for Huawei’s Ascend 950PR chip. Huawei’s AI chip sales are already overtaking NVIDIA’s in the Chinese market as a beneficiary of US export controls. In other words, US chip export restrictions have paradoxically accelerated both the strengthening of Huawei’s AI chip ecosystem within China and the development of efficient models like DeepSeek. Structural change is also visible at the broader market level: China’s share of global LLM token usage has surged from 5% to 32% in just one year, surpassing US models at 19% (per Morgan Stanley), a sign that the diffusion of Chinese AI models on price competitiveness is already in full swing. China’s largest consumer AI app, Doubao, beginning to monetize on top of a base of 345 million monthly active users likewise signals that the Chinese AI market is transitioning from a subsidy phase to a commercialization phase. The AI model race is no longer solely a US Big Tech showcase; China is securing a different angle of advantage through price and efficiency.
