From Bankruptcy to AI Rivalry: How Lisa Su Transformed AMD into a Major AI Chip Competitor
Introduction
Lisa Su's decade-long leadership of Advanced Micro Devices (AMD) stands as a stunning example of corporate transformation, technological innovation, and strategic clarity. Since she took the helm in late 2014, AMD's market capitalization has skyrocketed from near-bankruptcy levels to over $200 billion, its share price has soared more than 4,000%, and its data center GPU sales have surged in just the past two years. Once an underdog overshadowed by Intel and NVIDIA, AMD now stands at the vanguard of the AI chip revolution, credibly challenging NVIDIA for a share of the fastest-growing sector in semiconductors: AI data center accelerators.
This report explores the complexities of AMD's turnaround under Lisa Su, focusing on the transformation of its GPU and data center business, dramatic valuation growth, critical strategic decisions, and the continuous innovation and ecosystem building that have closed the gap with NVIDIA. The recent growth in AMD's Instinct AI accelerator sales, rising importance of the ROCm software stack, breakthroughs in chiplet and manufacturing strategies, and the evolving competitive and geopolitical landscape are analyzed in depth. AMD’s milestones, partnerships, and future roadmap are detailed, providing a comprehensive view of its journey to becoming a key AI chip rival.
Section 1: Lisa Su – Engineer, Executive, and Architect of AMD’s Rebirth
Early Life and Education
Lisa Su was born in Tainan, Taiwan in November 1969 and immigrated to the United States at the age of three. Raised in New York City, she demonstrated early interest in mathematics and the inner workings of technology, often disassembling and repairing electronics as a child. Su attended the prestigious Bronx High School of Science before pursuing advanced degrees at the Massachusetts Institute of Technology (MIT), earning a bachelor's, master's, and then a Ph.D. in electrical engineering. Her research at MIT focused on silicon-on-insulator (SOI) technology, a foundational innovation that later revolutionized chip performance and energy efficiency.
Pioneering Industrial Career
Su’s early professional journey fortified her reputation as a standout technologist and problem-solver. At Texas Instruments, IBM, and later as CTO of Freescale Semiconductor, she spearheaded developments such as copper interconnects for faster chips and led the team that created the Cell processor for Sony’s PlayStation 3. Each role expanded her expertise across process engineering, chip architecture, and executive management, culminating in a rare blend of deep technical and leadership skills.
Rise at AMD
Su joined AMD in 2012 as Senior Vice President and General Manager, initially tasked with global business execution and diversifying the company’s product base. Her rapid impact saw AMD secure design wins for gaming consoles (Xbox One, PlayStation 4) and refine operational focus. In October 2014, against a backdrop of financial peril and outdated product lines, she was appointed CEO. She became the first woman to lead AMD and remains one of the very few female CEOs in the global semiconductor industry.
Section 2: AMD’s Valuation Growth and Business Reversal – The Numbers
The Pre-Su Crisis
When Lisa Su took the helm, AMD was facing existential financial risk. In 2014 its stock price hovered just above $3, the company was valued at only $2–3 billion, and it had already cut a quarter of its workforce. Intel, with a market cap nearly 100-fold larger, and NVIDIA threatened AMD's survival in CPUs and GPUs respectively.
Transformation and Compounding Value
By mid-2025, under Su's discipline and vision, AMD's market capitalization had soared to $200–$220 billion—an increase of more than 55-fold. The stock price catapulted past $120 in 2024, and a $10,000 investment at the start of Su's tenure would now be worth over $400,000.
Key Financial Milestones
- Stock Price: $3 (2014) → $127 (2024)
- Market Cap: ~$2–3B (2014) → $205–220B (2024–25)
- Annual Revenue: $5.5B (2014) → $25.8B (2024)
- Data Center Revenue (2025 Q1): $3.7B (up 57% YoY)
- Data Center GPU Revenue (2024): Forecast up to $5B from <$100M just two years prior
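The return figures quoted above are internally consistent, as a quick back-of-envelope check shows (using the rounded endpoints from this report, not exact market data):

```python
# Sanity check on the return figures above, using the report's own
# rounded endpoints (~$3 in 2014, ~$127 in 2024) rather than exact quotes.
start_price = 3.0      # approximate share price, late 2014
end_price = 127.0      # approximate share price, 2024
years = 10

multiple = end_price / start_price      # total return multiple
investment = 10_000 * multiple          # value of a $10,000 stake
cagr = multiple ** (1 / years) - 1      # implied compound annual growth rate

print(f"{multiple:.1f}x")       # 42.3x, i.e. a >4,000% gain
print(f"${investment:,.0f}")    # $423,333 -- "over $400,000"
print(f"{cagr:.1%}")            # 45.4% compounded per year
```

The ~42x price multiple matches both the ">4,000%" gain and the "$10,000 → over $400,000" claim.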
Lisa Su’s own net worth climbed to more than $1.3 billion, her compensation tightly aligned with shareholder value creation—a sharp contrast to her $1M base salary and modest bonus at the outset.
Market Share Inflection
AMD’s resurgence isn’t just financial. Its share of the x86 server CPU market leaped from nearly zero to over 34% by mid-2025, displacing Intel from its position of undisputed dominance. Additionally, AMD closed the performance gap with Intel in client and cloud CPUs, competing directly with high-core, high-efficiency designs in the lucrative data center market.
Section 3: Strategic Corporate Restructuring and Business Model Pivots
From Survival to Outperformance
Upon taking over as CEO, Lisa Su initiated a deep corporate reset. She recognized that survival would not come from incremental cost-cutting, but from bold bets on technology and radical operational discipline.
Focus Areas Defined:
- Deliver only competitive, high-quality products—no “good enough” chips.
- Restore and deepen customer trust—particularly with major system, console, and cloud partners.
- Simplify operations—reduce overhead, exit unprofitable businesses, and refocus on core R&D.
This pivot involved suspending sales of AMD’s then-uncompetitive server chips until performance parity with Intel was achieved, even though this meant forgoing immediate revenue. The strategy, initially controversial, became the backbone of AMD’s eventual resurgence.
Lean, Engineering-Driven Organization
Su restructured AMD to restore engineering primacy. She gave technical teams autonomy and clear mandates for architectural breakthroughs, personally engaging with chip design groups and labs. Her leadership style—demanding but highly technical—instilled a culture where engineering excellence and execution supplanted corporate politicking and bureaucracy.
Business Model Reinvention: Chiplets and Modular Design
A signature Su move was betting heavily on “chiplets”—modular silicon blocks that could be mixed, matched, and manufactured across multiple foundries. This allowed AMD to sidestep the risks and costs of monolithic chip design and to leverage foundries (especially TSMC) more flexibly than Intel’s in-house manufacturing model. This chiplet innovation became a pillar of AMD’s ability to outmaneuver larger rivals during supply chain crunches and to create high-core-count CPU and GPU architectures at lower cost.
M&A: The Xilinx Acquisition
AMD’s $49 billion acquisition of Xilinx, completed in early 2022—the same year Su became Board Chair—was another transformational milestone. The deal expanded AMD's technology portfolio to include FPGAs and adaptive computing, positioning it for AI, networking, automotive, and IoT growth.
Section 4: Innovation at the Core—CPU and GPU Architecture Reinvention
The Zen Architecture and Ryzen Processors
The single greatest technical milestone was the “Zen” CPU microarchitecture: a ground-up redesign that finally matched, then outperformed, Intel’s best on both performance and energy efficiency. The first Zen (“Ryzen”) CPUs, launched in 2017, shocked the industry with dramatic performance gains at disruptive prices. Unlike AMD’s failed “Bulldozer” era, Zen immediately recaptured market share in desktops and servers, with quarterly x86 server share jumping from near-zero to double digits within three years.
Subsequent Zen iterations (Zen 2, Zen 3, Zen 4, Zen 5) continued this lead, powering Ryzen for consumer, EPYC for data center, Threadripper for workstations, and custom gaming chips for consoles.
Benefits of “Zen”:
- Scalability: Flexible from mobile laptops to 192-core server monsters
- Performance-per-watt: A leap ahead of legacy monolithic designs
- Chiplet Architecture: First movers in scalable multicore and process decoupling
EPYC: The Server and Data Center Breakthrough
The EPYC line, built on Zen, directly targeted the cloud and enterprise data center markets previously owned by Intel’s Xeon. By 2024, the fifth-generation EPYC “Turin” processors, with up to 192 Zen 5 cores per chip, set industry records in performance, cost efficiency, and memory bandwidth. AMD’s server market share climbed to 34%, and major hyperscalers (Google, Microsoft, Oracle, AWS) deployed EPYC at scale in both CPU and GPU-rich environments.
GPU Innovation: The AMD Instinct AI Accelerator Series
The AI and data center GPU revolution is anchored by AMD’s Instinct product line, with the MI300 series (MI300X/MI300A) and the follow-on MI350/MI355 positioning AMD directly against NVIDIA’s formidable H100/B100/GB200 lineup. These accelerators harness:
- High core counts and matrix compute units for AI/ML training and inference
- Massive HBM (high-bandwidth memory) pools (288GB+), exceeding NVIDIA’s flagship for large model context windows
- Modular chiplet designs for flexibility, yield, and cost
- Integration with AMD’s CPU platforms for coherent, full-stack data center deployments
Notably, the MI300 is reported as the “fastest-ramping product in AMD history,” reaching $1 billion in sales within its first two quarters on the market and exceeding $5B in forecast GPU sales for 2024—a nearly tenfold jump year-over-year.
Section 5: The Growth and Transformation of AMD’s AI Data Center GPU Business
Data Center Segment: Surging GPU Sales
Lisa Su placed AI and the data center at the heart of AMD’s long-term strategy as early as 2018. Since then, the data center segment has become AMD’s principal engine of growth. In Q1 2025, data center segment revenue reached $3.7 billion, up 57% year-over-year, with the Instinct GPU and EPYC CPU businesses both posting record results. Data center GPU sales surpassed $5 billion in 2024—a massive leap from less than $100 million in 2022—with AI accelerators overtaking gaming as a growth driver.
Current Momentum
- MI300X/MI355 Chip: The MI300X and the newer MI355 now directly compete on price/performance with NVIDIA’s B200—offering higher VRAM, comparable throughput, and reportedly “three times the inference performance” of prior AMD generations.
- Customer Adoption: Over 100 enterprise and hyperscale customers from Microsoft (Azure), Meta, and Google to major research institutions now deploy or actively develop on AMD Instinct GPUs.
- Vertical Integration: AMD is not only selling GPUs but packaging full “rack-level” solutions, drawing on expertise acquired with Xilinx and ZT Systems, to address next-generation, trillion-parameter AI models.
Partnership with Cloud and Hyperscaler Giants
Perhaps the most significant proof of AMD’s traction is its strategic partnerships with Microsoft (Azure), Meta, Google, Oracle, and other cloud megascale customers. Microsoft’s Azure ND MI300X v5 virtual machines provide general availability of AMD accelerators for AI workloads, including powering the Azure OpenAI Service (serving GPT-3.5 and GPT-4 at production scale as an alternative to NVIDIA’s H100).
Meta and other AI-first companies have begun migrating research and inference clusters—particularly for very large language models—onto AMD hardware to reduce dependence on NVIDIA’s supply-constrained and premium-priced stack.
Strategic Bet on AI Infrastructure and Open Ecosystem
AMD has been increasingly chosen as the “scale alternative” by hyperscalers looking to diversify risk, improve negotiating leverage over NVIDIA, and broaden the hardware ecosystem for AI innovation. The Instinct roadmap promises annual updates, directly mirroring NVIDIA’s fast development cycles.
Section 6: ROCm and the Battle for AI Developer Ecosystem Supremacy
The CUDA Challenge: Software as a Moat
NVIDIA’s dominance in AI accelerators is tethered to its proprietary CUDA developer stack—an ecosystem built over 15+ years, now the default for most AI researchers, frameworks, and models. Breaking into this developer lock-in has proven more challenging than matching hardware performance alone.
AMD’s Open Approach: ROCm
To counter CUDA, AMD has invested heavily in ROCm (Radeon Open Compute)—an open-source, modular stack that natively supports frameworks like PyTorch, TensorFlow, Hugging Face transformers, vLLM, DeepSpeed, and others. Major ROCm milestones include:
- ROCm 6/7: Adds support for PyTorch 2.x, FP4/FP6 data types, out-of-the-box compatibility with over 2 million Hugging Face models, and distributed inference optimizations.
- AMD Developer Cloud: Provides frictionless access for devs and enterprises to MI300X hardware, containers, and Jupyter workflows, rapidly lowering barriers to entry and facilitating competitive benchmarking.
ROCm Ecosystem Acceleration
- Community and Enterprise Support: Meta, Microsoft, Cohere, and Hugging Face have co-developed and adopted ROCm support to ensure parity with CUDA tooling for their most advanced LLMs.
- Open Source Commitment: AMD has open-sourced critical driver, runtime, kernel, and library layers, inviting community and partner collaboration and avoiding single-vendor lock-in.
Challenges Remain
Despite robust progress, ROCm still trails CUDA in developer mindshare, stable backward compatibility, and the breadth of optimized third-party libraries. However, the gap has narrowed significantly; as of mid-2025, AMD claims its ROCm stack now delivers a 3.5x improvement in inference performance over previous versions and can run top research models (e.g., Llama 3, DeepSeek, Gemma 3) day-one with “no code changes” on Azure infrastructure.
Section 7: The Chiplet Strategy and the Power of the TSMC Partnership
The Rise of Chiplet Innovation
AMD’s early bet on chiplets—that is, constructing CPUs and GPUs from multiple, smaller silicon modules—delivered decisive cost, flexibility, and scalability advantages. As transistor scaling grew more expensive for “monolithic” silicon, chiplets allowed AMD to mix process nodes, rapidly adapt product lines, and increase manufacturing yield.
This approach proved vital:
- TSMC Leverage: AMD could use TSMC’s newer nodes for compute-intensive chiplets, while less dense I/O chiplets remained on older, lower-cost nodes.
- High-Core-Count Innovation: Enabled 64–192 core CPUs and multi-die accelerators, surpassing Intel’s monolithic limits in the data center.
- Rapid Adaptability: The MI300X, for example, leverages a multi-chiplet GPU design that can be easily customized for supercomputing or LLM inference tasks.
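The yield advantage behind chiplets can be illustrated with a textbook Poisson defect model: a random defect scraps an entire monolithic die, but only one small chiplet in a modular design. The defect density and die areas below are illustrative assumptions for the sketch, not AMD or TSMC data:

```python
import math

# Poisson die-yield model: yield = exp(-D * A), where D is defect density
# (defects/cm^2) and A is die area (cm^2). Illustrative numbers only --
# not actual AMD or TSMC process data.
D = 0.2  # assumed defects per cm^2

def die_yield(area_cm2: float) -> float:
    """Probability that a die of the given area has zero defects."""
    return math.exp(-D * area_cm2)

# Monolithic: one 800 mm^2 (8 cm^2) die; one defect scraps the whole part.
mono_yield = die_yield(8.0)              # ~20%
mono_cost = 8.0 / mono_yield             # cm^2 of silicon per good part

# Chiplets: eight 100 mm^2 dies; bad chiplets are discarded individually,
# so wasted silicon scales with the small-die yield, not the package.
chiplet_yield = die_yield(1.0)           # ~82%
chiplet_cost = 8 * (1.0 / chiplet_yield) # cm^2 of silicon per good 8-die set

print(f"monolithic yield {mono_yield:.1%}, silicon per good part {mono_cost:.1f} cm^2")
print(f"chiplet yield    {chiplet_yield:.1%}, silicon per good part {chiplet_cost:.1f} cm^2")
```

Under these assumptions the chiplet approach wastes roughly 4x less silicon per good part, which is the economic intuition behind the strategy described above.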
TSMC: Fab and Technology Partner
AMD’s deep, strategic partnership with TSMC has proven mutually beneficial. Beyond foundry supply, TSMC uses AMD EPYC processors extensively in its own massive datacenters, and collaborative roadmaps ensure the earliest process node access for high-performance AI workloads.
TSMC manufacturing underpins nearly all of AMD’s key AI and CPU products, blending cutting-edge process (3nm, 5nm) and advanced packaging (chip-on-wafer, hybrid bonding) for optimal density and efficiency.
Section 8: Market Dynamics – AMD vs. NVIDIA in the AI Chip War
Head-to-Head: Performance, Cost, and Adoption
Performance Trends
- MI300X/MI350/MI355: Features up to 288GB of HBM3E memory (vs. 192GB for NVIDIA’s B200 Blackwell), enabling single-GPU deployment of models up to 520B parameters—an increasingly vital metric for LLMs.
- Benchmarks: Synthetic and MLPerf benchmarks suggest AMD’s MI355 is comparable to NVIDIA’s B200 and GB200 in inference (for models such as Llama 3, Mistral, DeepSeek) and offers leading performance-per-dollar for inference at scale.
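The link between HBM capacity and the 520B-parameter figure above is simple arithmetic: at 4-bit (FP4) precision each weight takes half a byte, so the weights alone need about 260GB—fitting in 288GB of HBM but not in 192GB. A rough weights-only check (real deployments also need memory for KV cache and activations):

```python
# Weights-only memory footprint for a 520B-parameter model at various
# precisions. Rough check: ignores KV cache, activations, and overheads.
params = 520e9

def weight_gb(params: float, bits: int) -> float:
    """Bytes needed for the weights alone, in decimal GB."""
    return params * bits / 8 / 1e9

fp4 = weight_gb(params, 4)   # 260 GB -> fits in 288GB HBM3E, not in 192GB
fp8 = weight_gb(params, 8)   # 520 GB -> exceeds any single GPU today

print(f"FP4 weights: {fp4:.0f} GB")   # prints "FP4 weights: 260 GB"
print(f"FP8 weights: {fp8:.0f} GB")   # prints "FP8 weights: 520 GB"
```

This is why memory capacity has become a headline spec for inference accelerators: it sets the largest model a single GPU can host without sharding.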
Economic and Adoption Factors
- Pricing: AMD’s GPUs generally price 10–30% below NVIDIA’s, with cost-efficient configurations attractive to hyperscalers running massive inference clusters or academic research clouds. On Microsoft Azure, MI300X instances reportedly undercut comparable H100 instances by about 5–10% for the largest LLM inference workloads.
- Market Share Trajectory: While NVIDIA retains roughly 85–90% of the AI data center accelerator market, analysts project AMD’s share could grow to 15% or higher by 2026, particularly as cloud vendors demand multi-sourcing and open ecosystems.
Roadblocks
- Software Ecosystem: Although AMD is making headway, the maturity and developer lock-in of CUDA/libraries remain NVIDIA’s most durable moat.
- Systems Integration: NVIDIA’s “full-stack” approach—tightly coupled chips, networking (NVLink), and system orchestration—still provides an edge in turnkey solutions for AI training clouds, though AMD’s rack-level and scale-out architecture initiatives (UltraEthernet, UALink, Helios) are catching up.
Competitive Milestones and Timeline
Year | Milestone/Event | Significance |
---|---|---|
2014 | Lisa Su named CEO; AMD near bankruptcy | Reset strategy around performance CPUs/GPUs |
2017 | Zen (Ryzen) CPUs launched | Recaptured desktop market share; laid groundwork for server gains |
2018–19 | First Zen-based EPYC server CPUs; chiplet strategy takes shape | Gained data center market share; chiplet adoption |
2021 | Instinct MI200 debuts; EPYC server momentum builds | AMD’s transition to serious data center competitor; entry into HPC/AI GPUs |
2022 | $49B Xilinx acquisition; Su becomes Chair | Expands into adaptive/FPGA computing |
2023 | Instinct MI300X arrives; launches in Azure, Meta clusters | Enters hyperscale LLM inference race against NVIDIA |
2024 | Data center GPU revenue hits $5B+; ROCm ecosystem scaled | MI350 announced, surpassing NVIDIA in memory capacity for LLMs |
2025 | MI355, MI350X launched; MI325X, MI400, and Helios platform roadmapped | AI roadmap promises annual cadence; up to 432GB HBM4 coming on MI400 |
Section 9: Regulatory, Geopolitical, and Macroeconomic Context
Export Controls and Geopolitics
The U.S. government imposed stringent export controls in 2024–25 on high-end GPU sales to China, targeting both NVIDIA and AMD chips (e.g., MI308, MI350). These restrictions impacted AMD’s ability to serve what had once been a significant growth market, cutting estimated 2025 revenue by $1.5 billion and forcing a focus shift to North America, Europe, and India.
A compromise struck in July–August 2025 now allows AMD and NVIDIA to resume select chip sales to China, but both must pay a 15% “fee” (effectively an export tax) on all such sales to the U.S. government under a new bilateral agreement. This framework creates both revenue headwinds and new supply chain complexities, but also cements NVIDIA’s and AMD’s strategic relevance.
Policy and Supply Chain Adaptation
With about a quarter of all AMD revenue historically tied to China, Su’s global diversification strategy—including investments in India, Japan, and the EU—is designed to mitigate country risk and align with shifting sovereign cloud and AI infrastructure policies around the globe.
Section 10: Analyst and Investor Perspectives on AMD’s AI Future
Wall Street Sentiment
AMD is now firmly viewed as the principal challenger to NVIDIA’s AI hegemony. Multiple analysts, including top-ranked William Stein of Truist and Hans Mosesmann at Rosenblatt Securities, have raised price targets, citing an inflection point as hyperscalers move from “price checking” to “truly deploying AMD at scale” in the data center.
Financial metrics, such as 55% year-over-year growth in net income, soaring free cash flow, and disciplined capital allocation (e.g., increased buybacks), have further emboldened bullish sentiment, despite ongoing margin pressures and export-related headwinds.
AI Revenue Forecasts and Growth Roadmap
- 2024 AI GPU Revenue: Initial projections of $4.5–5B, up from under $100M in 2022; actuals may reach $5–5.5B.
- 2025 AI GPU Revenue: Estimates range from $8–12B, with upside depending on MI350/355 ramp and further cloud scale-out.
- Analysts' View: Expect MI400/Helios to further boost competitive posture in training as well as inference.
Section 11: AMD's Future AI Chip Roadmap and Strategic Direction
Innovation and the Annual Cadence Commitment
Lisa Su’s public commitment to an annual AI chip launch—matching or exceeding NVIDIA’s pace—underscores the urgency AMD feels to deliver continual product leadership. The MI350 series, launched in 2025, brings 288GB of HBM3E and up to 35x the inference performance of the MI300 series, enabling models of up to 520B parameters on a single node. The MI355 is optimized for lower precision (FP4, FP6), critical for massive-model inference and training.
Roadmap Highlights
- MI350 Series (2025): 288GB HBM3E, up to 35x inference perf vs. MI300, FP6/FP4 optimized.
- MI400 Series (2026): Targeting 432GB HBM4, UltraEthernet/UALink for rack-scale integration, up to 40 petaflops per GPU at FP4.
- Helios Rack-Scale Platform (2026–27): Tight integration of CPUs (Venice), NICs (Vulcano), and up to 72 MI400 GPUs per rack—competing with NVIDIA’s NVL72 and Kyber platforms for exascale AI deployments.
Strategic Bets
- Full-Stack Integration: Leveraging ZT Systems and Pensando for system, networking, and storage integration, not just building chips.
- Open Standards: Founding member of UALink and the Ultra Ethernet Consortium, and active in the Open Compute Project, advocating for multi-vendor consistency in AI infrastructure.
- AI Developer Cloud: Removing friction for developers and enterprises to both benchmark and port CUDA workloads into ROCm, democratizing access and spurring research innovation.
Section 12: Timeline of Key Milestones in Lisa Su's AMD Tenure
Below is a table of key events and milestones illustrating AMD’s transformation under Lisa Su:
Year | Event/Milestone | Strategic Significance |
---|---|---|
2012 | Lisa Su joins AMD as SVP/GM | Sets foundation for operational reset |
2014 | Becomes CEO; AMD on brink | Begins turnaround, bets on Zen/Chiplets |
2017 | Launch of Zen (Ryzen) CPUs | Recaptures performance edge, starts CPU share gains |
2018 | Launches EPYC for server | Entry into data center, product diversification |
2019 | Chiplet design goes mainstream | Cost/performance leap, manufacturing flexibility |
2021 | 3rd-gen EPYC ships in volume; market cap approaches Intel’s | AMD established as a leading server CPU provider |
2022 | $49B Xilinx deal closes, Su becomes Chair | Adaptive compute/AI, larger addressable markets |
2023 | Launch of Instinct MI300, first >$1B data center GPU revenue | AI accelerators take off, cloud partnerships deepen |
2024 | Data center GPU sales exceed $5B, MI350 announced | AMD as credible NVIDIA AI rival, ROCm scales |
2025 | MI355/MI325X release, Azure ND MI300X VM general release, ROCm 7 hits | Accelerates AI market share gains, ramps open developer cloud |
2026 | MI400 and Helios rack platform planned release | Prepares for exascale AI and next-gen compute era |
Section 13: Conclusion – Lessons and Implications from the AMD Transformation
AMD’s rise under Lisa Su provides a textbook example of technology-led corporate renewal. Her leadership recomposed the company’s internal culture, set a relentless focus on product and engineering excellence, and drove AMD to make bold architectural and business bets—particularly in chiplets, AI hardware, and open ecosystems—which have enabled it to outcompete much larger and better-resourced rivals. The surge in AMD’s data center GPU sales and 55-fold valuation increase stem less from cost optimization and more from strategic vision, aggressive innovation cycles, and persistent execution amidst daunting competitive and geopolitical challenges.
Today, AMD stands not just as a survivor but as a true innovator in AI, HPC, and cloud infrastructure, increasingly regarded by hyperscalers, developers, and investors as the essential second force keeping the world’s AI infrastructure competitive, open, and cost-effective. Its journey demonstrates that software ecosystems, deep partnerships (with TSMC and cloud providers), supply chain agility, and an open innovation strategy are as crucial as transistor counts on a chip.
Lisa Su’s legacy continues to unfold—her annual cadence roadmap and system-level ambitions for rack-scale AI are forcing even the industry’s incumbents to adapt and accelerate. The story of AMD under her leadership is not only a lesson for semiconductor companies, but a case study in transformation for all industries facing disruption, scale, and relentless technological change.
Table: Key Milestones in AMD's Growth Under Lisa Su
Year | Milestone/Event | Market Cap | Strategic Significance |
---|---|---|---|
2014 | Lisa Su named CEO; AMD near bankruptcy | ~$3 billion | Launches reset; bets on Zen architecture, chiplets |
2017 | Zen (Ryzen) CPUs launch | ~$10–20 billion | Returns AMD to CPU performance parity; breaks Intel monopoly |
2018 | EPYC server CPUs launch | ~$15–30 billion | Data center entry; enterprise and hyperscaler adoption starts |
2019 | Chiplet strategy mainstreamed | ~$40 billion | Enables high scalability; multi-foundry, cost-effective design |
2021 | 3rd-gen EPYC ramps; server share hits double digits | ~$90 billion | Establishes AMD as a top server CPU provider |
2022 | $49B Xilinx acquisition; Su becomes Chair | ~$120 billion | Expands into adaptive, FPGA, and AI compute |
2023 | Launch of Instinct MI300X; Azure/Meta deals | ~$180 billion | Data center GPU sales top $1B; AI accelerators take off |
2024 | $5B+ data center GPU sales, ROCm 6/7 scale | $200 billion | MI350/MI355 match or beat NVIDIA B200; software ecosystem grows |
2025 | MI325X, MI355X, AI roadmap annual cadence | $220 billion | Cloud scale, rack-level integration, multi-cloud deployments |
2026 | MI400/Helios (planned) | Future >$250B | Prepares for exascale AI, tight system/network integration |
Lisa Su's tenure is characterized by a relentless focus on architectural innovation (Zen/Chiplets), on-time execution of major roadmaps (Ryzen, EPYC, Instinct), strategic partnerships (TSMC, cloud hyperscalers), and an embracing of open-source and developer ecosystems (ROCm) that have dramatically increased both AMD’s competitiveness and value. Each milestone demonstrates both technical differentiation and virtuous compounding of customer, market share, and ecosystem strengths, culminating in AMD’s current role as a true AI chip industry leader. Her example sets a new high-water mark for transformational leadership in the semiconductor sector and beyond.
37. The Engineer Who Needs No Limelight: Lisa Su Took AMD From the Brink of ....