Silicon Valley “Chicken Talk”: SK Group Chairman Meets with Jensen Huang, Secures Over 55% of HBM4 Supply, and Initiates AI Infrastructure Collaboration
A Silicon Valley fried chicken shop may offer no fine tableware, but it can host business worth hundreds of billions of dollars.
According to Korean media reports, on February 5th local time, SK Group Chairman Chey Tae-won, who is visiting the United States, and NVIDIA CEO Jensen Huang held an informal “Chimaek” (chicken and beer) meeting at a Korean-style fried chicken restaurant called “99 Chicken” in Silicon Valley.
Observers suggest that the discussion may not be limited to negotiations over the supply of sixth-generation high-bandwidth memory (HBM4), but also include strategic cooperation in building next-generation AI data centers.
Notably, reports indicate that the meeting was far more private and personal than an ordinary business negotiation. Attendees included core SK Hynix executives, Chey Tae-won’s second daughter, Chey Min-jung of IntegralHealth, and Jensen Huang’s daughter, Madison Huang, a senior director in NVIDIA’s robotics division.
Analysts believe this “family-style” gathering sends a strong signal: the SK-NVIDIA relationship is being upgraded from a business partnership to a closer strategic alliance.
Looking back, whenever Chey Tae-won and Jensen Huang have met, the semiconductor market has been shaken up. Their May 2021 meeting directly led to the formation of the “AI triangle alliance” among SK, NVIDIA, and TSMC.
This time, industry insiders believe Chey Tae-won’s visit is not only about committing to on-time HBM4 delivery but also about SK Group’s strategic shift toward becoming a “comprehensive AI solutions provider.” As semiconductor industry sources put it:
This is a signal that SK Group is leveraging HBM as a catalyst to officially enter the next-generation AI infrastructure market.
HBM4 Supply: SK Hynix Promises Unhindered Delivery
The focus of this meeting is on ensuring the supply of HBM4. NVIDIA’s next-generation AI accelerator “Vera Rubin,” set to launch in the second half of the year, will use HBM4 with a capacity of 288 GB per chip.
Because HBM production takes about four months, with another two to three months for TSMC packaging, the entire cycle runs six to seven months, which deepens NVIDIA’s reliance on SK Hynix, the provider with the industry’s largest HBM capacity.
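As a rough illustration of how those figures chain together, here is a back-of-the-envelope sketch; the month values are the ones cited above, while the helper function and its name are purely illustrative.

```python
# Back-of-the-envelope HBM4 lead-time check using the months cited above.
# The 4-month DRAM production and 2-3-month packaging figures come from the
# report; the function and its naming are purely illustrative.

def hbm4_lead_time_months(dram_production=4, packaging_min=2, packaging_max=3):
    """Return the (min, max) end-to-end lead time in months."""
    return dram_production + packaging_min, dram_production + packaging_max

low, high = hbm4_lead_time_months()
print(f"End-to-end HBM4 lead time: {low} to {high} months")
# -> End-to-end HBM4 lead time: 6 to 7 months
```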
SK Hynix reached an agreement with NVIDIA at the end of last year to supply more than 55% of its HBM4 demand, and is currently optimizing performance. During the meeting, Chey Tae-won promised Jensen Huang “unhindered supply.”
Reports indicate that although SK Hynix’s HBM4 pairs a 12nm logic base die with 1b (fifth-generation 10nm-class) DRAM, technologies a generation older, its performance is comparable to Samsung Electronics’ products built on a 4nm logic die and 1c (sixth-generation 10nm-class) DRAM.
The market landscape has shifted this year. Samsung Electronics passed NVIDIA’s qualification testing for 12-layer HBM3E in September last year, breaking SK Hynix’s near-monopoly on that product, and this month it became the first to mass-produce and ship HBM4.
Samsung’s HBM4 runs at 11.7 Gb per second per pin, exceeding NVIDIA’s requirement of 10 to 11 Gb per second.
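For a sense of what those per-pin speeds imply at the stack level, the sketch below assumes the 2,048-bit-wide HBM4 interface defined by JEDEC, which the report itself does not mention, so the results should be read as estimates.

```python
# Rough per-stack bandwidth implied by the per-pin speeds quoted above.
# ASSUMPTION: a 2,048-bit-wide interface per HBM4 stack (JEDEC HBM4 spec);
# the report only gives per-pin speeds, so these figures are estimates.

INTERFACE_WIDTH_BITS = 2048

def per_stack_bandwidth_gb_s(pin_speed_gbps: float) -> float:
    """Convert per-pin speed (Gb/s) into per-stack bandwidth (GB/s)."""
    return pin_speed_gbps * INTERFACE_WIDTH_BITS / 8

for label, speed in [("NVIDIA requirement, low end", 10.0),
                     ("NVIDIA requirement, high end", 11.0),
                     ("Samsung HBM4 as reported", 11.7)]:
    print(f"{label}: ~{per_stack_bandwidth_gb_s(speed):,.0f} GB/s per stack")
# Samsung's reported 11.7 Gb/s per pin works out to roughly 3 TB/s per stack.
```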
Industry insiders expect that the two sides also discussed cooperation plans for the seventh-generation HBM (HBM4E), which will officially launch in 2027, and customized HBM (cHBM).
From “Selling Memory” to “Selling AI Infrastructure”
If keeping HBM4 supply unhindered is about “defending the territory,” Chey Tae-won’s real ambition is to “take new territory” by entering the AI data center infrastructure market.
Chey Tae-won is pushing SK Group toward becoming a “comprehensive AI solutions provider,” and his discussion with Jensen Huang is believed to involve collaboration across multiple levels, including AI semiconductors, servers, and data centers.
Further cooperation between SK Hynix and NVIDIA in enterprise-grade solid-state drives (eSSD) is highly anticipated.
In January this year, NVIDIA announced that Vera Rubin will adopt a new memory solution called “ICMS,” with a fully configured system carrying 9,600 terabytes of eSSD, a 16-fold increase over existing products. This presents a major opportunity for SK Hynix to expand in storage.
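A quick sanity check on those storage figures: the 9,600 TB capacity and the 16-fold increase are the reported numbers, and the “existing product” baseline is simply derived from them.

```python
# Sanity check on the ICMS storage figures quoted above.
# The 9,600 TB capacity and the 16x increase come from the report;
# the existing-product baseline is derived from those two numbers.

icms_capacity_tb = 9_600   # eSSD capacity of a fully configured Vera Rubin system
increase_factor = 16       # reported increase versus existing products

baseline_tb = icms_capacity_tb / increase_factor
print(f"ICMS capacity: {icms_capacity_tb:,} TB (~{icms_capacity_tb / 1000:.1f} PB)")
print(f"Implied existing-product baseline: ~{baseline_tb:,.0f} TB")
# -> about 9.6 PB per system versus an implied ~600 TB today
```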
Since February 3rd, Chairman Chey has been in the United States holding a series of meetings with several large tech companies including NVIDIA and Meta.
Reports indicate that, beyond high-bandwidth memory, the two sides may also have discussed next-generation server memory modules, namely SOCAMM (low-power DRAM modules for servers), as well as flash memory supply. Semiconductor industry insiders point out:
SOCAMM is the next battleground for reshaping AI server power architectures, and SK Group is leveraging HBM as a powerful lever to formally enter the next-generation AI infrastructure market.
SK Hynix has renamed its US flash memory subsidiary Solidigm to “AI Company,” making it a platform dedicated to SK Group’s AI investments and solutions business.
Its scope covers not only SK Hynix’s AI semiconductor business but also SK Telecom’s AI technology and solutions capabilities, with the aim of building a full-chain service offering that runs from AI data center design through semiconductor and server delivery. There are even plans to launch data center pilot projects in North America within the year.
This meeting signals Chey Tae-won’s intention to introduce this strategic concept to NVIDIA. Industry observers believe that the two sides may have discussed specific plans for SK Group to provide NVIDIA with comprehensive AI solutions.
This informal meeting at 99 Chicken restaurant could mark the beginning of another major reshuffle in the global semiconductor supply chain, following the 2021 “AI triangle alliance.”
Risk Warning and Disclaimer
Market risks exist; investments should be cautious. This article does not constitute personal investment advice and does not consider individual users’ specific investment goals, financial situations, or needs. Users should consider whether any opinions, views, or conclusions in this article are suitable for their particular circumstances. Invest accordingly at your own risk.