Intel's Next-Gen Falcon Shores GPU to Consume 1500 W, No Air-Cooled Variant Planned

erek

"Intel may need to develop proprietary hardware modules or a new Open Accelerator Module (OAM) spec to support such extreme power levels, as the current OAM 2.0 tops out around 1000 W. Slated for release in 2025, the Falcon Shores GPU will be Intel's GPU IP based on its next-gen Xe graphics architecture. It aims to be a major player in the AI accelerator market, backed by Intel's robust oneAPI software development ecosystem. While the 1500 W power consumption is sure to raise eyebrows, Intel is betting that the Falcon Shores GPU's supposedly impressive performance will make it an enticing option for AI and HPC customers willing to invest in robust cooling infrastructure. The ultra-high-end accelerator market is heating up, and the HPC accelerator market needs a Ponte Vecchio successor."


Source: https://www.techpowerup.com/322592/...-consume-1500-w-no-air-cooled-variant-planned
 
April 1st again already?
Not that much higher than an Nvidia B200; the GB200 will go to 2,700 W with the ARM CPU (1,200 W per GPU).

If they really have tiles, we could even ask why not go with a 5,000 W type affair, since almost all buyers end up buying many of them anyway.
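A quick back-of-the-envelope sketch of the numbers quoted above. The per-GPU figure comes from this thread; the CPU share is an assumption chosen only to make the 2,700 W total line up, not an official spec:

```python
# Rough power-budget arithmetic for the figures quoted in the thread.
# A GB200 "superchip" pairs one Grace ARM CPU with two Blackwell GPUs.
GPU_W = 1200          # per-GPU draw quoted above
CPU_W = 300           # assumed CPU share to reach the quoted 2,700 W total

gb200_total = 2 * GPU_W + CPU_W
print(gb200_total)    # 2700

# Falcon Shores at 1,500 W vs. a single 1,200 W Blackwell GPU:
falcon_shores_W = 1500
print(falcon_shores_W / GPU_W)   # 1.25, i.e. "not that much higher"
```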
 
Not that much higher than an Nvidia B200; the GB200 will go to 2,700 W with the ARM CPU (1,200 W per GPU).

If they really have tiles, we could even ask why not go with a 5,000 W type affair, since almost all buyers end up buying many of them anyway.
Good to know. I see these crazy high power numbers (relative to desktop parts) and it just blows my mind.
 
Good to know. I see these crazy high power numbers (relative to desktop parts) and it just blows my mind.
They are now trying to build nuclear reactors to power and cool those GPU farms; it will be quite something.

If we are lucky, artificial protection of fields that could be replaced by inference will not be too strong, so the power draw will be mitigated by industries swapping a week of giant-supercomputer time for a one-minute inference.
 
Not that much higher than an Nvidia B200; the GB200 will go to 2,700 W with the ARM CPU (1,200 W per GPU).

If they really have tiles, we could even ask why not go with a 5,000 W type affair, since almost all buyers end up buying many of them anyway.
But Nvidia doesn't use OAM; they already use their own proprietary SXM form factor, which supports more power and more connectors for better communication between sockets.
 