XDA Developers on MSN
Stop obsessing over your GPU's core clock — memory clock matters more for local LLM inference
Your self-hosted LLMs care more about your memory performance ...
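A rough way to see why memory bandwidth dominates local LLM decode speed: generating each token requires streaming essentially all of the model weights from VRAM, so token throughput is bounded by bandwidth divided by model size. A minimal back-of-envelope sketch (the bandwidth and model-size figures below are illustrative assumptions, not measured values):

```python
def decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on decode throughput for a memory-bound LLM:
    each token streams all weights once, so tok/s <= bandwidth / model size."""
    return bandwidth_gb_s / model_size_gb

# Illustrative numbers: a 7B-parameter model at 4-bit quantization is roughly
# 4 GB of weights; ~1008 GB/s is RTX 4090-class memory bandwidth.
print(decode_tokens_per_sec(bandwidth_gb_s=1008.0, model_size_gb=4.0))  # 252.0
```

Real throughput lands below this ceiling (KV-cache reads, compute, and kernel overhead all cost something), but the bound explains why raising memory clocks moves the needle more than raising core clocks for single-user inference.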

I'm focusing on this GPU spec instead of VRAM
Which GPU specs best predict performance? In one sense the question is moot, since benchmarks should be your north star when evaluating multiple GPUs. That said, VRAM has been a major point of ...
Nvidia announced an 80GB Ampere A100 GPU this week, for AI software developers who really need some room to stretch their legs.
TL;DR: AMD's next-gen Instinct MI450X AI accelerator has pushed NVIDIA to enhance its Rubin VR200 AI GPU, increasing memory bandwidth to 20TB/sec and power to 2300W TGP. Both companies are advancing ...
Fresh and tasty Nvidia GPU rumors are here, with the latest Nvidia GeForce RTX 5090 leak suggesting the future flagship RTX 50 graphics card could have a ludicrously high memory bandwidth that's 78% ...
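Headline bandwidth figures like these follow directly from the memory's per-pin data rate and the bus width: bandwidth (GB/s) = data rate (Gb/s per pin) × bus width (bits) ÷ 8. A quick sketch of that arithmetic (the 28 Gb/s GDDR7 rate and 512-bit bus are rumor-derived assumptions, not confirmed specs):

```python
def memory_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gb/s) times
    bus width (bits), divided by 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

# Illustrative: 28 Gb/s GDDR7 on a 512-bit bus
print(memory_bandwidth_gb_s(28.0, 512))  # 1792.0 GB/s
```

For reference, 1792 GB/s would sit about 78% above the RTX 4090's 1008 GB/s (21 Gb/s GDDR6X on a 384-bit bus), which is consistent with the leaked figure.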
Intel has a new workstation GPU aimed at local AI.
NVIDIA has launched the new compact single-slot RTX PRO 4500 Blackwell Server Edition with 32GB of GDDR7 memory for servers ...
Kioxia announced the development of its Super High IOPS SSD, a new type of SSD that enables the GPU to directly access high-speed flash ...
High Bandwidth Memory (HBM) is the commonly used type of DRAM for data center GPUs like NVIDIA's H200 and AMD's MI325X. High Bandwidth Flash (HBF) is a stack of flash chips with an HBM interface. What ...
Kioxia announced its ultra-fast GP SSD series for AI workloads at GTC 2026. Micron, Samsung, and Phison also had their ...
Nvidia kicked off its GPU Technology Conference keynote with a bevy of new product announcements, including a new GPU architecture codenamed Pascal and a high-speed CPU-GPU interconnect.