China is stepping up its chip self-sufficiency push by combining relatively mature chips with new computing architectures in ...
There are lots of ways that we might build out the memory capacity and memory bandwidth of compute engines to drive AI and HPC workloads better than we have been able to do thus far. But, as we were ...
High Bandwidth Memory (HBM) is the type of DRAM commonly used in data center GPUs such as NVIDIA's H200 and AMD's MI325X. High Bandwidth Flash (HBF) is a stack of flash chips with an HBM interface. What ...
At AMD’s Financial Analyst Day earlier this month (which was actually more interesting than it initially sounds), AMD finally confirmed that it was looking to use high-bandwidth memory (HBM) in an ...
If the HPC and AI markets need anything right now, it is not more compute but rather more memory capacity at a very high bandwidth. We have plenty of compute in current GPU and FPGA accelerators, but ...
TL;DR: SK hynix CEO Kwak Noh-Jung unveiled the "Full Stack AI Memory Creator" vision at the SK AI Summit 2025, emphasizing collaboration to overcome AI memory challenges. SK hynix aims to lead AI ...
Microsoft today announced a new security feature for the Windows operating system. Named "Hardware-enforced Stack Protection," this feature allows applications to use the local CPU hardware to protect ...