At ISC'23, Intel details competitive performance for diverse HPC and AI workloads, from memory-bound to generative AI, and introduces a new science LLM initiative to democratize AI.

At the ISC High Performance Conference, Intel showcased leadership performance for high performance computing (HPC) and artificial intelligence (AI) workloads; shared its portfolio of future HPC and AI products, unified by the oneAPI open programming model; and announced an ambitious international effort to use the Aurora supercomputer to develop generative AI models for science and society.

- Intel's broad portfolio of HPC and AI products provides competitive performance, with the Intel® Data Center GPU Max Series 1550 showing an average speedup of 30% over Nvidia H100 on a wide range of scientific workloads.
- Enhanced oneAPI and AI tools help developers speed up HPC and AI workloads and enhance code portability across multiple architectures.
- Argonne National Laboratory and Intel announce full Aurora specifications, system momentum and an international initiative with Hewlett Packard Enterprise (HPE) and partners to bring the power of generative AI and large language models (LLMs) to science and society.
- Product roadmap updates highlight Granite Rapids, a next-generation CPU designed to address memory bandwidth demands, and the Falcon Shores GPU, designed to meet an expanding, diverse set of HPC and AI workloads.

Intel is committed to serving the high performance computing (HPC) and artificial intelligence (AI) communities with products that help customers and end-users make breakthrough discoveries faster. Intel's product portfolio – spanning Intel® Xeon® CPU Max Series, Intel® Data Center GPU Max Series, 4th Gen Intel® Xeon® Scalable processors and Habana® Gaudi®2 processors – meets the needs of the HPC community. At the same time, oneAPI and AI tools help developers speed up HPC and AI workloads and enhance code portability across multiple architectures.

"Intel is committed to serving the HPC and AI community with products that help customers and end-users make breakthrough discoveries faster," said Jeff McVeigh, Intel corporate vice president and general manager of the Super Compute Group. "Our product portfolio spanning Intel® Xeon® CPU Max Series, Intel® Data Center GPU Max Series, 4th Gen Intel® Xeon® Scalable processors and Habana® Gaudi®2 is outperforming the competition on a variety of workloads, offering energy and total cost of ownership advantages, democratizing AI and providing choice, openness and flexibility."

At the Intel special presentation, McVeigh highlighted the latest competitive performance results across the full breadth of hardware and shared strong momentum with customers. High memory bandwidth has been noted as among the most desired features for HPC customers. The Xeon Max Series CPU, the only x86 processor with high bandwidth memory, exhibits a 65% improvement over AMD's Genoa processor on the High Performance Conjugate Gradients (HPCG) benchmark 1, while using less power. The Intel® Data Center GPU Max Series outperforms the Nvidia H100 PCIe card by an average of 30% on diverse workloads 1, while independent software vendor Ansys shows a 50% speedup for the Max Series GPU over H100 on AI-accelerated HPC applications.

More: International Supercomputing Conference 2023 (Quote Sheet)