Leading users and industry-standard benchmarks agree: NVIDIA H100 Tensor Core GPUs deliver the best AI performance, especially on the large language models (LLMs) powering generative AI.

H100 GPUs set new records on all eight tests in the latest MLPerf training benchmarks released today, excelling on a new MLPerf test for generative AI. That excellence is delivered both per-accelerator and at scale in massive servers.

For example, on a commercially available cluster of 3,584 H100 GPUs co-developed by startup Inflection AI and operated by CoreWeave, a cloud service provider specializing in GPU-accelerated workloads, the system completed the massive GPT-3-based training benchmark in less than eleven minutes.

Top Performance Available Today

"Our customers are building state-of-the-art generative AI and LLMs at scale today, thanks to our thousands of H100 GPUs on fast, low-latency InfiniBand networks," said Brian Venturo, co-founder and CTO of CoreWeave. "Our joint MLPerf submission with NVIDIA clearly demonstrates the great performance our customers enjoy."

Inflection AI harnessed that performance to build the advanced LLM behind its first personal AI, Pi, which stands for personal intelligence. The company will act as an AI studio, creating personal AIs users can interact with in simple, natural ways.