Thunder: Lightning AI's Compiler for Faster Model Training

Lightning AI, in partnership with Nvidia, has introduced Thunder, a source-to-source compiler aimed at speeding up the training of AI models across multiple GPUs. The tool can deliver up to a 40% increase in training speed for large language models compared to unoptimized code. Thunder was unveiled at Nvidia GTC, underscoring Lightning AI's commitment to advancing PyTorch-based deep learning on Nvidia hardware. Development is led by PyTorch core developer Thomas Viehmann, and Thunder is designed to scale generative AI models across multiple GPUs.

The Thunder compiler addresses the problem of underutilized GPUs and offers a way for organizations to streamline their model training workflows. By optimizing GPU usage and pairing with profiling tools, Thunder lets customers train large language models faster and more efficiently. Thunder is now available following the release of Lightning 2.2, with pricing tiers for individual developers, researchers, startups, and larger organizations.
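As an illustration, here is a minimal sketch of how a PyTorch module might be compiled with Thunder, assuming the lightning-thunder package is installed; the exact entry point (thunder.jit) and behavior may differ from this sketch.

```python
# Minimal sketch of compiling a PyTorch model with Thunder.
# Assumes the lightning-thunder package is installed; thunder.jit is
# the documented entry point, but the API may evolve.
import torch
import thunder

# A small stand-in model; a real workload would be an LLM.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
)

# thunder.jit wraps the module; the forward pass is traced and
# optimized execution plans (e.g. fused kernels) are generated.
compiled_model = thunder.jit(model)

x = torch.randn(8, 1024)
out = compiled_model(x)  # first call triggers tracing/compilation
print(out.shape)         # torch.Size([8, 1024])
```

In this sketch the compiled module is a drop-in replacement for the original, so existing training loops can adopt it with minimal changes.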

In summary, Thunder from Lightning AI promises to improve the speed and efficiency of AI model training by making fuller use of available GPUs. With its compatibility with Nvidia hardware and the expertise of PyTorch core developer Thomas Viehmann behind it, Thunder offers a practical path to optimizing model training. By adopting Thunder, organizations can save time and resources while scaling their AI training workloads.

Source link: https://www.globalvillagespace.com/tech/introducing-thunder-lightning-ais-next-generation-ai-compiler-for-enhanced-model-training-speed/
