
    AMD Unveils ROCm 6.2.3 Enhancing AI Performance on Radeon GPUs




    Iris Coleman
    Oct 13, 2024 02:37

    AMD releases ROCm 6.2.3, boosting AI capabilities on Radeon GPUs with support for Llama 3, Stable Diffusion 2.1, and the Triton framework, improving AI development efficiency.





    AMD has launched the latest iteration of its open compute software, AMD ROCm™ 6.2.3, specifically engineered to enhance the performance of Radeon GPUs on native Ubuntu® Linux® systems. This update is aimed at providing superior inference performance for AI models, notably the Llama 3 70BQ4, and enables developers to integrate Stable Diffusion (SD) 2.1 text-to-image capabilities into their AI projects, according to AMD.com.
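
    As a rough illustration of that text-to-image workflow, the sketch below uses the Hugging Face diffusers library with a ROCm build of PyTorch; the library, model ID, and prompt are illustrative assumptions rather than part of AMD's release notes.

        # Minimal sketch: Stable Diffusion 2.1 text-to-image via Hugging Face diffusers
        # on a ROCm build of PyTorch (Radeon GPUs appear as the "cuda" device).
        # Assumes: diffusers, transformers, and a ROCm-enabled torch are installed.
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-1",  # public SD 2.1 checkpoint (assumed)
            torch_dtype=torch.float16,
        )
        pipe = pipe.to("cuda")  # ROCm PyTorch exposes Radeon GPUs through the cuda device string

        image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
        image.save("lighthouse.png")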

    Key Features of ROCm 6.2.3

    The new ROCm 6.2.3 release brings several advanced features aimed at accelerating AI development:

    • Support for Llama 3 via vLLM: Provides exceptional inference performance on Radeon GPUs with the Llama 3 70BQ4 model (see the vLLM sketch after this list).
    • Flash Attention 2 Integration: A forward-enablement feature designed to optimize memory usage and improve inference speed.
    • Stable Diffusion 2.1 Support: Developers can now incorporate SD text-to-image models into their AI applications.
    • Triton Framework Beta Support: This allows developers to write high-performance AI code with minimal expertise, utilizing AMD hardware efficiently.
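
    As referenced above, the following is a minimal sketch of offline Llama 3 inference through vLLM's Python API on a ROCm-enabled install; the model ID and sampling settings are placeholders, not the exact 70BQ4 configuration AMD benchmarked.

        # Minimal sketch: offline inference with vLLM on a ROCm-enabled install.
        # Assumes vLLM is built with ROCm support and the chosen model fits in GPU memory;
        # the model ID is an illustrative placeholder, not AMD's benchmarked 70BQ4 build.
        from vllm import LLM, SamplingParams

        llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")  # swap in a quantized 70B checkpoint as needed
        params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)

        outputs = llm.generate(["Explain what ROCm is in one paragraph."], params)
        for out in outputs:
            print(out.outputs[0].text)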

    Advancements in AI Development

    Erik Hultgren, Software Product Manager at AMD, emphasized that ROCm 6.2.3 targets specific features to expedite generative AI development. The release includes professional-level performance enhancements for Large Language Model (LLM) inference via vLLM and Flash Attention 2. It also introduces beta support for the Triton framework, broadening the scope for AI development on AMD hardware.
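
    To give a sense of what Triton code looks like on this stack, here is a standard vector-add kernel; it is a generic Triton example, not an AMD-provided sample, and assumes PyTorch and Triton are installed with ROCm support.

        # Minimal sketch: a generic Triton vector-add kernel (not an AMD-provided sample).
        # Assumes PyTorch and Triton with ROCm support; Radeon GPUs show up as the
        # "cuda" device in ROCm builds of PyTorch.
        import torch
        import triton
        import triton.language as tl

        @triton.jit
        def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
            pid = tl.program_id(axis=0)
            offsets = pid * BLOCK + tl.arange(0, BLOCK)
            mask = offsets < n_elements
            x = tl.load(x_ptr + offsets, mask=mask)
            y = tl.load(y_ptr + offsets, mask=mask)
            tl.store(out_ptr + offsets, x + y, mask=mask)

        def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
            out = torch.empty_like(x)
            n = x.numel()
            grid = (triton.cdiv(n, 1024),)
            add_kernel[grid](x, y, out, n, BLOCK=1024)
            return out

        a = torch.rand(4096, device="cuda")
        b = torch.rand(4096, device="cuda")
        print(torch.allclose(add(a, b), a + b))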

    Evolution of ROCm Support

    AMD’s ROCm support for Radeon GPUs has significantly evolved over the past year, starting with the 5.7 release. Version 6.0 expanded capabilities by incorporating the ONNX runtime and formally qualifying more Radeon GPUs, including the Radeon PRO W7800. The 6.1 update marked another milestone with multi-GPU configuration support and integration with the TensorFlow framework.

    With the current release, ROCm 6.2.3 continues to focus on Linux® systems, with plans to introduce Windows® Subsystem for Linux® (WSL 2) support soon. This strategic approach aims to further enhance the ROCm solution stack for Radeon GPUs, positioning it as a robust option for AI and machine learning development.

    For additional information and resources, visit AMD’s official community page.

    Image source: Shutterstock



