
Oct 31, 2024

News

Lablup x Intel Announce Support for Intel® Gaudi® 2 & 3 Platforms in Backend.AI


Seoul — At Supercomputing 2024, Lablup announced support for the Intel® Gaudi® 2 and Intel® Gaudi® 3 AI Accelerators* in Backend.AI. With Intel joining the AI accelerator vendors Backend.AI already supports, including NVIDIA, Rebellions, FuriosaAI, AMD, and others, Lablup can offer its customers the widest selection of AI accelerators and GPUs on the market, making the Backend.AI platform more competitive and giving customers more choice.

*As of November 2024, Backend.AI supports the Intel® Gaudi® 2 AI accelerator.

*Support for Intel® Gaudi® 3 AI Accelerators is planned for the first half of 2025.

Lablup and Intel have worked closely to unlock the power of the Intel® Gaudi® 2 & 3 Platform and make it available in Backend.AI. As a result of this collaboration, we are pleased to announce that Backend.AI now supports the Intel® Gaudi® 2 and Intel® Gaudi® 3 AI Accelerators.

Powerful container orchestration with Sokovan™ by Backend.AI

Sokovan™ is a standalone open-source container orchestrator, highly optimized for the latest hardware-acceleration technologies used in multi-tenant and multi-node scaling scenarios. With fully customizable job-scheduling and node-allocation policies, it accommodates a hybrid mix of interactive, batch, and service workloads in a single cluster without compromising AI performance.
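To make the scheduling model concrete, below is a minimal, self-contained Python sketch of a pluggable node-allocation policy in the spirit of what Sokovan™ enables; all class and function names here are illustrative and are not Sokovan's actual API.

```python
# Minimal, self-contained sketch of a pluggable scheduling policy.
# All names are illustrative -- this is NOT Sokovan's actual API.
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    free_slots: dict[str, float]  # e.g. {"cpu": 32, "accel": 8}


@dataclass
class Job:
    name: str
    kind: str                     # "interactive" | "batch" | "service"
    requested: dict[str, float]


def fits(node: Node, job: Job) -> bool:
    return all(node.free_slots.get(k, 0) >= v for k, v in job.requested.items())


def choose_node(nodes: list[Node], job: Job) -> Node | None:
    """One possible allocation policy: pack batch jobs tightly to keep whole
    nodes free for multi-node training, but spread service jobs across nodes
    for availability."""
    candidates = [n for n in nodes if fits(n, job)]
    if not candidates:
        return None  # job stays queued
    free = lambda n: sum(n.free_slots.values())
    return min(candidates, key=free) if job.kind == "batch" else max(candidates, key=free)


nodes = [Node("node-a", {"cpu": 16, "accel": 2}), Node("node-b", {"cpu": 64, "accel": 8})]
print(choose_node(nodes, Job("train", "batch", {"cpu": 8, "accel": 2})).name)    # node-a
print(choose_node(nodes, Job("serve", "service", {"cpu": 4, "accel": 1})).name)  # node-b
```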

Get the most out of your AI accelerators and realize their full potential.

In complex business environments, dependable AI performance and manageability are the keys to success. The Intel® Gaudi® 3 AI accelerator, Intel's latest release, offers powerful AI performance and features. Lablup Backend.AI, a Platform-as-a-Service, offers a wide range of features optimized for enterprise-grade AI environments.

Innovation deeply integrated with Intel® Gaudi® 2 & 3 Platform

Customers who have already adopted Intel® Gaudi® 2 or Intel® Gaudi® 3 AI Accelerators in their business environments, as well as those who will adopt the Intel® Gaudi® 2 & 3 Platform in the future, will benefit from the wide range of features that Lablup Backend.AI supports for Intel® Gaudi®. Check out the following examples made possible by Backend.AI on the Intel® Gaudi® 2 & 3 Platform.

Card-level accelerator allocation

Maximize Intel® Gaudi® 2 & 3 AI accelerator cluster utilization by allocating to each user exactly the number of physical cards they request. For example, customers can run and train models on their existing preferred platform and then serve them on the Intel® Gaudi® 2 & 3 Platform, or vice versa.
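As a rough sketch, requesting whole cards through the Backend.AI Python client SDK could look like the following; the resource-slot name gaudi2.device and the container image name are illustrative assumptions rather than confirmed identifiers, so check the Backend.AI documentation for the exact values in your deployment.

```python
# Hedged sketch: allocating whole accelerator cards through the Backend.AI
# Python client SDK. The resource-slot name "gaudi2.device" and the image
# name are illustrative assumptions, not confirmed identifiers.
from ai.backend.client.session import Session

with Session() as api:
    compute = api.ComputeSession.get_or_create(
        "gaudi2-pytorch:latest",       # hypothetical image reference
        resources={
            "cpu": 8,
            "mem": "64g",
            "gaudi2.device": 4,        # exactly 4 physical cards, no sharing
        },
    )
    print(compute.name)
```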

External storage allocation

Get the most out of integrated storage solutions in terms of performance. Utilize vendor-specific filesystem acceleration features without user intervention. Backend.AI supports major, widely used platforms such as Dell PowerScale, VAST Data, WEKA, and NetApp.

Multi-scale workloads

Whatever your environment, from single-card AI workloads that run small models to multi-node, multi-card AI workloads that run gigantic models, Backend.AI ensures the best performance. As of November 1, Backend.AI is ready to run single-card and single-node, multi-card AI workloads; multi-node, multi-card AI workload support will be finalized this year.
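Scaling the same idea up, a multi-node cluster session might be requested as sketched below; the cluster_size and cluster_mode parameters as well as the image and slot names are assumptions for illustration, so verify them against your client SDK version.

```python
# Hedged sketch: a multi-node, multi-card cluster session. The cluster_size /
# cluster_mode parameters and all names are assumptions for illustration.
from ai.backend.client.session import Session

with Session() as api:
    compute = api.ComputeSession.get_or_create(
        "gaudi2-pytorch:latest",                                   # hypothetical image
        resources={"cpu": 16, "mem": "128g", "gaudi2.device": 8},  # per container
        cluster_size=4,                                            # four containers across nodes
        cluster_mode="multi-node",                                 # vs. "single-node"
    )
    print(compute.name)
```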

Inference statistics management

Monitor up-to-date, detailed metrics about the performance delivered by your AI framework. Backend.AI makes inference statistics management easy, surfacing not only hardware-level information but also software-level metrics so that administrators can deep-dive into performance.
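To illustrate what combining hardware- and software-level metrics looks like, here is a small self-contained Python sketch that joins device utilization with serving-framework throughput into one view; the field names are hypothetical and do not reflect Backend.AI's actual metrics schema.

```python
# Toy illustration: one combined view of hardware- and software-level
# inference metrics. Field names are hypothetical, not Backend.AI's schema.
from dataclasses import dataclass


@dataclass
class ReplicaMetrics:
    replica: str
    device_util_pct: float    # hardware: accelerator utilization
    requests_per_sec: float   # software: reported by the serving framework
    p95_latency_ms: float     # software: tail latency


def report(samples: list[ReplicaMetrics]) -> None:
    for m in samples:
        # A derived signal an admin might deep-dive into: high device
        # utilization with low throughput hints at a software bottleneck.
        eff = m.requests_per_sec / max(m.device_util_pct, 1e-9)
        print(f"{m.replica}: util={m.device_util_pct:.0f}% "
              f"rps={m.requests_per_sec:.1f} p95={m.p95_latency_ms:.0f}ms eff={eff:.2f}")


report([ReplicaMetrics("r0", 92.0, 140.0, 38.0),
        ReplicaMetrics("r1", 95.0, 60.0, 120.0)])
```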

Rule-based inference replica auto scaling

Let the system self-optimize its resource usage. As user traffic to inference workloads varies, replicas are scaled automatically based on a combination of hardware and software performance metrics, so administrators do not need to manually manage the remaining resources.

*Currently in development (Targeting Dec. 2024)
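Because this feature is still in development, any concrete API would be speculative; the following is a generic Python sketch of how rule-based replica auto-scaling over mixed metrics can work, not Backend.AI's actual rule format.

```python
# Generic sketch of rule-based replica auto-scaling; this is NOT
# Backend.AI's actual rule format (the feature is still in development).
from dataclasses import dataclass


@dataclass
class ScalingRule:
    metric: str       # e.g. "device_util_pct" or "p95_latency_ms"
    threshold: float
    step: int         # +N replicas above threshold, -N when well below


def desired_replicas(current: int, metrics: dict[str, float],
                     rules: list[ScalingRule],
                     min_replicas: int = 1, max_replicas: int = 16) -> int:
    target = current
    for r in rules:
        value = metrics.get(r.metric, 0.0)
        if value > r.threshold:
            target += r.step            # scale out under pressure
        elif value < 0.5 * r.threshold:
            target -= r.step            # scale in when comfortably idle
    return max(min_replicas, min(max_replicas, target))


rules = [ScalingRule("device_util_pct", 85.0, 1),
         ScalingRule("p95_latency_ms", 100.0, 1)]
# Utilization is hot (+1) but latency is comfortable (-1): stays at 4.
print(desired_replicas(4, {"device_util_pct": 93.0, "p95_latency_ms": 40.0}, rules))
```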

NUMA-aware resource allocation

Achieve maximum bare-metal performance by eliminating inter-CPU and PCIe bus overheads within a single node when there are multiple CPU sockets and multiple accelerators attached to each socket.
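The underlying idea can be shown with a short, Linux-only Python sketch that reads an accelerator's NUMA node from sysfs and pins the workload to CPUs on the same socket; the PCI address is a placeholder, and these sysfs paths are standard Linux interfaces rather than anything Backend.AI-specific.

```python
# Linux-only sketch: discover which NUMA node a PCI accelerator sits on and
# pin the current process to that node's CPUs, avoiding cross-socket hops.
# The PCI address below is a placeholder; the sysfs paths are standard Linux.
import os
from pathlib import Path

PCI_ADDR = "0000:3d:00.0"  # placeholder -- use your accelerator's address


def numa_node_of(pci_addr: str) -> int:
    node = int(Path(f"/sys/bus/pci/devices/{pci_addr}/numa_node").read_text())
    return max(node, 0)  # -1 means "unknown"; fall back to node 0


def cpus_of_node(node: int) -> set[int]:
    cpus: set[int] = set()
    cpulist = Path(f"/sys/devices/system/node/node{node}/cpulist").read_text()
    for part in cpulist.split(","):  # e.g. "0-15,32-47"
        lo, _, hi = part.strip().partition("-")
        cpus.update(range(int(lo), int(hi or lo) + 1))
    return cpus


node = numa_node_of(PCI_ADDR)
os.sched_setaffinity(0, cpus_of_node(node))  # pin to the accelerator's socket
print(f"pinned to NUMA node {node}: CPUs {sorted(cpus_of_node(node))}")
```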

User-based, Project-based storage quota management

Manage data space easily and budget-efficiently by limiting storage quotas per user or per project.

Hugepage memory allocation support

Minimize CPU overhead when using accelerators by allocating fewer but larger memory pages, reducing address-translation overhead. This support will be finalized this year.
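The arithmetic behind this is straightforward, as the Linux-only sketch below shows; it maps an anonymous buffer backed by 2 MiB hugepages (requires Python 3.10+ and hugepages reserved via vm.nr_hugepages), using a generic Linux mechanism rather than Backend.AI's specific implementation.

```python
# Linux-only sketch of hugepage-backed allocation (a generic Linux mechanism,
# not Backend.AI's specific implementation). Requires Python 3.10+ and
# reserved hugepages, e.g.: sysctl vm.nr_hugepages=128
import mmap

BUF_SIZE = 256 * 1024 * 1024  # 256 MiB

# With 4 KiB pages this buffer needs 65,536 page-table entries; with 2 MiB
# hugepages it needs only 128 -- 512x fewer translations for the TLB to absorb.
print(BUF_SIZE // 4096, "small pages vs", BUF_SIZE // (2 * 1024 * 1024), "hugepages")

buf = mmap.mmap(
    -1, BUF_SIZE,
    flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS | mmap.MAP_HUGETLB,
)
buf[:8] = b"\x00" * 8  # touch the mapping to fault hugepages in
buf.close()
```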

... and much more

Lablup continues to work with Intel, expanding what Backend.AI can do. Many more features are still in development and will be announced soon. Elevate what your cluster can do with Backend.AI and Intel® Gaudi® 3 AI Accelerators.

Getting the most out of your Intel® Gaudi® 3 AI Accelerators.

Backend.AI is designed to bring out the maximum performance that Intel® Gaudi® 3 AI Accelerators are capable of. Built on the high-efficiency Intel® Gaudi® platform with proven MLPerf benchmark performance, Intel® Gaudi® 3 AI accelerators are built to handle demanding training and inference. They support AI applications like Large Language Models, Multi-Modal Models, and Enterprise RAG in your data center or in the cloud, from a single node to a mega cluster, all running on the Ethernet infrastructure you likely already own. Whether you need a single accelerator or thousands, Intel® Gaudi® 3 can play a pivotal role in your AI success.

Do all this with a superior user interface.

Unlike other systems, Backend.AI is designed to let system administrators control their systems as easily as possible. Our consumer-grade user interface lets administrators manage their systems with just a few clicks and keystrokes. The Backend.AI WebUI is widely used by our customers, and they love what they can do without opening a command-line interface.

Make your Intel® Gaudi® 2 & 3 Platform 'manageable' with Backend.AI and unleash its performance.

We are making AI services efficient, scalable, and accessible to scientists, researchers, DevOps teams, enterprises, and AI enthusiasts. Lablup and Intel are working closely together to enable the success of the Generative AI and deep learning-based services that are popular today. With our proven technology, Backend.AI provides hardware-level integration with the Intel® Gaudi® 2 & 3 Platform.

About Intel® Gaudi® 3 AI accelerator

The Intel® Gaudi® 3 AI accelerator is driving improved deep learning price-performance and operational efficiency for training and running state-of-the-art models, from the largest language and multi-modal models to more basic computer vision and NLP models. Designed for efficient scalability, whether in the cloud or in your data center, Intel® Gaudi® 3 AI Accelerators bring the AI industry the choice it needs, now more than ever. To learn more about Intel® Gaudi® 3, visit intel.com/gaudi3

About Lablup Backend.AI

Backend.AI supports a wide range of GPUs and AI accelerators on the market to achieve maximum performance and efficiency, and provides a user interface that makes everything easy. This allows customers to efficiently build, train, and deliver AI models, from the smallest to the largest language models, significantly reducing the cost and complexity of developing and operating services. Backend.AI is the key to unlocking the full potential of Generative AI and Accelerated Computing, transforming your business with cutting-edge technology. To learn more about Backend.AI®, visit backend.ai
