Tag : Backend.AI

  • Lablup x Intel Announce Support for Intel® Gaudi® 2 & 3 Platforms in Backend.AI

    By Lablup

    Seoul — Lablup announces support for the Intel® Gaudi® 2 & Intel® Gaudi® 3 AI Accelerators* in Backend.AI at Supercomputing 2024. By adding Intel's AI accelerators to the list of vendors Backend.AI already supports, including NVIDIA, Rebellions, FuriosaAI, and AMD, Lablup now offers its customers the broadest range of AI accelerators and GPUs on the market, making the Backend.AI platform more competitive and giving customers more choice.

    *As of November 2024, Backend.AI supports the Intel® Gaudi® 2 AI accelerator.

    *Support for Intel® Gaudi® 3 AI Accelerators is planned for the first half of 2025.

    Lablup and Intel have worked closely to unlock the power of the Intel® Gaudi® 2 & 3 Platform and make it available in Backend.AI. As a result of this collaboration, we are pleased to announce that Backend.AI now supports the Intel® Gaudi® 2 and Intel® Gaudi® 3 AI Accelerators.

    Powerful container orchestration with Sokovan™ by Backend.AI

    Sokovan™ is a standalone open-source container orchestrator highly optimized for the latest hardware acceleration technologies used in multi-tenant and multi-node scaling scenarios. With fully customizable job scheduling and node allocation policies, it accommodates a hybrid mix of interactive, batch, and service workloads in a single cluster without compromising AI performance.

    Get the most out of your AI accelerators and reach your maximum potential.

    In complex business environments, delivering dependable AI performance and manageability is the key to success. The Intel® Gaudi® 3 AI accelerator, the latest release from Intel, offers powerful AI performance and features. Lablup Backend.AI, a Platform-as-a-Service, offers a wide range of features optimized for enterprise-grade AI environments.

    Innovation deeply integrated with Intel® Gaudi® 2 & 3 Platform

    Customers who have already adopted Intel® Gaudi® 2 or Intel® Gaudi® 3 AI Accelerators in their business environments, as well as those who will adopt the Intel® Gaudi® 2 & 3 Platform in the future, will benefit from the wide range of features that Lablup Backend.AI supports for Intel® Gaudi®. Check out the following examples made possible by Backend.AI with the Intel® Gaudi® 2 & 3 Platform.

    Card-level accelerator allocation

    Maximize the utilization of your Intel® Gaudi® 2 & 3 AI Accelerator cluster by allocating to each user exactly the number of accelerator cards intended for them. For example, customers can run and train models on their existing preferred platform, then serve them on the Intel® Gaudi® 2 & 3 Platforms, or vice versa.

    External storage allocation

    Get the most out of integrated storage solutions in terms of performance. Utilize vendor-specific filesystem acceleration features without user intervention. Backend.AI supports major, widely used platforms such as Dell PowerScale, VAST Data, WEKA, and NetApp.

    Multi-scale workloads

    Whatever your environment, from single-card AI workloads running small models to multi-node, multi-card AI workloads running gigantic models, Backend.AI ensures the best performance. As of November 1, Backend.AI is ready to run single-card and single-node, multi-card AI workloads. Multi-node, multi-card AI workload support will be finalized this year.

    Inference statistics management

    Monitor up-to-date, detailed metrics about the performance provided by your AI framework. Backend.AI makes inference statistics management easy, showing information not only from the hardware but also from the software, so that administrators can deep-dive into the metrics.

    Rule-based inference replica auto scaling

    Let the system self-optimize its resource usage. As user traffic to inference workloads varies, replicas are scaled automatically based on a combination of hardware and software performance metrics, so administrators do not need to manually manage the remaining resources.

    *Currently in development (Targeting Dec. 2024)

    NUMA-aware resource allocation

    Achieve the maximum bare-metal performance by eliminating inter-CPU and PCIe bus overheads within a single node, when there are multiple CPU sockets and multiple accelerators for each socket.

    User-based, Project-based storage quota management

    Manage data space easily and budget-efficiently by limiting storage quotas per user or per project.

    Hugepage memory allocation support

    Minimize CPU overhead when using accelerators by allocating fewer but larger memory pages, reducing address translation overhead. This support will be finalized this year.

    ... and much more

    Lablup continues to work with Intel to expand the possibilities of Backend.AI. Many more features are still in development and will be announced soon. Elevate what your cluster can do with Backend.AI and Intel® Gaudi® 3 AI Accelerators.

    Getting the most out of your Intel® Gaudi® 3 AI Accelerators

    Backend.AI is designed to bring out the maximum performance Intel® Gaudi® 3 AI Accelerators are capable of. Built on the high-efficiency Intel® Gaudi® platform with proven MLPerf benchmark performance, Intel® Gaudi® 3 AI accelerators are built to handle demanding training and inference. Support AI applications like Large Language Models, Multi-Modal Models and Enterprise RAG in your data center or in the cloud—from node to mega cluster, all running on the Ethernet infrastructure you likely already own. Whether you need a single accelerator or thousands, Intel® Gaudi® 3 can play a pivotal role in your AI success.

    Do these with superior user interface.

    Unlike other systems, Backend.AI is designed so that system administrators can control their systems as easily as possible. Our consumer-grade user interface lets administrators manage their systems with just a few clicks and keystrokes. The Backend.AI WebUI is widely used by our customers, who love what they can do without ever opening a command-line interface.

    Make your Intel® Gaudi® 2 & 3 Platform manageable with Backend.AI and unleash its performance.

    We are making AI services efficient, scalable, and accessible to scientists, researchers, DevOps teams, enterprises, and AI enthusiasts. Lablup and Intel are working closely together to enable the success of the Generative AI and deep learning-based services that are popular today. With our proven technology, Backend.AI provides hardware-level integration with the Intel® Gaudi® 2 & 3 Platform to deliver the best possible results.

    About Intel® Gaudi® 3 AI accelerator

    Intel® Gaudi® 3 AI accelerator is driving improved deep learning price-performance and operational efficiency for training and running state-of-the-art models, from the largest language and multi-modal models to more basic computer vision and NLP models. Designed for efficient scalability—whether in the cloud or in your data center, Intel® Gaudi® 3 AI Accelerators bring the AI industry the choice it needs—now more than ever. To learn more about Intel® Gaudi® 3, visit intel.com/gaudi3

    About Lablup Backend.AI

    Backend.AI supports a wide range of GPUs and AI accelerators on the market to achieve maximum efficiency of its performance and provides a user interface to make everything easy. This allows customers to efficiently build, train, and deliver AI models, from the smallest to the largest language models, significantly reducing the cost and complexity of developing and operating services. Backend.AI is the key to unlock the full potential of Generative AI and Accelerated Computing, transforming your business with cutting-edge technology. To learn more about Backend.AI®, visit backend.ai

    31 October 2024

  • Uncharted AI: The Age of AI

    By Lablup

    This article is a summary of Jeongkyu Shin's keynote speech on September 24, 2024 at lab | up > /conf/4.

    On September 24, 2024, Lablup's 4th conference, lab | up > /conf/4, was held. The event was attended by a variety of external speakers as well as Lablup employees, and the keynote address was given by Lablup's CEO, Jeongkyu Shin.

    Photo by 'iT dongA'

    This article will cover the advancements in the AI era as introduced by Jeongkyu Shin in his keynote speech, the future trajectory of Lablup, updates on the current products, and some of our new product releases.

    Uncharted Waters

    The title of this keynote, "Uncharted AI - The Age of AI," draws inspiration from the classic game "Uncharted Waters," fondly remembered by many. However, the Uncharted Waters is not merely a game; it represents a significant chapter in the real-life history of our global community.

    During the Age of Discovery, beginning in the 15th century, numerous explorers ventured across the oceans in pursuit of spices such as the now widely known pepper. I was not alive during that time to witness it firsthand, but I experienced it through the game. We may not consider spices so valuable today, yet countless adventurers risked their lives in their pursuit.

    Uncharted AI

    Like the many people who risked their lives crossing the ocean in search of spices back then, we are now in a new era of artificial intelligence (AI), throwing ourselves into it and working with a diverse set of partners to advance AI. What makes this effort necessary is accessibility. If I could harvest pepper in my backyard, I wouldn't have to cross the ocean. At the dawn of a new era, this difference in access creates a skills gap for some and a challenge for others. For Lablup, the skills gap introduced by emerging technologies has been the catalyst for a new era.

    At Lablup, our motto has been clear since our founding in 2015. We've made it our core mission to Make AI Accessible, making technology more accessible and lowering barriers. Our goal was to reduce the barriers to AI accessibility by making the technology itself comprehensible and user-friendly, not merely available as an API.

    As the field of AI advances, the challenge of scaling emerges. As AI technology expands, the data it processes grows, computation intensifies, and workloads move from single-node to multi-node and from tens to hundreds of thousands of GPUs. At the same time, AI is becoming more compact, running on devices in the palm of your hand, such as Samsung's Galaxy AI and Apple Intelligence, and even on IoT sensors like thermometers.

    Simultaneously, we are witnessing efforts to operate AI with greater power and more resources, as well as a surge in endeavors to run AI with less power and fewer resources. If we consider the traditional spectrum of AI, it is expanding both upwards (larger) and downwards (smaller), with the technology needed to shift the scale in either direction being entirely distinct.

    Back in 2015, we were able to construct models using just a GeForce GTX970. However, workloads have expanded so rapidly that for the past four or five years, their growth has surpassed the performance improvements of semiconductors, known as Moore's Law. Consequently, the focus has shifted from enhancing a single chip's performance to combining several chips and utilizing them in parallel.

    Make AI "Scalable"

    Over the past four years, the distributed computing paradigm in AI has undergone significant evolution. We have moved beyond parallel processing to witness a variety of computations occurring concurrently. Diverse tasks like data processing, model training, and service provisioning are now integrated. Simultaneous demands for heterogeneous computational resources have emerged, encompassing databases, training, data processing, fleet management, RAS, and others that align more closely with the service stack.

    Accelerators such as GPUs have become essential for modern computing. We no longer use CPUs and GPUs separately; instead, we must integrate them more closely. The driving force behind this integration is the universal need for GPUs, which leads to bottlenecks that are both physical—such as power, network, and data—and non-physical, including hardware instability, platform management, and software issues. At Lablup, our goal is to eliminate these obstacles to scaling.

    This year at Lablup, we've set a new objective: Make AI Scalable. Our aim is to expand AI workloads across the full range, from accelerators to individual nodes to hyperscale environments. This goal builds upon our initial mission of “Making AI Accessible,” as we eliminate obstacles to scaling, incorporate elements that facilitate scaling, and persist in dismantling barriers to accessing AI technology.

    Through the years, the company's dedication to making AI both accessible and scalable has resulted in numerous innovations. As a result, the number of enterprise GPUs running on Backend.AI has grown to nearly 13,000, with some sites managing more than 1,500 GPUs. Additionally, the number of teams (customers) using our products has increased to over 100. In sectors as varied as cloud services, AI accelerator testbeds, and autonomous driving, Backend.AI has established itself as a crucial infrastructure component for AI.

    This massive scale has significantly increased the technical challenge. We've had to develop technologies that span the entire spectrum, from single servers to clusters of thousands of nodes. We had to "take away everything that is blocking scaling, and add everything needed for scaling." We would like to use this opportunity to share our recent innovations, ongoing developments, and the future we are striving to create.

    Open Source

    Lablup is a company deeply involved in the open-source ecosystem. We develop and release various projects such as Backend.AI, Callosum, aiodocker, aiomonitor (aiotools), Raftify, and many more. Open source is in our DNA. Our experience with the open-source software we create, publish, and contribute to across various on-premises environments is a significant competitive edge. Backend.AI's support for on-premises environments, its compatibility with cloud environments, and more are all capabilities we've gained from that open-source experience.

    Backend.AI CLI Installer: Easy installation experience with TUI

    The Backend.AI CLI Installer is an open-source initiative designed to enhance the accessibility of Backend.AI. It features a text-based user interface (TUI) for simplified installation, automates the package-based installation process, and includes meta settings for streamlined automatic setup.

    bndev: Easily build your own AI infrastructure

    For enthusiasts who enjoy tinkering and hacking beyond mere package-based installations, we have introduced a development tool named bndev. This tool simplifies the process of constructing and maintaining intricate Backend.AI development environments. The concept behind bndev is to empower everyone to own and maintain their personal AI infrastructure.

    Backend.AI Core

    Backend.AI conducts major version releases biannually, in March and September. Version 24.03 was released in March 2024, and the release of version 24.09 is imminent. Significant updates to Backend.AI Core are expected to influence future releases. Allow me to introduce these changes to you.

    Key Updates

    • Support for NVIDIA NGC (NVIDIA GPU Cloud) NIM (NeMo Inference Microservice): Key NGC features, like license-based container image loading, are compatible with Backend.AI.
    • Expanded support for new accelerators including Intel Gaudi2, Rebellions ATOM+, and Furiosa RNGD: Backend.AI allows you to flexibly choose the best AI accelerator to match the characteristics of your workload.
    • General availability of Backend.AI model store, browser, and serving: A comprehensive solution that integrates the essential features of MLOps, simplifying the process for customers to find AI models and deploy them seamlessly into their workflows.
    • Enhanced Task Scheduling: The new Priority Scheduler enables the independent prioritization of tasks, ensuring that tasks of high importance are addressed swiftly and dependably.
    • Agent Selector concept: The Agent Selector determines which nodes the scheduler actually runs the selected tasks on. This part is now easily customizable as a standalone plugin. You can use it to distribute jobs based on different criteria, such as the power usage or temperature of each node (see the illustrative sketch after this list). We expect this to be a great help in optimizing the operation of your infrastructure by balancing the load across nodes, increasing power efficiency, and more.
    • Our own Docker network plugin: Expanded support for GPUDirect Storage for large-scale data processing, minimizing bottlenecks in moving data within a single node.
    • Cilium-based networking stack for inter-container communication: The implementation has enhanced large-scale distributed learning, resulting in a 30% increase in network performance compared to the previous setup.
    • OpenID Connect (OIDC)-based federated authentication scheme: Access various infrastructure services, such as Backend.AI and others, using a single account to significantly streamline account management.
    • Expanded support for enterprise environments: Backend.AI works with a variety of private container registries, including GitLab, GitHub Enterprise, AWS ECR, and more, and makes it easy to configure hybrid setups that span both on-premises legacy resources and the cloud.
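
    The Agent Selector mentioned above lends itself to simple, rule-based policies. The sketch below is purely illustrative: the AgentInfo fields and select_agent function are hypothetical and do not represent Backend.AI's actual plugin interface; they only show the kind of criteria (power draw, temperature, spare capacity) such a plugin could encode.

    ```python
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class AgentInfo:
        # Hypothetical per-node telemetry; the field names are illustrative only.
        name: str
        free_gpu_slots: int
        power_draw_watts: float
        temperature_c: float

    def select_agent(candidates: List[AgentInfo], required_gpus: int) -> AgentInfo:
        """Pick the coolest, least power-hungry node that still fits the job."""
        eligible = [a for a in candidates if a.free_gpu_slots >= required_gpus]
        if not eligible:
            raise RuntimeError("no agent has enough free GPU slots")
        # Prefer lower power draw and temperature; break ties by spare capacity.
        return min(eligible, key=lambda a: (a.power_draw_watts, a.temperature_c, -a.free_gpu_slots))
    ```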

    Leveraging these updates, Backend.AI is broadening its scope as a cutting-edge AI infrastructure, serving both high-performance computing (HPC) and enterprise needs. Further enhancements will accompany the launch of Backend.AI 24.09.

    Next-gen Sokovan

    We continue to develop the next-generation Sokovan, scheduled for release early next year. Here is a brief overview of what to expect from the next-gen Sokovan.

    • Dual-engine architecture supporting Kubernetes: In addition to the current proprietary cluster management system, it will function as a native Kubernetes service. This includes managing accelerators through the Kubernetes Operator Proxy. We will seamlessly integrate NVIDIA and AMD device plugins, Intel GPU plugins, among others, to uphold industry standards.
    • Database load balancing with Raftify in high-availability (HA) configurations: Minimize bottlenecks for metadata services and ensure reliable operation in clusters of tens of thousands of units.
    • Enhanced automatic scaling for serving large language models: API metrics such as request patterns and latency, together with resource usage, are analyzed for optimal scaling.
    • Strengthening the project unit: Capable of managing datasets, models, pipelines, and more collectively. The objective is to facilitate fine-grained role-based access control (RBAC) to accommodate diverse collaborative scenarios.
    • Enhanced management capabilities for enterprise customers: You'll have integrated logging and monitoring, as well as audit log tracking for regulatory compliance.

    All of these changes are being made with one goal in mind: to accelerate our customers' AI projects. With new AI accelerator support and connections to other Kubernetes-based solutions, our team is looking forward to further maturing the Backend.AI Core and MLOps features. Stay tuned for the next chapter of Sokovan's journey as it takes on a broader role.

    Backend.AI WebUI

    In the near future, the Backend.AI WebUI will be getting a new look. From a user's perspective, the user interface is probably the most important factor that determines the first impression of Backend.AI. We have always recognized the importance of the WebUI and have been innovating on it. We launched ML Desktop last year and GenAI Desktop earlier this year to test different user experiences, and we recently brought a user-friendly UI to our products with Neo Session Launcher.

    Introducing WebUI Neo, the third evolution of the WebUI. Created in close collaboration with Vice Versa Design Studio with the goal of delivering a rich user experience, this new design language puts the user first from start to finish. To coincide with the relaunch of Backend.AI, we've redesigned the entire UI/UX to give it a sleeker, more futuristic look and feel.

    WebUI Neo was designed with the concepts of "reducing cognitive load" and "maintaining consistency in visual metaphors." In terms of reducing cognitive load, we wanted to minimize the amount of complex information users had to type in or search for. For example, when setting up large-scale experiments, we limited the amount of information presented at each step by exposing it sequentially, rather than presenting dozens of options at once.

    In terms of “maintaining consistency in visual metaphors,” we've organized UI/UX elements, from screen composition to icons to colors, into similar design patterns for similar concepts, such as experiments, models, and data sets. By this, our users can reuse what they've learned once without having to relearn how to use similar features. WebUI Neo will be applied across both Backend.AI Core and Enterprise.

    In recognition of this innovation, WebUI Neo was awarded the Excellence Award, which is only given to four consortia, at the Seoul Design Foundation and Seoul Metropolitan Government's Industrial Design Development Support Project for Small and Medium-sized Enterprises.

    WebUI Neo will not be included in the Backend.AI 24.09 update right away, but is still being developed and tested with the goal of a general release later this year. We're also finalizing the move from Web Components, which is the codebase used since the first version of WebUI, to React. WebUI Neo is more than just a repackaging of past features; it will continue to add new functionality that is tightly aligned with machine learning workflows and will be the foundation for achieving the high level of automation and ease of use that Backend.AI strives for. This is the future we envision with WebUI Neo, a world where everyone can easily understand and benefit from AI infrastructure beyond its complexity.

    Lablup Enterprise

    The core of Lablup Enterprise, centered on Backend.AI Enterprise, can be described as ___ made easy. Lablup Enterprise aims to make deep-level AI technology innovation easy with end-to-end technology from device driver level to AIOps. We have three ___ made easy concepts: “Scaling made easy”, “Acceleration made easy”, and “Inference made easy”.

    Scaling made easy: FastTrack 2, Finetun.ing, Cluster Designer

    FastTrack 2

    FastTrack 2, released with 24.09, is an automation solution for AI projects at scale. It provides pipeline management based on project groups, making it easy to define and execute complex workflows. It offers a wide range of reusable templates to minimize repetitive tasks. In addition, FastTrack 2 enables you to better leverage your resources by connecting with external partners. You can add model compression nodes and model serving services from partners to your pipeline.

    Finetun.ing

    Finetun.ing is a cloud-based fine-tuning service that works together with FastTrack. It stands out from traditional fine-tuning services by eliminating the need for users to prepare their own data. Typically, fine-tuning involves uploading data to adjust a model, but Finetun.ing simplifies this process by allowing users to interactively enter prompts. The service then generates synthetic data from these interactions to fine-tune the model. The fine-tuned models are automatically evaluated and made available for download, complete with a model card. Finetun.ing operates on NVIDIA Nemotron and supports Llama 3.1 and Gemma 2. Ongoing tests aim to enable fine-tuning for an array of new models, with plans to expand the selection in the future.

    Finetun.ing is currently gearing up for its final unveiling, and we've decided to take a waitlist for the first time at this event. You can sign up for the waitlist at https://finetun.ing.

    Cluster Designer

    Backend.AI Cluster Designer is a GUI-based cluster design tool. It automatically calculates the effective performance of a cluster of your desired size and performance, along with the required hardware configuration and estimated cost. It's perfect for those who want to validate the optimal architecture before actually building.

    Backend.AI Helmsman

    Backend.AI Helmsman is an interactive cluster management interface. It makes complex cluster operations possible just by chatting in a terminal. Under the hood, it utilizes a Gemma-based fine-tuning model to accurately understand user intent. It combines packages such as TorchTune, LangGraph, and LangChain to build interactive fine-tuning pipelines for on-premises environments. UI packages and models via the Helmsman CLI and WebUI will be released after the Backend.AI 24.09 release, by the end of the year.

    Acceleration made easy

    The second is “Acceleration made easy”. We support a wider variety of accelerators for AI workloads than any other AI infrastructure platform in existence.

    CPU architecture coverage includes x86 as well as heterogeneous architectures such as Arm and RISC-V. We work closely with the latest accelerators, including NVIDIA's Grace Hopper, AMD's MI series, Intel Gaudi, Graphcore Bow, GroqCard, Rebellions ATOM+, and Furiosa RNGD, to ensure you get the same user experience and peak performance on Backend.AI.

    Inference made easy

    Finally, “Inference made easy”.

    We've simplified the sharing and distribution of pre-trained models with a unified model store. Inspired by package managers like Choco on Windows and Homebrew on macOS, Lablup ION model recipes allow you to install models and services contributed by the community via GitHub with a single line of command.

    PALI, PALI PALI (PALI2), PALANG

    There's also something new to introduce in terms of model service operations. It's PALI, PALI2, PALANG.

    Performant AI Launcher for Inference (PALI) is a high-performance inference runtime that combines the Backend.AI model player with a curated model catalog and predefined models. It features flexible scalability and high performance. Anyone can easily install and run NVIDIA NIM, Hugging Face models, and Lablup ION recipes right out of the box to launch model services.

    PALI2 is a dedicated hardware infrastructure appliance for PALI. You can easily scale by connecting multiple appliances with PALI. PALI2 is an architecture optimized for AI workloads, delivering high performance and low latency. Depending on your installation, we can provide and update models for different architectures and chip environments.

    We are also preparing a PALI2 appliance that incorporates the NVIDIA reference platform GH200, and KYOCERA Mirai Envision Co., Ltd. in Japan will launch Instant.AI as the first reference platform for PALI2, which will be available for purchase on October 1.

    Reference platforms for the Korean market will be available to reserve in October and for sale in Q4. PALI2 appliances targeting the U.S. and European markets will be available as early as Q4 of this year.

    PALANG is a language model inference platform that bundles PALI, FastTrack, Talkativot, and Helmsman. It provides ready-to-use inference and fine-tuning settings, greatly simplifying the deployment and operation of large-scale language models. Talkativot makes it easy to create custom chatbot interfaces and provides software components for model comparison and interface building during development. You can use PALI and PALI2 if you only need inference, or PALANG if you need both language model fine-tuning and inference.

    G

    Finally, One More Thing... We'd like to give you a sneak peek at a new project we're currently working on: G, a language model based on Gemma 2. It features easy customization with Finetun.ing. It will be used for a variety of purposes, including as a backend model for Helmsman and as an enterprise agent. Details will be revealed soon.

    From Uncharted AI to Industrial Revolution

    During the Age of Discovery, countless adventurers sailed the globe in search of pepper. Their adventures led to the discovery of many parts of the world that remained uncharted, and the world became more connected through the routes they opened. Shipbuilding and navigation were improved, new trade routes were opened, and innovations were made in medicine, military technology, and more. But that's not all: the Age of Discovery spawned another important event: the Industrial Revolution.

    We are currently living in what is known as the Age of Great AI. It's akin to the dawn of the Age of Discovery, with the doors to new possibilities just now opening. One person is returning with pepper, while another is building a larger vessel to prove that the Earth is round. We are about to witness the equivalent of the Industrial Revolution that the Age of Discovery brought about.

    Engine of AI Infrastructure

    The Industrial Revolution began with James Watt's steam engine. The invention of the steam engine ushered in an era of mass production and mechanization. Now we're in the midst of another revolution. In the face of the tidal wave that is the Age of Great AI, Lablup is building a new engine.

    Lablup is the engine of AI infrastructure. Our technology fuels innovation across industries. While the steam engine harnessed the power of coal, our engine is fueled by data. Just as a car engine converts the energy of gasoline into motion, Lablup provides an efficient and powerful engine that converts the fuel of data into AI, and the value it brings.

    Just as the internal combustion engine gave birth to the automotive industry, AI engines will reshape the data-driven IT industry. Lablup is preparing for the time when everyone and every organization will be able to derive insights and value from their own data, rather than just storing and managing it. Lablup's AI engine is unrivaled in scale and speed. It has the scale to run dozens to tens of thousands of GPUs simultaneously, processing petabytes of data in real time, for the IoT and beyond. Just as the performance of an engine determines the speed of a car, our infrastructure will determine your success in the AI ecosystem.

    So far, you've seen the engines we have built. With these engines, we want to drive the AI revolution beyond the Age of Great AI. We're going to keep designing and improving the engine so that each and every one of you can be in the driver's seat. We invite you to step on the gas pedal of the AI era with Lablup.

    27 September 2024

  • Finetuning Domain adaptive language model with FastTrack

    By Yonggeun Kwon

    Introduction

    This article explains how to train and evaluate a language model specialized in supply chain and trade-related domains using Backend.AI's MLOps platform, FastTrack. For this language model, we used the gemma-2-2b-it model as the base model, which was continually pretrained with supply chain and trade domain datasets. To train a model specialized in the Question Answering task, domain datasets collected and processed from the web were converted into a Q/A task format, consisting of trainable questions and answers, depending on the use case.

    Developing AI involves stages such as data preprocessing, training, validation, deployment, and inference. Using Lablup's FastTrack, each of these stages can be configured into a single pipeline, allowing for easy customization, such as skipping specific stages or adjusting resource allocation per stage according to the pipeline configuration.

    Concept of Domain Adaptation

    Before diving into model training, a process called Domain Adaptation is necessary. To briefly explain for those unfamiliar, Domain Adaptation refers to the process of refining a pretrained model to make it suitable for a specific domain. Most general-purpose language models we encounter today are not designed to possess expertise in specific fields. These models are typically trained using datasets from general domains to predict the next token effectively and then fine-tuned to fit overall usage directions.

    However, when creating a model for use in a specialized domain, training with general datasets is insufficient. For instance, a model trained in a general domain can understand contexts like "This movie was amazing," but it may struggle with sentences in the legal domain, such as "The court ordered the seizure of the debtor's assets," due to the lack of learning of specific terms and expressions used in each domain. Similarly, if a Q/A task is given, implementing it with general data might not be possible. To properly handle a Q/A task, a language model must be fine-tuned with domain-specific datasets trained for the Q/A task. This fine-tuning process allows the model to better understand the nuances of the task and effectively respond to domain-specific questions from the user.

    This article focuses on the process of developing a model specialized in Supply Chain Management (SCM) and trade domains. As shown in the above image, there is a significant difference between general domain terms like "movie" or "travel" and SCM-specific terms like "air waybill" or "payment manager." To bridge this gap, our goal today is to adjust the model using datasets from SCM and trade domains to enhance the model's understanding of these domains and accurately capture the context.

    In summary, Domain Adaptation is essentially a process of overcoming the gaps between different domains, enabling the model to perform better in new contexts.

    Train model from scratch vs DAPT

    So, why not train the model from scratch using datasets from the specific domain? While this is possible, it comes with several limitations. Training a model from scratch with domain-specific datasets requires extensive data and training because the model lacks both general domain knowledge and domain-specific expertise. Collecting datasets for general domain deep learning is already challenging, but gathering high-quality, domain-specific data is even more difficult. Even if data is collected, preprocessing it to fit model training can be time-consuming and costly. Therefore, training a model from scratch is more suitable for companies with abundant domain-specific data and resources.

    What if you want to develop a domain-adaptive model but don't have access to vast datasets or sufficient resources? In such cases, Domain-Adaptive Pre-Training (DAPT) can be an effective approach. DAPT involves continual pretraining of a model that has already been extensively trained on general domains with domain-specific datasets to develop a specialized model. Since this method builds upon a model that already possesses knowledge of general domains, it requires relatively less cost and fewer datasets compared to training a model from scratch.
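
    For reference, the continual pretraining (DAPT) step itself is performed before the fine-tuning described in the rest of this article, and its code is not shown here. A minimal sketch of what DAPT can look like with the Hugging Face Trainer is given below; the corpus path dataset/domain_corpus.txt and the hyperparameters are placeholders, not the settings actually used for the base model.

    ```python
    import torch
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    model_id = "google/gemma-2-2b-it"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

    # Hypothetical plain-text corpus of SCM/trade documents, one passage per line.
    raw = load_dataset("text", data_files={"train": "dataset/domain_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=1024)

    tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

    # Standard causal-LM objective; the collator builds the shifted labels.
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="dapt-gemma-2-2b-it",
            per_device_train_batch_size=1,
            gradient_accumulation_steps=8,
            num_train_epochs=1,
            bf16=True,
        ),
        train_dataset=tokenized["train"],
        data_collator=collator,
    )
    trainer.train()
    ```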

    Development environment Setup

    1. Package Installation
    pip install bitsandbytes==0.43.2
    pip install deepspeed==0.14.4
    pip install transformers==4.43.3
    pip install accelerate==0.33.0
    pip install flash-attn==1.0.5
    pip install xforms==0.1.0
    pip install datasets==2.20.0
    pip install wandb
    pip install evaluate==0.4.2
    pip install vertexai==1.60.0
    pip install peft==0.12.0
    pip install tokenizers==0.19.1
    pip install sentencepiece==0.2.0
    pip install trl==0.9.6
    
    2. Import Modules
    import os
    import json
    from datasets import load_from_disk, Dataset,load_dataset
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM, Gemma2ForCausalLM, BitsAndBytesConfig, pipeline, TrainingArguments
    from peft import LoraConfig, get_peft_model
    import transformers
    from trl import SFTTrainer
    from dotenv import load_dotenv
    import wandb
    from huggingface_hub import login
    

    Dataset preparation

    The preparation of datasets should vary depending on the purpose of fine-tuning. In this article, since our goal is to train a model that can effectively respond to questions related to the trade domain, we decided to use datasets that we collected ourselves through web crawling. The datasets are categorized into three types: trade certification exam datasets, trade term-definition datasets, and trade lecture script datasets.

    1. Trade Certification Exam Data Set

    질문: 다음 중 우리나라 대외무역법의 성격에 대한 설명으로 거리가 먼 것을 고르시오. 1. 우리나라에서 성립되고 이행되는 대외무역행위는 기본적으로 대외무역법을 적용한다. 2. 타 법에서 명시적으로 대외무역법의 적용을 배제하면 당해 법은 특별법으로서 대외무역법보다 우선 적용된다. 3. 대외무역법은 국내법으로서 국민의 국내 경제생활에 적용되는 법률이기 때문에 외국인이 국내에서 행하는 무역행위는 그 적용 대상이 아니다. 4. 관계 행정기관의 장은 해당 법률에 의한 물품의 수출·수입 요령 그 시행일 전에 지식경제부 장관이 통합하여 공고할 수 있도록 제출하여야 한다. 정답: 대외무역법은 국내법으로서 국민의 국내 경제생활에 적용되는 법률이기 때문에 외국인이 국내에서 행하는 무역행위는 그 적용 대상이 아니다. 질문: ...

    2. Trade Terms Definition Data Set
    {
      "term": "(계약 등을) 완전 무효화하다, 백지화하다, (처음부터) 없었던 것으로 하다(Rescind)",
      "description": "계약을 파기, 무효화, 철회, 취소하는 것; 그렇지 않았음에도 불구하고 계약을 시작부터 무효인 것으로 선언하고 종결짓는 것."
    }
    
    
    3. Trade Lecture Script Dataset

    예전에는 전자상거래 셀러가 엑셀에다가 입력을 해서 수출신고 데이터를 업로드 해서 생성을 했잖아요 그리고 대량으로 전송하는 셀러는 api를 통해서 신고를 했습니다 그런데 그 수출신고 정보의 원천정보를 뭐냐면 쇼핑몰에서 제공하는 판매 주문정보입니다 그래서 그 쇼핑몰에 직접 저희가 연계를 해서 판매 주문 정보를 가져올 수 있게끔 새 서비스를 만들었어요 그래서 API 연계된 쇼핑몰들이 있는데 그게 현재 5개가 연결되어 있는데 쇼피 쇼피파이 라자다 라쿠텐 q10이 있고요 아마존하고 위치도 연계 예정에 있습니다 그래서 셀러는 ...

    To create a model suitable for Q/A tasks, the datasets need to be converted into a question-and-answer format. The first dataset, the trade certification exam dataset, and the second dataset, the trade term-definition dataset, can be converted using simple code. However, upon examining the third dataset, the trade lecture script dataset, it appears challenging to directly convert the conversational data. In this case, an approach can be employed that uses large language models (LLMs) to extract Q/A pairs from the conversational scripts. The prompts and their results are as follows.

    Prompt

    summary: {summary}
    Instruction: 다음 summary를 기반으로 질문과 해당 답변을 추출하세요.
    출력 형식은 JSON으로, 각 질문과 답변을 객체로 묶어 'qa_pairs'라는 배열 안에 넣어야 합니다.
    'questions' key에는 요약에서 도출된 질문 목록이 포함되어야 하며, 'answers' key에는 각 질문에 대한 해당 답변 목록이 포함되어야 합니다:
    출력 예시:
    {{"qa_pairs": [{{"question": "Question1", "answer": "Answer1"}},{{"question": "Question2", "answer": "Answer2"}},{{"question": "Question3", "answer": "Answer3"}},]...}}
    Output:
    

    Output

    {"qa_pairs":
      [
        {"question": "전자상거래 무역 플랫폼은 어떤 서비스를 제공하나요?", "answer": "판매 주문 관리, 창고 관리, 수출입 통관, 배송 관리 등 전자상거래 업무를 위한 다양한 서비스를 제공합니다"},
        {"question": "쇼핑몰과의 API 연계를 통해 무엇을 간소화할 수 있나요?", "answer": "판매 주문 정보를 자동으로 가져와 수출신고를 간소화할 수 있습니다"},
        {"question": "창고 관리 기능은 어떤 것을 관리할 수 있나요?", "answer": "자가 창고, 위촉 창고, 해외 풀필먼트 센터 등을 관리할 수 있습니다"},
        {"question": "수출입 통관 서비스는 어떤 기능을 제공하나요?", "answer": "미국 세관과 연동된 수출입 통관 서비스를 통해 미국 수출 시 수입 신고를 간편하게 처리할 수 있습니다"},
        {"question": "배송 관리 기능은 어떤 서비스를 지원하나요?", "answer": "우체국 EMS, DHL 등 개별 배송과 함께 해외 현지 라스트 마일 배송까지 지원합니다"}
      ]
    }
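
    The code that actually sends this prompt to an LLM is not shown in the article. Since vertexai==1.60.0 is in the install list above, one possible sketch uses the Vertex AI SDK; the project ID, region, and model name below are placeholders, and it assumes the prompt above is stored as a Python format string named prompt_template.

    ```python
    import json
    import vertexai
    from vertexai.generative_models import GenerationConfig, GenerativeModel

    # Placeholder project/region; Google Cloud credentials must be configured beforehand.
    vertexai.init(project="my-gcp-project", location="us-central1")
    model = GenerativeModel("gemini-1.5-flash-001")  # the model choice is an assumption

    def extract_qa_pairs(summary: str, prompt_template: str) -> list:
        """Fill the prompt with a lecture-script summary and parse the JSON reply."""
        prompt = prompt_template.format(summary=summary)
        response = model.generate_content(
            prompt,
            generation_config=GenerationConfig(response_mime_type="application/json"),
        )
        return json.loads(response.text)["qa_pairs"]
    ```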
    

    It looks like we are now ready to convert each dataset into a Q/A dataset using simple code. Below is the code that demonstrates how to convert each dataset into Q/A format.

    import os
    import json
    import re
    from datasets import Dataset, concatenate_datasets, load_from_disk
    
    def replace_dot_number(text):
        result = re.sub(r'\.(\d+)\.', r'. \1.', text)
        return result
    
    def read_json(path):
        with open(path, 'r', encoding='utf-8') as f:
            return json.load(f)
    
    def write_json(data, path):
        with open(path, 'w', encoding='utf-8') as f:
            json.dump(data, f, ensure_ascii=False)
    
    def dataset_maker(data:list) -> Dataset:
        return Dataset.from_list(data)
    
    def save_dataset(dataset, save_path):
        dataset.save_to_disk(save_path)
    
    def exam_qa_formatter():
        data = []
        root = 'dataset/exam_data'
        for file in sorted(os.listdir(root)):
            file_path = os.path.join(root, file)
            content = read_json(file_path)['fixed_text']
            question_list = content.split('질문:')[1:]
            for question in question_list:
                try:
                    question_and_options = replace_dot_number(question.split('정답:')[0]).strip()
                    answer = question.split('정답:')[1].strip()
                    data.append({"context": replace_dot_number(question), "question":question_and_options, "answer":answer})
    
                except Exception as e:
                    pass
        return data
    
    def description_to_term_formattter(kor_term, eng_term, description):
        context = f"{kor_term}: {description}"
        question = f"설명: '{description}' 이 설명에 해당하는 무역 용어는 무엇인가요?"
        answer = kor_term if eng_term is None else f"{kor_term}, {eng_term}"
        return context, question, answer
    
    def term_to_description(kor_term, eng_term, description):
        context = f"{kor_term}: {description}"
        question = f"'{kor_term}({eng_term})' 이라는 무역 용어는 어떤 의미인가요?" if eng_term is not None else f"'{kor_term}' 이라는 무역 용어는 어떤 의미인가요?"
        answer = description
        return context, question, answer
        
    def term_qa_formatter():
        data = []
        root = 'dataset/term_data'
        for file in os.listdir(root):
            file_path = os.path.join(root, file)
            term_set = read_json(file_path)
            if file == 'terms_data_2.json':
                term_set = [item for sublist in term_set for item in sublist]
            for pair in term_set:
                eng_term = pair.get('eng_term', None)
                if 'term' in pair.keys():
                    kor_term = pair['term']
                else:
                    kor_term = pair['kor_term']
                description = pair['description']
                context_1, question_1, answer_1 = description_to_term_formattter(kor_term, eng_term, description)
                context_2, question_2, answer_2 = term_to_description(kor_term, eng_term, description)
                data_1 = {"context": context_1, "question": question_1, "answer": answer_1} 
                data_2 = {"context": context_2, "question": question_2, "answer": answer_2} 
                data.append(data_1)
                data.append(data_2)
        return data
    
    def transcript_qa_formatter():
        data = []
        root = 'dataset/transcript_data/success'
    
        for file in sorted(os.listdir(root)):
            file_path = os.path.join(root, file)
            for line in open(file_path):
                line = json.loads(line)
                context = line['context']
                output = line['json_output']
    
                qa_pairs = json.loads(output)['qa_pairs']
                for pair in qa_pairs:
                    question = pair['question']
                    answer = pair['answer']
                    if type(answer) == list:
                        answer = answer[0]
                    data.append({"context": context, "question": question, "answer": answer})
        return data
    
    ###### Term dataset
    {'context': 'APEC 경제위원회(Economic Committee (EC)): 개별위원회나 실무그룹이 추진하기 어려운 여러분야에 걸친 이슈에 대한 분석적 연구작업을 수행하기 위해 결성된 APEC 기구,',
     'question': "설명: '개별위원회나 실무그룹이 추진하기 어려운 여러분야에 걸친 이슈에 대한 분석적 연구작업을 수행하기 위해 결성된 APEC 기구,' 이 설명에 해당하는 무역 용어는 무엇인가요?",
     'answer': 'APEC 경제위원회(Economic Committee (EC))'}
    
    ###### Transcript dataset
    {'context': '수입 신고는 일반적으로 입항 후에 하는 것이 원칙이며, 보세 구역에서 5부 10장을 작성하여 신고합니다',
     'question': '수입 신고는 언제 하는 것이 원칙인가요?',
     'answer': '수입 신고는 일반적으로 입항 후에 하는 것이 원칙입니다.'}
    
    ###### Exam dataset
    {'context': ' 다음 중 우리나라 대외무역법의 성격에 대한 설명으로 거리가 먼 것을 고르시오. 1. 우리나라에서 성립되고 이행되는 대외무역행위는 기본적으로 대외무역법을 적용한다. 2. 타 법에서 명시적으로 대외무역법의 적용을 배제하면 당해 법은 특별법으로서 대외무역법보다 우선 적용된다. 3. 대외무역법은 국내법으로서 국민의 국내 경제생활에 적용되는 법률이기 때문에 외국인이 국내에서 행하는 무역행위는 그 적용 대상이 아니다. 4. 관계 행정기관의 장은 해당 법률에 의한 물품의 수출·수입 요령 그 시행일 전에 지식경제부 장관이 통합하여 공고할 수 있도록 제출하여야  한다.정답: 대외무역법은 국내법으로서 국민의 국내 경제생활에 적용되는 법률이기 때문에 외국인이 국내에서 행하는 무역행위는 그 적용 대상이 아니다.',
     'question': '다음 중 우리나라 대외무역법의 성격에 대한 설명으로 거리가 먼 것을 고르시오. 1. 우리나라에서 성립되고 이행되는 대외무역행위는 기본적으로 대외무역법을 적용한다. 2. 타 법에서 명시적으로 대외무역법의 적용을 배제하면 당해 법은 특별법으로서 대외무역법보다 우선 적용된다. 3. 대외무역법은 국내법으로서 국민의 국내 경제생활에 적용되는 법률이기 때문에 외국인이 국내에서 행하는 무역행위는 그 적용 대상이 아니다. 4. 관계 행정기관의 장은 해당 법률에 의한 물품의 수출·수입 요령 그 시행일 전에 지식경제부 장관이 통합하여 공고할 수 있도록 제출하여야  한다.',
     'answer': '대외무역법은 국내법으로서 국민의 국내 경제생활에 적용되는 법률이기 때문에 외국인이 국내에서 행하는 무역행위는 그 적용 대상이 아니다.'}
    
    # Exam dataset
    Dataset({
        features: ['context', 'question', 'answer'],
        num_rows: 1430
    })
    
    # Term dataset
    Dataset({
        features: ['context', 'question', 'answer'],
        num_rows: 15678
    })
    
    # Transcript dataset
    Dataset({
        features: ['context', 'question', 'answer'],
        num_rows: 8885
    })
    
    # Concatenated dataset 
    Dataset({
        features: ['context', 'question', 'answer'],
        num_rows: 25993
    })
    

    The combined dataset (training dataset) with the Q/A format is as above. About 26,000 Q/A pairs are expected to be used for training.
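
    The step that merges the three formatted datasets is not shown above; a short sketch using the helpers already defined might look like the following. The save path and the train/test split ratio are assumptions.

    ```python
    from datasets import concatenate_datasets

    exam_ds = dataset_maker(exam_qa_formatter())
    term_ds = dataset_maker(term_qa_formatter())
    transcript_ds = dataset_maker(transcript_qa_formatter())

    combined = concatenate_datasets([exam_ds, term_ds, transcript_ds])
    save_dataset(combined, 'dataset/combined_qa')  # arbitrary save path

    # The train() function shown later expects 'train' and 'test' splits;
    # the 10% test fraction here is an assumption.
    data = combined.train_test_split(test_size=0.1)
    ```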

    Now, the dataset for fine-tuning is ready. Let’s check how this dataset is actually fed into the model.

    <bos><start_of_turn>user
    Write a hello world program<end_of_turn>
    <start_of_turn>model
    

    On the Hugging Face website, you can find the model card for gemma-2-2b-it, which includes the chat template format and the definition of the model's prompt format. This means that to ask questions to Gemma, you need to create a prompt in a format that the model can understand.

    The start of a turn is marked with <start_of_turn>, and the end of a turn with <end_of_turn>. The speakers are specified as user and model. Therefore, when asking a question to the model, the prompt should follow this format.

    def formatting_func(example):
        prompt_list = []
        for i in range(len(example['question'])):
            prompt_list.append("""<bos><start_of_turn>user
    다음 질문에 대답해주세요:
    {}<end_of_turn>
    <start_of_turn>model
    {}<end_of_turn><eos>""".format(example['question'][i], example['answer'][i]))
        return prompt_list
    
    This document focuses on training the model using the Q/A dataset, so the approach is to teach the model "for this type of question, respond in this way." Considering the chat template described earlier, the code can be written in the format shown above.

    At this point, since an end-of-sequence token is not explicitly included in the chat template, the model may attempt to generate more content beyond the <end_of_turn> delimiter. To ensure the model provides only an answer and then ends its turn, an <eos> token is added.
    
    

    <start_of_turn>user
    다음 질문에 대답해주세요:
    '(관세)감축률(Reduction Rate)' 이라는 무역 용어는 어떤 의미인가요?<end_of_turn>
    <start_of_turn>model
    관세를 감축하는 정도를 말함. 예를 들어 200%p에 관세감축률이 50%를 적용하면 감축 후 관세는 100%p가 됨. 극단적인 경우로 관세감축률이 100%이면 모든 관세는 감축 후에는 0%p가 됨.<end_of_turn>

    In actual training, examples like the one above will be used as input. Now, the dataset preparation for training is complete.

    Training

    The training code is very simple. We use SFTTrainer, and as the base model, we use the gemma-2-2b-it model, which has been continually pretrained on SCM & trade datasets.

    ```python
    model_id = "google/gemma-2-2b-it"
    output_dir = 'QA_finetune/gemma-2-2b-it-lora128'
    access_token = os.getenv("HF_TOKEN")  # Hugging Face token; how it was loaded is not shown in the original
    max_length = 1024  # used by SFTTrainer below; the actual value is not specified in the original
    tokenizer = AutoTokenizer.from_pretrained(model_id, token=access_token)
    
    model = AutoModelForCausalLM.from_pretrained(
                # "google/gemma-2-2b-it",
                "yonggeun/gemma-2-2b-it-lora128-merged",
                device_map="auto",
                torch_dtype=torch.bfloat16,
                token=access_token,
                attn_implementation="eager", # attn_implementation,
                cache_dir="./models/models",
            )
    
    
    def formatting_func(example):
        prompt_list = []
        for i in range(len(example['question'])):
            prompt_list.append("""<bos><start_of_turn>user
    다음 질문에 대답해주세요:
    {}<end_of_turn>
    <start_of_turn>model
    {}<end_of_turn><eos>""".format(example['question'][i], example['answer'][i]))
        return prompt_list   
    
    
    def train(data):  
        valid_set = data["test"]
        valid_set.save_to_disk('QA_finetune/valid_set/gemma-2-2b-it-lora128')
    
        lora_config = LoraConfig(
            r=256,
            lora_alpha=32,
            lora_dropout=0.05,
            bias="none",
            target_modules=["q_proj", "o_proj", "k_proj", "v_proj", "gate_proj", "up_proj", "down_proj"],
            task_type="CAUSAL_LM",
        )
    
        training_args = TrainingArguments(
            per_device_train_batch_size=2,
            warmup_steps=2,
            logging_steps=1, 
            gradient_accumulation_steps=4,
            # num_train_epochs=3,
            num_train_epochs=3,  
            learning_rate=2e-4,
            save_steps=100,
            fp16=False,
            bf16=True,
            output_dir=output_dir,
            push_to_hub=True,
            report_to="wandb"
        )
    
        trainer = SFTTrainer(
            model=model,
            tokenizer=tokenizer,
            train_dataset=data['train'],
            args=training_args,
            formatting_func=formatting_func,
            peft_config=lora_config,
            max_seq_length=max_length,
            packing= False,
        )
    
        model.config.use_cache = False
    
        print("Training...")
        trainer.train()
        print("Training done!")
    ```
    
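
    After training, the LoRA adapter saved under output_dir can be loaded back onto the base model for a quick sanity check. The sketch below is illustrative: it assumes the final adapter ended up in QA_finetune/gemma-2-2b-it-lora128, and the example question and generation settings are arbitrary.

    ```python
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base = AutoModelForCausalLM.from_pretrained(
        "yonggeun/gemma-2-2b-it-lora128-merged",  # same base model used for training above
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")

    # Attach the LoRA adapter produced by the training run.
    model = PeftModel.from_pretrained(base, "QA_finetune/gemma-2-2b-it-lora128")
    model.eval()

    prompt = (
        "<bos><start_of_turn>user\n"
        "다음 질문에 대답해주세요:\n"
        "'선하증권(Bill of Lading)' 이라는 무역 용어는 어떤 의미인가요?<end_of_turn>\n"
        "<start_of_turn>model\n"
    )
    # <bos> is written explicitly above, so special-token insertion is disabled here.
    inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(base.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
    ```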

    Evaluation

    Once the training is successfully completed, it is essential to evaluate the model's performance. This article focuses on evaluating Question Answering performance in a specific domain, which required different metrics than those typically used for benchmarking general models. In this article, the model was evaluated using SemScore and Truthfulness.

    SemScore: An evaluation method based on the semantic textual similarity between the target response and the model's response. (SemScore)

    Evaluating Truthfulness: This method measures truthfulness on a scale of 1 to 5 by providing the model's response and the target answer to an LLM. (Truthfulness)
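
    For reference, a SemScore-style computation reduces to embedding the target and model responses with a sentence encoder and averaging their cosine similarities. The sketch below uses the sentence-transformers package, which is not in the install list above and is an extra dependency; the encoder choice is one common option, not necessarily the one used in our evaluation.

    ```python
    from sentence_transformers import SentenceTransformer, util

    encoder = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

    def semscore(target_responses, model_responses):
        """Mean cosine similarity between paired target/model responses (1.00 is best)."""
        target_emb = encoder.encode(target_responses, convert_to_tensor=True)
        model_emb = encoder.encode(model_responses, convert_to_tensor=True)
        # Similarity of each (target, model) pair, then averaged over the dataset.
        return util.cos_sim(target_emb, model_emb).diagonal().mean().item()

    print(semscore(["수입 신고는 일반적으로 입항 후에 하는 것이 원칙입니다."],
                   ["일반적으로 수입 신고는 입항 후에 합니다."]))
    ```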

    FastTrack pipeline

    Now, let’s create a pipeline in FastTrack that will be used for model training. A pipeline is a unit of work used in FastTrack. Each pipeline can be represented as a collection of tasks, which are the smallest executable units. Multiple tasks within a single pipeline can have dependencies on each other, and their sequential execution is ensured based on these dependencies.

    Create Pipeline

    In the image above, find the blue '+' button to create a new pipeline.

    When creating a pipeline, you can choose the pipeline’s name and description, the location of the data repository to be used, and the environment variables that will be commonly applied in the pipeline. After entering the necessary information, click the "Save" button at the bottom to create the pipeline.

    Drag and create task

    Once a new pipeline is created, you can add a new task to the task template. Click on the "Custom Task" and drag it into the workspace below to create a new task.

    Enter information

    When creating a task, you need to enter the information required for task execution, as shown above. Write the task name and description clearly, and choose between a single node or multiple nodes. In this document, we will perform training on a single node, so we will select a single node.

    Next, you need to write the command. The command is what the session actually executes. Make sure to specify the directory of the mounted V-folder correctly so that the script runs without errors. Most of the packages required for training are already installed in the session, but if additional packages are needed or there are version issues, you may need to reinstall them. In such cases, you can list the required packages in a requirements.txt file, install them, and then run the other scripts.

    Resource configuration

    Next are the settings for the session, resources, and V-folder.

    Although the code in this article is written based on PyTorch, you can also choose other environments like TensorFlow, Triton server, etc.

    One of the advantages of FastTrack is its ability to utilize resources as efficiently as possible. Even within a single resource group, resources can be divided among multiple sessions, maximizing the resource utilization rate.

    For dataset preparation, GPU computation is not required, so it is acceptable not to allocate GPU resources. This allows you to run the code with minimal resources and allocate GPU resources to other sessions during this time, preventing GPU resources from remaining idle. Furthermore, if parallel model training is needed (e.g., when 10 GPUs are available and each training session requires 5 GPUs), you can train models in parallel. This approach helps reduce resource wastage and shortens training time.

    Select the V-folder where the prepared dataset and training code are correctly located.

    Duplicate or delete task

    By clicking the meatball menu icon (⋯) at the top right corner of the task block, you can duplicate or delete the created task.

    In FastTrack, you can set the order between multiple created tasks like this. This process involves adding dependencies between tasks. In some cases, you can set the next task to run only after several tasks are completed. In such cases, the next task will not proceed until all dependent tasks are finished. The completed example is shown above. In this article, we will proceed in the order of dataset preparation - fine-tuning - evaluation.

    If each task is defined correctly, click "Run" to execute the pipeline.

    On the left side of the FastTrack screen, you can see the pipelines you created. By clicking on them, you can monitor the currently running tasks and previously executed tasks in the pipeline task session.

    Monitoring jobs

    You can monitor the tasks through a screen like the one above. Each task proceeds in the specified order; once a previous task is completed, resources are allocated to start the session for the next task, and when the task is done, the session is terminated. There is also an option to skip tasks if needed. For example, in the image above, you can see that the fine-tuning task is running after skipping the dataset preparation task.

    Skipped tasks are shown in pink, running tasks are in light blue, and tasks scheduled to run are in yellow.

    Log checking

    By clicking the blue button next to each task's name, highlighted with a red square, you can check the logs of each task. This allows you to directly monitor the training progress. The logs appear the same as they would in a terminal, as shown in the screen above, allowing you to verify that the training is progressing correctly.

    Once the pipeline execution is successfully completed, you can check the results. In this document, the evaluation results are plotted and saved as /home/work/XaaS/train/QA_finetune/truthfulness_result.png.

    (Backend.AI's V-folder has a default directory structure of /home/work/~.)

    After training is complete, the result image is generated at the specified path.

    Result checking

    As shown above, you can confirm that the pipeline ran successfully by the check marks to the left of each task name.

    Result

    Now, let’s compare the results of the fine-tuned model with the base gemma-2-2b-it model.

    1. SemScore (Semantic text similarity between target response and model response, 1.00 is the best)

      | Base Model | Trained Model |
      |------------|---------------|
      | 0.62       | 0.77          |

    The SemScore of the trained model has increased (0.62 -> 0.77). This result indicates that the trained model can generate outputs that are more semantically similar to the target responses. In other words, the trained model has improved in generating responses that are closer to the intended target responses and more semantically consistent. As a result, the overall performance and reliability of the trained model have significantly improved.

    2. Truthfulness

      The trained model shows more high-score cases and fewer low-score cases: low scores (1 and 2 points) decreased from 1,111 to 777, while high scores (4 and 5 points) increased from 108 to 376. This indicates that the model has become better at producing domain information closer to the truth, showing that the training was effective.

      Truthfulness result
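
    For reference, SemScore-style semantic similarity is commonly computed as the cosine similarity between sentence embeddings of the target response and the model response. The minimal sketch below illustrates that idea using the sentence-transformers package and the all-mpnet-base-v2 embedding model; both are assumptions for illustration and may differ from the exact evaluation setup used in this article.

    # Hedged sketch: a SemScore-style similarity as cosine similarity of
    # sentence embeddings. The embedding model is an assumption; the article
    # does not specify which one was used.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-mpnet-base-v2")

    target_response = "Backend.AI schedules sessions across multiple GPU nodes."
    model_response = "Sessions are scheduled by Backend.AI over several GPU nodes."

    embeddings = model.encode([target_response, model_response], convert_to_tensor=True)
    score = util.cos_sim(embeddings[0], embeddings[1]).item()
    print(f"SemScore-style similarity: {score:.2f}")  # 1.00 means identical meaning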

    Conclusion

    In this article, we built a pipeline to train a model specialized in a specific domain using FastTrack, the MLOps platform of Backend.AI.

    Even though we utilized only some of FastTrack’s features, it allowed us to flexibly manage resources, freely configure tasks, reduce training time, and improve resource utilization. Moreover, we were able to train models stably in independent execution environments and monitor the execution information of pipeline jobs, enabling us to track resource usage and execution counts for each pipeline during training.

    In addition to the contents covered in this article, FastTrack supports a variety of additional features such as scheduling and parallel model training. For more information about other features of FastTrack, you can refer to the blog posts written by Kang Ji-hyun and Kang Jung-seok, linked below.

    Introducing FastTrack, Backend.AI's MLOps Platform

    FastTrack Guide: Receiving Notifications for Model Training Results

    26 September 2024

  • Model Variant: Easily Serving Various Model Services

    By Jihyun Kang

    Introduction

    Imagine a scenario where you need to train an AI model for research purposes and produce results. In that case, our job is simply to wait for the model to learn the data we have given it. But if we are building a service that 'utilizes' AI, things get more complicated. Every factor becomes a concern, from how to apply various models to the system to what criteria to use for scaling under load. We cannot carelessly modify a production environment with real users just to answer these questions, and if an accident happens while scaling that environment up or down, the consequences can be severe. Recovering from such an incident takes time, and the consumers of our service will not wait as patiently as researchers waiting for model training to finish. Beyond these engineering difficulties, there are cost challenges as well: serving models obviously costs money, and resources are being consumed (and paid for) even while models are still being trained.

    However, there's no need to worry. Many well-made models already exist in the world, and in many cases, it's sufficient for us to take these models and serve them. As those interested in our solution may already know, Backend.AI already supports various features you need when serving models. It's possible to increase or decrease services according to traffic, and to serve various models tailored to users' preferences.

    But the Backend.AI team doesn't stop there. We have enhanced the model service available since Backend.AI 23.09 so that a variety of models can be served more easily. Through this post, we'll explore how to serve various models easily and conveniently.

    This post introduces features that allow you to serve various types of models more conveniently. Since we've already given an explanation about model service when releasing the 23.09 version update, we'll skip the detailed explanation. If you're unfamiliar with Backend.AI's model service, we recommend reading the following post first: Backend.AI Model Service Preview

    Existing Method

    | Requirement                                            | Existing Method | Model Variant                          |
    |--------------------------------------------------------|-----------------|----------------------------------------|
    | Writing model definition file (model-definition.yaml)  | O               | X                                      |
    | Uploading model definition file to model folder        | O               | X                                      |
    | Model metadata needed                                   | O               | △ (Some can be received automatically) |

    Previously, Backend.AI model service required, in addition to the model metadata needed to run the model, a model definition file (model-definition.yaml) containing the commands to execute when serving the model in a prescribed format. The service was launched in the following order: write the model definition file, upload it to a model-type folder so that it can be read, and mount that model folder when starting the model service. An API server would then be started that forwards the end user's input to the model according to the model definition file and returns the response. The drawback of this approach was that the file had to be edited every time the model definition needed to change, and because the model path was fixed in the definition file, a different model definition file had to be written whenever the model changed.

    The Model Variant introduced this time is a feature that allows you to serve models immediately by inputting a few configuration values or without any input at all, using only model metadata without a model definition file. Model Variant supports command, vLLM, and NIM (NVIDIA Inference Microservice) methods. The methods of serving and verifying model service execution are as follows.

    Basically, model service requires metadata of the model to be served. Download the model you want to serve from Hugging Face, where you can easily access model metadata. In this example, we used the Llama-2-7b-hf model and Calm3-22b-chat model from Hugging Face. For how to upload model metadata to the model folder, refer to the "Preparing Model Storage" section in the previous post.
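
    If you prefer to script the download instead of using a browser, a minimal sketch with the huggingface_hub package might look like the following. The local directory name is arbitrary, and gated repositories such as Llama-2 additionally require a Hugging Face access token.

    # Hedged sketch: downloading model metadata and weights from Hugging Face
    # before uploading them to a Backend.AI model-type folder.
    from huggingface_hub import snapshot_download

    snapshot_download(
        repo_id="meta-llama/Llama-2-7b-hf",
        local_dir="./llama-2-7b-hf",
        # token="hf_...",  # required for gated models such as Llama-2
    )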

    Automatically Serving Model from Built Image (Command Method)

    The command method, introduced first, is a form in which the command that would otherwise be written in the model definition file to serve the model is baked into the execution image. After specifying the command to execute in the CMD environment variable, you build the image, and when actually serving the model it runs immediately without any further input. The command method does not support a health check that verifies whether the service is running properly, so it is better suited for quickly standing up and checking a prototype service than for large-scale serving. The execution method is as follows:

    1. On the start screen, select Llama-2-7b-hf in the Model Storage To Mount item to mount the model folder containing the model metadata corresponding to the model service to be served, and select Predefined Image Command in the Inference Runtime Variant item.

    Activate the Open To Public switch button if you want to provide model service accessible without a separate token.

    Model service start screen: mounting model metadata and selecting CMD

    2. Select the environment to serve. Here, we use vllm:0.5.0 and allocate CPU 4 Core, Memory 16 GiB, NVIDIA CUDA GPU 10 fGPU as resources.

    Model service start screen: selecting the execution environment and allocating resources

    3. Finally, select the cluster size and click the start button. The cluster size is set to single node, single container.

    Model service start screen: selecting cluster size and starting

    If the service has been successfully launched, the service status will change to HEALTHY and the endpoint address will appear.

    Model service detail screen

    Verifying the Service

    If the service has been launched normally, check the service model name with the cURL command:

    curl https://cmd-model-service.asia03.app.backend.ai/v1/models \
    -H "Content-Type: application/json"
    

    Checking the model name

    Now, let's send input to the service with the cURL command and check the response:

    For model services run with CMD, the model name is already defined in the image, so after checking the model name, you must enter the model name as the value of the model key when sending a request.

    curl https://cmd-model-service.asia03.app.backend.ai/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
    "model": "image-model",
    "prompt": "San Francisco is a",
    "max_tokens": 7,
    "temperature": 0}'
    

    Model service request result screen
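
    The same request can also be sent from Python. The sketch below uses the openai client and assumes the endpoint is OpenAI-compatible, as the vLLM-based image used here provides; because the service was opened to the public, any placeholder API key is accepted.

    # Hedged sketch: the completion request above sent with the openai Python
    # client instead of cURL. The API key is a placeholder since the service
    # was opened to the public without a token.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://cmd-model-service.asia03.app.backend.ai/v1",
        api_key="dummy",
    )

    completion = client.completions.create(
        model="image-model",          # the model name defined in the image
        prompt="San Francisco is a",
        max_tokens=7,
        temperature=0,
    )
    print(completion.choices[0].text)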

    Serving Models in vLLM Mode

    The vLLM mode is similar to the command method introduced earlier, but various options entered when running vLLM can be written as environment variables. The execution method is as follows:

    How to Run

    1. On the start screen, mount the model folder for the model service to be served and select vLLM in the Inference Runtime Variant item.

    Model service start screen: mounting model metadata and selecting vLLM

    2. Select the environment to serve. As with the command method explained earlier, select vllm:0.5.0, and (although you can set the resources the same) this time we'll allocate CPU 16 Core, Memory 64 GiB, NVIDIA CUDA GPU 10 fGPU.

    Model service start screen: selecting the execution environment and allocating resources

    3. Finally, select the cluster size and enter the environment variable BACKEND_MODEL_NAME. This value corresponds to vLLM's --model-name option and becomes the model value the user specifies when sending a request to the service. Once the input is complete, click the START button to create the service.

    Model service start screen: selecting the execution environment and allocating resources

    Likewise, if the service has been successfully launched, the service status will change to HEALTHY, and the endpoint address where the service is launched will appear.

    Model service detail screen

    Verifying the Service

    Let's send input to the service with the cURL command and check the response value. At this time, enter the BACKEND_MODEL_NAME value you set earlier as the model value.

    curl https://vllm-calm3-22b-chat.asia03.app.backend.ai/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
    "model": "vllm-model",
    "prompt": "初めて会う日本人ビジネスマンに渡す最高の挨拶は何でしょうか?",
    "max_tokens":  200,
    "temperature": 0
    }'
    

    Model service request result screen

    Serving Models in NIM Mode

    To run NIM, you need an API key issued from an account that can access NGC's NIM model registry. For how to obtain the key value, please refer to the following content: NVIDIA Docs Hub : How to get NGC API Key

    The NIM (NVIDIA Inference Microservice) mode is also similar to the command mode, but it must be run with an image that has NVIDIA's NIM-supporting model server built-in. Also, when loading the model, the NGC API key value is needed. Assuming everything is ready, let's start the model service.

    How to Run

    1. On the start screen, select an empty model type folder to cache the metadata to be received from NIM, and select NIM in the Inference Runtime Variant item.

    Model service start screen: mounting the model folder and selecting NIM

    2. Select the environment to serve. Here, we use ngc-nim:1.0.0-llama3.8b and allocate CPU 8 Core, Memory 32 GiB, NVIDIA CUDA GPU 15 fGPU as resources.

    Model service start screen: selecting the execution environment and allocating resources

    3. Finally, select the cluster size and enter the default path /models for the environment variable HF_HOME. Then enter NGC_API_KEY and input the issued key value. Once the input is complete, click the CREATE button to create the service.

    Model service start screen: selecting cluster size, entering environment variables, and starting

    When using NIM, the first run may take some time because model metadata is fetched from the repository. You can check the progress by viewing the container logs of the routing session corresponding to the model service on the Sessions page.

    Routing session corresponding to the model service
    Container log screen showing data being downloaded from NIM

    Like the command and vLLM modes, once the service has launched successfully, its status will change to HEALTHY. Using the endpoint address of the launched service, let's send a request as follows and check the response.

    Verifying the Service

    from openai import OpenAI

    client = OpenAI(
        base_url="https://nim-model-service.asia03.app.backend.ai/v1",
        api_key="$YOUR_NGC_API_KEY"
    )

    completion = client.chat.completions.create(
        model="meta/llama3-8b-instruct",
        messages=[
            {
                "role": "user",
                "content": "Hello! How are you?"
            },
            {
                "role": "assistant",
                "content": "Hi! I am quite well, how can I help you today?"
            },
            {
                "role": "user",
                "content": "Can you write me a song?"
            }
        ],
        temperature=0.5,
        top_p=1,
        max_tokens=1024,
        stream=True
    )

    for chunk in completion:
        if chunk.choices[0].delta.content is not None:
            print(chunk.choices[0].delta.content, end="")
    

    Model service request result screen

    Conclusion

    The Model Variant feature will be of great help to researchers and companies aiming to provide actual services with already trained models. Based on a powerful resource management system and support for various AI accelerators such as NVIDIA GPU, AMD ROCm, TPU, Graphcore IPU, Furiosa Warboy, Rebellions ATOM, Hyperaccel LPU, etc., Backend.AI now provides an integrated environment that can easily deploy services beyond simply training models. Try serving your desired AI model anytime with Backend.AI!

    11 July 2024

  • Backend.AI Open Source Contribution Guide (Jul. 2024)

    By Daehyun Sung

    Backend.AI's core engine utilizes many open-source software components and is itself being developed as open source. When enterprise customers encounter inconveniences or bugs while using Backend.AI, we provide issue tracking and support through customer support and technical support channels. However, those using the open-source version can also directly contribute to the project.

    There are mainly two ways to contribute: creating an issue that explains in detail what problem exists or what improvement ideas you have, and making a pull request to directly contribute code changes. In this post, we'd like to introduce a few things that are good to know in advance for more effective and faster communication with the development team during the contribution process.

    Introduction to GitHub Repositories

    As seen in the previous post Backend.AI Open Source Contribution Guide, Backend.AI was originally developed with repositories divided into the Backend.AI meta-repository and several sub-components.

    However, from version "22.06", Backend.AI has changed to a mono-repository using Pants.

    This transition in the development workflow has greatly helped in resolving package compatibility issues that often occur across multiple individual components, creating a more convenient development environment.

    Pants is a fast, scalable, and user-friendly build system.

    If you want to submit an issue, the first place to look is the Backend.AI repository. The repository named Backend.AI integrates multiple packages using Pants. This repository is not only for project management but also contains code that actually performs functions. All issues related to Backend.AI's server and Client SDK are managed here, and links to other projects are provided through the README.

    When creating a new issue, two default templates are provided: bug report and feature request. However, it's not strictly necessary to follow these templates. Considering the complexity of Backend.AI and its various usage environments, following these templates when writing content makes it easier to share context for problem identification.

    Introduction to Mono-repository

    From version "22.06", Backend.AI has changed to a mono-repository using Pants. A mono-repository is a project with an integrated code base that shares the basic dependencies, data models, features, tooling, and processes of multiple projects. It operates the repository by integrating multiple projects that were previously used into a single project.

    Introduction to Pants

    Backend.AI is installed using Pants as a build system. For more details about Pants, please check the following link Pants - Getting started.

    Relationship between Backend.AI Components

    Figure 1. Relationship structure between major Backend.AI components

    Figure 1 is a diagram showing the relationship between the major components of Backend.AI.

    Figure 2. Major component structure of Backend.AI and examples of execution methods

    Figure 2 is a diagram showing the major component structure of Backend.AI, and shows the location of the source code of the components and execution commands.

    Most of Backend.AI's components are managed in the Backend.AI repository, and the source code is located in the src/ai/backend/ subdirectory. Briefly, summarizing what each component does by directory:

    • src/ai/backend/manager (Manager): Core service that monitors computational resources of the entire cluster, handles session scheduling, provides user authentication and APIs for session execution
    • src/ai/backend/agent (Agent): Service installed on compute nodes to manage and control containers
    • src/ai/backend/common (Common): Library of functions and data formats commonly or frequently used across multiple server-side components
    • src/ai/backend/client (Client SDK for Python): Official command-line interface and library providing API wrapper functions and classes for Python
    • src/ai/backend/storage (Storage Proxy): Service that allows user web browsers or Client SDK to directly perform large-volume I/O from network storage
    • src/ai/backend/web (Web Server): HTTP service that provides routing for Web UI and SPA (single-page app) implementation and web session-based user authentication
    • src/ai/backend/webui (Web UI & Desktop App): Web component-based implementation of the actual UI that users interact with. Also supports Electron-based desktop app builds. Also includes a local lightweight version of the app proxy that allows users to directly access application ports running inside containers.

    Backend.AI Version Management Method

    Backend.AI has major releases every 6 months (March and September each year), with post-release support provided for about 1 year. Therefore, the version number follows the CalVer format in the form of YY.0M.micro (e.g., 20.09.14, 21.03.8). However, due to the version number normalization of the Python packaging system, the version of the wheel package is in the format YY.MM.micro without zero-padding in the month part (e.g., 20.9.14, 21.3.8). Some detailed components with version update cycles different from the main release cycle follow the general SemVer format.
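
    For example, the zero-padded month is dropped by the normalization rules of Python's packaging library (the reference implementation used by pip and setuptools), which is why the wheel version differs from the release name:

    # The zero-padded month in the CalVer release name is normalized away,
    # so 20.09.14 becomes 20.9.14 as a wheel version.
    from packaging.version import Version

    print(Version("20.09.14"))  # -> 20.9.14
    print(Version("21.03.8"))   # -> 21.3.8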

    Essential Packages to Install Before Development

    Before installing Backend.AI, you need to install Docker, Docker Compose v2, etc. first. When installing Backend.AI using the scripts/install-dev.sh script in the repository, it checks for the installation of Docker, Docker Compose v2, etc., and guides you through the installation process. If Python, pyenv, Docker, npm are not installed, you need to install the essential packages as follows. For Python, please install it using the system package's Python3. Then, you need to install pyenv and pyenv-virtualenv.

    $ curl https://pyenv.run | bash
    

    Then, you can install Docker and Docker Compose v2 as follows:

    macOS

    For macOS, Docker Desktop for Mac installs Docker and Docker Compose v2 automatically.

    Ubuntu, Debian, CentOS, Fedora Core, and other Linux environments

    For Ubuntu, Debian, CentOS, Fedora Core, you can automatically install Docker and Docker Compose v2 using the following script:

    $ sudo curl -fsSL https://get.docker.io | bash
    

    After installing Docker, if you get a unix:///var/run/docker.sock access permission error when running without sudo, like this:

    $ docker ps
    Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json": dial unix /var/run/docker.sock: connect: permission denied
    

    If such a permission problem exists, set the permissions using the following command:

    $ sudo usermod -aG docker $(whoami)
    $ sudo chown root:docker /var/run/docker.sock
    

    After that, reboot and run docker run hello-world to confirm that it runs normally.

    $ docker run hello-world
    Unable to find image 'hello-world:latest' locally
    latest: Pulling from library/hello-world
    c1ec31eb5944: Pull complete
    Digest: sha256:94323f3e5e09a8b9515d74337010375a456c909543e1ff1538f5116d38ab3989
    Status: Downloaded newer image for hello-world:latest
    
    Hello from Docker!
    This message shows that your installation appears to be working correctly.
    
    To generate this message, Docker took the following steps:
    1. The Docker client contacted the Docker daemon.
    2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
        (amd64)
    3. The Docker daemon created a new container from that image which runs the
        executable that produces the output you are currently reading.
    4. The Docker daemon streamed that output to the Docker client, which sent it
        to your terminal.
    
    To try something more ambitious, you can run an Ubuntu container with:
    $ docker run -it ubuntu bash
    
    Share images, automate workflows, and more with a free Docker ID:
    https://hub.docker.com/
    
    For more examples and ideas, visit:
    https://docs.docker.com/get-started/
    

    Instead of changing the group ownership of /var/run/docker.sock with chown, changing the permissions of the /var/run/docker.sock file to 666 allows other users in the group to access it without rebooting.

    sudo chmod 666 /var/run/docker.sock
    

    However, setting the permissions of the /var/run/docker.sock file to 666 creates a security vulnerability.

    You can check if Docker Compose v2 is installed as follows:

    $ sudo docker compose version
    Docker Compose version v2.28.1
    

    If nvm is not installed, you should install nvm as shown in the following link nvm - install & Update Script.

    $ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
    

    After installing nvm, install the latest LTS version of Node.js and set it up for use.

    $ nvm install --lts
    $ nvm use --lts
    

    How to Install the Development Environment

    To actually contribute code, you need to write a pull request, and unless it's a simple typo correction or documentation-related contribution, you need to directly modify the code and run it, so it's essential to set up your own development environment. Backend.AI has a structure where multiple components work together, so installation is not complete just by cloning one repository and creating a Python virtual environment with an editable install[1]. At a minimum, you need to set up and run manager, agent, storage-proxy, webserver, and wsproxy to check the functioning GUI, and for the CLI environment, you need to install the client SDK separately. Also, Redis, PostgreSQL, and etcd servers need to be run together for manager operation and communication with the agent.

    If you have installed the essential packages introduced earlier and want to install multiple components of Backend.AI, you can install them using the scripts/install-dev.sh script in the repository. This script does the following:

    • Checks for the installation of pyenv, Python, Docker, npm, etc., and guides the installation method
    • Installs all of these various components in their respective directories
      • At this time, components such as accelerator-cuda, which are necessary for the operation of other components, are additionally installed in an editable state.
    • Adds database/etcd fixtures including basic port settings and example authentication keys that each component can look at each other
    • Creates and runs PostgreSQL, Redis, etcd services using Docker Compose under the name "halfstack"

    When the install-dev script execution is successfully completed, it outputs commands to run service daemons such as manager and agent, and basic configured example account information. Following the instructions, use terminal multiplexers like tmux, screen, or multiple tab features of terminal apps to run service daemons in separate shells, and confirm that the hello world example works. Then you're ready to develop and test Backend.AI.

    Currently, this method only supports Intel (amd64/x86_64) and ARM-based macOS, as well as Ubuntu/Debian/CentOS/Fedora and other Linux environments where Docker Compose can be installed.

    Usually, when you first use this install-dev script, it often stops due to various errors or pre-check failures and needs to be run again. In this case, you can easily perform the deletion procedure using the scripts/delete-dev.sh script.

    Installing and Uninstalling Backend.AI

    Using these install-dev and delete-dev scripts, you can freely install and uninstall Backend.AI. First, clone the Backend.AI repository.

    $ git clone https://github.com/lablup/backend.ai 
    

    Then install Backend.AI.

    $ cd backend.ai
    $ ./scripts/install-dev.sh 
    

    After the installation is complete, please take note of the result content that appears on the screen.

    If you want to uninstall Backend.AI, run the scripts/delete-dev.sh script from the location where you cloned the Backend.AI repository.

    $ cd backend.ai
    $ ./scripts/delete-dev.sh 
    

    Things to Know Before Contributing

    As with most projects managed in distributed version control systems, to contribute to Backend.AI, code work should be based on the latest commit of the main branch of the original remote repository, and if conflicts occur, they should be resolved before requesting a review. If you've forked the original repository, the current forked original repository and the actual original repository need to be synchronized.

    Before explaining the method, please refer to the following terminology to help understanding:

    • Original remote repository (upstream): The original Backend.AI repository. All major commit contents are reflected here.
    • Forked original repository (origin): The Backend.AI repository copied to "your" account via GitHub. (Note: Original remote repository != Forked original repository)
    • Code copy (local working copy): The forked repository currently downloaded to your local machine

    Git command branch notation

    • main: The main branch of the current local working copy
    • origin/main: The main branch of the repository (origin) from which I cloned to create my local working copy
    • upstream/main: The main branch belonging to the separately added upstream remote repository

    Workflow concepts

    • At the time of forking, origin/main is created
    • When you clone the forked repository, main is created on your work computer
    • Create a new topic branch from main and proceed with work
    • When you upload this work branch to origin and create a PR, GitHub automatically points to the original repository of the fork
    • At this point, to synchronize changes to the main of the original repository during work, follow the procedure below

    The method of synchronization is as follows:

    • step1: Add the original remote repository as a name called upstream
    $ git remote add upstream https://github.com/lablup/backend.ai
    
    • step2: Fetch the latest commits of the main branch of the original remote repository to the code copy (local working copy)
    $ git fetch upstream
    
    • step3: Merge the latest commits of the main branch of the original remote repository into the main branch of your code copy (local working copy)
    $ git switch main && git merge --ff upstream/main
    
    • step4: Push the changes made to the code copy (local working copy) in steps 1 ~ 3 to origin (the remote repository you forked)
    $ git push origin main
    

    Now upstream/main and origin/main are synchronized through main.

    • step5: Reflect the latest updates to my branch that I'm working on
    $ git switch topic
    $ git merge main
    

    When performing this process, if the histories of origin/main and upstream/main diverge and step 5 is performed incorrectly, it can become extremely difficult to recover. Also, the CI tools that Backend.AI uses to test PRs are set up to find the common ancestor commit in order to compute the difference between upstream/main and origin/topic, so if you reuse the name main for a topic branch, these tools will not work properly. If possible, always give a new branch a new name.

    How to Write a Pull Request

    To send a specific bug patch or feature implementation as a PR, you first need to upload it to GitHub. There are several methods, but the following is recommended:

    • Fork the repository on the GitHub repository page. (If you have direct commit permissions, it's recommended to create a branch directly without forking.)
    • In your local working copy, use git remote to point to that forked repository.
      • Following convention, it's good to name Lablup's original repository as upstream and the newly created forked repository as origin.
      • If you installed with install-dev first instead of cloning after forking, the original repository will be origin, so you need to rename the remote.
    • Create a new branch.
      • For branch names, prepend fix/ for bug fixes or feature/ for feature additions or improvements, and summarize the topic in kebab-case. (e.g., feature/additional-cluster-env-vars, fix/memory-leak-in-stats) Other prefixes like docs/, refactor/ are also used.
      • It's possible to write a PR by directly modifying the main branch, but during PR review and modification periods, if additional changes occur on the main branch, you'll have to rebase or merge every time you synchronize with the upstream repository, which is more troublesome. Having a separate branch allows you to rebase and merge when you want.
    • Commit changes to that branch.
      • Commit messages should follow the conventional commit style as much as possible. Like branch names, use title prefixes such as fix:, feat:, refactor:, docs:, release:, and for Backend.AI specifically, setup: for dependency-related commits, repo: for cases like gitignore updates or repository directory structure changes. You can also indicate affected components in parentheses. (e.g., fix(scripts/install-dev): Update for v21.03 release)
      • Commit messages should be written in English.
    • Push the branch and write the PR.
      • For PRs with separate issues, you should write the issue number in the PR body. If you want to reference an issue in the repository, look at the number in the issue link like https://github.com/lablup/backend.ai/issues/401 and write it in the format #401, and GitHub will automatically link it.
      • There's no specific format required for the PR body, but it's good to write what problem it's solving, what principle it's written on, or what tools or libraries were used, and why those choices were made.
      • PR titles and bodies can be written in English or Korean.
      • When you create a PR, you'll see various automated inspection tools in action. In particular, you must sign (register your GitHub username) the CLA (contributor license agreement) for the review to proceed.
      • You must pass all basic coding style and coding rule checks for each language. (For Python code, flake8, mypy, etc.)
      • In repositories with a changes directory and towncrier check, when you create a PR and receive its number, create a file named changes/<PR number>.<modification type> and write a one-line English sentence summarizing the changes in Markdown syntax. (For relatively simple content or if there's a separate existing issue, this content can also serve as the PR body.) Modification types include fix, feature, breaking, misc, deprecation, doc, and parts that differ by project are defined in each repository's pyproject.toml. You can refer to files like CHANGELOG.md or CHANGES.md to see how existing messages were written.
    • Proceed with the review process.
      • When completed, the reviewer usually organizes the commit log in a squash-merge form to create a single commit for merging.
      • Therefore, don't feel burdened about making frequent small modification commits during the review process, and feel free to make commits whenever you think of something.

    It's even better to use tools like GitHub CLI, SourceTree, GitKraken along with git commands.

    Summary

    We've looked at Backend.AI's overall component structure and repository structure, how to install the development environment, and how to write pull requests. I hope this guide has helped you take one step closer to Backend.AI's source code.


    [1]: An "editable" installation refers to a method of installing a Python package to directly look at the source directory, allowing changes to be immediately reflected when importing the package just by modifying the source directory without editing inside the site-packages directory.

    10 July 2024

  • FastTrack Guide: Receiving Notifications for Model Training Results

    By Jeongseok Kang

    From the now-classic AlexNet to various large language models (LLMs) that are garnering a lot of attention these days, we train and evaluate various models to suit our needs. However, realistically, it's difficult for us to gauge when the training will end until we run the model multiple times and gain experience.

    Backend.AI's excellent scheduling minimizes GPU idle time and allows model training to run even while we sleep. Then, what if we could receive the results of a model that finished training while we were asleep? In this article, we'll cover how to receive model training results as messages using the new feature of FastTrack and Slack.

    This article is based on the Backend.AI FastTrack version 24.03.3.

    Before We Start

    This article does not cover how to create a Slack App and Bot. For detailed information, we recommend referring to the official documentation.

    Creating a Pipeline

    Let's create a pipeline for model training. A pipeline is a unit of work used in FastTrack. Each pipeline can be expressed as a collection of tasks, the minimum execution unit. Multiple tasks included in a single pipeline can have interdependencies, and they are executed sequentially according to these dependencies. Resource allocation can be set for each task, allowing flexible resource management.

    When an execution command is sent to a pipeline, it is executed by replicating the exact state at that point, and this unit is called a pipeline job. Multiple pipeline jobs can be run from a single pipeline, and each pipeline job is generated from a single pipeline.

    Create Pipeline button

    Click the Create Pipeline button ("+") at the top of the pipeline list.

    Creating a Pipeline

    You can specify the pipeline's name, description, location of the data store to use, environment variables to be applied commonly across the pipeline, and the method of pipeline initialization. Enter the name "slack-pipeline-0", and then click the "Create" button at the bottom to create the pipeline.

    Creating Tasks

    Dragging a Task

    You can see that the new pipeline has been created. Now let's add some tasks. From the task template list (Task templates) at the top, drag and drop the "Custom Task" block onto the workspace below.

    Entering the Task's Actions

    A task details window appears on the right where you can enter the task's specifics. You can give it a name like model-training-task to indicate its role, and set it to use the pytorch:1.11-py38-cuda11.3 image for model training. Since actual model training can take a long time, for this example, we'll have it execute the following simple commands:

    # Pause for 3 seconds to increase the execution time.
    sleep 3
    # Create a `result.txt` file in the pipeline-dedicated folder. Assume this is the accuracy of the trained model.
    echo "0.$RANDOM" > /pipeline/outputs/result.txt
    

    Creating a Task (1)

    Finally, enter the resource allocation for the task, and then click the "Save" button at the bottom to create the task.

    Dragging Another Task

    You can see that the model-training-task has been created in the workspace. This time, to create a task that reads the value from the result.txt file saved earlier and sends a Slack notification, drag another "Custom Task" block into the workspace below.

    Entering the Task-level Environment Variable `SLACK_TOKEN`

    For this task, set the name to slack-alarm-task, and enter the following script to send a notification to Slack:

    pip install slack-sdk
    python -c '
    import os
    from pathlib import Path
    from slack_sdk import WebClient
    SLACK_BOT_TOKEN = os.environ.get("SLACK_TOKEN")
    JOB_ID = os.environ.get("BACKENDAI_PIPELINE_JOB_ID")
    def main():
        result = Path("/pipeline/input1/result.txt").read_text()
        client = WebClient(token=SLACK_BOT_TOKEN)
        client.chat_postMessage(
            channel="#notification",
            text="Pipeline job({}) finished with accuracy {}".format(JOB_ID, result),
        )
    if __name__ == "__main__":
        main()
    '
    

    The code above uses two environment variables: SLACK_TOKEN and BACKENDAI_PIPELINE_JOB_ID. Environment variables in the BACKENDAI_* format are values automatically added by the Backend.AI and FastTrack systems, where BACKENDAI_PIPELINE_JOB_ID represents the unique identifier of the pipeline job in which each task is running.

    The other environment variable, SLACK_TOKEN, is a task-level environment variable. This feature allows you to manage and change various values without modifying the code.

    Creating a Task (2)

    After allocating appropriate resources for the slack-alarm-task, click the "Save" button at the bottom to create the task.

    Adding Task Dependencies

    Adding Task Dependencies

    Now there are two tasks (model-training-task and slack-alarm-task) in the workspace. Since slack-alarm-task should be executed after model-training-task completes, we need to add a dependency between the two tasks. Drag the mouse from the bottom of the task that should run first (model-training-task) to the top of the task that should run later (slack-alarm-task).

    Running the Pipeline

    Running the Pipeline (1)

    You can see an arrow connecting from model-training-task to slack-alarm-task, indicating that the dependency has been added. Now, to run the pipeline, click the "Run" button in the top right.

    Running the Pipeline (2)

    Before running the pipeline, you can review a brief summary of it. After confirming the presence of the two tasks, click the "Run" button at the bottom.

    Running the Pipeline (3)

    The pipeline was successfully run, and a pipeline job was created. Click "OK" at the bottom to view the pipeline job information.

    Pipeline Job

    The pipeline job was created successfully. You can see that the model training (model-training-task) has completed, and slack-alarm-task is running.

    Receiving Slack Notification

    Slack Notification (1)

    Slack Notification (2)

    You can see that the pipeline job execution results have been delivered to the user via Slack. Now we can sleep soundly.

    30 May 2024

  • 24.03: Release Update

    By Lablup

    24.03, the first release of Backend.AI in 2024, has been released. This update brings significant improvements to the UI and user experience, making Backend.AI even easier to install and operate. It includes the following updates since 23.09:

    Backend.AI Core & WebUI

    • We've made installing Backend.AI even easier with the addition of a TUI-based installer, which automates the download process and makes it easy for users to install and get started with Backend.AI.

    • New: Added trash functionality to vFolders. Files in unused vFolders are now moved to the trash instead of being deleted outright, and are removed permanently through a separate complete-deletion step. New: Added an argument value to indicate the state of the vFolder.
    • New: Added Backend.AI Model Store, where you can now store, search, and easily utilize various machine learning and deep learning models.
    • Added metadata for indexing to vfolders to utilize indexes instead of full directory scans for queries.
    • Improved system resource utilization by introducing a limit policy for session creation based on the number of pending sessions and requested resource slots. This new resource policy option helps filter and set the maximum value for resource presets and custom resource sliders in the Session Launcher.
    • We've added dark themes to the WebUI, so users can now choose from a variety of options to suit their personal preferences.

    • Polished the WebUI to fix misaligned line breaks, stray whitespace, and announcements overflowing their area, along with stability improvements such as session name validation.
    • The Session Launcher for Model Serving now also limits UI input so that users can only enter values within the allocated resources.
    • Added an allowAppDownloadPanel argument to hide the WebUI app download panel in the config.toml file for different UI user options.

    Backend.AI is constantly evolving to provide a more powerful and user-friendly experience while supporting a variety of environments in the ever-changing AI ecosystem. We look forward to seeing what's next! Make your AI accessible with Backend.AI!

    29 March 2024

  • 2023 Winter Intern in Lablup

    By Byeongjo Kim

    Overview

    I applied to the Open Source Contribution Academy (hereinafter referred to as the Contribution Academy) hosted by OpenUp, and worked as a fall intern at Lablup Inc. (hereinafter referred to as Lablup) from November to December for 8 weeks. Afterwards, I extended it for an additional 8 weeks from January to February, working a total of 16 weeks.

    After being discharged from the military, I have written about the experiences I had while working at Lablup, my first company as a developer.

    Motivation for Applying

    Even before the Contribution Academy, I was interested in Lablup, and coincidentally, I had an opportunity to contribute through the Contribution Academy.

    During the Contribution Academy period, I worked on resolving issues and refactoring the webui of Backend.AI.

    While participating in the Contribution Academy, I felt a lot of affection, interest, and enjoyment towards Backend.AI, and I began to think that I wanted to continue contributing after the program ended.

    Lablup happened to provide an opportunity to work there in conjunction with the Contribution Academy, so I applied without hesitation.

    Onboarding

    For the first 3 weeks of the internship, I underwent an onboarding process.

    I went through implementing a RealTime Chat, setting up the Backend.AI environment, and then the Pebble Seminar in that order.

    RealTime Chat

    This was the first assignment to become familiar with the core side of Backend.AI's code. I implemented a real-time chat app using Python, utilizing the aiohttp, aioredis, and asyncio libraries.

    Since there was no requirement to persist the chat contents, I used Redis, an in-memory database.

    I made it so that when a user enters a chat room they subscribe to that room, and when a user sends a message it is published to the subscribers, that is, the other users in the same chat room.
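
    As a rough sketch of this publish/subscribe pattern (not the actual implementation from the assignment), the snippet below uses redis.asyncio, the successor of the aioredis package mentioned above; it assumes a Redis server running locally, and the room name and message are illustrative.

    # Hedged sketch: chat-room messaging via Redis pub/sub with redis.asyncio.
    # Entering a room corresponds to subscribing to its channel; sending a
    # message corresponds to publishing on that channel.
    import asyncio

    import redis.asyncio as redis

    async def reader(pubsub):
        # Print every message published to the subscribed chat room.
        async for message in pubsub.listen():
            if message["type"] == "message":
                print(message["data"].decode())

    async def main():
        client = redis.Redis()
        pubsub = client.pubsub()
        await pubsub.subscribe("chat:room-1")  # entering the room == subscribing
        reader_task = asyncio.create_task(reader(pubsub))

        # Another user in the same room sends a message.
        await client.publish("chat:room-1", "hello from another user")
        await asyncio.sleep(0.1)  # give the reader a moment to print it

        reader_task.cancel()
        await pubsub.unsubscribe("chat:room-1")
        await client.close()

    asyncio.run(main())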

    RealTime Chat in action

    While I was able to handle Python at a basic level from preparing for coding tests, I had no experience using libraries like aiohttp, asyncio, and aioredis, so it took me some time to understand and grasp the concepts.

    However, this assignment helped me a lot in understanding the core side of Backend.AI's code, and it was good to be able to study new libraries.

    Setting up the Backend.AI environment

    Since I had already installed Backend.AI during the Contribution Academy, setting up the environment during the internship period wasn't too difficult.

    However, I was well aware that installing Backend.AI is not easy, as I had encountered many errors and failures while trying to install it during the Contribution Academy, and the other person doing the internship with me also had a lot of difficulties during the installation process.

    Since I had already experienced those failures and knew the solutions, I was able to help, and we were able to install it quickly and move on to other tasks.

    While setting up the environment, I also configured a virtual machine and VPN, and set up the environment on a virtual machine as well, so that I could work even if there were problems on my local machine. After setting up the configuration on the virtual machine, I mainly used the local for development during the subsequent work, and the virtual machine as a test server. The company's VM Farm, which allows for easy management and configuration of virtual machines, made it great for setting up development and testing environments.

    Pebble Seminar

    After completing the RealTime Chat and setting up the Backend.AI environment, I prepared a short seminar based on understanding the structure and code of Backend.AI. I was tasked with presenting on GraphQL and Relay, which are used in the Backend.AI WebUI.

    While I had experience with GraphQL, I felt that my knowledge was lacking for presenting in front of others, and Relay was a new library to me, so I was quite worried about preparing for the Pebble Seminar and read through many documents to prepare. First, I familiarized myself with the concepts by reading the official documentation for GraphQL and Relay, and then analyzed the Backend.AI code one by one to understand how they were applied and functioning in Backend.AI.

    Pebble Seminar preparation materials

    By analyzing the code while preparing for the Pebble Seminar, I naturally came to understand the code running in the WebUI, and this greatly helped me in finding and resolving issues during the subsequent work.

    Resolving Backend.AI issues and implementing features

    After completing the onboarding, I finally joined the frontend team and started resolving Backend.AI issues and implementing features. I had a coffee chat with the frontend lead to define the categories of work for this internship period:

    1. Creating a Table Column Setting component
    2. Researching E2E Testing
    3. Daily tasks

    During the 8-week internship period from November to December, I created a total of 19 Pull Requests, of which 18 were merged, and 1 is still under review. Since I had experience finding and assigning issues during the Contribution Academy, I had less difficulty with it, and because I enjoyed resolving issues, I was able to resolve more issues than others.

    Feature Addition PRs

    1. Implementing Table Columns Setting

    https://github.com/lablup/backend.ai-webui/pull/2071

    This was one of the issues I aimed to work on during the internship period. It was the only component that I conceived and implemented from scratch during the fall internship, rather than refactoring an existing component. Before implementing this feature, I thought it was a simple task that I could finish quickly, but things turned out differently from my expectations.

    First, I realized that I had been thinking too simplistically about creating new components. Even though I had designed and considered the props to be received before creating components in the past, through this issue, I felt that when creating a new component, I should invest more time and effort while considering scalability. I also realized that I should pay more attention to how other sites are designed and what features are applied.

    Table Columns Setting

    2. Adding service endpoint and owner columns to the Model Serving page table

    https://github.com/lablup/backend.ai-webui/pull/2047

    Previously, when creating a model service, users had to go to the detail page to check the endpoint, which is a frequently used feature. So there was a request to add the endpoint to the table column. Additionally, since the admin account can see services of users in the same group, there was a suggestion to have a column showing the service owner. Since the GraphQL fields for retrieving this data were already implemented, I added the fields to the query to fetch the endpoint and service owner data, and then added columns to the table to display the data. The owner column is only shown for admin accounts.

    Implementation view. Screen for admin account (left) and user account (right)

    3. Disabling log button for sessions in CANCELLED state

    https://github.com/lablup/backend.ai-webui/pull/2045

    The CANCELLED state means that the container has never been created or failed to be created. Previously, the log button was enabled even for sessions in the CANCELLED state, and if a user clicked the log button, the agent could not find the container information, resulting in a 500 error. In this PR, I made it so that the log button is disabled for sessions in the CANCELLED state, preventing users from clicking it.

    Session in TERMINATED state (session 1) and CANCELLED state (session 2)

    4. Testing and creating a custom hook for dark mode

    https://github.com/lablup/backend.ai-webui/pull/2120

    Before implementing dark mode, I found components with hardcoded colors and implemented a custom hook named useThemeMode for applying dark mode. When creating the custom hook, I tried to use the useLocalStorageState hook from ahooks, but contrary to my expectations that it would automatically manage states with the same key value, I found that they operated independently. To handle states with the same key value automatically updating when the value changes, I added a custom hook named useLocalStorageGlobalState, and then used that to create the useThemeMode custom hook for setting the dark mode.

    Bug fix PR

    1. Allowing signup without invitation token

    https://github.com/lablup/backend.ai-webui/pull/2046

    In the config.toml, when the allowSignupWithoutConfirmation option is set to true, users can sign up without an invitation token. However, when a user clicked the sign up button, an error occurred because the token value was undefined. In this PR, I modified it so that if allowSignupWithoutConfirmation is true, the token variable is not used. Additionally, previously users could modify other input values while the core was processing the data after clicking the sign up button, and the previous data remained when the dialog was closed and reopened. In this PR, I made it so that other input values cannot be entered while data is being processed, and the previous input values are cleared when the dialog is closed.

    2. Displaying the correct screen for the selected sub-tab on the user management page

    https://github.com/lablup/backend.ai-webui/pull/2055

    On the user management page, there are sub-tabs for displaying active users and deactivated users. However, if a user navigated to another page and then returned to the user management page, even though the sub-tab was set to inactive, the screen displayed the list of active users, causing confusion. This PR resolved that issue by remembering the current sub-tab when navigating to another page, and displaying the appropriate screen for that sub-tab when returning to the user management page.

    Before (left) and after (right) the fix

    Extending the internship

    As I resolved issues, the 8-week period flew by, and it was time to wrap up the fall internship.

    Working at Lablup as my first job after being discharged from the military was an important period for me. During the internship at Lablup, I was able to experience my strengths and weaknesses, what skills I needed to further prepare, and how other developers work. The 2-month period felt very short, and since I had enjoyed working so much during that time, I wanted to continue working. So I expressed my desire to extend the internship to the lead, and we agreed to extend it for another 8 weeks until February. During the fall internship, I had thought a lot about my weaknesses, but I couldn't find my strengths. So, I started with these 3 personal goals:

    1. Find my strengths during this period
    2. Read the documentation whenever I have time
    3. Work even harder, leaving no regrets

    Resolving issues and implementing features during the extended period

    The work during the extended period did not differ much from before. Without the onboarding process and installation, I could focus more on resolving issues.

    Feature Addition PRs

    1. Refactoring ErrorLogList

    https://github.com/lablup/backend.ai-webui/pull/2131

    I refactored the ErrorLog List, which was previously implemented using Lit elements, to React. This feature was the most satisfying issue for me, as I personally use it frequently after the refactoring.

    Before (left) and after (right) refactoring

    During the refactoring, new Search and Error filter features were added.

    Added Search feature (left) and Filter feature (right)

    2. Modal drag functionality

    https://github.com/lablup/backend.ai-webui/pull/2179

    I used the React-draggable library to add functionality for modals to be dragged. By adding the Draggable prop to a modal, it can be applied to any modal that requires dragging.

    Draggable Modal

    By clicking the icon on the left side of the modal title and moving the mouse, the modal can be moved to the desired position on the screen.

    Currently, it is applied to the modal for viewing user information on the user management page and the modal for changing user settings, where you can check it.

    While it is not being used in many places yet, I think this PR will be useful as more components and features are added.

    Bug fix PR

    1. Modifying Vfolder invitation permissions

    https://github.com/lablup/backend.ai-webui/pull/2143

    There was an issue where the user permissions for group vfolders were not being updated. When trying to modify the permissions, the items were not displayed or selectable properly in the select. Previously, the items were being displayed using option tags, but I changed it to use mwc-list-item to display the items and modified the overflow option to resolve this issue.

    Before (left) and after (right) the PR

    2. ResourceGroupSelect extending outside the card

    https://github.com/lablup/backend.ai-webui/pull/2166

    There was an issue where the ResourceGroupSelect value would be displayed outside the card if it was too large.

    Symptoms of the issue

    To resolve this issue, I set the max-width CSS on the Select component so that it cannot exceed the width of the card.

    Additionally, in this PR, I added a Search feature to the Select component, for which I used the useControllableValue hook from ahooks. The useControllableValue hook helps manage props from either the parent or itself. While it was a simple PR, it took me longer than expected since it was my first time using useControllableValue. I was able to resolve this issue with the help of the lead and another intern.

    3. Key pair list not showing when clicking the generate & manage key pair button on the summary page

    https://github.com/lablup/backend.ai-webui/pull/2194

    On the summary page, there are buttons for "Generate New Key Pair" and "Manage Key Pairs." However, when clicking these buttons, instead of showing the key pair list, it simply navigated to the user management page, displaying the user list.

    "Generate New Key Pair" and "Manage Key Pairs" buttons on the summary page

    When clicking the "Generate New Key Pair" button (left) and when clicking "Manage Key Pairs" (right)

    While this issue was not critical, I resolved it because I had experienced a lot of confusion when I first used Backend.AI and didn't fully understand the key pair feature.

    After resolving this issue, I could confirm that the key pair list was displayed on the screen as intended.

    After resolving the issue, when clicking the "Generate New Key Pair" button (left) and when clicking "Manage Key Pairs" (right)

    Completing the Internship

    Thanks to the Contribution Academy, which I joined on a friend's recommendation after being discharged from the military, I was able to contribute at Lablup for an extended period. Since I had no previous internship or project experience at other companies, this was a very important period for me as I was starting anew after my discharge. It was great that I could discover my strengths and weaknesses, the skills I still lack, and the culture of an open-source company at Lablup. How many companies make you want to go to work every day with their horizontal structure, free atmosphere, pleasant working environment, and good equipment? Although I worked at Lablup for only four months, I genuinely looked forward to going to work every day, and I felt that at Lablup I could do interesting and meaningful work for a long time. Over those four months, I also developed a fondness for Backend.AI, the service provided by Lablup, and I plan to attend the conference hosted by Lablup every year whenever possible to see its advancements and technologies.

    Lablup Office

    This post was also published on the author's personal blog: https://gee05053.tistory.com/32

    This post is automatically translated from Korean

    11 March 2024

  • Backend.AI Meets Tool LLMs : Revolutionizing AI Interaction with Tools - Part 3

    By Sergey Leksikov

    Part 3. Making your own API Retriever and Question Answering system locally, with a few lines of code, without training and serving an LLM

    Previously, in Part 1 we talked about Tool LLMs and their usage, and Part 2 demonstrated how to run Gorilla LLM on Backend.AI. In Part 3, we cover the case where no GPU is available, but we still want to get help and assistance regarding our API.

    Suppose we have Backend.AI and want to get information about the Backend.AI REST API and Functional API in a more interactive, question-answering style. The REST API is described in this documentation: https://docs.backend.ai/en/latest/manager/rest-reference/index.html

    Figure 1. Backend.AI REST API Documentation

    In addition, Backend.AI REST API documentation can be exported into openapi.json format:

    Figure 2. Backend.AI openapi.json

    Another source of Backend.AI API information is the functional API defined in the Backend.AI Client. We want to know how to interact with Backend.AI and which parts of the code are responsible. The client code repository is responsible for managing and interacting with the cloud and computing environment:

    Steps to make a Question Answering API system

    1. Let's set up the Backend.AI Client locally from https://github.com/lablup/backend.ai/tree/main/src/ai/backend/client on our local PC and create a new directory: bai-dev/src/ai/backend/client/gpt_api_client

    Figure 3. The directory location of gpt_api_client

    2. In the vector_data directory, let's create two subdirectories: data1/, which will store the REST API documentation (openapi.json), and data2/, which will store the selected B.AI Client files over which we want to do API question answering.

    Figure 4. Overview of data directories with openapi.json and client function code files

    3. Let's install the LlamaIndex Python library with pip install llama-index. Note that LlamaIndex is not related to the Meta LLaMA language model; it provides data structures and methods for efficiently processing and storing documents for retrieval.

    4. Let's convert our API and code files into embedded vectors and store them in a vector database with LlamaIndex. We will use the Jupyter Notebook interactive environment, which is also integrated into VSCode on the local PC.

    Figure 5. Jupyter Notebook interactive environment. Loading openapi.json from data/ directory. Then asking questions from query engine over a vector index.

    5. Vectorize the data2/ directory containing our code functions.

    Figure 6. Load data2/ directory with code files from B.AI Client. Then vectorize them into index and create a question answering engine.

    We can save both indexes using the Python pickle or joblib libraries, which are commonly used for serializing objects so that they can be loaded back into the system later: joblib.dump(rest_api_index, "rest_api_index.joblib") and joblib.dump(functional_index, "functional_index.joblib").
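    Putting steps 4 and 5 together, a minimal sketch could look like the following (assuming the directory layout above and a pre-0.10 LlamaIndex; newer versions import from llama_index.core, and the default embedding model may require an OpenAI API key):

    import joblib
    from llama_index import SimpleDirectoryReader, VectorStoreIndex

    # Step 4: vectorize the REST API documentation (openapi.json) in data1/.
    rest_docs = SimpleDirectoryReader("vector_data/data1").load_data()
    rest_api_index = VectorStoreIndex.from_documents(rest_docs)

    # Step 5: vectorize the selected B.AI Client code files in data2/.
    func_docs = SimpleDirectoryReader("vector_data/data2").load_data()
    functional_index = VectorStoreIndex.from_documents(func_docs)

    # Ask a question right away in the notebook.
    print(rest_api_index.as_query_engine().query("How do I create a new session?"))

    # Persist both indexes so a FastAPI server can load them later.
    joblib.dump(rest_api_index, "rest_api_index.joblib")
    joblib.dump(functional_index, "functional_index.joblib")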

    6. The Jupyter Notebook environment already lets us ask questions and get responses interactively. Additionally, we can load the saved vector indexes on a FastAPI server and answer questions over the web. In Part 2, we set up a compute session with Gorilla LLM, and from that demo we still have a compute session running a FastAPI server.

    7. Let's transfer the files rest_api_index.joblib and functional_index.joblib to the api_helper/ vFolder in the Backend.AI Cloud session.

    8. In server.py, load the vector indexes and define the query engines.

    Figure 7. server.py definition of index files and query engine.

    9. For each query engine, we specify a FastAPI endpoint.

    Figure 8. Code snippets for REST and Functional API retrieval
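    The actual code is shown in Figures 7 and 8; as a rough sketch of the same idea (the endpoint names and the instruction field follow the curl calls below), server.py could look like this:

    import joblib
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    # Load the vector indexes transferred to the api_helper/ vFolder.
    rest_api_engine = joblib.load("rest_api_index.joblib").as_query_engine()
    functional_engine = joblib.load("functional_index.joblib").as_query_engine()

    class Query(BaseModel):
        instruction: str

    @app.post("/rest_api")
    async def rest_api(query: Query):
        # Question answering over the REST API documentation (openapi.json).
        return {"answer": str(rest_api_engine.query(query.instruction))}

    @app.post("/functional")
    async def functional(query: Query):
        # Question answering over the B.AI Client functional API code.
        return {"answer": str(functional_engine.query(query.instruction))}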

    10. Test the server response from your local PC using the curl command. When the server is queried on a specific endpoint, it returns the answer to the user.
    curl -X POST -H "Content-Type: application/json" -d '{"instruction":"Create a new session"}' http://127.0.0.1:8000/rest_api
    

    Figure 9. Command line response from curl command. Example 1

    curl -X POST -H "Content-Type: application/json" -d '{"instruction":"Create a new session"}' http://127.0.0.1:8000/functional
    

    Figure 10. Command line response from curl command. Example 2

    In addition, we can make a web app which receives user input, sends to corresponding endpoint, and receives the answer.

    Figure 11. A web app prototype for Question Answering over Backend.AI REST and Functional API. Example 1

    Figure 12. A web app prototype for Question Answering over Backend.AI REST and Functional API. Example 2

    Conclusion

    In Part 3, we demonstrated how to locally create a Question-Answering system using the open-source Python library LlamaIndex, which helped convert our documents and Backend.AI code into vector form. The question answering can be done interactively in a Jupyter Notebook environment, which Visual Studio Code supports via plugins. Furthermore, we moved those vector indexes to a Backend.AI Cloud environment where a Gorilla LLM API-tuned model is served. Then an API Question-Answering web app was implemented to assist users over the network.

    Reference:

    • LlamaIndex. https://docs.llamaindex.ai/en/stable/

    Demo video for Backend.AI API Helper and Gorilla LLM:

    30 January 2024

  • Backend.AI Meets Tool LLMs : Revolutionizing AI Interaction with Tools - Part 2

    By Sergey Leksikov

    Part 2. Backend.AI Gorilla LLM model serving

    Previously, we talked about Tool LLM capabilities and usage. In this article, we give a step-by-step demonstration of how to run the Gorilla LLM model on Backend.AI Cloud using the Backend.AI Desktop app.

    Figure 1. The Backend.AI Desktop app installed on macOS

    1. Press the Start button to open the session creation menu.

    Figure 2. New session start interactive screen

    2. Select the NGC-PyTorch 23.07 image.

    3. Attach a vFolder, the working directory containing the model files (for example, the api_helper/ directory).

    Figure 3. Attaching vFolder screen

    4. Select the resource amount: 128 GB RAM and 5 fGPU.

    Figure 4. Resource selection screen

    5. Select the Visual Studio Code Desktop environment.

    Figure 5. IDE environment selection screen

    6. In the /home/work/api_helper/ directory, create a server.py file.

    7. Create a requirements.txt file.

    Figure 6. Content of requirements.txt file

    To install the requirements, run: pip install -r requirements.txt

    Figure 7. Executing install requirements command

    8. In server.py, use the transformers library to define the tokenizer and model loader.

    Figure 8. Code snippet of server.py

    9. Define the server IP address and port number.

    Figure 9. Definition of server IP address and port number
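    The actual code is shown in Figures 8 and 9; as a rough sketch of steps 8 and 9 (the checkpoint name below is a placeholder for the Gorilla model used in this post), server.py could look like this:

    import torch
    import uvicorn
    from fastapi import FastAPI
    from pydantic import BaseModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "gorilla-llm/gorilla-7b-hf-v1"  # placeholder checkpoint name

    # Step 8: tokenizer and model loader via the transformers library.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )

    app = FastAPI()

    class InferenceRequest(BaseModel):
        text: str

    @app.post("/inference")
    async def inference(req: InferenceRequest):
        inputs = tokenizer(req.text, return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=256)
        return {"result": tokenizer.decode(outputs[0], skip_special_tokens=True)}

    if __name__ == "__main__":
        # Step 9: the server IP address and port number.
        uvicorn.run(app, host="0.0.0.0", port=8000)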

    10. To run the model, type: python server.py

    Figure 10. Starting a server.py

    11. Access the created server.

    VSCode automatically creates a port tunneling session from your device to the Backend.AI Cloud server. You can check the server status by accessing the localhost address; the request will be tunneled to Backend.AI Cloud. In addition, you may define other custom endpoints according to your needs.

    Figure 11. The server run log

    Figure 12. VSCode Port Forwarding configuration

    Figure 13. Accessing the root of a server

    Up to this point, we created a compute session on Backend.AI Cloud, attached the api_helper/ vFolder containing requirements.txt and server.py, and started our FastAPI server, where the Gorilla LLM is downloaded from the HuggingFace repository, loaded into the compute session's memory, and exposed via the /inference API endpoint.

    12. API inference testing. To test the API inference of Gorilla LLM, you can make a curl request from your local computer's command line:
    curl -X POST -H "Content-Type: application/json" -d '{"text":"Object detection on a photo. <<<api_domain>>>:"}' http://127.0.0.1:8000/inference
    

    Figure 14. An example of curl request

    Figure 15. The GPU workload on a server after receiving the request

    Figure 16. The server logs of receiving the request and printing the result

    13. Defining a UI web app. You may use any web technology to build a UI app that displays the result in a nicer way. For example, you can place HTML and JavaScript files in a static directory under the root of server.py, then define an endpoint for the web app.

    Figure 17. Example of adding an html web app to a FastAPI server
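    A minimal sketch of that idea, assuming an index.html file placed in a static/ directory next to server.py:

    from fastapi import FastAPI
    from fastapi.responses import FileResponse
    from fastapi.staticfiles import StaticFiles

    app = FastAPI()  # in practice, reuse the app already defined in server.py

    # Serve the JavaScript/CSS assets under /static.
    app.mount("/static", StaticFiles(directory="static"), name="static")

    @app.get("/web")
    async def web_app():
        # Return the page that calls the /inference endpoint from the browser.
        return FileResponse("static/index.html")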

    14. Gorilla LLM Web App prototype: an API-tuned Large Language Model for API question answering and code generation.

    Figure 18. Gorilla LLM web app prototype. Example 1

    Figure 19. Gorilla LLM web app prototype. Example 2

    Conclusion

    Despite some difficulties in serving Gorilla LLM, an LLM tuned on your own API has large potential and promise. The model can provide more recent results, with more accurate parameters and function calls, than commercial large models, and it is useful for tasks such as question answering over an API, code autocompletion, and API code execution.

    Limitations and difficulties:

    While trying to serve the Gorilla LLM model, the following issues had to be considered:

    • The model may generate a response in an unexpected format
    • The model may generate different results for the same question
    • Parsing and rendering the LLM response
    • Eliminating duplicate sentences and lines

    29 January 2024

  • Backend.AI Meets Tool LLMs : Revolutionizing AI Interaction with Tools - Part 1

    By Sergey Leksikov

    Part 1. Introduction to LLMs and Tool Interaction

    What if future AI technology capabilities were available now? Perhaps, while on the way home from your workplace, you could ask an AI assistant to turn on the air conditioner at home before your arrival. At the same time, you are planning a vacation, and after considering a few options you ask an AI model to book a hotel on your behalf. As the model books your trip, you receive a notification from a cloud provider about your deep learning model's training progress. You ask the AI assistant to run another session with a different set of parameters for the experiment, targeting specific values for performance accuracy. How can such a futuristic scenario be realized in the present day?

    This kind of interaction between an LLM and the real world is possible via Application Programming Interfaces (APIs). A Tool Large Language Model (LLM) fine-tuned on an API dataset can respond to a user's query with a specific API call, and that API call can invoke a program or function to make a real-world impact. Large Language Models (LLMs) are rising in popularity due to their outstanding capabilities in generating text in context, along with their reasoning capability for problem solving. Their utilization ranges from text generation and editing to serving as a copilot for programmers. How else can LLMs extend their usage beyond their text-generating capabilities?

    With Tool LLMs, we are stepping into an era where AI not only understands our requests but can also act on them using a universe of online tools. Tool LLMs are pushing the boundaries of what AI can do with tools via functional and REST APIs.

    GPT-4 is currently the state of the art among LLMs, topping most AI benchmarks. Consider this scenario: a GPT-4 model is asked to transcribe an audio file into text in another language. However, when prompted to use specific APIs, GPT-4 may hallucinate and suggest non-existent APIs or provide incorrect arguments, causing function execution failures and preventing the user's task from being completed.

    Besides the issues with hallucinations and inaccuracies, API documentation and versions are constantly changing. Retraining a general-purpose LLM is costly and impractical as a way to keep models updated with constantly changing documentation. Tool LLMs provide a solution to the hallucination issues of general large models, enabling interaction with the physical world via programmatic interfaces. Tool LLMs are much smaller, making it feasible to retrain them periodically with recent data. In addition, an API documentation retriever module can be added to the model serving pipeline to supplement the model with the most recent API documentation relevant to the user's input query.

    To overcome these challenges, researchers have recently proposed two notable open-source methods for enhancing LLMs' tool-use abilities: Gorilla LLM and ToolLLaMA, each with its own advantages and specific use cases. Moreover, these models can be prepared for inference serving on Backend.AI Cloud.

    What is Tool LLM?

    A Tool LLM is an LLM trained on a dataset of user queries and API requests, together with relevant context information such as API code usage and API description documentation. The response from such an LLM can be executed as code, which means the LLM can interact with various online services and tools: cloud computing providers, Kubernetes, and machine learning and deep learning libraries and repositories such as HuggingFace, TorchHub, and TensorFlowHub.

    The main advantage of such a Tool LLM is its ability to accurately generate an API response to a user query, which can then be executed to obtain the results.

    Understanding the Types of API

    An Application Programming Interface (API) is a crucial element in modern computing, serving as a set of rules and protocols for how different software applications or hardware systems can communicate and interact.

    Functional APIs are designed to be invoked through function calls within a programming environment. For instance, machine learning and deep learning libraries like HuggingFace and TensorFlow offer various models that can be loaded into memory and utilized through Functional API calls. These APIs are integral in executing specific functions and operations within the software.

    This capability of LLMs to generate code related to an API extends their utility far beyond basic text generation and processing. Tool LLMs can seamlessly integrate with diverse online services and tools, ranging from cloud computing platforms to advanced machine learning libraries. Furthermore, their application is not limited to human queries; they can also be integrated into systems where they interact with other programs or AI agents. This versatility positions Tool LLMs as vital components in complex systems and infrastructures, enhancing their potential for real-world applications.

    In the following sections, we'll delve into how Tool LLMs are trained and how they operate. After that, two specific research examples will be covered: Gorilla LLM and ToolLLaMA.

    Tool LLM Training and Inference Workflow

    Tool LLM training involves several steps, including setting up an API database, creating a training dataset, training the model, and performing inference.

    The API database includes descriptions and relevant code samples. To generate a Self-Instruct training dataset, the API database samples need to be pre-processed into {user query, API output} pairs. ChatGPT can help automatically generate such a dataset by covering the various scenarios and query complexities that humans might ask, from specific cases to general and abstract ones. After the Self-Instruct dataset is generated, the model is trained to predict the correct API given the user's input query.

    For Tool LLM inference, it is crucial that the LLM not only responds with accurate argument parameters but also uses the latest API documentation. Thus, an API Document Retriever is used to keep the model in sync with the most recent API changes.

    Figure 1. An overview workflow of Tool LLM training and inference over an API instruction dataset
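    To make the {user query, API output} pairs more concrete, here is a small illustrative sample in Python; the entries are hypothetical and mirror the examples listed in the Q&A section at the end of this post:

    # Hypothetical Self-Instruct style training pairs: a natural-language query
    # mapped to the API call the model should learn to produce.
    api_instruction_dataset = [
        {
            "query": "Identify the objects in this photo.",
            "api_call": 'HuggingFace.image_recognition(image_file="photo.jpg", model="google/vit-base-patch16-224")',
        },
        {
            "query": "Convert this speech recording to text.",
            "api_call": 'HuggingFace.speech_to_text(audio_file="recording.wav", model="facebook/wav2vec2-base-960h")',
        },
    ]

    # At inference time, an API documentation retriever can prepend the most
    # recent, relevant API description to the user's query before generation.
    def build_prompt(user_query: str, retrieved_doc: str) -> str:
        return f"API documentation:\n{retrieved_doc}\n\nUser query: {user_query}\nAPI call:"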

    Case Studies: Gorilla LLM and ToolLLaMA

    Gorilla

    Gorilla is a fine-tuned LLaMA-7B-based model that outperforms GPT-4 in writing API calls. The notable aspects of Gorilla are:

    • It addresses the limitations of current LLMs in generating accurate input arguments for APIs and their tendency to hallucinate incorrect API usage.
    • Gorilla integrates with a document API retriever, allowing it to adapt to real-time changes in documentation, a significant advantage considering how frequently APIs get updated.
    • The authors have developed a dataset called APIBench to evaluate the model's abilities, which includes APIs from HuggingFace, TorchHub, and TensorHub, totaling 1,600+ APIs.
    • Gorilla mitigates hallucination issues and improves the reliability of LLM outputs. It has also been updated and extended to work with cloud providers such as AWS and GCP and to manage Kubernetes clusters.

    ToolLLaMA

    ToolLLaMA is a model fine-tuned on ToolBench, an instruction-tuning dataset for tool use built from the RapidAPI repository. The key points of ToolLLaMA are as follows:

    • ToolBench covers an impressive range of over 16,000 real-world APIs, offering diverse instruction sets and solution paths.
    • The paper proposes a novel Depth-First Search-Based Decision Tree algorithm (DFSDT) to enhance the reasoning capabilities of LLMs such as multiple tool usage and multi-step reasoning.
    • ToolLLaMA fine-tuned on ToolBench matches the performance of ChatGPT and demonstrates generalization abilities on out-of-distribution datasets like APIBench.

    Both papers are significant in pushing the boundaries of LLMs' capabilities in real-world tool use by navigating and utilizing a vast array of APIs. This advancement is crucial for practical applications. A comparative summary table is provided below.

    Figure 2. A comparative table between two API tuned LLM

    Synergy between Backend.AI and ToolLLM

    Training or serving an LLM requires significant compute resources; in particular, there is huge demand for Graphics Processing Units (GPUs) with large memory capacity and high computational speed.

    Backend.AI offers a scalable foundation for building, training, and serving diverse models. It includes on-demand scaling for model inference, adding external nodes for serving, and load balancing to optimize the workload. Backend.AI supports vLLM and TensorRT servers, which can be used for high-performance inference of LLMs. In addition, there is a well-designed, user-friendly interface and the FastTrack pipeline maker for creating computing environment sessions of various complexities.

    Conclusion

    The futuristic scenario that can be realized today, where various AI assistants and agents interact with various devices and services, is possible through APIs and Tool LLMs specifically fine-tuned for such interactions. Gorilla LLM and ToolLLaMA offer a good opportunity to incorporate them into complex tasks, and the workflow of how they are trained and served is easy to comprehend. Gorilla LLM can be recommended for machine learning and cloud administration tasks, while ToolLLaMA suits more general API usage, multi-tool, and multi-step cases.

    There is also an advantage in training your own model on your own API documentation or code, so that you have an LLM that understands your code. Such an LLM can help assist or interact with users who want to get relevant information.

    Backend.AI can effectively serve as a backbone for model training and scalable model serving while offering a simple GUI. How to set up such models, with a step-by-step guide, is explained in the other parts.

    Commonly asked questions:

    • Q: What is the source of hallucinations and LLM limitations, and how are they addressed in Tool LLMs?
    • A: GPT-4, like other Large Language Models, faces limitations such as hallucinations and inaccuracies, which are primarily due to its training on extensive yet potentially outdated or inaccurate datasets from the internet. These 'hallucinations' refer to instances where the model confidently produces information that's either factually incorrect or not based in reality, a challenge stemming from the nature of its purely text-based training data and not directly from its size or lack of interaction with the physical world. To address these issues, Tool LLMs are being developed with a focus on specialization and frequent updates. They are fine-tuned on specific datasets, like API documentation, enabling direct interaction with real-world systems through programmatic interfaces for more accurate and current information. The retraining frequency of Tool LLMs varies, depending on the application and the pace of change in the relevant field, with updates potentially needed monthly, quarterly, or bi-annually to keep the model up-to-date with the latest trends and information.
    • Q: What are example pairs of user Query and API?
    • A: The example pairs are provided below.
    • User Query: "Summarize this article about space exploration."
    • API Output: HuggingFace.summarize(text="Article text here", model="facebook/bart-large-cnn")
    • User Query: "What is the sentiment of this customer review?"
    • API Output: HuggingFace.analyze_sentiment(text="Customer review text", model="distilbert-base-uncased-finetuned-sst-2-english")
    • User Query: "Identify the objects in this photo."
    • API Output: HuggingFace.image_recognition(image_file="path/to/photo.jpg", model="google/vit-base-patch16-224")
    • User Query: "Convert this speech recording to text."
    • API Output: HuggingFace.speech_to_text(audio_file="path/to/recording.wav", model="facebook/wav2vec2-base-960h")
    • Q: How do the GorillaLLM and ToolLLaMA papers differ in their approach to utilizing API documentation during the training and inference of their models?
    • A: GorillaLLM appends relevant API documentation during training and offers two inference modes, while ToolLLaMA employs Sentence-BERT for fine-tuning embeddings in the API domain. GorillaLLM uses BM25 and GPT-Retriever from LLamaIndex for documentation retrieval, whereas ToolLLaMA uses Sentence-BERT for a similar purpose.
    • Q: How frequently should small API models be retrained, and what role does the API Retriever play in handling changes in API documentation?
    • A: Training small API models annually is reasonable, but monthly retraining for API changes isn't practical. The API Retriever, using up-to-date documentation, can mitigate the need for frequent retraining. Evaluating and benchmarking fine-tuned API models and RAG methods is essential for effectiveness.
    • Q: What is the difference between ToolLLM and RAG systems, and how do they function in the context of LLMs?
    • A: ToolLLM is a model fine-tuned on API documentation, focusing on incorporating knowledge. RAG systems, on the other hand, are algorithms for data chunking, storage, search, re-ranking, and synthesis. They can work independently or in combination to enhance LLM efficiency, especially in handling context limits and knowledge updates.

    Reference:

    • Gorilla: Large Language Model Connected with Massive APIs. https://gorilla.cs.berkeley.edu/
    • ToolLLM: Facilitating Large Language Models To Master 16000+ Real-World APIs. https://github.com/OpenBMB/ToolBench

    28 January 2024

  • Raft Consensus algorithm for Backend.AI: Leader election

    By Jeongseok Kang

    High availability (HA) has become an indispensable concept when talking about modern applications. High availability is the ability of an IT system to remain nearly 100% accessible and reliable at all times by eliminating or minimizing downtime^1. Backend.AI, which is developed and serviced by Lablup, also employs various methods to maintain high availability.

    Backend.AI architecture

    Background

    Backend.AI consists of many different components, including managers and agents, storage proxies, and web servers. Each of these components runs as multiple processes in a distributed environment to increase reliability, especially the manager, which is responsible for scheduling session execution and many core functions of Backend.AI. Currently, the manager has an Active-Active HA structure that ensures high availability through load balancing.

    One of the many features of the Backend.AI Manager is event handling. Backend.AI raises various events, such as AgentStartedEvent and DoScheduleEvent, to track the lifecycle of agents and sessions and provide optimal scheduling. For example, when a Backend.AI Agent process runs, it generates an AgentStartedEvent, and the Backend.AI Manager process receives this event and performs a specific action (schedule()). Backend.AI Manager also raises a DoScheduleEvent internally to ensure periodic scheduling. This is where the problem arises. If you are running multiple Backend.AI Manager processes for high availability, having each process raise an event with its own timer adds unnecessary load and can cause the health of the entire system to be unreliable. The Backend.AI Manager implements a GlobalTimer to ensure that only one manager process generates events within the same system. The GlobalTimer uses distributed locks to ensure mutual exclusivity between processes and to ensure that only one process generates events.

    @preserve_termination_log
    async def generate_tick(self) -> None:
        try:
            await asyncio.sleep(self.initial_delay)
            if self._stopped:
                return
            while True:
                try:
                    async with self._dist_lock:
                        if self._stopped:
                            return
                        await self._event_producer.produce_event(self._event_factory())
                        if self._stopped:
                            return
                        await asyncio.sleep(self.interval)
                except asyncio.TimeoutError:  # timeout raised from etcd lock
                    if self._stopped:
                        return
                    log.warn("timeout raised while trying to acquire lock. retrying...")
        except asyncio.CancelledError:
            pass
    

    Currently, Backend.AI provides an interface for distributed locks, [AbstractDistributedLock](https://github.com/lablup/backend.ai/blob/2f90d03c4477eda8e0beeabb7fe4b067c56dae09/src/ai/backend/common/lock.py#L33-L44), and we have developed and are using [FileLock](https://github.com/lablup/backend.ai/blob/2f90d03c4477eda8e0beeabb7fe4b067c56dae09/src/ai/backend/common/lock.py#L47-L142), [EtcdLock](https://github.com/lablup/backend.ai/blob/2f90d03c4477eda8e0beeabb7fe4b067c56dae09/src/ai/backend/common/lock.py#L145-L190) based on the [etcd concurrency API](https://etcd.io/docs/v3.5/dev-guide/api_concurrency_reference_v3/), and [RedisLock](https://github.com/lablup/backend.ai/blob/2f90d03c4477eda8e0beeabb7fe4b067c56dae09/src/ai/backend/common/lock.py#L193-L248) based on [Redis Lock](https://redis.io/docs/manual/patterns/distributed-locks/) as actual implementations.

    etcd is a distributed, open-source key-value store used to store and manage critical information needed to keep distributed systems running^2, most notably in Kubernetes.

    class AbstractDistributedLock(metaclass=abc.ABCMeta):
        def __init__(self, *, lifetime: Optional[float] = None) -> None:
            assert lifetime is None or lifetime >= 0.0
            self._lifetime = lifetime
    
        @abc.abstractmethod
        async def __aenter__(self) -> Any:
            raise NotImplementedError
    
        @abc.abstractmethod
        async def __aexit__(self, *exc_info) -> Optional[bool]:
            raise NotImplementedError
    
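    As an illustrative sketch of how such a lock is consumed (the InMemoryLock below is a stand-in for demonstration only; real deployments use FileLock, EtcdLock, or RedisLock), a periodic task simply wraps its critical section in async with:

    import asyncio
    from typing import Any, Optional

    class InMemoryLock:
        # Stand-in implementing the AbstractDistributedLock interface above.
        # It only serializes coroutines within one process, unlike the real
        # file-, etcd-, or Redis-based implementations.
        def __init__(self, *, lifetime: Optional[float] = None) -> None:
            self._lifetime = lifetime
            self._lock = asyncio.Lock()

        async def __aenter__(self) -> Any:
            await self._lock.acquire()
            return self

        async def __aexit__(self, *exc_info) -> Optional[bool]:
            self._lock.release()
            return None

    async def tick(name: str, lock: InMemoryLock) -> None:
        # Mirrors the GlobalTimer pattern: only the lock holder produces the event.
        async with lock:
            print(f"{name} produced DoScheduleEvent")
            await asyncio.sleep(0.1)

    async def main() -> None:
        lock = InMemoryLock(lifetime=30.0)
        # Two "manager processes" competing for the same lock.
        await asyncio.gather(tick("manager-1", lock), tick("manager-2", lock))

    asyncio.run(main())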

    Requirements

    The GlobalTimer does a good job of controlling event generation on a per-process basis in a distributed environment. However, requirements are always changing and the software needs to change with them. This time, the added requirement was to implement a rate limit for requests. With the current load balancing scheme, we can't guarantee that every request is handled by the same manager, which can lead to the following problems because the state of each manager is not shared.

    1. Set the counters for both managers to 0 and the request count limit to 1.
    2. The first request is received by manager 1.
    3. Increase the counter on manager 1 by 1. (C1: 0 -> 1)
    4. The counter reaches the maximum allowed number of requests and the next request is rejected.
    5. Manager 2 receives the second request due to load balancing.
    6. The counter on manager 2 has not reached the maximum allowed number of times because it is still 0. (C2: 0)
    7. Manager 2 processes the request.
    8. The request count limit didn't work!
    

    Therefore, the following issue has been proposed to discuss ways to improve these limitations.

    Issue suggesting improvements to distributed timers (lablup/backend.ai#415)

    To delegate global state management to a single manager process, represented by a leader, we investigated consensus algorithms and decided to use the Raft Consensus Algorithm (hereafter Raft). Raft is used in projects such as etcd, which serves as the data store for Kubernetes (https://kubernetes.io/docs/concepts/overview/components/#etcd), and we believe it has been well validated.

    Raft consensus algorithm

    The Raft algorithm was proposed in "In Search of an Understandable Consensus Algorithm"^3 submitted to USENIX in 2014. It was created to improve upon Paxos^4, the leading algorithm at the time, which was difficult to understand and implement in practice due to its complex consensus process, hence the title.

    But our most important goal — and most difficult challenge — was understandability.

    • In Search of an Understandable Consensus Algorithm

    A Raft cluster typically consists of five nodes, because a maximum of two nodes can fail and still satisfy a quorum to maintain the system. Each node in a cluster has one of three states: leader, follower, or candidate. In general, there can be at most one leader in each cluster, with the rest of the nodes being followers.

    Glossary #1

    • quorum: The minimum number of nodes required to make a decision. (N/2+1)

    State transition diagram of a Raft node (Source: In Search of an Understandable Consensus Algorithm)

    The Raft algorithm delegates all power to an elected leader and makes the flow of logs unidirectional, making the overall picture easier to understand. The Raft algorithm has the following characteristics:

    Glossary #2

    • term: The generation of the current leader or candidate. Incremented by 1 each time a leader election begins.
    • index: Refers to the location of a specific value in the log.
    • commit: Indicates that a specific value from the log was applied to the state machine.
    • commitIndex: The highest index that has been successfully committed.

    • Election Safety: Each term has at most one leader.
    • Leader Append-Only: A leader never overwrites or deletes entries in its log; it only appends new ones.
    • Log Matching: If two logs contain entries with the same index and term, all entries up to that index are identical.
    • Leader Completeness: If a value is committed to the log in a particular term, the leaders of all subsequent terms are guaranteed to have that value.
    • State Machine Safety: If one server applies a log value at a particular index to its state machine, another server cannot apply a different value at the same index.

    Using the above features, Raft divides the entire consensus process into three independent parts.

    • Leader election: If the existing leader is not working, a new leader must be elected.
    • Log replication: The leader replicates the request logs it receives from clients to other nodes. The other nodes unconditionally accept the leader's logs.
    • Safety: When one server applies a log value from a particular index to its state machine, another server cannot apply a different value from the same index.

    In this article, we'll discuss the different states a Raft node can be in, and implement the leader election process in code.

    Follower

    Followers do not send requests themselves; they only receive and respond to requests from the leader or candidates. The behavior spec for a follower proposed in the paper, and the code written based on it, are shown below.

    • Handle RPC requests from leaders and candidates.
    async def on_append_entries(
        self,
        *,
        term: int,
        leader_id: RaftId,
        prev_log_index: int,
        prev_log_term: int,
        entries: Iterable[raft_pb2.Log],
        leader_commit: int,
    ) -> Tuple[int, bool]:
        await self._reset_timeout()
        if term < (current_term := self.current_term):
            return (current_term, False)
        await self._synchronize_term(term)
        return (self.current_term, True)
    
    async def on_request_vote(
        self,
        *,
        term: int,
        candidate_id: RaftId,
        last_log_index: int,
        last_log_term: int,
    ) -> Tuple[int, bool]:
        await self._reset_timeout()
        async with self._vote_request_lock:
            if term < (current_term := self.current_term):
                return (current_term, False)
            await self._synchronize_term(term)
    
            async with self._vote_lock:
                if self.voted_for in [None, candidate_id]:
                    self._voted_for = candidate_id
                    return (self.current_term, True)
            return (self.current_term, False)
    
    async def _synchronize_term(self, term: int) -> None:
        if term > self.current_term:
            self._current_term.set(term)
            await self._change_state(RaftState.FOLLOWER)
            async with self._vote_lock:
                self._voted_for = None
    
    • If you don't receive any requests from leaders or candidates for a period of time, you'll be placed in candidate status.
    async def _wait_for_election_timeout(self, interval: float = 1.0 / 30) -> None:
        while self._elapsed_time < self._election_timeout:
            await asyncio.sleep(interval)
            self._elapsed_time += interval
        await self._change_state(RaftState.CANDIDATE)
    

    Leaders must periodically announce their presence by sending heartbeat messages to their followers. If a follower does not receive any messages for a certain amount of time (election_timeout), it assumes that the cluster is leaderless and starts an election by becoming a candidate to become the new leader.

    Candidate

    The candidate's behavior spec and implementation code are as follows:

    • Become a follower when you receive an AppendEntries RPC request from the new leader (see on_append_entries() for followers).
    • Start the election with the following procedure:
      • Increase term by 1. (term += 1)
      • Vote for yourself.
      • Initialize the election timeout.
      • Send a RequestVote RPC request to the other nodes.
    async def _start_election(self) -> None:
        self._current_term.increase()
        async with self._vote_lock:
            self._voted_for = self.id
    
        current_term = self.current_term
    
        terms, grants = zip(
            *await asyncio.gather(
                *[
                    asyncio.create_task(
                        self._client.request_vote(
                            to=server,
                            term=current_term,
                            candidate_id=self.id,
                            last_log_index=0,
                            last_log_term=0,
                        ),
                    )
                    for server in self._configuration
                ]
            )
        )
    
    • If you receive votes from a majority of nodes, you are the leader.
        for term in terms:
            if term > current_term:
                await self._synchronize_term(term)
                break
        else:
            if sum(grants) + 1 >= self.quorum:
                await self._change_state(RaftState.LEADER)
    
    • If the election timeout occurs, start a new election.
    case RaftState.CANDIDATE:
        while self.__state is RaftState.CANDIDATE:
            await self._start_election()
            await self._reset_election_timeout()
            await self._initialize_volatile_state()
            if self.has_leadership():
                await self._initialize_leader_volatile_state()
                break
            await asyncio.sleep(self.__election_timeout)
    

    Leader

    • Send the first heartbeat message (an empty AppendEntries request) immediately after the election. Send heartbeat messages periodically thereafter.
    async def _publish_heartbeat(self) -> None:
        if not self.has_leadership():
            return
        terms, successes = zip(
            *await asyncio.gather(
                *[
                    asyncio.create_task(
                        self._client.append_entries(
                            to=server,
                            term=self.current_term,
                            leader_id=self.id,
                            prev_log_index=0,
                            prev_log_term=0,
                            entries=(),
                            leader_commit=self._commit_index,
                        ),
                    )
                    for server in self._configuration
                ]
            )
        )
        for term in terms:
            if term > self.current_term:
                await self._synchronize_term(term)
                break
    
    • When it receives a request from a client, it adds a value to the log. After applying that value to the state machine, send a response to the request.
    • If the follower has a log value with an index greater than the value the leader is tracking (nextIndex), replicate the log to the follower starting at nextIndex.
      • If successful, update the leader's nextIndex and matchIndex.
      • If it fails due to an inconsistency, it decrements the leader's nextIndex and tries again.
    • If the value (N) below exists, update the commitIndex to that value.
      • The majority of matchIndexes are greater than or equal to N (matchIndex >= N)
      • The term of the Nth log is the same as the current term

    The leader manages a nextIndex and a matchIndex for each follower.

    • nextIndex: The next index that should be sent to each follower.
    • matchIndex: The highest index that was successfully replicated to each follower.
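    As an illustrative sketch of this per-follower bookkeeping (not the actual aioraft-ng implementation), the leader's volatile state could be modeled like this:

    from dataclasses import dataclass, field
    from typing import Dict, List

    RaftId = str  # assumption: node ids are plain strings, as in the snippets above

    @dataclass
    class LeaderVolatileState:
        next_index: Dict[RaftId, int] = field(default_factory=dict)
        match_index: Dict[RaftId, int] = field(default_factory=dict)

        def initialize(self, followers: List[RaftId], last_log_index: int) -> None:
            # On election, nextIndex starts just past the leader's last log entry
            # and matchIndex starts at 0.
            for follower in followers:
                self.next_index[follower] = last_log_index + 1
                self.match_index[follower] = 0

        def on_append_entries_reply(self, follower: RaftId, success: bool, sent_up_to: int) -> None:
            if success:
                # The follower now matches the leader up to the last entry sent.
                self.match_index[follower] = sent_up_to
                self.next_index[follower] = sent_up_to + 1
            else:
                # Log inconsistency: back off by one entry and retry.
                self.next_index[follower] = max(1, self.next_index[follower] - 1)

        def advance_commit_index(self, current_commit: int, quorum: int) -> int:
            # Find the highest N > commitIndex replicated on a majority
            # (the leader itself counts toward the quorum); the additional
            # check that log[N].term == currentTerm is omitted for brevity.
            for n in sorted(set(self.match_index.values()), reverse=True):
                if n <= current_commit:
                    break
                if 1 + sum(1 for m in self.match_index.values() if m >= n) >= quorum:
                    return n
            return current_commit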

    Conclusion

    In this article, we've briefly covered the Raft algorithm and written code to perform a leader election. The remaining two features (log replication, membership changes) will face a variety of challenges in actual implementation, including timing issues. If you're interested in learning more about the Raft algorithm, we recommend reading the author's (Diego Ongaro) PhD thesis (CONSENSUS: BRIDGING THEORY AND PRACTICE)^6.

    Finally, let's end by checking out how ChatGPT describes the Raft algorithm.

    Raft algorithm explained by ChatGPT (Source: OpenAI ChatGPT 3.5)

    This article is based on the code in lablup/aioraft-ng. Please also pay attention to lablup/raftify, the next generation Raft project currently under development at Lablup.

    29 November 2023

  • Backend.AI Model Service Hands-on: Running GPT-NeoX

    By Kyujin Cho

    Backend.AI version 23.09 has been officially released to the public. We covered Model Service, a key feature in version 23.09, in our previous Sneak Peek: Backend.AI Model Service preview article. Since then, we have added a variety of new features, including GUI support and authentication token history management, and we are going to walk you through them in a tutorial format to make the Backend.AI Model Service easy to understand. In this tutorial post, we will show you how to use the Backend.AI Model Service to run GPT-NeoX models on top of Triton Inference Server. Triton Inference Server is an open-source model inference framework from NVIDIA that enables easy HTTP and gRPC^1 serving of TensorRT, FasterTransformer, and TensorRT-LLM models, as well as PyTorch, TensorFlow, vLLM, and many others.

    Create a Model VFolder

    1. Navigate to the Data & Folders tab. Click the "New Folder" button to open the VFolder creation dialog.
    2. Create a new model folder. It does not matter how you name the folder, but make sure to set the "Usage" at the bottom to "Model". Once you have specified all the values, click the "Create" button at the bottom. Your model VFolder has now been created.

    FasterTransformer Format Model Conversion

    1. Navigate to the "Sessions" tab. Click the "Start" button to open the session creation dialog.
    2. Select ngc-pytorch for "Running Environment" and 23.07 for "Version". Once you have made your selections, click the arrow icon in the lower right corner.
    3. A window appears for selecting the VFolder to mount in the session. To load the model, select the VFolder you just created under the "Model storage folder to mount" section. Once you have made your selections, click the arrow icon in the lower right corner.
    4. A window appears for specifying the amount of resources to be used by the session. You should allocate at least 16 CPU cores and 128 GB of RAM to ensure smooth model conversion. Once you have made your selections, click the arrow icon in the lower right corner.
    5. After confirming that all settings have been applied correctly, click the "Start" button below to start the session.
    6. Once the session is created, a popup will appear to select an app, as shown below. Click the "Console" app to access the terminal environment.
    7. Run the following shell script to download the GPT-NeoX 20B model and convert it to the FasterTransformer format. Note that where the script mentions <VFolder name>, you must replace it with the name of the model VFolder you created.
    cd /home/work/<VFolder name>
    pip install -U transformers bitsandbytes
    git clone https://github.com/NVIDIA/FasterTransformer
    git clone https://huggingface.co/EleutherAI/gpt-neox-20b
    cd gpt-neox-20b
    git lfs install
    git lfs pull
    

    The GPT-NeoX 20B model requires at least 40GB of VRAM to run. If the physical GPU you are using has less VRAM than this and you need to split the model across multiple GPUs, adjust the number in the -i_g parameter to match the number of GPUs you are using.

    cd /home/work/<VFolder name>
    mkdir -p triton-deploy/gpt-neox-20b-ft
    python ~/<VFolder name>/FasterTransformer/examples/pytorch/gptneox/utils/huggingface_gptneox_convert.py \
      -i /home/work/<VFolder name>/gpt-neox-20b \
      -o /home/work/<VFolder name>/triton-deploy/gpt-neox-20b-ft \
      -i_g 1 \
      -m_n GPT-NeoX-20B
    

    8. If you followed all the steps up to step 7, you should have the following folders under the VFolder.
    work@main1[PRRLCIqu-session]:~/GPT-NeoX-Triton-FT$ ls -al
    total 62
    drwxr-xr-x  5 work work 11776 Oct 12 12:14 .
    drwxr-xr-x  9 work work  4096 Oct 12 12:29 ..
    drwxr-xr-x 14 work work 12800 Oct 12 11:24 FasterTransformer
    drwxr-xr-x  3 work work 16896 Oct 12 10:18 gpt-neox-20b
    drwxr-xr-x  3 work work 11776 Oct 12 11:56 triton-deploy
    

    Now it's time to add the configuration file for Triton Inference Server. Create the file triton-deploy/gpt-neox-20b-ft/config.pbtxt and add the following contents.

    If you set the value of the -i_g parameter to anything other than 1 in step 7, you must modify the value of tensor_para_size in the settings below to match the value of -i_g.

    name: "gpt-neox-20b-ft"
    backend: "fastertransformer"
    default_model_filename: "gpt-neox-20b-ft"
    max_batch_size: 1024
    
    model_transaction_policy {
      decoupled: False
    }
    
    input [
      {
        name: "input_ids"
        data_type: TYPE_UINT32
        dims: [ -1 ]
      },
      {
        name: "start_id"
        data_type: TYPE_UINT32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "end_id"
        data_type: TYPE_UINT32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "input_lengths"
        data_type: TYPE_UINT32
        dims: [ 1 ]
        reshape: { shape: [ ] }
      },
      {
        name: "request_output_len"
        data_type: TYPE_UINT32
        dims: [ -1 ]
      },
      {
        name: "runtime_top_k"
        data_type: TYPE_UINT32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "runtime_top_p"
        data_type: TYPE_FP32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "beam_search_diversity_rate"
        data_type: TYPE_FP32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "temperature"
        data_type: TYPE_FP32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "len_penalty"
        data_type: TYPE_FP32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "repetition_penalty"
        data_type: TYPE_FP32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "random_seed"
        data_type: TYPE_UINT64
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "is_return_log_probs"
        data_type: TYPE_BOOL
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "beam_width"
        data_type: TYPE_UINT32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "bad_words_list"
        data_type: TYPE_INT32
        dims: [ 2, -1 ]
        optional: true
      },
      {
        name: "stop_words_list"
        data_type: TYPE_INT32
        dims: [ 2, -1 ]
        optional: true
      },
      {
        name: "prompt_learning_task_name_ids"
        data_type: TYPE_UINT32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "top_p_decay"
        data_type: TYPE_FP32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "top_p_min"
        data_type: TYPE_FP32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "top_p_reset_ids"
        data_type: TYPE_UINT32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      }
    ]
    output [
      {
        name: "output_ids"
        data_type: TYPE_UINT32
        dims: [ -1, -1 ]
      },
      {
        name: "sequence_length"
        data_type: TYPE_UINT32
        dims: [ -1 ]
      },
      {
        name: "cum_log_probs"
        data_type: TYPE_FP32
        dims: [ -1 ]
      },
      {
        name: "output_log_probs"
        data_type: TYPE_FP32
        dims: [ -1, -1 ]
      }
    ]
    instance_group [
      {
        count: 1
        kind: KIND_CPU
      }
    ]
    parameters {
      key: "tensor_para_size"
      value: {
        string_value: "1"
      }
    }
    parameters {
      key: "pipeline_para_size"
      value: {
        string_value: "1"
      }
    }
    parameters {
      key: "data_type"
      value: {
        string_value: "fp16"
      }
    }
    parameters {
      key: "model_type"
      value: {
        string_value: "GPT-NeoX"
      }
    }
    parameters {
      key: "model_checkpoint_path"
      value: {
        string_value: "/models/triton-deploy/gpt-neox-20b-ft/1-gpu"
      }
    }
    parameters {
      key: "enable_custom_all_reduce"
      value: {
        string_value: "0"
      }
    }
    
    9. Finally, you need to add the Backend.AI Model Service definition file to the root of the VFolder, under model-definition.yaml (model-definition.yml is also acceptable). Let's take a closer look at the model definition file for running Triton Inference Server.
    models:
    - name: "GPT-NeoX"
      model_path: "/models/triton-deploy"
    ...
    

    This is where you specify the model name and the path to the model.

    The name and path you set here can be accessed by the model server process as the BACKEND_MODEL_NAME and BACKEND_MODEL_PATH environment variables, respectively.
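    For example, a custom start script could read them like this (a minimal sketch; the values shown in the comments come from the definition above):

    import os

    # Injected by Backend.AI Model Service based on the model definition file.
    model_name = os.environ["BACKEND_MODEL_NAME"]  # e.g. "GPT-NeoX"
    model_path = os.environ["BACKEND_MODEL_PATH"]  # e.g. "/models/triton-deploy"

    print(f"Serving {model_name} from {model_path}")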

    ...
      service:
        start_command:
          - tritonserver
          - --model-repository=/models/triton-deploy
          - --disable-auto-complete-config
          - --log-verbose
          - "1"
    ...
    

    This is the part that defines the command line syntax for starting the Model Server process.

    ...
        port: 8000
    ...
    

    This is where you fill in the port for API communication that the model server process exposes. If not specified, Triton Inference Server exposes port 8000 for HTTP API communication by default, so you will also write that port in the model definition file.

    ...
        health_check:
          path: /v2/health/ready
          max_retries: 3
          max_wait_time: 5
          expected_status_code: 200
    

    This is where you enable and set up the Health Check feature. If the Health Check feature is enabled, Backend.AI will continuously send HTTP GET requests to the path to verify that it returns an HTTP response code corresponding to the expected_status_code (can be omitted, defaults to 200). If the model server does not respond, or returns an undefined response code, Backend.AI determines that the session is unhealthy and excludes it from the service. When a session is excluded from the service, it is not automatically terminated and the Model Service administrator must manually take the appropriate action by checking container logs, etc.

    The Health Check feature can be disabled by omitting the syntax entirely. If you do this, Backend.AI will not check the health of the model server and will always assume it is in a healthy state. The max_wait_time is the part that defines the API response timeout. It must be a number in seconds. The max_retries is the number of times the request is retried before the model server is judged to be unhealthy.
    The finished model definition file looks like this.

    models:
    - name: "GPT-NeoX"
      model_path: "/models/triton-deploy"
      service:
        start_command:
          - tritonserver
          - --model-repository=/models/triton-deploy
          - --disable-auto-complete-config
          - --log-verbose
          - "1"
        port: 8000
        health_check:
          path: /v2/health/ready
          max_retries: 3
          max_wait_time: 5
    

    More information about model definition files can be found in the Backend.AI WebUI documentation.

    Now you're all set to run the Model Service.

    Create a Model Service

    1. Navigate to the "Model Serving" tab. Click the "Start Service" button to open the Create Model Service window. Let's take a look at each section in a little more detail.
      • Service name: This is where you specify the name of the Model Service. The name of the Model Service can be used as a subdomain of the Model Service Endpoint (coming soon).
      • Resource Group: This is the field to select the resource group where the Inference Session for the Model Service will be created.
      • Open your app to the outside world: When this feature is enabled, all API requests to the model server must be accompanied by an authentication header before they can be made. For more information about Model Service authentication, see the Backend.AI WebUI documentation.
      • Desired number of routes: A field to specify the number of inference sessions the Model Server process runs in. Setting this value to a number greater than 1 creates multiple identical sessions and enables the round-robin load balancer feature, which distributes API requests evenly among these sessions. This value can be modified at any time after Model Service creation.
      • A panel that specifies the amount of resources for the inference session.

    The GPT-NeoX 20B model requires a minimum of 40 GB of vRAM to run. The relationship between fGPU units and vRAM in Backend.AI may apply differently depending on the settings of your Backend.AI. Consult with the administrator of your Backend.AI for more information. If you have set all the values correctly, press the "OK" button to create the Model Service.

    2. The Model Service has been created. If the model process in the inference session is not yet ready, the status will remain "PROVISIONING". Click on the "INFERENCE" section of the "Sessions" tab and you'll see that an inference session has been created corresponding to the Model Service you created in step 1. Model Service administrators can click the clipboard icon in the "Control" row to view logs related to the model server processes in an inference session.
    3. When the Model Server process is running normally, the status of the route at the bottom and the status at the top will both change to "HEALTHY", and the address to access the Model Service will appear under "Service Endpoints". You can now access the Triton Inference Server running in the inference session through that address.
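    As a quick sanity check from your local machine, you can query the endpoint with a short Python sketch (the endpoint URL is a placeholder; if you enabled "Open your app to the outside world", every request must also carry the authentication header described in the Backend.AI WebUI documentation):

    import requests

    # Replace with the address shown under "Service Endpoints".
    ENDPOINT = "https://<your-model-service-endpoint>"

    # Same path used by the health_check section of the model definition file.
    ready = requests.get(f"{ENDPOINT}/v2/health/ready")
    print("ready:", ready.status_code == 200)

    # Model metadata via Triton's KServe v2-compatible HTTP API.
    meta = requests.get(f"{ENDPOINT}/v2/models/gpt-neox-20b-ft")
    print(meta.json())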

    Conclusion

    In this article, you've learned how to start serving LLM models using the Backend.AI Model Service. The Model Service feature is available in Backend.AI's Cloud Beta. Start serving your own models today!

    1: Not supported by Backend.AI Model Service

    This post is automatically translated from Korean

    21 November 2023

  • 23.09: September 2023 Update

    By Lablup

    23.09: September 2023 update

    In the second half of 2023, we released 23.09, a major release of Backend.AI. In 23.09, we've significantly enhanced the development, fine-tuning, and operational automation of generative AI. We've automatically scaled and load-balanced AI models based on workload, expanded support for various GPUs/NPUs, and increased stability when managing a single node as well as 100-2000+ nodes. The team is working hard to squeeze every last bit out of it. Here are the main improvements since the last [23.03 July update](/posts/2023/07/31/Backend.AI-23.03-update).

    Backend.AI Core & UI

    • The Backend.AI Model Service feature has been officially released. You can now use Backend.AI to more efficiently prepare environments for inference services as well as training of large models such as LLM. For more information, see the blog Backend.AI Model Service sneak peek.
    • Added the ability to sign in to Backend.AI using OpenID single sign-on (SSO).
    • If your kernel image supports it, you can enable the sudo command without a password in your compute session.
    • Support for Redis Sentinel without HAProxy. To test this, we added the --configure-ha setting to the install-dev.sh file.
    • Added the ability to use the RPC channel between Backend.AI Manager and Agent for authenticated and encrypted communication.
    • Improved the CLI logging feature of Backend.AI Manager.
    • Fixed an issue where Manager could not make an RPC connection when Backend.AI Agent was placed under a NAT environment.
    • The Raft algorithm library, riteraft-py, will be renamed and developed as raftify.
    • Support for the following new storage backends
      • VAST Data
      • KT Cloud NAS (Enterprise only)

    Backend.AI FastTrack

    • Improved UI for supporting various heterogeneous accelerators.
    • Deleting a VFolder now uses an independent unique ID value instead of the storage name.
    • Upgraded Django version to 4.2.5 and Node.js version to 20.
    • Added pipeline template feature to create pipelines in a preset form.
    • If a folder dedicated to a pipeline is deleted, it will be marked as disabled on the FastTrack UI.
    • Improved the process of deleting pipelines.
    • Added a per-task (session) accessible BACKENDAI_PIPELINE_TASK_ID environment variable.
    • Actual execution time per task (session) is now displayed.

    Contribution Academy

    During this period in particular, junior developer mentees contributed the following code through the 2023 Open Source Contribution Academy organized by NIPA.

    • Created a button to copy an SSH/SFTP connection example to the clipboard.
    • Refactored several Lit elements of the existing WebUI to React.
    • Wrote various test code.
    • Found and fixed environment variable and message errors that were not working properly.

    Backend.AI is constantly evolving to provide a more powerful and user-friendly experience while supporting various environments in the AI ecosystem. Stay tuned for more updates!
    Make your AI accessible with Backend.AI!

    This post is automatically translated from Korean

    26 September 2023

  • 23.03: July 2023 Update

    By Lablup

    23.03: July 2023 update

    A wrap-up of the ongoing updates to Backend.AI 23.03 and 22.09. The development team is working hard to squeeze every last bit out.

    Here are the most important changes in this update

    • Enhanced storage manageability: Added per-user and per-project storage capacity management (quotas) with VFolder v3 architecture.
    • Expanded NVIDIA compatibility: Support for CUDA v12 and NVIDIA H100 series.
    • Extended hardware compatibility: Support for WARBOY accelerators from FuriosaAI.

    Backend.AI Core & UI

    • Supports CUDA v12 and NVIDIA H100 series.
    • Supports WARBOY, the first NPU from FuriosaAI.
    • Added storage capacity management function (Quota) by user and project by applying VFolder v3 architecture.
      • However, it is limited to storage that supports Directory Quota.
    • Fixed an error that caused multi-node cluster session creation to fail.
    • Fixed an error where a compute session in the PULLING state was incorrectly labeled as PREPARING.
    • Fixed an error in which the CLONING state was displayed incorrectly when cloning a data folder whose name exists on multiple storage devices.
    • Improved the web terminal of a compute session to use zsh as the default shell if the zsh package is installed in the kernel image.
    • Added the ability to know the health status of the (managed) storage proxy and event bus.

    Backend.AI FastTrack

    • Added the ability to set multi-node cluster mode by task.
    • Fixed an error where environment variables set in .env were not applied to the frontend.
    • Fixed an error in which the UI was incorrectly recognized as out of date when accessed from a mobile browser.
    • Added a field to show the cause message when a task-specific error occurs.
    • Fixed other editor-related issues.

    Backend.AI is constantly evolving to provide a more powerful and user-friendly experience while supporting various environments in the ever-changing AI ecosystem. Stay tuned for more updates!
    Make your AI accessible with Backend.AI!

    31 July 2023

  • Digging bitsandbytes issue

    By Jeongseok Kang

    Backend.AI is a popular choice for developing large language models (LLMs) because of its ease of use in running large clusters and distributed processing. In fact, we get a lot of feedback and requests from customers, and today I'd like to share how we solved one of them.

    On April 4, 2023, we received a report of an issue where an error occurs when running certain packages in the container environment provided by the NGC Catalog[^1] (NVIDIA GPU Cloud). The NGC Catalog is a list of containers[^2] with optimized environments for developing AI/ML, metaverse, and high-performance computing applications, and because it is operated and distributed directly by NVIDIA, it is highly trusted and considered the standard for CUDA environments in particular. Therefore, an issue with this environment represents a potential risk that many users will face in the future, and we have decided to address this issue as a high priority.

    Reproducing the problem

    I first went through the process of reproducing the issue to determine the exact cause. In this case, I was running ViperGPT[^3] developed by Columbia University and encountered an error in a package called bitsandbytes. ViperGPT has a dependency on bitsandbytes as shown below.

    accelerate==0.18.0
    backoff==2.2.1
    // highlight-next-line
    bitsandbytes==0.38.1
    cityscapesscripts==2.2.1
    git+https://github.com/openai/CLIP.git
    decord==0.6.0
    dill==0.3.6
    ...
    

    I was able to reproduce the problem by simply importing bitsandbytes.

    The execution environment used the nvcr.io/nvidia/pytorch:22.05-py3 image.

    $ pip install bitsandbytes  # 0.37.1
    $ python
    >>> import bitsandbytes
    ===================================BUG REPORT===================================
    Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
    ================================================================================
    CUDA exception! Error code: OS call failed or operation not supported on this OS
    CUDA exception! Error code: initialization error
    CUDA SETUP: CUDA runtime path found: /home/work/data/miniconda3/envs/vipergpt/lib/libcudart.so
    /home/work/data/miniconda3/envs/vipergpt/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: No GPU detected! Check your CUDA paths. Proceeding to load CPU-only library...
      warn(msg)
    CUDA SETUP: Detected CUDA version 116
    CUDA SETUP: Loading binary /home/work/data/miniconda3/envs/vipergpt/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
    /home/work/data/miniconda3/envs/vipergpt/lib/python3.10/site-packages/bitsandbytes/cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
      warn("The installed version of bitsandbytes was compiled without GPU support. "
    

    The bitsandbytes library traverses all the CUDA devices installed in the execution environment and checks their Compute Capability[^4]. It determines the number of installed CUDA devices through libcuda.so, as shown below. We noticed that the error occurs when cuDeviceGetCount()[^5] is called: it returns error 304, CUDA_ERROR_OPERATING_SYSTEM.

    import ctypes as ct  # bitsandbytes imports ctypes as ct at module level

    def get_compute_capabilities(cuda):
        """
        1. find libcuda.so library (GPU driver) (/usr/lib)
           init_device -> init variables -> call function by reference
        2. call extern C function to determine CC
           (https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__DEVICE__DEPRECATED.html)
        3. Check for CUDA errors
           https://stackoverflow.com/questions/14038589/what-is-the-canonical-way-to-check-for-errors-using-the-cuda-runtime-api
        # bits taken from https://gist.github.com/f0k/63a664160d016a491b2cbea15913d549
        """
    
        nGpus = ct.c_int()
        cc_major = ct.c_int()
        cc_minor = ct.c_int()
    
        device = ct.c_int()
    
        # highlight-next-line
        check_cuda_result(cuda, cuda.cuDeviceGetCount(ct.byref(nGpus)))
        ccs = []
        for i in range(nGpus.value):
            check_cuda_result(cuda, cuda.cuDeviceGet(ct.byref(device), i))
            ref_major = ct.byref(cc_major)
            ref_minor = ct.byref(cc_minor)
            # 2. call extern C function to determine CC
            check_cuda_result(cuda, cuda.cuDeviceComputeCapability(ref_major, ref_minor, device))
            ccs.append(f"{cc_major.value}.{cc_minor.value}")
    
        return ccs
    

    What is bitsandbytes?

    Since the advent of the Transformer, language models have shown large performance gains, and it has become a trend to increase model size by stacking more Transformer blocks. This requires a large amount of GPU resources not only to train a model but also to serve it. For example, serving GPT-3 with 175B parameters requires eight 80GB A100 GPUs, each costing about $15,000. This is a huge burden not only for individuals but also for enterprises and research institutes, which is why there is a great deal of research on making inference models lighter for serving.

    Image source: A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes (Hugging Face)

    bitsandbytes is the open-source release of LLM.int8()[^6], work by Tim Dettmers, a PhD candidate at the University of Washington, together with Facebook AI Research (now Meta AI). It has been shown to reduce model size while maintaining performance by applying a vector-wise quantization method that treats each vector independently when computing matrix products, and by mixing 8-bit and 16-bit representations, keeping important vectors in 16-bit to minimize losses. It has been merged into Hugging Face's Transformers implementation and is used in a variety of models including [Llama2](https://github.com/facebookresearch/llama-recipes/blob/cd82118b74d2fd739bd6227af33b661d04a97406/requirements.txt#L6), [QLoRA](https://github.com/artidoro/qlora/blob/6c6fc4653abd17ce550f48878a24c7bd8772e98a/requirements.txt#L1), [KoAlpaca](https://github.com/Beomi/KoAlpaca/blob/4596f882957d286b4d60559b97dcf783822d23f5/webui/requirements.txt#L5), and [KULLM](https://github.com/nlpai-lab/KULLM/blob/b7a78b62ed6cd9d83c51ad5a92a9dd40b9f35998/requirements.txt#L4).

    Identify the cause

    Now that we've located and reproduced the problem, it's time to get to the bottom of it. I looked to see if there were any similar cases, but I couldn't find any. Also, cuInit() was called normally, making it even more difficult to pinpoint the cause.

    import ctypes
    
    count = ctypes.c_int()
    
    libcuda = ctypes.CDLL("libcuda.so")
    libcuda.cuInit(0)  # 0 (CUDA_SUCCESS)
    libcuda.cuDeviceGetCount(ctypes.byref(count))  # 304 (CUDA_ERROR_OPERATING_SYSTEM)
    
    libcudart = ctypes.CDLL("libcudart.so")
    libcudart.cudaGetDeviceCount(ctypes.byref(count))  # 304 (CUDA_ERROR_OPERATING_SYSTEM)
    

    I filed an issue on the GitHub repo (TimDettmers/bitsandbytes#264) for advice, and was told to update the package to the latest version and try again. After updating to version 0.38.0.post1, which was the latest at the time, I tested again, and the same problem occurred. I couldn't afford to lose too much time, so I decided to switch gears and remove the offending part.

    Image source: Greco-Roman Mythology in Comics (Ghana Publishers)

    Troubleshooting

    My first approach was to use CUDA-Python[^7]. CUDA-Python is the CUDA Python Low-Level Bindings package officially distributed by NVIDIA. I had used it before and found it useful, so I immediately thought of it and decided to install and test it.

    $ pip install cuda-python
    
    from cuda import cuda
    from cuda import cudart
    
    cuda.cuInit(0)  # (<CUresult.CUDA_SUCCESS: 0>,)
    cudart.cudaGetDeviceCount()  # (<cudaError_t.cudaSuccess: 0>, 1)
    

    Fortunately, cudart.cudaGetDeviceCount() worked fine, and I proceeded to test integrating it into bitsandbytes. However, calling torch.cuda.is_available() after calling cuda.cuInit(0) resulted in an error. This was because torch.cuda.is_available() internally calls cudaGetDeviceCount().

    from cuda import cuda, cudart
    
    cuda.cuInit(0)  # (<CUresult.CUDA_SUCCESS: 0>,)
    cudart.cudaGetDeviceCount()  # (<cudaError_t.cudaSuccess: 0>, 1)
    
    import bitsandbytes
    
    # ...
    # /opt/conda/lib/python3.8/site-packages/torch/cuda/__init__.py:82: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 304: OS call failed or operation not supported on this OS (Triggered internally at /opt/pytorch/pytorch/c10/cuda/CUDAFunctions.cpp:109.)
    #   return torch._C._cuda_getDeviceCount() > 0
    # ...
    

    The problem seemed to be back to square one. I took a breath and calmly reread the error log above. Then something caught my eye.

    torch._C._cuda_getDeviceCount() > 0

    Note that bitsandbytes was already using PyTorch internally, which means it had a dependency on PyTorch. To be precise, bitsandbytes depended on lion-pytorch, which in turn depended on PyTorch. And PyTorch already exposes an interface to the CUDA functions we needed, which is what I decided to take advantage of this time.

    Fortunately, all of the CUDA functions used by bitsandbytes existed in PyTorch. I made the following changes to the functions that were previously called via libcuda.so and libcudart.so.

    | libcuda / libcudart | torch |
    | --- | --- |
    | libcuda.cuDeviceGetCount() | torch.cuda.device_count() |
    | libcuda.cuDeviceGet() | torch.cuda.device() |
    | libcuda.cuDeviceComputeCapability() | torch.cuda.get_device_capability() |
    | libcudart.cudaRuntimeGetVersion() | torch.version.cuda |
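
    As a rough sketch (not the exact patch that was merged), the compute-capability check can be rewritten on top of these torch APIs like this:

    import torch

    def get_compute_capabilities():
        # Replaces the ctypes calls to cuDeviceGetCount()/cuDeviceComputeCapability()
        # with PyTorch's CUDA bindings.
        ccs = []
        for i in range(torch.cuda.device_count()):
            major, minor = torch.cuda.get_device_capability(i)
            ccs.append(f"{major}.{minor}")
        return ccs

    def get_cuda_runtime_version():
        # Replaces libcudart.cudaRuntimeGetVersion(); returns e.g. "11.6"
        return torch.version.cuda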

    After verifying that it worked after the change, I registered a PR in the GitHub repository (TimDettmers/bitsandbytes#375) to apply to the distribution package version.

    Postscript

    On July 14, 2023, about two months after registering the PR, the patch was merged into the main branch and included in version 0.40.1.

    I was also able to get some feedback from the author, Tim Dettmers, whose thoughts and philosophy are evident even in short writing. Through this opportunity, I was able to learn more about the LLM ecosystem. It was also the first time in a long while that I felt the fun of open source activities. I think the appeal of open source is that we can collaborate beyond spatial constraints and learn from each other's ideas. We run an open source version of Backend.AI alongside an enterprise version. We will always strive to provide a better user experience and a better developer experience.

    [^1]: NVIDIA GPU Cloud
    [^2]: The NGC catalog hosts containers for AI/ML, metaverse, and HPC applications, which are performance-optimized, tested, and ready to deploy on GPU-powered on-prem, cloud, and edge systems.
    [^3]: ViperGPT: Visual Inference via Python Execution for Reasoning, March 14, 2023.
    [^4]: https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-capability
    [^5]: https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__DEVICE.html#group__CUDA__DEVICE_1g52b5ce05cb8c5fb6831b2c0ff2887c74
    [^6]: LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale, November 10, 2022.
    [^7]: https://developer.nvidia.com/cuda-python

    This post is automatically translated from Korean

    28 July 2023

  • 23.03: May 2023 Update

    By Lablup

    A recap of the ongoing updates to Backend.AI 23.03 and 22.09. The development team is working hard to squeeze every last bit out.

    Here are the most important changes in this update:

    • Expanded hardware compatibility: Added support for idle-state checking of Rebellions ATOM accelerators and for the Dell EMC storage backend.
    • High-speed upload enhancements: Introduced SFTP functionality to support high-speed uploads to storage.
    • Development Environment Enhancements: Enhanced the development environment by allowing sessions to be accessed in remote SSH mode from local Visual Studio Code.
    • Increased manageability: Improved the user interface for administrators to make it easier to set up AI accelerators and manage resource groups.

    Backend.AI Core & UI

    • Added support for idle state checking of ATOM accelerators.
    • Introduced SFTP functionality to support high-speed uploads directly to storage.
    • Added ability to force periodic password updates based on administrator settings.
    • Added an upload-only session (SYSTEM) tab.
    • Added Inference type to the allowed session types.
    • Added the ability to connect to a session in remote SSH mode from local Visual Studio Code.
    • Added support for uploading folders from Folder Explorer.
    • Improved the display of the amount of shared memory allocated when creating a session.
    • Added support for Dell EMC storage backend.
    • Improved the accuracy of container memory usage measurement.
    • Improved the ability to run multiple agents concurrently on a single compute node.
    • Added project/resource group name filter for administrators.
    • Added user interface for administrators to set various AI accelerators, including GPUs, in resource presets/policies.
    • Added a user interface for administrators to display the allocation and current usage of various accelerators, including GPUs.
    • Added a user interface for administrators to set the visibility of resource groups.
    • Provided a user interface for administrators to view the idle-checks value per session.
    • Added recursion option when uploading vfolders in the CLI, and improved relative path handling.
    • Added a recursive option in the CLI to terminate sessions with dependencies on specific session termination at once.
    • Added a new mock-accelerator plugin for developers, replacing the old cuda-mock plugin.
    • Added status and statistics checking API for internal monitoring of the storage proxy for developers.

    Backend.AI FastTrack

    • Improved searching for vfolders by name when adding pipeline modules.
    • Added an indication to easily recognize success/failure after pipeline execution.

    Backend.AI Forklift

    • Bug fixes and stability improvements.
    • Support for deleting build job history.
    • Supports pagination of the build task list.

    Backend.AI is constantly evolving to support a variety of environments in the ever-changing AI ecosystem, while providing a more robust and user-friendly experience. Stay tuned to see what's next!

    Make your AI accessible with Backend.AI!

    This post is automatically translated from Korean

    31 May 2023

  • Sneak Peek: Backend.AI Model Service Preview

    By Kyujin Cho

    Introduction

    As super-sized AI models flood the market, there is a growing concern about not only developing the models, but also how to deliver them "well" and "efficiently" to users. Prior to Large Language Models (LLMs), the computing power of AI models was focused on training rather than inference, as the hardware requirements for attempting to make inferences with a trained model were much smaller than the computing power needed to train the model. Deployers of models could get enough power for inference from the NPU of a real user's end device (such as a smartphone). However, with the advent of LLMs, the tables were turned.

    Take Meta's [OPT-175B](https://github.com/facebookresearch/metaseq) as an example: OPT-175B, as its name implies, has 175 billion parameters and requires roughly 320+ GB of GPU memory just to load them onto the GPU to perform inference tasks. That's a huge difference from the 4GB that pre-LLM image processing models used to require.
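
    As a rough back-of-the-envelope check of that figure (a sketch assuming 16-bit weights and ignoring activation and KV-cache memory):

    params = 175e9           # OPT-175B parameter count
    bytes_per_param = 2      # fp16/bf16 weights
    print(f"{params * bytes_per_param / 2**30:.0f} GiB")  # ~326 GiB, i.e. the "320+ GB" above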
    With this change in AI model behavior, efficiently managing service resources has become paramount to keeping your service running reliably. In this article, we'll preview Backend.AI's upcoming model service feature, Backend.AI Model Service, and show you how it will allow you to efficiently run your AI model from training to serving with a single infrastructure.

    Backend.AI Model Service

    Backend.AI Model Service is a model serving system that runs on top of the existing Backend.AI solution. It takes Backend.AI's tried-and-true container management technology and container app delivery system, AppProxy[^1], to the next level, enabling both AI training and model service in one infrastructure without installing additional components and by simply upgrading the existing Backend.AI infrastructure. It also supports an auto-scaling feature that automatically scales up and down inference sessions based on per-session GPU usage, number of API calls, or time of day, allowing you to effectively manage AI resources used for inference.

    Inference Sessions

    Inference sessions in Backend.AI are conceptually the same as traditional training sessions. You can use the same execution environment you've been using for training for inference sessions, or you can deploy a dedicated execution environment just for inference sessions. Inference sessions are volatile and stateless, so you can terminate them at any time if the session is not performing well. In this case, Backend.AI will attempt to recover the original state by creating a new inference session, while simultaneously forwarding inference requests to other living inference sessions to minimize downtime for the inference service.

    Model storage

    Models to be served through Backend.AI are managed as "model storage" units. Model storage consists of model files, code for model services, and model definition files.

    Model definition file

    The model definition file is where you define the information for running a service provider's model in the Backend.AI Model Service. The model definition file contains information about the model, the ports exposed by the model service, and a set of tasks that must be executed to run the model service. If your model service provides a health check feature that reports its own health, you can use that information to take action, such as excluding sessions from the service if they are in bad health.

    models:
      - name: "KoAlpaca-5.8B-model"
        model_path: "/models/KoAlpaca-5.8B"
        service:
          pre_start_actions:
            - action: run_command
              args:
                command: ["pip3", "install", "-r", "/models/requirements.txt"]
          start_command:
            - uvicorn
            - --app-dir
            - /models
            - chatbot-api:app
            - --port
            - "8000"
            - --host
            - "0.0.0.0"
          port: 8000
          health_check:
            path: /health
            max_retries: 10
    

    Here is an example of a well-defined model definition file, which contains a set of steps to run the KoAlpaca 5.8B model as a model service.

    Tutorial: Model Service with Backend.AI Model Service

    In this tutorial, we'll actually use Backend.AI to service a KoAlpaca 5.8B model quantized to 8 bits.

    Write the API server code

    Write a simple API server to serve the model.

    import os
    from typing import Any, List
    
    from fastapi import FastAPI, Response
    from fastapi.responses import RedirectResponse, StreamingResponse, JSONResponse
    from fastapi.staticfiles import StaticFiles
    import numpy as np
    from pydantic import BaseModel
    import torch
    from transformers import pipeline, AutoModelForCausalLM
    import uvicorn
    
    URL = "localhost:8000"
    KOALPACA_MODEL = os.environ["BACKEND_MODEL_PATH"]
    
    torch.set_printoptions(precision=6)
    
    app = FastAPI()
    
    model = AutoModelForCausalLM.from_pretrained(
        KOALPACA_MODEL,
        device_map="auto",
        load_in_8bit=True,
    )
    
    
    pipe = pipeline(
        "text-generation",
        model=model,
        tokenizer=KOALPACA_MODEL,
    )
    
    
    class Message(BaseModel):
        role: str
        content: str
    
    
    class ChatRequest(BaseModel):
        messages: List[Message]
    
    
    BASE_CONTEXTS = [
        Message(role="맥락", content="KoAlpaca(코알파카)는 EleutherAI에서 개발한 Polyglot-ko 라는 한국어 모델을 기반으로, 자연어 처리 연구자 Beomi가 개발한 모델입니다."),
        Message(role="맥락", content="ChatKoAlpaca(챗코알파카)는 KoAlpaca를 채팅형으로 만든 것입니다."),
        Message(role="명령어", content="친절한 AI 챗봇인 ChatKoAlpaca 로서 답변을 합니다."),
        Message(role="명령어", content="인사에는 짧고 간단한 친절한 인사로 답하고, 아래 대화에 간단하고 짧게 답해주세요."),
    ]
    
    
    def preprocess_messages(messages: List[Message]) -> List[Message]:
        ...
    
    
    def flatten_messages(messages: List[Message]) -> str:
        ...
    
    
    def postprocess(answer: List[Any]) -> str:
        ...
    
    
    @app.post("/api/chat")
    async def chat(req: ChatRequest) -> StreamingResponse:
        messages = preprocess_messages(req.messages)
        conversation_history = flatten_messages(messages)
        ans = pipe(
            conversation_history,
            do_sample=True,
            max_new_tokens=512,
            temperature=0.7,
            top_p=0.9,
            return_full_text=False,
            eos_token_id=2,
        )
        msg = postprocess(ans)
    
        async def iterator():
            yield msg.strip().encode("utf-8")
    
        return StreamingResponse(iterator())
    
    
    @app.get("/health")
    async def health() -> Response:
        return JSONResponse(content={"healthy": True})
    
    
    @app.exception_handler(404)
    async def custom_404_handler(_, __):
        return RedirectResponse("/404.html")
    
    
    app.mount(
        "/",
        StaticFiles(directory=os.path.join(KOALPACA_MODEL, "..", "chatbot-ui"), html=True),
        name="html",
    )
    

    Create a model definition file

    Create a model definition file for your API server.

    models:
      - name: "KoAlpaca-5.8B-model"
        model_path: "/models/KoAlpaca-Ployglot-5.8B"
        service:
          pre_start_actions:
            - action: run_command
              args:
                command: ["pip3", "install", "-r", "/models/requirements.txt"]
          start_command:
            - uvicorn
            - --app-dir
            - /models
            - chatbot-api:app
            - --port
            - "8000"
            - --host
            - "0.0.0.0"
          port: 8000
          health_check:
            path: /health
            max_retries: 10
    

    In a session of the model service, model storage is always mounted under the /models path.

    Prepare model storage

    Add the model API server code you wrote, the model definition file, and the KoAlpaca model to your model storage.
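
    The resulting model storage might be laid out roughly as follows (a sketch; the model definition file name itself is an assumption, while the other entries come from the model definition and API server code above):

    /models
    ├── model-definition.yaml        # the model definition file
    ├── chatbot-api.py               # the FastAPI app referenced as chatbot-api:app
    ├── requirements.txt             # installed by the pre_start_actions step
    ├── chatbot-ui/                  # static files mounted by the API server
    └── KoAlpaca-Ployglot-5.8B/      # model weights (the model_path above)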

    Create a model service

    With both the model file and the model definition file ready, you can now start the Backend.AI Model Service. The Model Service can be created using the backend.ai service create command in the Backend.AI CLI. The arguments accepted by service create are almost identical to the backend.ai session create command. After the image to use, you pass the ID of the model storage and the number of inference sessions to initially create.

    Using backend.ai service info, you can check the status of the model service and the inference sessions belonging to the service. You can see that one inference session has been successfully created.

    Use the inference API

    You can use the backend.ai service get-endpoint command to see the inference endpoint of a created model service. The inference endpoint keeps a single unique value from the time a model service is created until it is removed. If a model service is served by multiple inference sessions, AppProxy distributes requests across them.

    Restricting access to the inference API

    If you want to restrict who can access the inference API, you can enable authentication for the inference API by starting the model service with the --public option removed. Authentication tokens can be issued with the backend.ai service generate-token command.
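
    For illustration only, a client call to the chat API defined earlier might look like the sketch below. The endpoint URL, the Authorization header scheme, and the role string are assumptions: take the endpoint from backend.ai service get-endpoint, issue the token with backend.ai service generate-token, and note that the accepted role values depend on the (elided) preprocess_messages implementation.

    import requests

    ENDPOINT = "https://<your-inference-endpoint>"  # from `backend.ai service get-endpoint`
    TOKEN = "<token>"                               # from `backend.ai service generate-token`

    payload = {"messages": [{"role": "질문", "content": "안녕하세요!"}]}

    # The header scheme below is an assumption for illustration;
    # check how your AppProxy deployment expects the token to be passed.
    resp = requests.post(
        f"{ENDPOINT}/api/chat",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=60,
    )
    print(resp.text)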

    Scaling inference sessions

    The backend.ai service scale command allows you to change the scale of inference sessions belonging to the model service.

    Closing thoughts

    So far, we've learned about Backend.AI Model Service and how to actually deploy a model service with the Model Service feature. Backend.AI Model Service is targeted for general availability in Backend.AI 23.03. We're working hard to make the Model Service feature publicly available in the near future, so stay tuned.

    ---

    [^1]: Available from Backend.AI Enterprise.

    This post is automatically translated from Korean

    30 May 2023

  • 23.03: March 2023 Update

    By Lablup

    23.03: March 2023 update

    We're excited to announce version 23.03.0, the first major release of Backend.AI for 2023. Some features will continue to be rolled out in subsequent updates.

    Specifically in this update:

    • Support for the 'inference' service with a new computation session type.
    • Support for 'model' management with a new storage folder type.
    • Support for managing storage capacity on a per-user and per-project basis.
    • Significant improvements to FastTrack's pipeline versioning and UI.

    Backend.AI Core & UI (23.03)

    • Added model management and inference session management capabilities.
      • More advanced inference endpoint management and network routing layers will be added in subsequent updates.
    • The codebase has been updated to be based on Python 3.11.
    • Introduced React components to the frontend and leveraged Relay to introduce a faster and more responsive UI.
    • Full support for cgroup v2 as an installation environment, starting with Ubuntu 22.04.
    • Updated the vfolder structure to v3 for storage capacity management on a per-user and per-project basis.
    • Kernel and sessions are now treated as separate database tables, and the state transition tracking process has been improved to work with less database load overall.
    • Improved the way the agent displays the progress of the image download process when running a session.
    • Improved the display of GPU usage per container in CUDA 11.7 and later environments.
    • Scheduling priority can be specified by user and project within each resource group.
    • Supports two-factor authentication (2FA) login based on one-time password (TOTP) to protect user accounts.
    • Support for users to register their own SSH keypair for session access.
    • Supports user interfaces for Graphcore IPUs and Rebellions ATOM devices.

    Backend.AI Forklift (23.03)

    • Added Dockerfile templates and advanced editing capabilities.
    • Support for creating container images for inference.
    • Extended image management capabilities to work with the Harbor registry.

    Backend.AI FastTrack (23.03)

    • Storage folder contents can be viewed directly from the FastTrack UI.
    • Improved session state synchronisation with Core to event-based.
    • You can set the maximum number of iterations for a pipeline schedule.
    • If a task fails to execute, the pipeline job is automatically cancelled instead of waiting.
    • Added pipeline versioning. You can track the change history of your pipeline and recall its contents at a specific point in time to continue working on it.
    • You can modify pipelines in YAML format directly through the code editor.

    Development and research framework support

    • Supports TensorFlow 2.12, PyTorch 1.13
    • Support for NGC (NVIDIA GPU Cloud) TensorFlow 22.12 (tf2), NGC PyTorch 22.12, NGC Triton 22.08
    • Added python-ff:23.01 image, which provides the same libraries and packages as Google Colab

    In addition to what we've listed above, we've included many bug fixes and internal improvements.
    Stay tuned for more to come!

    This post is automatically translated from Korean

    31 March 2023

  • Concurrent React Changed Everything: Distinguishing Renders That Aren't Rushed

    By Jongeun Lee

    Backend.AI's MLOps platform, FastTrack, uses React 18. We will explore the differences between rushed and non-rushed renders enabled by the Concurrent Renderer in React 18.

    The Concurrent feature in React, initially introduced as Async Rendering at JSConf Iceland in 2018, was not fully integrated until React 18 in 2022. As you might expect from this timeline, the Concurrent Renderer is the biggest and most significant change in React 18. Even though the renderer has been updated, React developers can still run code written for versions before React 18 with minimal changes. It is possible to build user interfaces with React without any knowledge of the Concurrent Renderer. However, understanding the Concurrent Renderer and its applications can simplify the complexities in your mind during React development, enabling you to create user interfaces (UI) that provide an enhanced user experience (UX). This article will not delve into the inner workings of the Concurrent Renderer. Instead, it will focus on defining what the Concurrent Renderer is and how it can transform the mindset of React developers, which is crucial for those creating applications with React.

    To summarize the content of this article, here's what you need to know:

    Because of the Concurrent Renderer,

    • Component rendering can be interrupted.
    • Parts of the tree can be rendered even when they are not visible on the screen.
    • This allows React developers to distinguish between non-rush renders like never before.

    “React components are abstractly pure functions.”
    React components are actually created as JavaScript functions. (You can also create it as a class, although this is generally not advised in most situations.) A function generates an output based on the provided input. Changing the input can alter the output, hence it is necessary to execute the function again to generate a new result. (A pure function consistently returns the same output for identical inputs.)

    What are the inputs and outputs of a React component?
    The inputs of a React component are known as properties, or 'props', which the component receives as a function. The outputs are the React elements that are returned by the function.

    Is state via hooks also an input?
    'hooks' can be conceptually understood as inputs to a function. Similar to React props, they act as triggers that prompt a re-render when their values change, leading to variations in the output of our React component.

    Now, back to the topic of rendering.

    Component rendering can be interrupted.

    The essence of Concurrent React lies in the ability to interrupt rendering, a feature unavailable before React 18 (except in experimental form). In previous versions, when a React component began rendering, all JavaScript operations were blocked until the rendering completed. This means that if the rendering function takes a long time, the event handler that handles the user's click cannot run until the element is returned. However, with React 18, rendering can now be interrupted.

    const A = ({ count }) => {
      return (
        <div>
          <span>{count}</span>
          <B/>
          <C/>
        </div>
      );
    };
    
    const B = () => {
      const [text, setText] = useState("");
      return (
        <div>
          <input value={text} onChange={(e) => setText(e.target.value)} />
          <D text={text} />
        </div>
      );
    };
    
    const C = () => {
      return <span>C</span>;
    };
    
    const D = ({ text }) => {
      verySlowFunction(text); //Consider this function that takes a few seconds to compute.
      return <span>D</span>;
    };
    

    Before React 18, rendering component A necessitated the rendering of components B and C, and component D had to be rendered as part of B. No other JavaScript operations could be performed until A's return value, a React element, was returned. The component tree that A returned was rendered as a single block, and it was not possible to interrupt A's rendering once it had begun.

    In Concurrent React, it is possible to interrupt the rendering process. Why is it necessary to interrupt rendering? You can think of the following:

    • When the current render in progress is no longer valid(stale)
      • For example, consider the situation in the code above where A is rendering with a count prop of 1. Before this render completes, count changes to 2, triggering a new render request for A with the value 2. The rendering result for 1 becomes obsolete because it no longer reflects the latest value. By halting the render for 1 and promptly beginning the render for 2, you can present the user with the latest value more quickly.
    • When you have something you want to do before the ongoing render updates the screen you want to show.
      • When a user event occurs during rendering, it's possible to halt the ongoing render and give precedence to the event handler for an immediate response.

    These are all cases where you're improving the UX by stopping rendering that component so it can do something else.

    It is possible to render sections of the tree that are not visible on the display.

    Concurrent React enables you to render components corresponding to specific screen areas separately, in addition to what is currently visible on the screen. This feature allows the existing render to remain visible and functional while independently rendering a future screen update in advance, swapping it in once rendering is complete. Concerns may arise about rendering more than necessary and reducing usability. Yet, thanks to the Concurrent Renderer, this separate rendering process can be halted at any moment, ensuring it does not disrupt user interactions. Ultimately, this capability can enhance the user experience.

    So far, we've seen two features of the Concurrent Renderer, and now we'll see how they are utilized to “distinguish between non-rush renders”.

    Distinguish between non-rush renders

    Examples of rushed and non-rushed renders
     
    Consider the experience of visiting your website for the first time via a browser. What's the most urgent thing you need to do when you're faced with a white, blank screen? The most critical action is to display your site's content promptly. If the screen remains white for an extended period, users may not wait and leave. Therefore, it's essential to prioritize rendering the initial content quickly.
     

    On the left sidebar of your homepage, there is a set of menus for navigation. If a user intends to select Menu A but accidentally selects Menu B, and then attempts to select Menu A again while Menu B is still loading, the screen for Menu B will complete rendering before the screen for Menu A appears.
     

    If the user pressed Menu B and then immediately pressed Menu A, rendering the screen for Menu A is more urgent than rendering the screen for Menu B, because the screen for B is no longer valid.

    As a React developer, you can inform React about non-rush renders by specifying which input changes that trigger a render are not pressing. The hooks that facilitate this for developers are useDeferredValue and useTransition. Both APIs, introduced in React 18, serve to defer non-rush rendering. We will examine these two hooks individually to grasp the distinctions between them.

    useDeferredValue: Separate using a changed input value

    It is used by components that use a specific value and want to handle changes to that specific value in a non-rush manner.

    function App() {
      const [text, setText] = useState('');
      return (
        <>
          <input value={text} onChange={e => setText(e.target.value)} />
          <SlowList text={text} />
        </>
      );
    }
    

    The example code above is one of the useDeferredValue examples from beta.reactjs.org.

    In this scenario, text serves as a state variable; thus, any changes to text will cause the App to re-render. The same text is also passed as a prop to both <input> and <SlowList>. Consequently, when text is modified, it initiates a re-render of App, and as part of this process, both input and SlowList will update to reflect the new text. However, if SlowList has a lengthy render time, the user's input will not appear until the rendering is fully completed, regardless of how fast the user types.

    In this scenario, input represents the user's keyboard input, which renders quickly, while SlowList is derived from the user's input and renders more slowly. We can use useDeferredValue to generate deferredText, which is rendered in a non-rushed manner, while text itself triggers a rushed render.

    function App() {
      const [text, setText] = useState('');
      const deferredText = useDeferredValue(text);
      return (
        <>
          <input value={text} onChange={e => setText(e.target.value)} />
          <SlowList text={deferredText} />
        </>
      );
    }
    

    In this manner, when the text value changes, deferredText immediately retains the previous text value. Concurrently, deferredText undergoes a separate offscreen rendering with the new value. Only after this rendering is complete, both text and deferredText update to the latest value. The rendering of deferredText is not a rushed process and can be halted.

    If there are successive non-rushed render requests for the same component, the earlier non-rushed render is cancelled, provided it has not finished, and rendering of the most recent change begins. For instance, with text, if a user types 'A' followed by 'B' in quick succession into an empty input field, the render for 'A' will start. If 'B' is entered before the render for 'A' finishes, the render for 'A' stops and the render for 'AB' begins.

    useTransition: Separate using a function that changes the input

    Previously, we discussed how both useTransition and useDeferredValue help manage non-urgent renderings. Now, let's explore the distinctions between the two and delve into useTransition.

    :warning: CAUTION

    To clarify the distinction, the example of useDeferredValue has been altered to demonstrate useTransition. It's important to note that useTransition is not compatible with input, as it necessitates synchronous updates. For an explanation of this limitation, refer to the Troubleshooting section on the useDeferredValue page at beta.reactjs.org.

    function App() {
      const [text, setText] = useState("");
      const [isPending, startTransition] = useTransition();
      return (
        <>
          <button
            onClick={(e) => {
              startTransition(() => setText((v) => v + "a"));
            }}
          >
            a key
          </button>
          <SlowList text={text} />
        </>
      );
    }
    

    Different things:

    Whereas useDeferredValue uses the value itself (text) to mark a render as non-urgent, useTransition uses setText, the function that changes the value and triggers the render. Even in situations where you don't have access to text itself, having access to setText is enough.

    Because the change to text happens inside startTransition, it cannot be displayed immediately. A separate render starts for the updated text, but since it is a separate process, the currently visible render does not see the updated value; it only knows, through isPending, that the separate render is in progress. In short, useTransition delays the state change itself, while useDeferredValue postpones certain renders based on the already-changed state.

    Common things:

    If multiple non-rush render requests for the same component are made through startTransition, the initial render—similar to useDeferredValue—will be canceled if it's still ongoing, and a new render with the latest value will commence.

    Wrapping

    React 18's Concurrent Renderer introduces the ability to "distinguish between non-rush renders." Utilizing useTransition and useDeferredValue, it allows for updates to complex structures without compromising usability. Prior to React 18, achieving such seamless usability demanded significant development work. Now, with the streamlined process of "distinguishing between non-rush renders," developers can offer users a smooth user experience.

    29 January 2023

  • Introducing FastTrack: Backend.AI MLOps Platform

    By Jihyun Kang

    Introducing FastTrack, the MLOps platform of Backend.AI. FastTrack allows you to organize each step of data preprocessing, training, validation, deployment, and inference into a single pipeline, and makes it easy to customize each step as you build that pipeline. In this article, we'll explain why you need an MLOps platform, how Backend.AI FastTrack came to be, and what makes it unique.

    Rise of MLOps Platforms

    Over the past few years, the IT industry, as well as most industries undergoing digital transformation, has been working hard to adopt AI to make meaningful predictions from scattered data and respond to rapidly changing markets. To make good use of AI in this process, you need to handle many stages, such as model training and optimization, hardware selection that accounts for data I/O, and model version management. The concept of MLOps (Machine Learning Operations) emerged from this need. If you are unfamiliar with the concept, we recommend skimming our 'MLOps series' before reading this article.

    FastTrack: History

    In 2019, we added the Backend.AI pipeline as a beta release to address the demand for DevOps pipelines. We developed and tested the ability to simplify the process of creating and managing complex pipelines, and to operate unidirectional pipelines that split into two or more paths in the middle. However, with the rise of the MLOps concept and the proliferation of various pipeline solutions such as AirFlow, MLFlow, and KubeFlow, we shifted our development direction to integrating and supporting open source pipeline tools instead of developing pipeline features as a full-fledged feature.

    Meanwhile, AI development pipelines became increasingly complex, and it became clear that open-source MLOps pipeline tools could not meet the diverse needs of our users. At this point, we decided to revive the pipeline feature of Backend.AI. While revitalizing and prototyping the Backend.AI pipeline, we changed direction toward an MLOps pipeline solution that works with the Backend.AI cluster but stands independently, so that we could directly address our users' requests.

    After this colorful history, Lablup's AI/MLOps solution was named 'FastTrack', after the lanes in airports and logistics that expedite passenger or customs clearance. FastTrack became available with Backend.AI 22.09 and is still being refined to meet our customers' standards.

    FastTrack: What it is

    FastTrack is a machine learning workflow platform that enables users to tailor multiple work units based on Backend.AI clusters and execute them as a Directed Acyclic Graph (DAG). Users can run sessions for each stage of the machine learning pipeline, linked through pre- and post-relationships, allowing them to integrate steps like data preprocessing, training, validation, deployment, monitoring, and optimization into a unified workflow as needed. This means users can more efficiently build and reuse models by structuring sessions into workflows and automatically scheduling them after each phase, rather than manually crafting them in a conventional Backend.AI cluster.

    FastTrack: Structure and features

    FastTrack categorizes workflow templates as pipelines, executed workflows as pipeline jobs, the units of work within a workflow as tasks, and the units of work actually being executed as task instances. The following flowchart outlines the step-by-step progression of work within FastTrack.

    Pipeline

    A pipeline is a structured collection of data and tasks, represented by a Directed Acyclic Graph (DAG). In setting up an AI workflow, constructing a pipeline prompts FastTrack to create a specific folder in your Backend.AI cluster dedicated to pipelines. This setup facilitates the monitoring of training progress via artifacts. FastTrack streamlines the modification of task relationships with an intuitive drag-and-drop interface, allowing for immediate visual feedback in the form of a schematic flow and verification through a YAML file. Moreover, managing pipelines in YAML format allows for easy export, import, and sharing among users.

    Pipeline Job

    Within the FastTrack GUI, the progress of job units is indicated by the color of the nodes associated with each unit. As with pipelines, the information and relationships of the constituent task instances are managed in YAML. Upon completion of all task instances, the pipeline job's status is displayed as either successful or failed.

    Task

    A task is the smallest unit of execution in a pipeline, and it allows you to allocate resources by purpose. For example, a task dedicated to model training can be allocated more GPU resources than a preprocessing task, making resource usage more efficient. You can also specify the execution environment: based on the images supported by the Backend.AI cluster, you can use images such as TensorFlow, PyTorch, Python 3.x, NGC TensorFlow, NGC PyTorch, etc. without a Docker build process. You can also mount virtual folders created by the Backend.AI cluster on a per-task basis as needed.

    Task Instance

    Task instances are the physical objects created when a pipeline job is created, based on the task information that makes up the pipeline. Executing an AI workflow means that the task instances that make up the pipeline job are executed according to the specified preceding and following relationships. Task instances currently have a 1:1 correspondence with sessions in the Backend.AI cluster, so the state of a session equates to the state of a task instance, but we plan to expand beyond sessions to other units of execution in the near future.

    Wrap up

    So far, we've covered MLOps with an introduction to FastTrack, the Backend.AI MLOps platform. The latest release of Backend.AI FastTrack is version 22.09 (as of November 2022). Our development plans include a range of user-friendly features, such as debugging pipelines, creating dependencies between pipelines, optimizing resource usage for tasks, and providing support for GitHub-based model and data repositories. True to Lablup's vision of empowering anyone to develop and use AI models from anywhere, FastTrack will simplify the process of building automated models. We look forward to your interest in our future endeavors.

    29 November 2022


© Lablup Inc. All rights reserved.