Tag : Backend.AI

  • Model Variant: Easily Serving Various Model Services

    By Jihyun Kang

    Introduction

Imagine a scenario where you need to train an AI model for research and produce results. In that case, your job is mostly to wait for the model to learn the data correctly. But if you are building a service that 'utilizes' AI, things get more complicated. Everything becomes a concern, from how to deploy various models into the system to what criteria to use for scaling under load. You can't casually modify a production environment with live users to answer these questions. If an accident occurs while scaling the production environment up or down, the consequences can be severe, and recovering takes time. You also can't expect the same patience from customers using your service that researchers show while waiting for model training to finish. Besides the engineering difficulties, there are cost challenges: serving models obviously costs money, and users incur expenses even while training models, since resources are being consumed.

    However, there's no need to worry. Many well-made models already exist in the world, and in many cases, it's sufficient for us to take these models and serve them. As those interested in our solution may already know, Backend.AI already supports various features you need when serving models. It's possible to increase or decrease services according to traffic, and to serve various models tailored to users' preferences.

But the Backend.AI team doesn't stop there. We have enhanced the model service introduced in Backend.AI 23.09 so that various models can be served easily. In this post, we'll explore how to serve various models simply and conveniently.

This post introduces features that let you serve various types of models more conveniently. Since we already explained model service in the 23.09 release announcement, we'll skip the detailed background. If you're unfamiliar with Backend.AI's model service, we recommend reading the following post first: Backend.AI Model Service Preview

    Existing Method

| Requirement | Existing Method | Model Variant |
|-------------|-----------------|---------------|
| Writing model definition file (model-definition.yaml) | O | X |
| Uploading model definition file to model folder | O | X |
| Model metadata needed | O | △ (Some can be received automatically) |

Backend.AI model service required not only the model metadata needed to run, but also a model definition file (model-definition.yaml) containing, in a prescribed format, the commands to be executed when serving the model. The service execution order was as follows: write the model definition file, upload it to a model-type folder so it can be read, and mount that model folder when starting the model service. An API server would then be launched that, according to the model definition file, automatically forwards the end user's input to the model and returns the response. However, this method had the disadvantage that you had to access the file every time the model definition needed to be modified. Also, because the model path was fixed in the model definition file, it was cumbersome to write a different model definition file each time the model changed.
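For reference, a minimal model definition file might have looked roughly like the following. This is an illustrative sketch only; the field names and structure are assumptions, and the authoritative schema is described in the Backend.AI model service documentation:

```yaml
# Illustrative sketch of a model-definition.yaml (assumed field names,
# not the authoritative schema). Note that the model path and the serving
# command are fixed here, which is why changing the model meant editing
# and re-uploading this file.
models:
  - name: "llama-2-7b-hf"
    model_path: "/models/llama-2-7b-hf"
    service:
      start_command:
        - python
        - -m
        - vllm.entrypoints.openai.api_server
        - --model
        - /models/llama-2-7b-hf
      port: 8000
```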

    The Model Variant introduced this time is a feature that allows you to serve models immediately by inputting a few configuration values or without any input at all, using only model metadata without a model definition file. Model Variant supports command, vLLM, and NIM (NVIDIA Inference Microservice) methods. The methods of serving and verifying model service execution are as follows.

    Basically, model service requires metadata of the model to be served. Download the model you want to serve from Hugging Face, where you can easily access model metadata. In this example, we used the Llama-2-7b-hf model and Calm3-22b-chat model from Hugging Face. For how to upload model metadata to the model folder, refer to the "Preparing Model Storage" section in the previous post.

    Automatically Serving Model from Built Image (Command Method)

The first method, command mode, embeds the command that would otherwise be the command section of the model definition file directly into the execution image. After specifying the command to execute in the image's CMD, you build the image, and it runs immediately without any further input when you actually serve the model. Command mode does not support health checks, which verify whether the service is running properly, so it is better suited to quickly standing up a prototype service than to large-scale serving. The execution method is as follows:

1. On the start screen, select Llama-2-7b-hf in the Model Storage To Mount item to mount the model folder containing the metadata of the model to be served, and select Predefined Image Command in the Inference Runtime Variant item.

Activate the Open To Public switch if you want the model service to be accessible without a separate token.

Model service start screen: mounting model metadata and selecting CMD

2. Select the environment to serve. Here, we use vllm:0.5.0 and allocate CPU 4 Core, Memory 16 GiB, NVIDIA CUDA GPU 10 fGPU as resources.

Model service start screen: selecting the execution environment and allocating resources

3. Finally, select the cluster size and click the start button. The cluster size is set to single node, single container.

Model service start screen: selecting the cluster size and starting

    If the service has been successfully launched, the service status will change to HEALTHY and the endpoint address will appear.

Model service detail screen

    Verifying the Service

    If the service has been launched normally, check the service model name with the cURL command:

    curl https://cmd-model-service.asia03.app.backend.ai/v1/models \
    -H "Content-Type: application/json"
    

Checking the model name

    Now, let's send input to the service with the cURL command and check the response:

For model services run in command mode, the model name is already defined in the image, so after checking the model name, you must enter it as the value of the model key when sending a request.

    curl https://cmd-model-service.asia03.app.backend.ai/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
    "model": "image-model",
    "prompt": "San Francisco is a",
    "max_tokens": 7,
    "temperature": 0}'
    

Model service request result screen
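For reference, the same completions call can also be made from Python using only the standard library. This is an unofficial sketch; the endpoint URL and model name below are simply the ones from the curl example above:

```python
import json
import urllib.request

# Endpoint taken from the curl example above.
ENDPOINT = "https://cmd-model-service.asia03.app.backend.ai"

def build_completion_request(model: str, prompt: str,
                             max_tokens: int = 7,
                             temperature: float = 0.0) -> urllib.request.Request:
    """Build an OpenAI-compatible POST request to /v1/completions."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{ENDPOINT}/v1/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request("image-model", "San Francisco is a")
# Actually sending the request requires the service to be running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```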

    Serving Models in vLLM Mode

    The vLLM mode is similar to the command method introduced earlier, but various options entered when running vLLM can be written as environment variables. The execution method is as follows:

    How to Run

    1. On the start screen, mount the model folder for the model service to be served and select vLLM in the Inference Runtime Variant item.

Model service start screen: mounting model metadata and selecting vLLM

2. Select the environment to serve. As with command mode, select vllm:0.5.0. You can allocate the same resources as before, but this time we'll allocate CPU 16 Core, Memory 64 GiB, NVIDIA CUDA GPU 10 fGPU.

Model service start screen: selecting the execution environment and allocating resources

3. Finally, select the cluster size and enter the environment variable BACKEND_MODEL_NAME. This value corresponds to vLLM's --model-name option and becomes the model value that users specify when sending requests to the service. Once the input is complete, click the START button to create the service.

Model service start screen: selecting the execution environment and allocating resources

    Likewise, if the service has been successfully launched, the service status will change to HEALTHY, and the endpoint address where the service is launched will appear.

Model service detail screen

    Verifying the Service

Let's send input to the service with the cURL command and check the response. This time, enter the BACKEND_MODEL_NAME value you set earlier as the model value:

    curl https://vllm-calm3-22b-chat.asia03.app.backend.ai/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
    "model": "vllm-model",
    "prompt": "初めて会う日本人ビジネスマンに渡す最高の挨拶は何でしょうか?",
    "max_tokens":  200,
    "temperature": 0
    }'
    

Model service request result screen

    Serving Models in NIM Mode

    To run NIM, you need an API key issued from an account that can access NGC's NIM model registry. For how to obtain the key value, please refer to the following content: NVIDIA Docs Hub : How to get NGC API Key

NIM (NVIDIA Inference Microservice) mode is also similar to command mode, but it must be run with an image that has NVIDIA's NIM-supporting model server built in. Also, the NGC API key is needed when loading the model. Assuming everything is ready, let's start the model service.

    How to Run

    1. On the start screen, select an empty model type folder to cache the metadata to be received from NIM, and select NIM in the Inference Runtime Variant item.

Model service start screen: mounting the model folder and selecting NIM

2. Select the environment to serve. Here, we use ngc-nim:1.0.0-llama3.8b and allocate CPU 8 Core, Memory 32 GiB, NVIDIA CUDA GPU 15 fGPU as resources.

Model service start screen: selecting the execution environment and allocating resources

3. Finally, select the cluster size and enter the default path /models for the environment variable HF_HOME. Then add the NGC_API_KEY environment variable and enter the issued key value. Once the input is complete, click the CREATE button to create the service.

Model service start screen: selecting the cluster size, entering environment variables, and starting

When using NIM, the first execution may take some time because model metadata is downloaded from the repository. You can check the progress by viewing the container logs of the in-service routing session on the session page.

Routing session corresponding to the model service

Container log screen showing logs of data being received from NIM

As with the command and vLLM modes, if the service has been successfully launched, the service status will change to HEALTHY. Let's use the endpoint address of the launched service to send a request as follows and check the response.

    Verifying the Service

    from openai import OpenAI
    
    client = OpenAI(
      base_url = "https://nim-model-service.asia03.app.backend.ai/v1",
      api_key = "$YOUR_NGC_API_KEY"
    )
    
    completion = client.chat.completions.create(
      model="meta/llama3-8b-instruct",
      messages=[
          {        
            "role":"user", 
            "content":"Hello! How are you?"
          },
          {
            "role":"assistant",
            "content":"Hi! I am quite well, how can I help you today?"
          },
          {
            "role":"user",
            "content":"Can you write me a song?"
          }],
      temperature=0.5,
      top_p=1,
      max_tokens=1024,
      stream=True
    )
    
    for chunk in completion:
      if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
    

Model service request result screen

    Conclusion

    The Model Variant feature will be of great help to researchers and companies aiming to provide actual services with already trained models. Based on a powerful resource management system and support for various AI accelerators such as NVIDIA GPU, AMD ROCm, TPU, Graphcore IPU, Furiosa Warboy, Rebellions ATOM, Hyperaccel LPU, etc., Backend.AI now provides an integrated environment that can easily deploy services beyond simply training models. Try serving your desired AI model anytime with Backend.AI!

    11 July 2024

  • Backend.AI Open Source Contribution Guide (Jul. 2024)

    By Daehyun Sung

    Backend.AI's core engine utilizes many open-source software components and is itself being developed as open source. When enterprise customers encounter inconveniences or bugs while using Backend.AI, we provide issue tracking and support through customer support and technical support channels. However, those using the open-source version can also directly contribute to the project.

    There are mainly two ways to contribute: creating an issue that explains in detail what problem exists or what improvement ideas you have, and making a pull request to directly contribute code changes. In this post, we'd like to introduce a few things that are good to know in advance for more effective and faster communication with the development team during the contribution process.

    Introduction to GitHub Repositories

    As seen in the previous post Backend.AI Open Source Contribution Guide, Backend.AI was originally developed with repositories divided into the Backend.AI meta-repository and several sub-components.

    However, from version "22.06", Backend.AI has changed to a mono-repository using Pants.

    This transition in the development workflow has greatly helped in resolving package compatibility issues that often occur across multiple individual components, creating a more convenient development environment.

    Pants is a fast, scalable, and user-friendly build system.

    If you want to submit an issue, the first place to look is the Backend.AI repository. The repository named Backend.AI integrates multiple packages using Pants. This repository is not only for project management but also contains code that actually performs functions. All issues related to Backend.AI's server and Client SDK are managed here, and links to other projects are provided through the README.

When creating a new issue, two default templates are provided: bug report and feature request. Following these templates is not strictly required, but considering the complexity of Backend.AI and its various usage environments, writing your report according to them makes it easier to share the context needed to identify the problem.

    Introduction to Mono-repository

From version "22.06", Backend.AI has changed to a mono-repository using Pants. A mono-repository is a project with an integrated code base that shares the basic dependencies, data models, features, tooling, and processes of multiple projects. It integrates multiple projects that were previously maintained separately into a single repository.

    Introduction to Pants

    Backend.AI is installed using Pants as a build system. For more details about Pants, please check the following link Pants - Getting started.

    Relationship between Backend.AI Components

    Figure 1. Relationship structure between major Backend.AI components

    Figure 1 is a diagram showing the relationship between the major components of Backend.AI.

    Figure 2. Major component structure of Backend.AI and examples of execution methods

    Figure 2 is a diagram showing the major component structure of Backend.AI, and shows the location of the source code of the components and execution commands.

    Most of Backend.AI's components are managed in the Backend.AI repository, and the source code is located in the src/ai/backend/ subdirectory. Briefly, summarizing what each component does by directory:

    • src/ai/backend/manager (Manager): Core service that monitors computational resources of the entire cluster, handles session scheduling, provides user authentication and APIs for session execution
    • src/ai/backend/agent (Agent): Service installed on compute nodes to manage and control containers
    • src/ai/backend/common (Common): Library of functions and data formats commonly or frequently used across multiple server-side components
    • src/ai/backend/client (Client SDK for Python): Official command-line interface and library providing API wrapper functions and classes for Python
    • src/ai/backend/storage (Storage Proxy): Service that allows user web browsers or Client SDK to directly perform large-volume I/O from network storage
    • src/ai/backend/web (Web Server): HTTP service that provides routing for Web UI and SPA (single-page app) implementation and web session-based user authentication
    • src/ai/backend/webui (Web UI & Desktop App): Web component-based implementation of the actual UI that users interact with. Also supports Electron-based desktop app builds. Also includes a local lightweight version of the app proxy that allows users to directly access application ports running inside containers.

    Backend.AI Version Management Method

    Backend.AI has major releases every 6 months (March and September each year), with post-release support provided for about 1 year. Therefore, the version number follows the CalVer format in the form of YY.0M.micro (e.g., 20.09.14, 21.03.8). However, due to the version number normalization of the Python packaging system, the version of the wheel package is in the format YY.MM.micro without zero-padding in the month part (e.g., 20.9.14, 21.3.8). Some detailed components with version update cycles different from the main release cycle follow the general SemVer format.
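The zero-padding rule above can be sketched in a few lines of Python. This mirrors the normalization behavior described, and is not Backend.AI code:

```python
def normalize_calver(version: str) -> str:
    """Drop zero-padding from each numeric part of a CalVer string,
    as Python packaging version normalization does."""
    return ".".join(str(int(part)) for part in version.split("."))

print(normalize_calver("20.09.14"))  # -> 20.9.14
print(normalize_calver("21.03.8"))   # -> 21.3.8
```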

    Essential Packages to Install Before Development

Before installing Backend.AI, you need to install Docker, Docker Compose v2, and other prerequisites. When you install Backend.AI using the scripts/install-dev.sh script in the repository, it checks whether Docker, Docker Compose v2, etc. are installed and guides you through the installation process. If Python, pyenv, Docker, or npm are not installed, install the essential packages as follows. For Python, use the Python 3 provided by your system packages. Then install pyenv and pyenv-virtualenv.

    $ curl https://pyenv.run | bash
    

    Then, you can install Docker and Docker Compose v2 as follows:

macOS

On macOS, Docker Desktop automatically installs Docker and Docker Compose v2.

    Ubuntu, Debian, CentOS, Fedora Core, and other Linux environments

    For Ubuntu, Debian, CentOS, Fedora Core, you can automatically install Docker and Docker Compose v2 using the following script:

    $ sudo curl -fsSL https://get.docker.io | bash
    

    After installing Docker, if you get a unix:///var/run/docker.sock access permission error when running without sudo, like this:

    $ docker ps
    Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json": dial unix /var/run/docker.sock: connect: permission denied
    

    If such a permission problem exists, set the permissions using the following command:

    $ sudo usermod -aG docker $(whoami)
    $ sudo chown root:docker /var/run/docker.sock
    

    After that, reboot and run docker run hello-world to confirm that it runs normally.

    $ docker run hello-world
    Unable to find image 'hello-world:latest' locally
    latest: Pulling from library/hello-world
    c1ec31eb5944: Pull complete
    Digest: sha256:94323f3e5e09a8b9515d74337010375a456c909543e1ff1538f5116d38ab3989
    Status: Downloaded newer image for hello-world:latest
    
    Hello from Docker!
    This message shows that your installation appears to be working correctly.
    
    To generate this message, Docker took the following steps:
    1. The Docker client contacted the Docker daemon.
    2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
        (amd64)
    3. The Docker daemon created a new container from that image which runs the
        executable that produces the output you are currently reading.
    4. The Docker daemon streamed that output to the Docker client, which sent it
        to your terminal.
    
    To try something more ambitious, you can run an Ubuntu container with:
    $ docker run -it ubuntu bash
    
    Share images, automate workflows, and more with a free Docker ID:
    https://hub.docker.com/
    
    For more examples and ideas, visit:
    https://docs.docker.com/get-started/
    

    Instead of changing the group ownership of /var/run/docker.sock with chown, changing the permissions of the /var/run/docker.sock file to 666 allows other users in the group to access it without rebooting.

    sudo chmod 666 /var/run/docker.sock
    

    However, setting the permissions of the /var/run/docker.sock file to 666 creates a security vulnerability.

    You can check if Docker Compose v2 is installed as follows:

    $ sudo docker compose version
    Docker Compose version v2.28.1
    

If nvm is not installed, install it as described at the following link: nvm - install & Update Script.

    $ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
    

    After installing nvm, install the latest LTS version of Node.js and set it up for use.

    $ nvm install --lts
    $ nvm use --lts
    

    How to Install the Development Environment

To actually contribute code, you need to open a pull request, and unless it's a simple typo fix or documentation-only contribution, you need to modify and run the code yourself, so setting up your own development environment is essential. Backend.AI has a structure in which multiple components work together, so installation is not complete just by cloning one repository and creating a Python virtual environment with an editable install[1]. At a minimum, you need to set up and run the manager, agent, storage-proxy, webserver, and wsproxy to get a working GUI, and for the CLI environment you need to install the client SDK separately. Redis, PostgreSQL, and etcd servers also need to run for the manager to operate and communicate with the agent.

    If you have installed the essential packages introduced earlier and want to install multiple components of Backend.AI, you can install them using the scripts/install-dev.sh script in the repository. This script does the following:

    • Checks for the installation of pyenv, Python, Docker, npm, etc., and guides the installation method
    • Installs all of these various components in their respective directories
      • At this time, components such as accelerator-cuda, which are necessary for the operation of other components, are additionally installed in an editable state.
• Adds database/etcd fixtures, including default port settings and example authentication keys, so that the components can reach each other
    • Creates and runs PostgreSQL, Redis, etcd services using Docker Compose under the name "halfstack"

When the install-dev script completes successfully, it prints the commands to run the service daemons, such as the manager and agent, along with preconfigured example account information. Following the instructions, use a terminal multiplexer like tmux or screen, or multiple terminal tabs, to run the service daemons in separate shells, and confirm that the hello world example works. Then you're ready to develop and test Backend.AI.

Currently, this method supports Intel (amd64/x86_64) and ARM-based macOS, Ubuntu/Debian/CentOS/Fedora, and other Linux environments where Docker Compose can be installed.

    Usually, when you first use this install-dev script, it often stops due to various errors or pre-check failures and needs to be run again. In this case, you can easily perform the deletion procedure using the scripts/delete-dev.sh script.

    Installing and Uninstalling Backend.AI

    Using these install-dev and delete-dev scripts, you can freely install and uninstall Backend.AI. First, clone the Backend.AI repository.

    $ git clone https://github.com/lablup/backend.ai 
    

    Then install Backend.AI.

    $ cd backend.ai
    $ ./scripts/install-dev.sh 
    

    After the installation is complete, please take note of the result content that appears on the screen.

    If you want to uninstall Backend.AI, run the scripts/delete-dev.sh script from the location where you cloned the Backend.AI repository.

    $ cd backend.ai
    $ ./scripts/delete-dev.sh 
    

    Things to Know Before Contributing

As with most projects managed in distributed version control systems, contributions to Backend.AI should be based on the latest commit of the main branch of the original remote repository, and any conflicts should be resolved before requesting a review. If you've forked the original repository, your fork needs to be kept in sync with the actual original repository.

    Before explaining the method, please refer to the following terminology to help understanding:

    • Original remote repository (upstream): The original Backend.AI repository. All major commit contents are reflected here.
    • Forked original repository (origin): The Backend.AI repository copied to "your" account via GitHub. (Note: Original remote repository != Forked original repository)
    • Code copy (local working copy): The forked repository currently downloaded to your local machine

    Git command branch notation

    • main: The main branch of the current local working copy
    • origin/main: The main branch of the repository (origin) from which I cloned to create my local working copy
    • upstream/main: The main branch belonging to the separately added upstream remote repository

    Workflow concepts

    • At the time of forking, origin/main is created
    • When you clone the forked repository, main is created on your work computer
    • Create a new topic branch from main and proceed with work
• When you push this work branch to origin and create a PR, GitHub automatically targets the original repository that was forked
    • At this point, to synchronize changes to the main of the original repository during work, follow the procedure below

    The method of synchronization is as follows:

• step 1: Add the original remote repository under the name upstream
    $ git remote add upstream https://github.com/lablup/backend.ai
    
• step 2: Fetch the latest commits of the main branch of the original remote repository into your local working copy
    $ git fetch upstream
    
• step 3: Fast-forward the main branch of your local working copy to the latest commits of the original remote repository's main branch
    $ git switch main && git merge --ff upstream/main
    
• step 4: Push the changes made in steps 1 to 3 from your local working copy to origin (the remote repository of your fork)
    $ git push origin main
    

    Now upstream/main and origin/main are synchronized through main.

• step 5: Bring the latest updates into the topic branch you're working on
    $ git switch topic
    $ git merge main
    

When performing this process, if the histories of origin/main and upstream/main diverge and step 5 is performed incorrectly, recovery can become extremely difficult. Also, the CI tools Backend.AI uses to test PRs are set up to find the common ancestor commit when diffing upstream/main and origin/topic, so if you reuse the name main for a topic branch, these tools will not work properly. Whenever possible, give each new branch a fresh name.

    How to Write a Pull Request

    To send a specific bug patch or feature implementation as a PR, you first need to upload it to GitHub. There are several methods, but the following is recommended:

    • Fork the repository on the GitHub repository page. (If you have direct commit permissions, it's recommended to create a branch directly without forking.)
    • In your local working copy, use git remote to point to that forked repository.
      • Following convention, it's good to name Lablup's original repository as upstream and the newly created forked repository as origin.
      • If you installed with install-dev first instead of cloning after forking, the original repository will be origin, so you need to rename the remote.
    • Create a new branch.
      • For branch names, prepend fix/ for bug fixes or feature/ for feature additions or improvements, and summarize the topic in kebab-case. (e.g., feature/additional-cluster-env-vars, fix/memory-leak-in-stats) Other prefixes like docs/, refactor/ are also used.
      • It's possible to write a PR by directly modifying the main branch, but during PR review and modification periods, if additional changes occur on the main branch, you'll have to rebase or merge every time you synchronize with the upstream repository, which is more troublesome. Having a separate branch allows you to rebase and merge when you want.
    • Commit changes to that branch.
      • Commit messages should follow the conventional commit style as much as possible. Like branch names, use title prefixes such as fix:, feat:, refactor:, docs:, release:, and for Backend.AI specifically, setup: for dependency-related commits, repo: for cases like gitignore updates or repository directory structure changes. You can also indicate affected components in parentheses. (e.g., fix(scripts/install-dev): Update for v21.03 release)
      • Commit messages should be written in English.
    • Push the branch and write the PR.
      • For PRs with separate issues, you should write the issue number in the PR body. If you want to reference an issue in the repository, look at the number in the issue link like https://github.com/lablup/backend.ai/issues/401 and write it in the format #401, and GitHub will automatically link it.
      • There's no specific format required for the PR body, but it's good to write what problem it's solving, what principle it's written on, or what tools or libraries were used, and why those choices were made.
      • PR titles and bodies can be written in English or Korean.
      • When you create a PR, you'll see various automated inspection tools in action. In particular, you must sign (register your GitHub username) the CLA (contributor license agreement) for the review to proceed.
      • You must pass all basic coding style and coding rule checks for each language. (For Python code, flake8, mypy, etc.)
      • In repositories with a changes directory and towncrier check, when you create a PR and receive its number, create a file named changes/<PR number>.<modification type> and write a one-line English sentence summarizing the changes in Markdown syntax. (For relatively simple content or if there's a separate existing issue, this content can also serve as the PR body.) Modification types include fix, feature, breaking, misc, deprecation, doc, and parts that differ by project are defined in each repository's pyproject.toml. You can refer to files like CHANGELOG.md or CHANGES.md to see how existing messages were written.
    • Proceed with the review process.
      • When completed, the reviewer usually organizes the commit log in a squash-merge form to create a single commit for merging.
      • Therefore, don't feel burdened about making frequent small modification commits during the review process, and feel free to make commits whenever you think of something.

    It's even better to use tools like GitHub CLI, SourceTree, GitKraken along with git commands.
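As an aside, the branch naming convention described above is regular enough to check mechanically. The following validator is a hypothetical illustration, not part of Backend.AI's tooling:

```python
import re

# Hypothetical validator for the branch naming convention described above:
# a known prefix, a slash, then a kebab-case topic summary.
BRANCH_RE = re.compile(r"^(fix|feature|docs|refactor)/[a-z0-9]+(?:-[a-z0-9]+)*$")

def is_valid_branch_name(name: str) -> bool:
    return BRANCH_RE.fullmatch(name) is not None

print(is_valid_branch_name("feature/additional-cluster-env-vars"))  # True
print(is_valid_branch_name("Fix/MemoryLeak"))                       # False
```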

    Summary

    We've looked at Backend.AI's overall component structure and repository structure, how to install the development environment, and how to write pull requests. I hope this guide has helped you take one step closer to Backend.AI's source code.


    [1]: An "editable" installation refers to a method of installing a Python package to directly look at the source directory, allowing changes to be immediately reflected when importing the package just by modifying the source directory without editing inside the site-packages directory.

    10 July 2024

  • FastTrack Guide: Receiving Notifications for Model Training Results

    By Jeongseok Kang

From the now-classic AlexNet to the various large language models (LLMs) garnering attention these days, we train and evaluate many models to suit our needs. Realistically, though, it's difficult to gauge when training will end until we've run a model several times and gained experience.

    Backend.AI's excellent scheduling minimizes GPU idle time and allows model training to run even while we sleep. Then, what if we could receive the results of a model that finished training while we were asleep? In this article, we'll cover how to receive model training results as messages using the new feature of FastTrack and Slack.

    This article is based on the Backend.AI FastTrack version 24.03.3.

    Before We Start

    This article does not cover how to create a Slack App and Bot. For detailed information, we recommend referring to the official documentation.

    Creating a Pipeline

    Let's create a pipeline for model training. A pipeline is a unit of work used in FastTrack. Each pipeline can be expressed as a collection of tasks, the minimum execution unit. Multiple tasks included in a single pipeline can have interdependencies, and they are executed sequentially according to these dependencies. Resource allocation can be set for each task, allowing flexible resource management.

    When an execution command is sent to a pipeline, it is executed by replicating the exact state at that point, and this unit is called a pipeline job. Multiple pipeline jobs can be run from a single pipeline, and each pipeline job is generated from a single pipeline.

    Create Pipeline button

    Click the Create Pipeline button ("+") at the top of the pipeline list.

    Creating a Pipeline

    You can specify the pipeline's name, description, location of the data store to use, environment variables to be applied commonly across the pipeline, and the method of pipeline initialization. Enter the name "slack-pipeline-0", and then click the "Create" button at the bottom to create the pipeline.

    Creating Tasks

    Dragging a Task

    You can see that the new pipeline has been created. Now let's add some tasks. From the task template list (Task templates) at the top, drag and drop the "Custom Task" block onto the workspace below.

    Entering the Task's Actions

    A task details window appears on the right where you can enter the task's specifics. You can give it a name like model-training-task to indicate its role, and set it to use the pytorch:1.11-py38-cuda11.3 image for model training. Since actual model training can take a long time, for this example, we'll have it execute the following simple commands:

    # Pause for 3 seconds to increase the execution time.
    sleep 3
    # Create a `result.txt` file in the pipeline-dedicated folder. Assume this is the accuracy of the trained model.
    echo "0.$RANDOM" > /pipeline/outputs/result.txt
    

    Creating a Task (1)

    Finally, enter the resource allocation for the task, and then click the "Save" button at the bottom to create the task.

    Dragging Another Task

    You can see that the model-training-task has been created in the workspace. This time, to create a task that reads the value from the result.txt file saved earlier and sends a Slack notification, drag another "Custom Task" block into the workspace below.

    Entering the Task-level Environment Variable `SLACK_TOKEN`

    For this task, set the name to slack-alarm-task, and enter the following script to send a notification to Slack:

    pip install slack-sdk
    python -c '
    import os
    from pathlib import Path
    from slack_sdk import WebClient

    # Task-level environment variable set in the FastTrack UI.
    SLACK_BOT_TOKEN = os.environ.get("SLACK_TOKEN")
    # Injected automatically by FastTrack for every pipeline job.
    JOB_ID = os.environ.get("BACKENDAI_PIPELINE_JOB_ID")

    def main():
        # The upstream task wrote result.txt to its output folder,
        # which is mounted here as /pipeline/input1.
        result = Path("/pipeline/input1/result.txt").read_text().strip()
        client = WebClient(token=SLACK_BOT_TOKEN)
        client.chat_postMessage(
            channel="#notification",
            text="Pipeline job({}) finished with accuracy {}".format(JOB_ID, result),
        )

    if __name__ == "__main__":
        main()
    '
    

    The code above uses two environment variables: SLACK_TOKEN and BACKENDAI_PIPELINE_JOB_ID. Environment variables in the BACKENDAI_* format are values automatically added by the Backend.AI and FastTrack systems, where BACKENDAI_PIPELINE_JOB_ID represents the unique identifier of the pipeline job in which each task is running.

    The other environment variable, SLACK_TOKEN, is a task-level environment variable. This feature allows you to manage and change various values without modifying the code.
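As an illustrative sketch (the helper function is ours, not part of FastTrack), a task script can read these variables defensively so that a missing token fails fast with a clear error instead of failing later inside the Slack API call:

```python
import os

def get_required_env(name: str) -> str:
    """Return an environment variable's value, failing fast if it is unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Required environment variable {name} is not set")
    return value

# SLACK_TOKEN would be the task-level variable configured in the FastTrack UI;
# BACKENDAI_PIPELINE_JOB_ID is injected automatically for every pipeline job.
```

Calling get_required_env("SLACK_TOKEN") at the top of the notification script surfaces a misconfigured task immediately with a readable message.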

    Creating a Task (2)

    After allocating appropriate resources for the slack-alarm-task, click the "Save" button at the bottom to create the task.

    Adding Task Dependencies

    Adding Task Dependencies

    Now there are two tasks (model-training-task and slack-alarm-task) in the workspace. Since slack-alarm-task should be executed after model-training-task completes, we need to add a dependency between the two tasks. Drag the mouse from the bottom of the task that should run first (model-training-task) to the top of the task that should run later (slack-alarm-task).

    Running the Pipeline

    Running the Pipeline (1)

    You can see an arrow connecting from model-training-task to slack-alarm-task, indicating that the dependency has been added. Now, to run the pipeline, click the "Run" button in the top right.

    Running the Pipeline (2)

    Before running the pipeline, you can review a brief summary of it. After confirming the presence of the two tasks, click the "Run" button at the bottom.

    Running the Pipeline (3)

    The pipeline was successfully run, and a pipeline job was created. Click "OK" at the bottom to view the pipeline job information.

    Pipeline Job

    The pipeline job was created successfully. You can see that the model training (model-training-task) has completed, and slack-alarm-task is running.

    Receiving Slack Notification

    Slack Notification (1)

    Slack Notification (2)

    You can see that the pipeline job execution results have been delivered to the user via Slack. Now we can sleep soundly.

    30 May 2024

  • 24.03: Release Update

    By Lablup

    24.03, the first Backend.AI release of 2024, is out. This update brings significant improvements to the UI and user experience, making Backend.AI easier to install and operate. It includes the following updates since 23.09:

    Backend.AI Core & WebUI

    • We've made installing Backend.AI even easier with the addition of a TUI-based installer, which automates the download process and makes it easy for users to install and get started with Backend.AI.

    • New: Added trash functionality to vfolders. Files in unused vfolders are now moved to the trash instead of being deleted outright, and are removed permanently through a separate complete-deletion step.
    • New: Added a state value to indicate the status of each vfolder.
    • New: Added Backend.AI Model Store, where you can now store, search, and easily utilize various machine learning and deep learning models.
    • Added metadata for indexing to vfolders to utilize indexes instead of full directory scans for queries.
    • Improved system resource utilization by introducing a limit policy for session creation based on the number of pending sessions and requested resource slots. This new resource policy option helps filter and set the maximum value for resource presets and custom resource sliders in the Session Launcher.
    • We've added dark themes to the WebUI, so users can now choose from a variety of options to suit their personal preferences.

    • Polished the WebUI by fixing misaligned line breaks, stray whitespace, and announcements overflowing their containers, along with stability improvements such as session name validation.
    • The Session Launcher for Model Serving also limits UI input so that values cannot exceed the allocated resources.
    • Added an allowAppDownloadPanel option to config.toml to hide the WebUI app download panel, as another UI customization option.

    Backend.AI is constantly evolving to provide a more powerful and user-friendly experience while supporting a variety of environments in the ever-changing AI ecosystem. We look forward to seeing what's next! Make your AI accessible with Backend.AI!

    29 March 2024

  • 2023 Winter Intern in Lablup

    By Byeongjo Kim

    Overview

    I applied to the Open Source Contribution Academy (hereinafter referred to as the Contribution Academy) hosted by OpenUp, and worked as a fall intern at Lablup Inc. (hereinafter referred to as Lablup) from November to December for 8 weeks. Afterwards, I extended it for an additional 8 weeks from January to February, working a total of 16 weeks.

    After being discharged from the military, I have written about the experiences I had while working at Lablup, my first company as a developer.

    Motivation for Applying

    Even before the Contribution Academy, I was interested in Lablup, and coincidentally, I had an opportunity to contribute through the Contribution Academy.

    During the Contribution Academy period, I worked on resolving issues and refactoring the webui of Backend.AI.

    While participating in the Contribution Academy, I felt a lot of affection, interest, and enjoyment towards Backend.AI, and I began to think that I wanted to continue contributing after the program ended.

    Lablup happened to provide an opportunity to work there in conjunction with the Contribution Academy, so I applied without hesitation.

    Onboarding

    For the first 3 weeks of the internship, I underwent an onboarding process.

    I went through implementing a RealTime Chat, setting up the Backend.AI environment, and then the Pebble Seminar in that order.

    RealTime Chat

    This was the first assignment to become familiar with the core side of Backend.AI's code. I implemented a real-time chat app using Python, utilizing the aiohttp, aioredis, and asyncio libraries.

    Since there was no requirement to persist the chat contents, I used the in-memory database Redis.

    When a user enters a chat room, they subscribe to that room's channel; when they send a message, it is published to all subscribers, that is, the other users in the same room.

    RealTime Chat in action

    While I was able to handle Python at a basic level from preparing for coding tests, I had no experience using libraries like aiohttp, asyncio, and aioredis, so it took me some time to understand and grasp the concepts.

    However, this assignment helped me a lot in understanding the core side of Backend.AI's code, and it was good to be able to study new libraries.

    Setting up the Backend.AI environment

    Since I had already installed Backend.AI during the Contribution Academy, setting up the environment during the internship period wasn't too difficult.

    However, I was well aware that installing Backend.AI is not easy, as I had encountered many errors and failures while trying to install it during the Contribution Academy, and the other person doing the internship with me also had a lot of difficulties during the installation process.

    Since I had already experienced those failures and knew the solutions, I was able to help, and we were able to install it quickly and move on to other tasks.

    While setting up the environment, I also configured a virtual machine and VPN, and set up the environment on a virtual machine as well, so that I could work even if there were problems on my local machine. After setting up the configuration on the virtual machine, I mainly used the local for development during the subsequent work, and the virtual machine as a test server. The company's VM Farm, which allows for easy management and configuration of virtual machines, made it great for setting up development and testing environments.

    Pebble Seminar

    After completing the RealTime Chat and setting up the Backend.AI environment, I prepared a short seminar based on understanding the structure and code of Backend.AI. I was tasked with presenting on GraphQL and Relay, which are used in the Backend.AI WebUI.

    While I had experience with GraphQL, I felt that my knowledge was lacking for presenting in front of others, and Relay was a new library to me, so I was quite worried about preparing for the Pebble Seminar and read through many documents to prepare. First, I familiarized myself with the concepts by reading the official documentation for GraphQL and Relay, and then analyzed the Backend.AI code one by one to understand how they were applied and functioning in Backend.AI.

    Pebble Seminar preparation materials

    By analyzing the code while preparing for the Pebble Seminar, I naturally came to understand the code running in the WebUI, and this greatly helped me in finding and resolving issues during the subsequent work.

    Resolving Backend.AI issues and implementing features

    After completing the onboarding, I finally joined the frontend team and started resolving Backend.AI issues and implementing features. I had a coffee chat with the frontend lead to define the categories of work for this internship period:

    1. Creating a Table Column Setting component
    2. Researching E2E Testing
    3. Daily tasks

    During the 8-week internship period from November to December, I created a total of 19 Pull Requests, of which 18 were merged, and 1 is still under review. Since I had experience finding and assigning issues during the Contribution Academy, I had less difficulty with it, and because I enjoyed resolving issues, I was able to resolve more issues than others.

    Feature Addition PRs

    1. Implementing Table Columns Setting

    https://github.com/lablup/backend.ai-webui/pull/2071

    This was one of the issues I aimed to work on during the internship period. It was the only component that I conceived and implemented from scratch during the fall internship, rather than refactoring an existing component. Before implementing this feature, I thought it was a simple task that I could finish quickly, but things turned out differently from my expectations.

    First, I realized that I had been thinking too simplistically about creating new components. Even though I had designed and considered the props to be received before creating components in the past, through this issue, I felt that when creating a new component, I should invest more time and effort while considering scalability. I also realized that I should pay more attention to how other sites are designed and what features are applied.

    Table Columns Setting

    2. Adding service endpoint and owner columns to the Model Serving page table

    https://github.com/lablup/backend.ai-webui/pull/2047

    Previously, when creating a model service, users had to go to the detail page to check the endpoint, which is a frequently used feature. So there was a request to add the endpoint to the table column. Additionally, since the admin account can see services of users in the same group, there was a suggestion to have a column showing the service owner. Since the GraphQL fields for retrieving this data were already implemented, I added the fields to the query to fetch the endpoint and service owner data, and then added columns to the table to display the data. The owner column is only shown for admin accounts.

    Implementation view. Screen for admin account (left) and user account (right)

    3. Disabling log button for sessions in CANCELLED state

    https://github.com/lablup/backend.ai-webui/pull/2045

    The CANCELLED state means that the container has never been created or failed to be created. Previously, the log button was enabled even for sessions in the CANCELLED state, and if a user clicked the log button, the agent could not find the container information, resulting in a 500 error. In this PR, I made it so that the log button is disabled for sessions in the CANCELLED state, preventing users from clicking it.

    Session in TERMINATED state (session 1) and CANCELLED state (session 2)

    4. Testing and creating a custom hook for dark mode

    https://github.com/lablup/backend.ai-webui/pull/2120

    Before implementing dark mode, I found components with hardcoded colors and implemented a custom hook named useThemeMode for applying dark mode. When creating the custom hook, I tried to use the useLocalStorageState hook from ahooks, but contrary to my expectations that it would automatically manage states with the same key value, I found that they operated independently. To handle states with the same key value automatically updating when the value changes, I added a custom hook named useLocalStorageGlobalState, and then used that to create the useThemeMode custom hook for setting the dark mode.

    Bug fix PR

    1. Allowing signup without invitation token

    https://github.com/lablup/backend.ai-webui/pull/2046

    In the config.toml, when the allowSignupWithoutConfirmation option is set to true, users can sign up without an invitation token. However, when a user clicked the sign up button, an error occurred because the token value was undefined. In this PR, I modified it so that if allowSignupWithoutConfirmation is true, the token variable is not used. Additionally, previously users could modify other input values while the core was processing the data after clicking the sign up button, and the previous data remained when the dialog was closed and reopened. In this PR, I made it so that other input values cannot be entered while data is being processed, and the previous input values are cleared when the dialog is closed.

    2. Displaying the correct screen for the selected sub-tab on the user management page

    https://github.com/lablup/backend.ai-webui/pull/2055

    On the user management page, there are sub-tabs for displaying active users and deactivated users. However, if a user navigated to another page and then returned to the user management page, even though the sub-tab was set to inactive, the screen displayed the list of active users, causing confusion. This PR resolved that issue by remembering the current sub-tab when navigating to another page, and displaying the appropriate screen for that sub-tab when returning to the user management page.

    Before (left) and after (right) the fix

    Extending the internship

    As I resolved issues, the 8-week period flew by, and it was time to wrap up the fall internship.

    Working at Lablup as my first job after being discharged from the military was an important period for me. During the internship at Lablup, I was able to experience my strengths and weaknesses, what skills I needed to further prepare, and how other developers work. The 2-month period felt very short, and since I had enjoyed working so much during that time, I wanted to continue working. So I expressed my desire to extend the internship to the lead, and we agreed to extend it for another 8 weeks until February. During the fall internship, I had thought a lot about my weaknesses, but I couldn't find my strengths. So, I started with these 3 personal goals:

    1. Find my strengths during this period
    2. Read the documentation whenever I have time
    3. Work even harder, leaving no regrets

    Resolving issues and implementing features during the extended period

    The work during the extended period did not differ much from before. Without the onboarding process and installation, I could focus more on resolving issues.

    Feature Addition PRs

    1. Refactoring ErrorLogList

    https://github.com/lablup/backend.ai-webui/pull/2131

    I refactored the ErrorLog List, which was previously implemented using Lit elements, to React. This feature was the most satisfying issue for me, as I personally use it frequently after the refactoring.

    Before (left) and after (right) refactoring

    During the refactoring, new Search and Error filter features were added.

    Added Search feature (left) and Filter feature (right)

    2. Modal drag functionality

    https://github.com/lablup/backend.ai-webui/pull/2179

    I used the React-draggable library to add functionality for modals to be dragged. By adding the Draggable prop to a modal, it can be applied to any modal that requires dragging.

    Draggable Modal

    By clicking the icon on the left side of the modal title and moving the mouse, the modal can be moved to the desired position on the screen.

    Currently, it is applied to the modal for viewing user information on the user management page and the modal for changing user settings, where you can check it.

    While it is not being used in many places yet, I think this PR will be useful as more components and features are added.

    Bug fix PR

    1. Modifying Vfolder invitation permissions

    https://github.com/lablup/backend.ai-webui/pull/2143

    There was an issue where the user permissions for group vfolders were not being updated. When trying to modify the permissions, the items were not displayed or selectable properly in the select dropdown. Previously, the items were rendered with plain option tags; I changed them to mwc-list-item and adjusted the overflow option to resolve the issue.

    Before (left) and after (right) the PR

    2. ResourceGroupSelect extending outside the card

    https://github.com/lablup/backend.ai-webui/pull/2166

    There was an issue where the ResourceGroupSelect value would be displayed outside the card if it was too large.

    Symptoms of the issue

    To resolve this issue, I set the max-width CSS on the Select component so that it cannot exceed the width of the card.

    Additionally, in this PR, I added a Search feature to the Select component, for which I used the useControllableValue hook from ahooks. The useControllableValue hook helps manage props from either the parent or itself. While it was a simple PR, it took me longer than expected since it was my first time using useControllableValue. I was able to resolve this issue with the help of the lead and another intern.

    3. Key pair list not showing when clicking the generate & manage key pair button on the summary page

    https://github.com/lablup/backend.ai-webui/pull/2194

    On the summary page, there are buttons for "Generate New Key Pair" and "Manage Key Pairs." However, when clicking these buttons, instead of showing the key pair list, it simply navigated to the user management page, displaying the user list.

    "Generate New Key Pair" and "Manage Key Pairs" buttons on the summary page

    When clicking the "Generate New Key Pair" button (left) and when clicking "Manage Key Pairs" (right)

    While this issue was not critical, I resolved it because I had experienced a lot of confusion when I first used Backend.AI and didn't fully understand the key pair feature.

    After resolving this issue, I could confirm that the key pair list was displayed on the screen as intended.

    After resolving the issue, when clicking the "Generate New Key Pair" button (left) and when clicking "Manage Key Pairs" (right)

    Completing the Internship

    Thanks to the Contribution Academy, which started from a friend's recommendation after being discharged from the military, I was able to contribute at Lablup for an extended period. Since I had no previous internship or project experience at other companies, this was a very important period for me as I was starting anew after being discharged. It was great that I could experience my strengths and weaknesses, the skills I lack, and the culture of an open-source company at Lablup. How many companies make you want to go to work every day with their horizontal structure, free atmosphere, pleasant working environment, and good equipment? Although I worked at Lablup for 4 months, I felt like I wanted to go to work every day, and if it was Lablup, I could work at a company while doing interesting and desirable work for a long time. Over the 4-month period, I also developed a fondness for Backend.AI, the service provided by Lablup, and I plan to attend the conference hosted by Lablup every year whenever possible to see its advancements and technologies.

    Lablup Office

    This post was also published on the author's personal blog: https://gee05053.tistory.com/32 (automatically translated from Korean).

    11 March 2024

  • Backend.AI Meets Tool LLMs : Revolutionizing AI Interaction with Tools - Part 3

    By Sergey Leksikov

    Part 3. Making your own API retriever and question-answering system locally, with a few lines of code, without training or serving an LLM

    Previously, in Part 1 we talked about Tool LLMs and their usage, and Part 2 demonstrated how to run Gorilla LLM on Backend.AI. In Part 3, we discuss the case where no GPU is available, but we still want help and assistance regarding our API.

    Suppose we have Backend.AI and want to get information about the Backend.AI REST API and functional API in a more interactive, question-answering style. The REST API is described in this documentation: https://docs.backend.ai/en/latest/manager/rest-reference/index.html

    Figure 1. Backend.AI REST API Documentation

    In addition, Backend.AI REST API documentation can be exported into openapi.json format:

    Figure 2. Backend.AI openapi.json

    Another source of the Backend.AI API is the functional API defined in the Backend.AI client. We want to know how to interact with Backend.AI and which parts of the code are responsible. The client repository manages and interacts with the cloud and computing environment:

    Steps to make a Question Answering API system

    1. Set up the Backend.AI client locally from https://github.com/lablup/backend.ai/tree/main/src/ai/backend/client on our local PC and create a new directory bai-dev/src/ai/backend/client/gpt_api_client

    Figure 3. The directory location of gpt_api_client

    2. In the vector_data directory, create two subdirectories: data1/, which will store the REST API documentation (openapi.json), and data2/, which will store selected Backend.AI client files over which we want to do API question answering.

    Figure 4. Overview of data directories with openapi.json and client function code files

    3. Install the LlamaIndex Python library: pip install llama-index. Note that LlamaIndex is not related to Meta's LLaMA language model; it provides data structures and methods for efficiently processing and storing documents for retrieval.

    4. Convert our API and code files into embedding vectors and store them in a vector database with LlamaIndex. We'll use the Jupyter Notebook interactive environment, which is also integrated into VSCode on a local PC.

    Figure 5. Jupyter Notebook interactive environment. Loading openapi.json from data/ directory. Then asking questions from query engine over a vector index.

    5. Vectorize the data2/ directory containing our code functions.

    Figure 6. Load data2/ directory with code files from B.AI Client. Then vectorize them into index and create a question answering engine.

    We can save both indexes using Python's Pickle or Joblib libraries, which are commonly used for serializing objects so they can be loaded back into the system later: joblib.dump(rest_api_index, "rest_api_index.joblib") and joblib.dump(functional_index, "functional_index.joblib")
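As the post notes, either Pickle or Joblib works; using only the standard library, the save/load round-trip looks like this (the dict below is a stand-in for a real LlamaIndex index object):

```python
import pickle
from pathlib import Path

# Stand-in for a vectorized index built with LlamaIndex; a real index
# object would be serialized the same way.
rest_api_index = {"source": "openapi.json", "vectors": [[0.1, 0.2]]}

# Persist the index so a separate server process can load it later.
Path("rest_api_index.pkl").write_bytes(pickle.dumps(rest_api_index))

# ...later, e.g. in server.py on the Backend.AI session:
loaded = pickle.loads(Path("rest_api_index.pkl").read_bytes())
assert loaded == rest_api_index
```

joblib.dump/joblib.load follow the same pattern and are often preferred for objects containing large numeric arrays.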

    6. The Jupyter Notebook environment already lets us ask questions and get responses interactively. Additionally, we can load the saved vector indexes into a FastAPI server and answer questions over the web. In Part 2, we set up a computational session with Gorilla LLM, and from that demo we still have a computational session running a FastAPI server.

    7. Transfer the files rest_api_index.joblib and functional_index.joblib to the api_helper/ vFolder in the Backend.AI Cloud session.

    8. In server.py, load the vector indexes and define the query engines.

    Figure 7. server.py definition of index files and query engine.

    9. For each query engine, specify a FastAPI endpoint.

    Figure 8. Code snippets for REST and Functional API retrieval

    10. Test the server response from your local PC using the curl command. When the server is queried on a specific endpoint, it returns an answer to the user:
    curl -X POST -H "Content-Type: application/json" -d '{"instruction":"Create a new session"}' http://127.0.0.1:8000/rest_api
    

    Figure 9. Command line response from curl command. Example 1

    curl -X POST -H "Content-Type: application/json" -d '{"instruction":"Create a new session"}' http://127.0.0.1:8000/functional
    

    Figure 10. Command line response from curl command. Example 2

    In addition, we can make a web app that receives user input, sends it to the corresponding endpoint, and displays the answer.

    Figure 11. A web app prototype for Question Answering over Backend.AI REST and Functional API. Example 1

    Figure 12. A web app prototype for Question Answering over Backend.AI REST and Functional API. Example 2

    Conclusion

    In Part 3, we demonstrated how to locally create a question-answering system using the open-source Python library LlamaIndex, which converts our documents and Backend.AI code into vector form. Question answering can be done interactively in a Jupyter Notebook environment, which Visual Studio Code supports via plugins. We then moved those vector indexes to a Backend.AI Cloud environment where the API-tuned Gorilla LLM model is served, and implemented an API question-answering web app to assist users over the network.

    Reference:

    • LLama Index. https://docs.llamaindex.ai/en/stable/

    Demo video for Backend.AI API Helper and Gorilla LLM:

    30 January 2024

  • Backend.AI Meets Tool LLMs : Revolutionizing AI Interaction with Tools - Part 2

    By Sergey Leksikov

    Part 2. Backend.AI Gorilla LLM model serving

    Previously, we covered Tool LLM capabilities and usage. In this article, we give a step-by-step demonstration of how to run the Gorilla LLM model on the Backend.AI Cloud using the Backend.AI Desktop app.

    Figure 1. The Backend.AI Desktop app installed on macOS

    1. Press the start button to open the session creation menu.

    Figure 2. New session start interactive screen

    2. Select the NGC-Pytorch 23.07 image.

    3. Attach a vFolder, which is a working directory containing the model files (for example, the api_helper/ directory).

    Figure 3. Attaching vFolder screen

    4. Select the resource amounts: 128 GB RAM and 5 fGPU.

    Figure 4. Resource selection screen

    5. Select the Visual Studio Code Desktop environment.

    Figure 5. IDE environment selection screen

    6. In the /home/work/api_helper/ directory, create a server.py file.

    7. Create a requirements.txt file.

    Figure 6. Content of requirements.txt file

    To install the requirements, run: pip install -r requirements.txt

    Figure 7. Executing install requirements command

    8. In server.py, define the tokenizer and model loader using the transformers library.

    Figure 8. Code snippet of server.py

    9. Define the server IP address and port number.

    Figure 9. Definition of server IP address and port number

    10. To run the model, type: python server.py

    Figure 10. Starting a server.py

    11. Access the created server.

    VSCode automatically creates a port tunneling session from your device to the Backend.AI Cloud server. You can check the server status by accessing the localhost address; the request will be tunneled to the Backend.AI Cloud. In addition, you may define other custom endpoints according to your needs.

    Figure 11. The server run log

    Figure 12. VSCode Port Forwarding configuration

    Figure 13. Accessing the root of a server

    Up to this point, we created a computation session on the Backend.AI Cloud, attached the api_helper/ vFolder containing requirements.txt and server.py, and started our FastAPI server, where the Gorilla LLM is downloaded from the HuggingFace repository and loaded into the session's memory, exposed through an /inference API endpoint.

1. API inference testing. To test the API inference of the Gorilla LLM, you can send a curl request from your local computer's command line:
    curl -X POST -H "Content-Type: application/json" -d '{"text":"Object detection on a photo. <<<api_domain>>>:"}' http://127.0.0.1:8000/inference
    

    Figure 14. An example of curl request

    Figure 15. The GPU workload on a server after receiving the request

    Figure 16. The server logs of receiving the request and printing the result

1. Defining the UI web app. You may use any web technology to build a UI app that displays the result in a friendlier way. For example, you can place HTML and JavaScript files in a static directory under the root of server.py, then define an endpoint for the web app.

    Figure 17. Example of adding an html web app to a FastAPI server

1. Gorilla LLM web app prototype: an API-tuned Large Language Model for API question answering and code generation.

    Figure 18. Gorilla LLM web app prototype. Example 1

    Figure 19. Gorilla LLM web app prototype. Example 2

    Conclusion

Despite some difficulties in serving Gorilla LLM, an LLM tuned on your own API has great potential, since the model can provide more recent results with more accurate parameters and function calls than commercial large models, and can be useful in tasks such as question answering over an API, code autocompletion, and API code execution.

    Limitations and difficulties:

While trying to serve the Gorilla LLM model, the following issues had to be considered:

• The model may generate responses in an unexpected format
• The model may generate different results for the same question
• Parsing and rendering the LLM response takes extra care
• Duplicate sentences and lines need to be eliminated

    29 January 2024

  • Backend.AI Meets Tool LLMs : Revolutionizing AI Interaction with Tools - Part 1

    By Sergey Leksikov

    Part 1. Introduction to LLMs and Tool Interaction

What if future AI technology capabilities were available now? Perhaps, while on your way home from work, you could ask an AI assistant to turn on the air conditioner before your arrival. At the same time, you are planning a vacation, and after narrowing down a few options, you ask an AI model to book the hotel on your behalf. As the model books your trip, you receive a notification from a cloud provider about your deep learning model's training progress. You ask the AI assistant to run another experiment session with a different set of parameters while targeting specific accuracy values. How can such a futuristic scenario be realized today?

This kind of interaction between an LLM and the real world is possible via Application Programming Interfaces (APIs). A Tool Large Language Model (LLM) fine-tuned on an API dataset can respond to a user's query with a specific API call, and that call can invoke a program or function to make a real-world impact. Large Language Models are rising in popularity due to their outstanding capability of generating text in context while also having reasoning ability for problem solving. Their utilization ranges from generating and editing text to serving as a copilot for programmers. How else can LLMs extend their usage beyond these text-generating capabilities?

With Tool LLMs, we are stepping into an era where AI not only understands our requests but can also act on them using a universe of online tools. Tool LLMs push the boundaries of what AI can do with tools via functional and REST APIs.

GPT-4 is currently the state of the art among LLMs, topping most AI benchmarks. Consider this scenario: a GPT-4 model is asked to transcribe an audio file into text in another language. When prompted to use specific APIs, however, GPT-4 may hallucinate and suggest non-existent APIs or provide incorrect arguments, causing function execution to fail and the user's task to go unachieved.

Besides hallucinations and inaccuracies, API documentation and versions are constantly changing, and retraining a general-purpose LLM to keep up with them is costly and impractical. Tool LLMs provide a solution to the hallucination issues of general large models, enabling interaction with the physical world via programmatic interfaces. Tool LLMs are much smaller, making it feasible to retrain them periodically with recent data. In addition, an API documentation retriever module can be added to the model serving pipeline to supplement the model with the most recent API documentation relevant to the user's input query.

To overcome these challenges, researchers have recently proposed two notable open-source methods for enhancing LLMs' tool-use abilities, Gorilla LLM and ToolLLaMA, each having its own advantages and specific use cases. Moreover, both models can be prepared for inference serving on the Backend.AI Cloud.

    What is Tool LLM?

A Tool LLM is an LLM trained on a dataset of user queries and API requests, together with relevant context such as API code usage examples and API description documentation. The response from such an LLM can be executed as code, which means the LLM can interact with various online services and tools: cloud computing providers, Kubernetes, and machine learning and deep learning libraries and repositories such as HuggingFace, TorchHub, and TensorFlowHub.

The main advantage of such a Tool LLM is its ability to accurately generate, for a user query, an API response that can be executed to obtain the results.

    Understanding the Types of API

    An Application Programming Interface (API) is a crucial element in modern computing, serving as a set of rules and protocols for how different software applications or hardware systems can communicate and interact.

    Functional APIs are designed to be invoked through function calls within a programming environment. For instance, machine learning and deep learning libraries like HuggingFace and TensorFlow offer various models that can be loaded into memory and utilized through Functional API calls. These APIs are integral in executing specific functions and operations within the software.

This capability of LLMs to generate code related to an API extends their utility far beyond basic text generation and processing. Tool LLMs can seamlessly integrate with diverse online services and tools, ranging from cloud computing platforms to advanced machine learning libraries. Furthermore, their application is not limited to human queries; they can also be integrated into systems where they interact with other programs or AI agents. This versatility positions Tool LLMs as vital components in complex systems and infrastructures, enhancing their potential for real-world applications.

In the following sections, we'll delve into how Tool LLMs are trained and how they operate. After that, two specific research examples will be covered: Gorilla LLM and ToolLLaMA.

    Tool LLM Training and Inference Workflow

Tool LLM training involves several steps, including setting up an API database, creating a training dataset, model training, and inference.

The API database includes descriptions and relevant code samples. To generate a Self-Instruct training dataset, the API database samples need to be pre-processed into {input user query, API output} pairs. ChatGPT can help automatically generate such a dataset by covering the various scenarios and query complexities humans might ask, from specific cases to general and abstract ones. After the Self-Instruct dataset is generated, the model is trained to make accurate API predictions given a user's input query.
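As a toy illustration of this pre-processing step (not the actual dataset tooling; all names are invented examples), turning an API database entry into a training pair could look like:

```python
# Toy illustration: convert an API database entry into a Self-Instruct
# style {query, API output} training pair. All names are invented.
api_entry = {
    "name": "HuggingFace.summarize",
    "description": "Summarize input text with a seq2seq model.",
    "example": 'HuggingFace.summarize(text=..., model="facebook/bart-large-cnn")',
}

def make_training_pair(entry: dict, user_query: str) -> dict:
    return {
        "query": user_query,          # the instruction a user might give
        "api": entry["example"],      # the API call the model should emit
        "doc": entry["description"],  # context attached during training
    }

pair = make_training_pair(api_entry, "Summarize this article about space exploration.")
```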

For Tool LLM inference, it's crucial that the LLM not only responds with accurate argument parameters but also uses the latest API documentation. Thus, an API document retriever is used, which helps keep the model up to date with the most recent API changes.
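The retriever idea can be illustrated with a toy keyword-overlap search; real pipelines use BM25 or embedding-based retrieval, and the API names below are invented examples.

```python
# Toy illustration of an API documentation retriever: return the doc whose
# words overlap most with the query. Real systems use BM25 or embeddings.
API_DOCS = {
    "HuggingFace.summarize": "summarize long text into a short summary",
    "HuggingFace.speech_to_text": "convert speech audio recording to text",
}

def retrieve(query: str) -> str:
    words = {w.strip(".,?!").lower() for w in query.split()}
    return max(API_DOCS, key=lambda name: len(words & set(API_DOCS[name].split())))

best = retrieve("Convert this speech recording to text.")
```

The retrieved documentation is then prepended to the user query before it is fed to the Tool LLM.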

Figure 1. An overview workflow of Tool LLM training and inference over an API instruction dataset

    Case Studies: Gorilla LLM and ToolLLaMA

    Gorilla

Gorilla is a fine-tuned model based on LLaMA 7B that outperforms GPT-4 in writing API calls. The notable aspects of Gorilla are:

    • It addresses the limitations of current LLMs in generating accurate input arguments for APIs and their tendency to hallucinate incorrect API usage.
    • Gorilla integrates with a document API retriever, allowing it to adapt to real-time changes in documentation, a significant advantage considering how frequently APIs get updated.
    • The authors have developed a dataset called APIBench to evaluate the model's abilities, which includes APIs from HuggingFace, TorchHub, and TensorHub totaling 1600+ APIs.
• Gorilla seems to mitigate hallucination issues and improves the reliability of LLM outputs. Gorilla has also been updated and extended to work with cloud providers such as AWS and GCP and to manage Kubernetes clusters.

    ToolLLaMA

ToolLLaMA is a model fine-tuned on ToolBench, an instruction-tuning dataset for tool use based on the RapidAPI repository. The key points of ToolLLaMA are:

    • ToolBench covers an impressive range of over 16,000 real-world APIs, offering diverse instruction sets and solution paths.
    • The paper proposes a novel Depth-First Search-Based Decision Tree algorithm (DFSDT) to enhance the reasoning capabilities of LLMs such as multiple tool usage and multi-step reasoning.
• ToolLLaMA, fine-tuned on ToolBench, matches the performance of ChatGPT and demonstrates generalization ability on out-of-distribution datasets like APIBench.

Both papers are significant in pushing the boundaries of LLMs' capabilities in real-world tool use by navigating and utilizing a vast array of APIs. This advancement is crucial for practical applications. A comparative summary table is provided below.

    Figure 2. A comparative table between two API tuned LLM

    Synergy between Backend.AI and ToolLLM

Training or serving an LLM requires significant computing resources; in particular, there is huge demand for Graphics Processing Units (GPUs) with large memory capacity and high computational speed.

Backend.AI offers a scalable foundation for building, training, and serving diverse models. Backend.AI includes an on-demand scaling feature for model inference, adding external nodes for serving, and load balancing to optimize the workload. Backend.AI supports vLLM and TensorRT servers, which can be used for high-performance LLM inference. In addition, there is a well-designed, user-friendly interface and the FastTrack pipeline maker tool to create computing environment sessions of various complexities.

    Conclusion

The futuristic scenario that can be realized today, where various AI assistants and agents interact with devices and services, is possible through APIs and Tool LLMs specifically fine-tuned for such interactions. Gorilla LLM and ToolLLaMA offer a good opportunity to incorporate them into complex tasks, and the workflow of how they are trained and served is easy to comprehend. Gorilla LLM can be recommended for machine learning and cloud administration tasks, while ToolLLaMA fits more general API usage and multi-tool, multi-step cases.

There is also an advantage to training your own model on your own API documentation or code, so that you have an LLM that understands your code. Such an LLM can help assist or interact with users who want to get relevant information.

Backend.AI can effectively serve as a backbone for model training and scalable model serving while offering a simple GUI. How to set up such models, with a step-by-step guide, will be explained in the other parts.

    Commonly asked questions:

• Q: What is the source of hallucinations and LLM limitations, and how is it solved in Tool LLMs?
    • A: GPT-4, like other Large Language Models, faces limitations such as hallucinations and inaccuracies, which are primarily due to its training on extensive yet potentially outdated or inaccurate datasets from the internet. These 'hallucinations' refer to instances where the model confidently produces information that's either factually incorrect or not based in reality, a challenge stemming from the nature of its purely text-based training data and not directly from its size or lack of interaction with the physical world. To address these issues, Tool LLMs are being developed with a focus on specialization and frequent updates. They are fine-tuned on specific datasets, like API documentation, enabling direct interaction with real-world systems through programmatic interfaces for more accurate and current information. The retraining frequency of Tool LLMs varies, depending on the application and the pace of change in the relevant field, with updates potentially needed monthly, quarterly, or bi-annually to keep the model up-to-date with the latest trends and information.
    • Q: What are example pairs of user Query and API?
    • A: The example pairs are provided below.
    • User Query: "Summarize this article about space exploration."
    • API Output: HuggingFace.summarize(text="Article text here", model="facebook/bart-large-cnn")
    • User Query: "What is the sentiment of this customer review?"
    • API Output: HuggingFace.analyze_sentiment(text="Customer review text", model="distilbert-base-uncased-finetuned-sst-2-english")
    • User Query: "Identify the objects in this photo."
    • API Output: HuggingFace.image_recognition(image_file="path/to/photo.jpg", model="google/vit-base-patch16-224")
    • User Query: "Convert this speech recording to text."
    • API Output: HuggingFace.speech_to_text(audio_file="path/to/recording.wav", model="facebook/wav2vec2-base-960h")
    • Q: How do the GorillaLLM and ToolLLaMA papers differ in their approach to utilizing API documentation during the training and inference of their models?
    • A: GorillaLLM appends relevant API documentation during training and offers two inference modes, while ToolLLaMA employs Sentence-BERT for fine-tuning embeddings in the API domain. GorillaLLM uses BM25 and GPT-Retriever from LLamaIndex for documentation retrieval, whereas ToolLLaMA uses Sentence-BERT for a similar purpose.
    • Q: How frequently should small API models be retrained, and what role does the API Retriever play in handling changes in API documentation?
    • A: Training small API models annually is reasonable, but monthly retraining for API changes isn't practical. The API Retriever, using up-to-date documentation, can mitigate the need for frequent retraining. Evaluating and benchmarking fine-tuned API models and RAG methods is essential for effectiveness.
    • Q: What is the difference between ToolLLM and RAG systems, and how do they function in the context of LLMs?
    • A: ToolLLM is a model fine-tuned on API documentation, focusing on incorporating knowledge. RAG systems, on the other hand, are algorithms for data chunking, storage, search, re-ranking, and synthesis. They can work independently or in combination to enhance LLM efficiency, especially in handling context limits and knowledge updates.

    Reference:

    • Gorilla: Large Language Model Connected with Massive APIs. https://gorilla.cs.berkeley.edu/
    • ToolLLM: Facilitating Large Language Models To Master 16000+ Real-World APIs. https://github.com/OpenBMB/ToolBench

    28 January 2024

  • Raft Consensus algorithm for Backend.AI: Leader election

    By Jeongseok Kang

High availability (HA) has become an indispensable concept when talking about modern applications. High availability is the ability of an IT system to remain nearly 100% accessible and reliable at all times by eliminating or minimizing downtime^1. Backend.AI, which is developed and serviced by Lablup, also employs various methods to maintain high availability.

    Backend.AI architecture

    Background

    Backend.AI consists of many different components, including managers and agents, storage proxies, and web servers. Each of these components runs as multiple processes in a distributed environment to increase reliability, especially the manager, which is responsible for scheduling session execution and many core functions of Backend.AI. Currently, the manager has an Active-Active HA structure that ensures high availability through load balancing.

    One of the many features of the Backend.AI Manager is event handling. Backend.AI raises various events, such as AgentStartedEvent and DoScheduleEvent, to track the lifecycle of agents and sessions and provide optimal scheduling. For example, when a Backend.AI Agent process runs, it generates an AgentStartedEvent, and the Backend.AI Manager process receives this event and performs a specific action (schedule()). Backend.AI Manager also raises a DoScheduleEvent internally to ensure periodic scheduling. This is where the problem arises. If you are running multiple Backend.AI Manager processes for high availability, having each process raise an event with its own timer adds unnecessary load and can cause the health of the entire system to be unreliable. The Backend.AI Manager implements a GlobalTimer to ensure that only one manager process generates events within the same system. The GlobalTimer uses distributed locks to ensure mutual exclusivity between processes and to ensure that only one process generates events.

    @preserve_termination_log
    async def generate_tick(self) -> None:
        try:
            await asyncio.sleep(self.initial_delay)
            if self._stopped:
                return
            while True:
                try:
                    async with self._dist_lock:
                        if self._stopped:
                            return
                        await self._event_producer.produce_event(self._event_factory())
                        if self._stopped:
                            return
                        await asyncio.sleep(self.interval)
                except asyncio.TimeoutError:  # timeout raised from etcd lock
                    if self._stopped:
                        return
                    log.warn("timeout raised while trying to acquire lock. retrying...")
        except asyncio.CancelledError:
            pass
    

Currently, Backend.AI provides an interface for distributed locks, [AbstractDistributedLock](https://github.com/lablup/backend.ai/blob/2f90d03c4477eda8e0beeabb7fe4b067c56dae09/src/ai/backend/common/lock.py#L33-L44), and as actual implementations we have developed and are using [FileLock](https://github.com/lablup/backend.ai/blob/2f90d03c4477eda8e0beeabb7fe4b067c56dae09/src/ai/backend/common/lock.py#L47-L142), [EtcdLock](https://github.com/lablup/backend.ai/blob/2f90d03c4477eda8e0beeabb7fe4b067c56dae09/src/ai/backend/common/lock.py#L145-L190) based on the [etcd concurrency API](https://etcd.io/docs/v3.5/dev-guide/api_concurrency_reference_v3/), and [RedisLock](https://github.com/lablup/backend.ai/blob/2f90d03c4477eda8e0beeabb7fe4b067c56dae09/src/ai/backend/common/lock.py#L193-L248) based on [Redis Lock](https://redis.io/docs/manual/patterns/distributed-locks/).

    etcd is a distributed, open-source key-value store used to store and manage critical information needed to keep distributed systems running^2, most notably in Kubernetes.

    class AbstractDistributedLock(metaclass=abc.ABCMeta):
        def __init__(self, *, lifetime: Optional[float] = None) -> None:
            assert lifetime is None or lifetime >= 0.0
            self._lifetime = lifetime
    
        @abc.abstractmethod
        async def __aenter__(self) -> Any:
            raise NotImplementedError
    
        @abc.abstractmethod
        async def __aexit__(self, *exc_info) -> Optional[bool]:
            raise NotImplementedError
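A concrete implementation only has to fill in the two async context-manager methods. The sketch below is a trivial in-process stand-in (not one of the real FileLock/EtcdLock/RedisLock implementations) showing how GlobalTimer-style code consumes this interface:

```python
# Sketch only: a trivial in-process implementation of the distributed-lock
# interface, to show how calling code uses it. Real deployments would use
# FileLock, EtcdLock, or RedisLock instead.
import asyncio
from typing import Any, Optional

class LocalLock:
    def __init__(self, *, lifetime: Optional[float] = None) -> None:
        assert lifetime is None or lifetime >= 0.0
        self._lifetime = lifetime
        self._lock = asyncio.Lock()

    async def __aenter__(self) -> Any:
        await self._lock.acquire()
        return self

    async def __aexit__(self, *exc_info) -> Optional[bool]:
        self._lock.release()
        return None

async def produce_once(lock: LocalLock) -> str:
    async with lock:  # only the lock holder may produce the event
        return "tick"

result = asyncio.run(produce_once(LocalLock(lifetime=30.0)))
```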
    

    Requirements

    The GlobalTimer does a good job of controlling event generation on a per-process basis in a distributed environment. However, requirements are always changing and the software needs to change with them. This time, the added requirement was to implement a rate limit for requests. With the current load balancing scheme, we can't guarantee that every request is handled by the same manager, which can lead to the following problems because the state of each manager is not shared.

    1. Set the counters for both managers to 0 and the request count limit to 1.
    2. The first request is received by manager 1.
    3. Increase the counter on manager 1 by 1. (C1: 0 -> 1)
    4. The counter reaches the maximum allowed number of requests and the next request is rejected.
    5. Manager 2 receives the second request due to load balancing.
    6. The counter on manager 2 has not reached the maximum allowed number of times because it is still 0. (C2: 0)
    7. Manager 2 processes the request.
    8. The request count limit didn't work!
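The failure sequence above can be reproduced in a few lines: each manager keeps its own counter, so a global limit of one request is effectively doubled.

```python
# Toy reproduction of the scenario above: two managers with per-process
# counters and an intended global limit of 1 request.
class Manager:
    def __init__(self, limit: int) -> None:
        self.limit = limit
        self.counter = 0

    def handle_request(self) -> bool:
        if self.counter >= self.limit:
            return False          # request rejected
        self.counter += 1
        return True               # request processed

managers = [Manager(limit=1), Manager(limit=1)]
# Round-robin load balancing sends one request to each manager.
results = [managers[i % 2].handle_request() for i in range(2)]
# Both succeed: 2 requests pass although the global limit was 1.
```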
    

    Therefore, the following issue has been proposed to discuss ways to improve these limitations.

    Issue suggesting improvements to distributed timers (lablup/backend.ai#415)

    To delegate global state management to a single manager process, represented by a leader, we investigated consensus algorithms and decided to use the Raft Consensus Algorithm (hereafter Raft), which is used in projects such as etcd, which is used as a repository in Kubernetes (https://kubernetes.io/docs/concepts/overview/components/#etcd), and which we believe has been well validated.

    Raft consensus algorithm

    The Raft algorithm was proposed in "In Search of an Understandable Consensus Algorithm"^3 submitted to USENIX in 2014. It was created to improve upon Paxos^4, the leading algorithm at the time, which was difficult to understand and implement in practice due to its complex consensus process, hence the title.

    But our most important goal — and most difficult challenge — was understandability.

    • In Search of an Understandable Consensus Algorithm

    A Raft cluster typically consists of five nodes, because a maximum of two nodes can fail and still satisfy a quorum to maintain the system. Each node in a cluster has one of three states: leader, follower, or candidate. In general, there can be at most one leader in each cluster, with the rest of the nodes being followers.

    Glossary #1

• quorum: The minimum number of nodes required to make a decision. (N/2+1)
    State transition diagram of a Raft node (Source: In Search of an Understandable Consensus Algorithm)
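The quorum formula is simply a strict majority of the N nodes, which is why a five-node cluster tolerates two failures:

```python
# Quorum of an N-node Raft cluster: a strict majority, N // 2 + 1.
def quorum(n: int) -> int:
    return n // 2 + 1

# A typical five-node cluster needs 3 votes, so it tolerates 2 failed nodes.
five_node_quorum = quorum(5)
```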

The Raft algorithm delegates all power to an elected leader and makes the flow of logs unidirectional, making it easier to understand the overall picture. The Raft algorithm has the following characteristics:

    Glossary #2

    • term: The generation of the current leader or candidate. Incremented by 1 each time a leader election begins.
    • index: Refers to the location of a specific value in the log.
    • commit: Indicates that a specific value from the log was applied to the state machine.
    • commitIndex: Highest index that successfully commits
    • Election Safety: Each term has a maximum of one leader.
• Leader Append-Only: Leaders cannot overwrite or delete logs; they can only append new entries.
    • Log Matching: If two logs have values with the same index and term, all values up to that index are the same.
• Leader Completeness: If a value is committed to the log in a particular term, all leaders of subsequent terms are guaranteed to have that value.
    • State Machine Safety: If one server applies a log value from a particular index to its state machine, another server cannot apply a different value from the same index.

    Using the above features, Raft divides the entire consensus process into three independent parts.

    • Leader election: If the existing leader is not working, a new leader must be elected.
    • Log replication: The leader replicates the request logs it receives from clients to other nodes. The other nodes unconditionally accept the leader's logs.
    • Safety: When one server applies a log value from a particular index to its state machine, another server cannot apply a different value from the same index.

    In this article, we'll discuss the different states a Raft node can be in, and implement the leader election process in code.

    Follower

Followers do not send requests themselves; they only receive and respond to requests from the leader or a candidate. The behavior spec for a follower proposed in the paper, and the code written based on it, are shown below.

    • Handle RPC requests from leaders and candidates.
    async def on_append_entries(
        self,
        *,
        term: int,
        leader_id: RaftId,
        prev_log_index: int,
        prev_log_term: int,
        entries: Iterable[raft_pb2.Log],
        leader_commit: int,
    ) -> Tuple[int, bool]:
        await self._reset_timeout()
        if term < (current_term := self.current_term):
            return (current_term, False)
        await self._synchronize_term(term)
        return (self.current_term, True)
    
    async def on_request_vote(
        self,
        *,
        term: int,
        candidate_id: RaftId,
        last_log_index: int,
        last_log_term: int,
    ) -> Tuple[int, bool]:
        await self._reset_timeout()
        async with self._vote_request_lock:
            if term < (current_term := self.current_term):
                return (current_term, False)
            await self._synchronize_term(term)
    
            async with self._vote_lock:
                if self.voted_for in [None, candidate_id]:
                    self._voted_for = candidate_id
                    return (self.current_term, True)
            return (self.current_term, False)
    
    async def _synchronize_term(self, term: int) -> None:
        if term > self.current_term:
            self._current_term.set(term)
            await self._change_state(RaftState.FOLLOWER)
            async with self._vote_lock:
                self._voted_for = None
    
    • If you don't receive any requests from leaders or candidates for a period of time, you'll be placed in candidate status.
    async def _wait_for_election_timeout(self, interval: float = 1.0 / 30) -> None:
        while self._elapsed_time < self._election_timeout:
            await asyncio.sleep(interval)
            self._elapsed_time += interval
        await self._change_state(RaftState.CANDIDATE)
    

    Leaders must periodically announce their presence by sending heartbeat messages to their followers. If a follower does not receive any messages for a certain amount of time (election_timeout), it assumes that the cluster is leaderless and starts an election by becoming a candidate to become the new leader.

    Candidate

The candidate's behavior spec and implementation code are as follows:

• Become a follower when you receive an AppendEntries RPC request from the new leader (see on_append_entries() for followers).
    • Start the election with the following procedure
      • Increase term by 1. (term += 1)
      • Vote for yourself.
      • Initialize the election timeout.
      • Send a RequestVote RPC request to the other nodes.
    async def _start_election(self) -> None:
        self._current_term.increase()
        async with self._vote_lock:
            self._voted_for = self.id
    
        current_term = self.current_term
    
        terms, grants = zip(
            *await asyncio.gather(
                *[
                    asyncio.create_task(
                        self._client.request_vote(
                            to=server,
                            term=current_term,
                            candidate_id=self.id,
                            last_log_index=0,
                            last_log_term=0,
                        ),
                    )
                    for server in self._configuration
                ]
            )
        )
    
    • If you receive votes from a majority of nodes, you are the leader.
        for term in terms:
            if term > current_term:
                await self._synchronize_term(term)
                break
        else:
            if sum(grants) + 1 >= self.quorum:
                await self._change_state(RaftState.LEADER)
    
    • If the election timeout occurs, start a new election.
    case RaftState.CANDIDATE:
        while self.__state is RaftState.CANDIDATE:
            await self._start_election()
            await self._reset_election_timeout()
            await self._initialize_volatile_state()
            if self.has_leadership():
                await self._initialize_leader_volatile_state()
                break
            await asyncio.sleep(self.__election_timeout)
    

    Leader

    • Send the first heartbeat message (an empty AppendEntries request) immediately after the election. Send heartbeat messages periodically thereafter.
    async def _publish_heartbeat(self) -> None:
        if not self.has_leadership():
            return
        terms, successes = zip(
            *await asyncio.gather(
                *[
                    asyncio.create_task(
                        self._client.append_entries(
                            to=server,
                            term=self.current_term,
                            leader_id=self.id,
                            prev_log_index=0,
                            prev_log_term=0,
                            entries=(),
                            leader_commit=self._commit_index,
                        ),
                    )
                    for server in self._configuration
                ]
            )
        )
        for term in terms:
            if term > self.current_term:
                await self._synchronize_term(term)
                break
    
    • When it receives a request from a client, it adds a value to the log. After applying that value to the state machine, send a response to the request.
    • If the follower has a log value with an index greater than the value the leader is tracking (nextIndex), replicate the log to the follower starting at nextIndex.
      • If successful, update the leader's nextIndex and matchIndex.
      • If it fails due to an inconsistency, it decrements the leader's nextIndex and tries again.
    • If the value (N) below exists, update the commitIndex to that value.
      • The majority of matchIndexes are greater than or equal to N (matchIndex >= N)
      • The term of the Nth log is the same as the current term

    The leader manages a nextIndex and a matchIndex for each follower.

    • nextIndex: The next index that should be sent to each follower.
    • matchIndex: the highest index that was successfully replicated to each follower
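The commitIndex advancement rule described above can be sketched as a standalone function. This is an illustration, not code from lablup/aioraft-ng; indices are 1-based as in the paper.

```python
# Sketch of the leader's commitIndex rule: find the largest N such that a
# majority has matchIndex >= N and the Nth entry's term equals current_term.
from typing import Sequence

def advance_commit_index(
    commit_index: int,
    match_indexes: Sequence[int],   # highest replicated index per follower
    log_terms: Sequence[int],       # term of each log entry, 1-indexed
    current_term: int,
    quorum: int,
) -> int:
    for n in range(len(log_terms), commit_index, -1):
        replicated = 1 + sum(1 for m in match_indexes if m >= n)  # leader counts itself
        if replicated >= quorum and log_terms[n - 1] == current_term:
            return n
    return commit_index

# Five-node cluster (quorum 3): entries 1-3, the third written in term 2.
new_commit = advance_commit_index(0, [3, 3, 1, 0], [1, 1, 2], 2, 3)
```

Note that entries from earlier terms are never committed directly by counting replicas, which is exactly the term check in the rule above.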

    Conclusion

    In this article, we've briefly covered the Raft algorithm and written code to perform a leader election. The remaining two features (log replication, membership changes) will face a variety of challenges in actual implementation, including timing issues. If you're interested in learning more about the Raft algorithm, we recommend reading the author's (Diego Ongaro) PhD thesis (CONSENSUS: BRIDGING THEORY AND PRACTICE)^6.

Finally, let's end by checking out how ChatGPT describes the Raft algorithm.

    Raft algorithm explained by ChatGPT (Source: OpenAI ChatGPT 3.5)

    This article is based on the code in lablup/aioraft-ng. Please also pay attention to lablup/raftify, the next generation Raft project currently under development at Lablup.

    29 November 2023

  • Backend.AI Model Service Hands-on: Running GPT-NeoX

    By Kyujin Cho

Backend.AI version 23.09 has been officially released to the public. We covered Model Service, a key feature in version 23.09, in our previous Sneak Peek: Backend.AI Model Service preview article. Since then, we have added a variety of new features, including GUI support, authentication token history management, and more, and we are going to walk you through them in a tutorial format to make it easy to understand the Backend.AI Model Service. In this tutorial post, we will show you how to use the Backend.AI Model Service to run GPT-NeoX models on top of Triton Inference Server. Triton Inference Server is an open source model inference framework from NVIDIA that enables easy HTTP and gRPC^1 delivery of TensorRT, FasterTransformer, and TensorRT-LLM models, as well as PyTorch, TensorFlow, vLLM, and many others.

    Create a Model VFolder

    1. Navigate to the Data & Folders tab. Click the "New Folder" button to open the VFolder creation dialog.
    2. Create a new model folder. It does not matter how you name the folder, but make sure to set the "Usage" at the bottom to "Model". Once you have specified all the values, click the "Create" button at the bottom. Your model VFolder has now been created.

    FasterTransformer Format Model Conversion

    1. Navigate to the "Sessions" tab. Click the "Start" button to open the session creation dialog.
    2. Select ngc-pytorch for "Running Environment" and 23.07 for "Version". Once you have made your selections, click the arrow icon in the lower right corner.
    3. A window appears for selecting the VFolder to mount in the session. To load the model, select the VFolder you just created under the "Model storage folder to mount" section. Once you have made your selections, click the arrow icon in the lower right corner.
    4. A window appears for specifying the amount of resources to be used by the model session. You should allocate at least 16 CPU cores and 128 GB of RAM to ensure smooth model conversion. Once you have made your selections, click the arrow icon in the lower right corner.
    5. After confirming that all settings have been applied correctly, click the "Start" button below to start the session.
    6. Once the session is created, a popup will appear to select an app, as shown below. Click the "Console" app to access the terminal environment.
    7. Run the following shell script to download the GPT-NeoX 20B model and convert it to the FasterTransformer format. Note that where the script mentions <VFolder name>, you must replace it with the name of the model VFolder you created.
    cd /home/work/<VFolder name>
    pip install -U transformers bitsandbytes
    git clone https://github.com/NVIDIA/FasterTransformer
    git clone https://huggingface.co/EleutherAI/gpt-neox-20b
    cd gpt-neox-20b
    git lfs install
    git lfs pull
    

    The GPT-NeoX 20B model requires at least 40GB of VRAM to run. If the physical GPU you are using has less VRAM than this and you need to split the model across multiple GPUs, adjust the number in the -i_g parameter to match the number of GPUs you are using.
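    As a rough sanity check on the 40 GB figure, fp16 weights take about 2 bytes per parameter, and tensor parallelism (the -i_g parameter) divides the weights across GPUs. These are weights-only estimates; activations and workspace add overhead on top.

```python
# Rough arithmetic behind the VRAM guidance above (weights only; runtime
# overhead comes on top). -i_g sets the tensor-parallel degree.
params = 20e9
fp16_weights_gb = params * 2 / 1e9   # ~40 GB of fp16 weights
per_gpu_gb = fp16_weights_gb / 2     # with -i_g 2: ~20 GB per GPU
print(fp16_weights_gb, per_gpu_gb)   # 40.0 20.0
```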

    cd /home/work/<VFolder name>
    mkdir -p triton-deploy/gpt-neox-20b-ft
    python ~/<VFolder name>/FasterTransformer/examples/pytorch/gptneox/utils/huggingface_gptneox_convert.py \
      -i /home/work/<VFolder name>/gpt-neox-20b \
      -o /home/work/<VFolder name>/triton-deploy/gpt-neox-20b-ft \
      -i_g 1 \
      -m_n GPT-NeoX-20B
    

    8. If you followed all the steps up to step 7, you should have the following folders under the VFolder.
    work@main1[PRRLCIqu-session]:~/GPT-NeoX-Triton-FT$ ls -al
    total 62
    drwxr-xr-x  5 work work 11776 Oct 12 12:14 .
    drwxr-xr-x  9 work work  4096 Oct 12 12:29 ..
    drwxr-xr-x 14 work work 12800 Oct 12 11:24 FasterTransformer
    drwxr-xr-x  3 work work 16896 Oct 12 10:18 gpt-neox-20b
    drwxr-xr-x  3 work work 11776 Oct 12 11:56 triton-deploy
    

    Now it's time to add the configuration file for Triton Inference Server. Create the file triton-deploy/gpt-neox-20b-ft/config.pbtxt and add the following contents.

    If you set the value of the -i_g parameter to anything other than 1 in step 7, you must modify the value of tensor_para_size in the settings below to match the value of -i_g.

    name: "gpt-neox-20b-ft"
    backend: "fastertransformer"
    default_model_filename: "gpt-neox-20b-ft"
    max_batch_size: 1024
    
    model_transaction_policy {
      decoupled: False
    }
    
    input [
      {
        name: "input_ids"
        data_type: TYPE_UINT32
        dims: [ -1 ]
      },
      {
        name: "start_id"
        data_type: TYPE_UINT32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "end_id"
        data_type: TYPE_UINT32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "input_lengths"
        data_type: TYPE_UINT32
        dims: [ 1 ]
        reshape: { shape: [ ] }
      },
      {
        name: "request_output_len"
        data_type: TYPE_UINT32
        dims: [ -1 ]
      },
      {
        name: "runtime_top_k"
        data_type: TYPE_UINT32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "runtime_top_p"
        data_type: TYPE_FP32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "beam_search_diversity_rate"
        data_type: TYPE_FP32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "temperature"
        data_type: TYPE_FP32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "len_penalty"
        data_type: TYPE_FP32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "repetition_penalty"
        data_type: TYPE_FP32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "random_seed"
        data_type: TYPE_UINT64
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "is_return_log_probs"
        data_type: TYPE_BOOL
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "beam_width"
        data_type: TYPE_UINT32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "bad_words_list"
        data_type: TYPE_INT32
        dims: [ 2, -1 ]
        optional: true
      },
      {
        name: "stop_words_list"
        data_type: TYPE_INT32
        dims: [ 2, -1 ]
        optional: true
      },
      {
        name: "prompt_learning_task_name_ids"
        data_type: TYPE_UINT32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "top_p_decay"
        data_type: TYPE_FP32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "top_p_min"
        data_type: TYPE_FP32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      },
      {
        name: "top_p_reset_ids"
        data_type: TYPE_UINT32
        dims: [ 1 ]
        reshape: { shape: [ ] }
        optional: true
      }
    ]
    output [
      {
        name: "output_ids"
        data_type: TYPE_UINT32
        dims: [ -1, -1 ]
      },
      {
        name: "sequence_length"
        data_type: TYPE_UINT32
        dims: [ -1 ]
      },
      {
        name: "cum_log_probs"
        data_type: TYPE_FP32
        dims: [ -1 ]
      },
      {
        name: "output_log_probs"
        data_type: TYPE_FP32
        dims: [ -1, -1 ]
      }
    ]
    instance_group [
      {
        count: 1
        kind: KIND_CPU
      }
    ]
    parameters {
      key: "tensor_para_size"
      value: {
        string_value: "1"
      }
    }
    parameters {
      key: "pipeline_para_size"
      value: {
        string_value: "1"
      }
    }
    parameters {
      key: "data_type"
      value: {
        string_value: "fp16"
      }
    }
    parameters {
      key: "model_type"
      value: {
        string_value: "GPT-NeoX"
      }
    }
    parameters {
      key: "model_checkpoint_path"
      value: {
        string_value: "/models/triton-deploy/gpt-neox-20b-ft/1-gpu"
      }
    }
    parameters {
      key: "enable_custom_all_reduce"
      value: {
        string_value: "0"
      }
    }
    
    9. Finally, you need to add the Backend.AI Model Service definition file to the root of the VFolder, as model-definition.yaml (model-definition.yml is also acceptable). Let's take a closer look at the model definition file for running Triton Inference Server.
    models:
    - name: "GPT-NeoX"
      model_path: "/models/triton-deploy"
    ...
    

    This is where you specify the model name and the path to the model.

    The name and path you set here can be accessed by the model server process as the BACKEND_MODEL_NAME and BACKEND_MODEL_PATH environment variables, respectively.
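    For example, a hypothetical custom start script could read those environment variables like this (the fallback values here are only for illustration):

```python
# Minimal sketch: a custom model server start script reading the values
# Backend.AI injects from the model definition file. The fallback
# defaults below are illustrative, not Backend.AI behavior.
import os

model_name = os.environ.get("BACKEND_MODEL_NAME", "GPT-NeoX")
model_path = os.environ.get("BACKEND_MODEL_PATH", "/models/triton-deploy")
print(f"Serving model {model_name} from {model_path}")
```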

    ...
      service:
        start_command:
          - tritonserver
          - --model-repository=/models/triton-deploy
          - --disable-auto-complete-config
          - --log-verbose
          - "1"
    ...
    

    This is the part that defines the command line syntax for starting the Model Server process.

    ...
        port: 8000
    ...
    

    This is where you fill in the port for API communication that the model server process exposes. If not specified, Triton Inference Server exposes port 8000 for HTTP API communication by default, so you will also write that port in the model definition file.

    ...
        health_check:
          path: /v2/health/ready
          max_retries: 3
          max_wait_time: 5
          expected_status_code: 200
    

    This is where you enable and configure the Health Check feature. When enabled, Backend.AI continuously sends HTTP GET requests to the path to verify that it returns an HTTP response code matching expected_status_code (optional, defaults to 200). If the model server does not respond, or returns an unexpected response code, Backend.AI determines that the session is unhealthy and excludes it from the service. An excluded session is not automatically terminated; the Model Service administrator must check the container logs and take appropriate action manually. The Health Check feature can be disabled by omitting this section entirely, in which case Backend.AI will not check the health of the model server and will always assume it is healthy. max_wait_time defines the API response timeout in seconds, and max_retries is the number of times the request is retried before the model server is judged to be unhealthy.
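    As an illustration only (this is not Backend.AI's actual implementation), the health-check behavior just described roughly corresponds to the following sketch:

```python
# Hedged sketch of the health-check loop described above; an
# illustration, not Backend.AI's actual code.
import urllib.error
import urllib.request

def is_healthy(base_url, path="/v2/health/ready",
               max_retries=3, max_wait_time=5, expected_status_code=200):
    for _ in range(max_retries):
        try:
            with urllib.request.urlopen(base_url + path,
                                        timeout=max_wait_time) as resp:
                if resp.status == expected_status_code:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # no response within the timeout; retry
    return False  # judged unhealthy after max_retries attempts
```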
    The finished model definition file looks like this.

    models:
    - name: "GPT-NeoX"
      model_path: "/models/triton-deploy"
      service:
        start_command:
          - tritonserver
          - --model-repository=/models/triton-deploy
          - --disable-auto-complete-config
          - --log-verbose
          - "1"
        port: 8000
        health_check:
          path: /v2/health/ready
          max_retries: 3
          max_wait_time: 5
    

    More information about model definition files can be found in the Backend.AI WebUI documentation.

    Now you're all set to run the Model Service.

    Create a Model Service

    1. Navigate to the "Model Serving" tab. Click the "Start Service" button to open the Create Model Service window. Let's take a look at each section in a little more detail.
      • Service name: This is where you specify the name of the Model Service. The name of the Model Service can be used as a subdomain of the Model Service Endpoint (coming soon).
      • Resource Group: This is the field to select the resource group where the Inference Session for the Model Service will be created.
      • Open your app to the outside world: When this feature is enabled, all API requests to the model server must be accompanied by an authentication header before they can be made. For more information about Model Service authentication, see the Backend.AI WebUI documentation.
      • Desired number of routes: A field to specify the number of inference sessions in which the Model Server process runs. Setting this value to a number greater than 1 creates multiple identical sessions and enables the round-robin load balancer, which distributes API requests evenly among these sessions. This value can be modified at any time after the Model Service is created.
      • A panel that specifies the amount of resources for the inference session.
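    Conceptually, the round-robin load balancing across multiple routes behaves like the following sketch (the route names are hypothetical; this is not Backend.AI's scheduler code):

```python
# Illustrative round-robin distribution across routes (inference
# sessions); names are made up for the example.
from itertools import cycle

routes = cycle(["session-1", "session-2", "session-3"])

def pick_route():
    # Each incoming API request goes to the next session in turn.
    return next(routes)
```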

    The GPT-NeoX 20B model requires a minimum of 40 GB of VRAM to run. The relationship between fGPU units and VRAM may differ depending on how your Backend.AI is configured; consult your Backend.AI administrator for details. Once you have set all the values correctly, press the "OK" button to create the Model Service.

    2. The Model Service has been created. If the model process in the inference session is not yet ready, the status will remain "PROVISIONING". Click the "INFERENCE" section of the "Sessions" tab and you will see that an inference session has been created corresponding to the Model Service you created in step 1. Model Service administrators can click the clipboard icon in the "Control" row to view logs related to the model server process of an inference session.
    3. When the Model Server process is running normally, the status of the route at the bottom and the status at the top will both change to "HEALTHY", and the address for accessing the Model Service will appear under "Service Endpoints". You can now access the Triton Inference Server running in the inference session through that address.

    Conclusion

    In this article, you've learned how to start serving LLM models using the Backend.AI Model Service. The Model Service feature is available in Backend.AI's Cloud Beta. Start serving your own models today!

    ^1: Not supported by Backend.AI Model Service

    This post is automatically translated from Korean

    21 November 2023

  • 23.09: September 2023 Update

    By Lablup

    23.09: September 2023 update

    In the second half of 2023, we released 23.09, a major release of Backend.AI. In 23.09, we've significantly enhanced the development, fine-tuning, and operational automation of generative AI. Backend.AI now automatically scales and load-balances AI models based on workload, supports a wider range of GPUs/NPUs, and offers increased stability whether managing a single node or 100-2000+ nodes. The team is working hard to squeeze every last bit out of it. Here are the main improvements since the last [23.03 July update](/posts/2023/07/31/Backend.AI-23.03-update).

    Backend.AI Core & UI

    • The Backend.AI Model Service feature has been officially released. You can now use Backend.AI to more efficiently prepare environments for inference services as well as training of large models such as LLM. For more information, see the blog Backend.AI Model Service sneak peek.
    • Added the ability to sign in to Backend.AI using OpenID single sign-on (SSO).
    • If your kernel image supports it, you can enable the sudo command without a password in your compute session.
    • Support for Redis Sentinel without HAProxy. To test this, we added the --configure-ha setting to the install-dev.sh file.
    • Added the ability to use the RPC channel between Backend.AI Manager and Agent for authenticated and encrypted communication.
    • Improved the CLI logging feature of Backend.AI Manager.
    • Fixed an issue where Manager could not make an RPC connection when Backend.AI Agent was placed under a NAT environment.
    • The Raft algorithm library, riteraft-py, will be renamed and developed as raftify.
    • Support for the following new storage backends
      • VAST Data
      • KT Cloud NAS (Enterprise only)

    Backend.AI FastTrack

    • Improved UI for supporting various heterogeneous accelerators.
    • Deleting a VFolder now uses an independent unique ID value instead of the storage name.
    • Upgraded Django version to 4.2.5 and Node.js version to 20.
    • Added pipeline template feature to create pipelines in a preset form.
    • If a folder dedicated to a pipeline is deleted, it will be marked as disabled on the FastTrack UI.
    • Improved the process of deleting pipelines.
    • Added a per-task (session) accessible BACKENDAI_PIPELINE_TASK_ID environment variable.
    • Actual execution time per task (session) is now displayed.

    Contribution Academy

    Especially in the past period, the following code contributions were made by junior developer mentees through the 2023 Open Source Contribution Academy organized by NIPA.

    • Created a button to copy an SSH/SFTP connection example to the clipboard.
    • Refactored several Lit elements of the existing WebUI to React.
    • Wrote various test code.
    • Found and fixed environment variable and message errors that were not working properly.

    Backend.AI is constantly evolving to provide a more powerful and user-friendly experience while supporting various environments in the AI ecosystem. Stay tuned for more updates!
    Make your AI accessible with Backend.AI!

    This post is automatically translated from Korean

    26 September 2023

  • 23.03: July 2023 Update

    By Lablup

    23.03: July 2023 update

    A wrap-up of the ongoing updates to Backend.AI 23.03 and 22.09. The development team is working hard to squeeze every last bit out.

    Here are the most important changes in this update

    • Enhanced storage manageability: Added per-user and per-project storage capacity management (quotas) with VFolder v3 architecture.
    • Expanded NVIDIA compatibility: Support for CUDA v12 and NVIDIA H100 series.
    • Extended hardware compatibility: Support for WARBOY accelerators from FuriosaAI.

    Backend.AI Core & UI

    • Supports CUDA v12 and NVIDIA H100 series.
    • Supports the WARBOY accelerator, the first NPU from FuriosaAI company.
    • Added storage capacity management function (Quota) by user and project by applying VFolder v3 architecture.
      • However, it is limited to storage that supports Directory Quota.
    • Fixed an error that caused multi-node cluster session creation to fail.
    • Fixed an error where a compute session in the PULLING state was incorrectly labeled as PREPARING.
    • Fixed an error in which the CLONING state was incorrectly displayed when cloning a data folder with the same name when multiple storage devices have the same folder.
    • Improved the web terminal of a compute session to use zsh as the default shell if the zsh package is installed in the kernel image.
    • Added the ability to know the health status of the (managed) storage proxy and event bus.

    Backend.AI FastTrack

    • Added the ability to set multi-node cluster mode by task.
    • Fixed an error where environment variables set in .env were not applied to the frontend.
    • Fixed an error recognizing out-of-date when accessing with a mobile browser.
    • Added a field to show the cause message when a task-specific error occurs.
    • Fixed other editor-related issues.

    Backend.AI is constantly evolving to provide a more powerful and user-friendly experience while supporting various environments in the ever-changing AI ecosystem. Stay tuned for more updates!
    Make your AI accessible with Backend.AI!

    31 July 2023

  • Digging bitsandbytes issue

    By Jeongseok Kang

    Backend.AI is a popular choice for developing large language models (LLMs) because of its ease of use in running large clusters and distributed processing. In fact, we get a lot of feedback and requests from customers, and today I'd like to share how we solved one of them.

    On April 4, 2023, we received a report of an issue where an error occurs when running certain packages in the container environment provided by the NGC Catalog[^1] (NVIDIA GPU Cloud). The NGC Catalog is a list of containers[^2] with optimized environments for developing AI/ML, metaverse, and high-performance computing applications, and because it is operated and distributed directly by NVIDIA, it is highly trusted and considered the standard for CUDA environments in particular. Therefore, an issue with this environment represents a potential risk that many users will face in the future, and we have decided to address this issue as a high priority.

    Reproducing the problem

    I first went through the process of reproducing the issue to determine the exact cause. In this case, I was running ViperGPT[^3] developed by Columbia University and encountered an error in a package called bitsandbytes. ViperGPT has a dependency on bitsandbytes as shown below.

    accelerate==0.18.0
    backoff==2.2.1
    // highlight-next-line
    bitsandbytes==0.38.1
    cityscapesscripts==2.2.1
    git+https://github.com/openai/CLIP.git
    decord==0.6.0
    dill==0.3.6
    ...
    

    I was able to reproduce the problem by simply importing bitsandbytes.

    The execution environment used the nvcr.io/nvidia/pytorch:22.05-py3 image.

    $ pip install bitsandbytes  # 0.37.1
    $ python
    >> import bitsandbytes
    ===================================BUG REPORT===================================
    Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
    ================================================================================
    CUDA exception! Error code: OS call failed or operation not supported on this OS
    CUDA exception! Error code: initialization error
    CUDA SETUP: CUDA runtime path found: /home/work/data/miniconda3/envs/vipergpt/lib/libcudart.so
    /home/work/data/miniconda3/envs/vipergpt/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: No GPU detected! Check your CUDA paths. Proceeding to load CPU-only library...
      warn(msg)
    CUDA SETUP: Detected CUDA version 116
    CUDA SETUP: Loading binary /home/work/data/miniconda3/envs/vipergpt/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
    /home/work/data/miniconda3/envs/vipergpt/lib/python3.10/site-packages/bitsandbytes/cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
      warn("The installed version of bitsandbytes was compiled without GPU support. "
    

    bitsandbytes traverses all the CUDA devices installed in the execution environment and checks their Compute Capability[^4]. It uses libcuda.so to query the number of CUDA devices, as shown below, and we noticed that the error occurs when cuDeviceGetCount()[^5] is called: error 304, CUDA_ERROR_OPERATING_SYSTEM.

    def get_compute_capabilities(cuda):
        """
        1. find libcuda.so library (GPU driver) (/usr/lib)
           init_device -> init variables -> call function by reference
        2. call extern C function to determine CC
           (https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__DEVICE__DEPRECATED.html)
        3. Check for CUDA errors
           https://stackoverflow.com/questions/14038589/what-is-the-canonical-way-to-check-for-errors-using-the-cuda-runtime-api
        # bits taken from https://gist.github.com/f0k/63a664160d016a491b2cbea15913d549
        """
    
        nGpus = ct.c_int()
        cc_major = ct.c_int()
        cc_minor = ct.c_int()
    
        device = ct.c_int()
    
        # highlight-next-line
        check_cuda_result(cuda, cuda.cuDeviceGetCount(ct.byref(nGpus)))
        ccs = []
        for i in range(nGpus.value):
            check_cuda_result(cuda, cuda.cuDeviceGet(ct.byref(device), i))
            ref_major = ct.byref(cc_major)
            ref_minor = ct.byref(cc_minor)
            # 2. call extern C function to determine CC
            check_cuda_result(cuda, cuda.cuDeviceComputeCapability(ref_major, ref_minor, device))
            ccs.append(f"{cc_major.value}.{cc_minor.value}")
    
        return ccs
    

    What is bitsandbytes?

    Since the advent of the Transformer, language models have shown large performance gains, and increasing model size by stacking more Transformer blocks has become a trend. As a result, large amounts of GPU resources are required not only to train models but also to serve them. For example, serving GPT-3 with its 175B parameters requires eight A100 80 GB GPUs costing about $15,000 each. This is a huge burden not only for individuals but also for enterprises and research institutes, which is why there is a lot of research on making inference models lighter for serving.
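    A back-of-the-envelope calculation shows why 8-bit methods matter for serving: the weights alone at fp16 already far exceed a single GPU's memory.

```python
# Weights-only memory estimate for a 175B-parameter model (rough figures;
# activations and the KV cache add further overhead on top of this).
params = 175e9
fp16_gb = params * 2 / 1e9  # 2 bytes per parameter in fp16
int8_gb = params * 1 / 1e9  # 1 byte per parameter after 8-bit quantization
print(fp16_gb, int8_gb)  # 350.0 175.0
```

    At fp16, 350 GB of weights alone would not fit even on five 80 GB A100s, which is why eight are cited for serving with headroom; 8-bit quantization halves the weight footprint.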

    Image source: A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes (Hugging Face)

    bitsandbytes is the open-source release of LLM.int8()[^6], work by Tim Dettmers, a PhD candidate at the University of Washington, together with Facebook AI Research (now Meta AI). It reduces the size of a model while maintaining performance by applying a vector-wise quantization method that treats each vector independently when computing matrix products, and by using a mix of 8-bit and 16-bit techniques that represents important vectors in 16-bit to minimize losses. It has been merged into Hugging Face's Transformers implementation and is used in a variety of models including [Llama2](https://github.com/facebookresearch/llama-recipes/blob/cd82118b74d2fd739bd6227af33b661d04a97406/requirements.txt#L6), [QLoRA](https://github.com/artidoro/qlora/blob/6c6fc4653abd17ce550f48878a24c7bd8772e98a/requirements.txt#L1), [KoAlpaca](https://github.com/Beomi/KoAlpaca/blob/4596f882957d286b4d60559b97dcf783822d23f5/webui/requirements.txt#L5), and [KULLM](https://github.com/nlpai-lab/KULLM/blob/b7a78b62ed6cd9d83c51ad5a92a9dd40b9f35998/requirements.txt#L4).

    Identify the cause

    Now that we've located and reproduced the problem, it's time to get to the bottom of it. I looked to see if there were any similar cases, but I couldn't find any. Also, cuInit() was called normally, making it even more difficult to pinpoint the cause.

    import ctypes
    
    count = ctypes.c_int()
    
    libcuda = ctypes.CDLL("libcuda.so")
    libcuda.cuInit(0)  # 0 (CUDA_SUCCESS)
    libcuda.cuDeviceGetCount(ctypes.byref(count))  # 304 (CUDA_ERROR_OPERATING_SYSTEM)
    
    libcudart = ctypes.CDLL("libcudart.so")
    libcudart.cudaGetDeviceCount(ctypes.byref(count))  # 304 (CUDA_ERROR_OPERATING_SYSTEM)
    

    I filed an issue on the GitHub repo (TimDettmers/bitsandbytes#264) for advice, and was told to update the package to the latest version and try again. After updating to version 0.38.0.post1, which was the latest at the time, I tested again, and the same problem occurred. I couldn't afford to lose too much time, so I decided to switch gears and remove the offending part.

    Image source: Greco-Roman Mythology in Comics (Ghana Publishers)

    Troubleshooting

    My first approach was to use CUDA-Python[^7]. CUDA-Python is the CUDA Python Low-Level Bindings package officially distributed by NVIDIA. I had used it before and found it useful, so I immediately thought of it and decided to install and test it.

    $ pip install cuda-python
    
    from cuda import cuda
    from cuda import cudart
    
    cuda.cuInit(0)  # (<CUresult.CUDA_SUCCESS: 0>,)
    cudart.cudaGetDeviceCount()  # (<cudaError_t.cudaSuccess: 0>, 1)
    

    Fortunately, cudart.cudaGetDeviceCount() worked fine, and I proceeded to test integrating it into bitsandbytes. However, calling torch.cuda.is_available() after calling cuda.cuInit(0) resulted in an error, because torch.cuda.is_available() calls cudaGetDeviceCount() internally.

    from cuda import cuda, cudart
    
    cuda.cuInit(0)  # (<CUresult.CUDA_SUCCESS: 0>,)
    cudart.cudaGetDeviceCount()  # (<cudaError_t.cudaSuccess: 0>, 1)
    
    import bitsandbytes
    
    # ...
    # /opt/conda/lib/python3.8/site-packages/torch/cuda/__init__.py:82: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 304: OS call failed or operation not supported on this OS (Triggered internally at /opt/pytorch/pytorch/c10/cuda/CUDAFunctions.cpp:109.)
    #   return torch._C._cuda_getDeviceCount() > 0
    # ...
    

    The problem seemed to be back to square one. I took a breath and calmly reread the error log above. Then something caught my eye.

    torch._C._cuda_getDeviceCount() > 0

    Note that bitsandbytes was already using PyTorch internally, which means it had a dependency on PyTorch. To be precise, bitsandbytes had a dependency on lion-pytorch, which had a dependency on PyTorch. And PyTorch already provides an interface to CUDA functions, which I decided to take advantage of this time.

    Fortunately, all of the CUDA functions used by bitsandbytes existed in PyTorch. I made the following changes to the functions that were previously called via libcuda.so and libcudart.so.

    • libcuda.cuDeviceGetCount() → torch.cuda.device_count()
    • libcuda.cuDeviceGet() → torch.cuda.device()
    • libcuda.cuDeviceComputeCapability() → torch.cuda.get_device_capability()
    • libcudart.cudaRuntimeGetVersion() → torch.version.cuda

    After verifying that it worked after the change, I registered a PR in the GitHub repository (TimDettmers/bitsandbytes#375) to apply to the distribution package version.

    Postscript

    On July 14, 2023, about two months after registering the PR, the patch was merged into the main branch and included in version 0.40.1.

    I was also able to get some feedback from the author, Tim Dettmers, whose thoughts and philosophy are evident in this short article. Through this opportunity, I was able to learn more about LLM's ecosystem. It was also the first time in a long time that I was able to feel the fun of open source activities. I think the appeal of open source activities is that we can collaborate beyond spatial constraints and learn from each other's ideas. We run an open source version of Backend.AI alongside an enterprise version. We will always strive to provide a better user experience and a better developer experience.

    [^1]: NVIDIA GPU Cloud
    [^2]: The NGC catalog hosts containers for AI/ML, metaverse, and HPC applications that are performance-optimized, tested, and ready to deploy on GPU-powered on-prem, cloud, and edge systems.
    [^3]: ViperGPT: Visual Inference via Python Execution for Reasoning, March 14, 2023.
    [^4]: https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-capability
    [^5]: https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__DEVICE.html#group__CUDA__DEVICE_1g52b5ce05cb8c5fb6831b2c0ff2887c74
    [^6]: LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale, November 10, 2022.
    [^7]: https://developer.nvidia.com/cuda-python

    This post is automatically translated from Korean

    28 July 2023

  • 23.03: May 2023 Update

    By Lablup

    A recap of the ongoing updates to Backend.AI 23.03 and 22.09. The development team is working hard to squeeze every last bit out.

    Here are the most important changes in this update:

    • Expanded hardware compatibility: Added support for idle checking of ATOM accelerators from Rebellions and for Dell EMC storage backends.
    • High-speed upload enhancements: Introduced SFTP functionality to support high-speed uploads to storage.
    • Development Environment Enhancements: Enhanced the development environment by allowing sessions to be accessed in remote SSH mode from local Visual Studio Code.
    • Increased manageability: Improved the user interface for administrators to make it easier to set up AI accelerators and manage resource groups.

    Backend.AI Core & UI

    • Added support for idle state checking of ATOM accelerators.
    • Introduced SFTP functionality to support high-speed uploads directly to storage.
    • Added ability to force periodic password updates based on administrator settings.
    • Added an upload-only session (SYSTEM) tab.
    • Added Inference type to the allowed session types.
    • Added the ability to connect to a session in remote SSH mode from local Visual Studio Code.
    • Added support for uploading folders from Folder Explorer.
    • Improved the display of the amount of shared memory allocated when creating a session.
    • Added support for Dell EMC storage backend.
    • Improved the accuracy of container memory usage measurement.
    • Improved the ability to run multiple agents concurrently on a single compute node.
    • Added project/resource group name filter for administrators.
    • Added user interface for administrators to set various AI accelerators, including GPUs, in resource presets/policies.
    • Added a user interface for administrators to display the allocation and current usage of various accelerators, including GPUs.
    • Added a user interface for administrators to set the visibility of resource groups.
    • Provided a user interface for administrators to view the idle-checks value per session.
    • Added recursion option when uploading vfolders in the CLI, and improved relative path handling.
    • Added a recursive option in the CLI to terminate sessions with dependencies on specific session termination at once.
    • Added a new mock-accelerator plugin for developers, replacing the old cuda-mock plugin.
    • Added status and statistics checking API for internal monitoring of the storage proxy for developers.

    Backend.AI FastTrack

    • Improved searching for vfolders by name when adding pipeline modules.
    • Added an indication to easily recognize success/failure after pipeline execution.

    Backend.AI Forklift

    • Bug fixes and stability improvements.
    • Support for deleting build job history.
    • Supports pagination of the build task list.

    Backend.AI is constantly evolving to support a variety of environments in the ever-changing AI ecosystem, while providing a more robust and user-friendly experience. Stay tuned to see what's next!

    Make your AI accessible with Backend.AI!

    This post is automatically translated from Korean

    31 May 2023

  • Sneak Peek: Backend.AI Model Service Preview

    By Kyujin Cho

    Introduction

    As super-sized AI models flood the market, there is a growing concern about not only developing the models, but also how to deliver them "well" and "efficiently" to users. Prior to Large Language Models (LLMs), the computing power of AI models was focused on training rather than inference, as the hardware requirements for attempting to make inferences with a trained model were much smaller than the computing power needed to train the model. Deployers of models could get enough power for inference from the NPU of a real user's end device (such as a smartphone). However, with the advent of LLMs, the tables were turned.

    Take Meta's [OPT-175b](https://github.com/facebookresearch/metaseq) as an example: OPT-175b, as its name implies, has 175 billion parameters and requires roughly 320+ GB of GPU memory just to load them onto the GPU to perform inference tasks. That's a huge difference from the 4GB that pre-LLM image processing models used to require.

    With this change in AI model behavior, efficiently managing service resources has become paramount to keeping your service running reliably. In this article, we'll preview Backend.AI's upcoming model service feature, Backend.AI Model Service, and show you how it will allow you to efficiently run your AI model from training to serving with a single infrastructure.
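To sanity-check that figure: holding 175 billion parameters in 16-bit precision already takes roughly 326 GiB before any activation or serving overhead. A back-of-the-envelope sketch (the function is ours, for illustration only):

```python
def model_memory_gib(num_params: float, bytes_per_param: int = 2) -> float:
    """GPU memory needed just to hold the model weights, in GiB.

    bytes_per_param=2 assumes fp16/bf16 weights; 8-bit quantization
    (as with load_in_8bit in the tutorial below) roughly halves this.
    """
    return num_params * bytes_per_param / 1024**3

# OPT-175b: 175 billion parameters in fp16
print(round(model_memory_gib(175e9)))  # 326
```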

    Backend.AI Model Service

    Backend.AI Model Service is a model serving system that runs on top of the existing Backend.AI solution. It takes Backend.AI's tried-and-true container management technology and container app delivery system, AppProxy[^1], to the next level, enabling both AI training and model service in one infrastructure without installing additional components and by simply upgrading the existing Backend.AI infrastructure. It also supports an auto-scaling feature that automatically scales up and down inference sessions based on per-session GPU usage, number of API calls, or time of day, allowing you to effectively manage AI resources used for inference.
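To make the auto-scaling idea concrete, here is a deliberately simplified threshold rule in Python. It is not Backend.AI's actual scaling algorithm or API; the function name and thresholds are invented for this sketch:

```python
def desired_replicas(
    current: int,
    gpu_util: float,  # average GPU utilization across inference sessions, 0.0-1.0
    *,
    min_replicas: int = 1,
    max_replicas: int = 10,
    scale_up_above: float = 0.8,
    scale_down_below: float = 0.3,
) -> int:
    """Toy threshold rule: add an inference session under sustained load,
    retire one when the fleet is mostly idle. Names and thresholds are
    illustrative only, not Backend.AI's policy."""
    if gpu_util > scale_up_above:
        return min(current + 1, max_replicas)
    if gpu_util < scale_down_below:
        return max(current - 1, min_replicas)
    return current
```

A real policy would also weigh API call rates and time of day, as described above, but the shape of the decision is the same.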

    Inference Sessions

    Inference sessions in Backend.AI are conceptually the same as traditional training sessions. You can use the same execution environment you've been using for training for inference sessions, or you can deploy a dedicated execution environment just for inference sessions. Inference sessions are volatile and stateless, so you can terminate them at any time if the session is not performing well. In this case, Backend.AI will attempt to recover the original state by creating a new inference session, while simultaneously forwarding inference requests to other living inference sessions to minimize downtime for the inference service.

    Model storage

    Models to be served through Backend.AI are managed as "model storage" units. Model storage consists of model files, code for model services, and model definition files.

    Model definition file

    The model definition file is where you define the information for running a service provider's model in the Backend.AI Model Service. The model definition file contains information about the model, the ports exposed by the model service, and a set of tasks that must be executed to run the model service. If your model service provides a health check feature that reports its own health, you can use that information to take action, such as excluding sessions from the service if they are in bad health.

    models:
      - name: "KoAlpaca-5.8B-model"
        model_path: "/models/KoAlpaca-5.8B"
        service:
          pre_start_actions:
            - action: run_command
              args:
                command: ["pip3", "install", "-r", "/models/requirements.txt"]
          start_command:
            - uvicorn
            - --app-dir
            - /models
            - chatbot-api:app
            - --port
            - "8000"
            - --host
            - "0.0.0.0"
          port: 8000
          health_check:
            path: /health
            max_retries: 10
    

    Here is an example of a well-defined model definition file, which contains a set of steps to run the KoAlpaca 5.8B model as a model service.
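Since the health_check section above just names an HTTP route, a checker only needs to poll it. The sketch below shows the general idea using the standard library; the polling code is ours and does not reflect Backend.AI's internals:

```python
import urllib.request

def check_health(base_url: str, path: str = "/health", timeout: float = 2.0) -> bool:
    """Return True when the model service answers its health-check route
    with HTTP 200. Mirrors the `health_check` section of the model
    definition file above; a router could skip sessions that keep
    failing this check more than `max_retries` times."""
    try:
        with urllib.request.urlopen(base_url + path, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```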

    Tutorial: Model Service with Backend.AI Model Service

    In this tutorial, we'll actually use Backend.AI to service a KoAlpaca 5.8B model quantized to 8 bits.

    Write the API server code

    Write a simple API server to serve the model.

    import os
    from typing import Any, List
    
    from fastapi import FastAPI, Response
    from fastapi.responses import RedirectResponse, StreamingResponse, JSONResponse
    from fastapi.staticfiles import StaticFiles
    import numpy as np
    from pydantic import BaseModel
    import torch
    from transformers import pipeline, AutoModelForCausalLM
    import uvicorn
    
    URL = "localhost:8000"
    KOALPACA_MODEL = os.environ["BACKEND_MODEL_PATH"]
    
    torch.set_printoptions(precision=6)
    
    app = FastAPI()
    
    model = AutoModelForCausalLM.from_pretrained(
        KOALPACA_MODEL,
        device_map="auto",
        load_in_8bit=True,
    )
    
    
    pipe = pipeline(
        "text-generation",
        model=model,
        tokenizer=KOALPACA_MODEL,
    )
    
    
    class Message(BaseModel):
        role: str
        content: str
    
    
    class ChatRequest(BaseModel):
        messages: List[Message]
    
    
    BASE_CONTEXTS = [
        Message(role="맥락", content="KoAlpaca(코알파카)는 EleutherAI에서 개발한 Polyglot-ko 라는 한국어 모델을 기반으로, 자연어 처리 연구자 Beomi가 개발한 모델입니다."),
        Message(role="맥락", content="ChatKoAlpaca(챗코알파카)는 KoAlpaca를 채팅형으로 만든 것입니다."),
        Message(role="명령어", content="친절한 AI 챗봇인 ChatKoAlpaca 로서 답변을 합니다."),
        Message(role="명령어", content="인사에는 짧고 간단한 친절한 인사로 답하고, 아래 대화에 간단하고 짧게 답해주세요."),
    ]
    
    
    def preprocess_messages(messages: List[Message]) -> List[Message]:
        ...
    
    
    def flatten_messages(messages: List[Message]) -> str:
        ...
    
    
    def postprocess(answer: List[Any]) -> str:
        ...
    
    
    @app.post("/api/chat")
    async def chat(req: ChatRequest) -> StreamingResponse:
        messages = preprocess_messages(req.messages)
        conversation_history = flatten_messages(messages)
        ans = pipe(
            conversation_history,
            do_sample=True,
            max_new_tokens=512,
            temperature=0.7,
            top_p=0.9,
            return_full_text=False,
            eos_token_id=2,
        )
        msg = postprocess(ans)
    
        async def iterator():
            yield msg.strip().encode("utf-8")
    
        return StreamingResponse(iterator())
    
    
    @app.get("/health")
    async def health() -> Response:
        return JSONResponse(content={"healthy": True})
    
    
    @app.exception_handler(404)
    async def custom_404_handler(_, __):
        return RedirectResponse("/404.html")
    
    
    app.mount(
        "/",
        StaticFiles(directory=os.path.join(KOALPACA_MODEL, "..", "chatbot-ui"), html=True),
        name="html",
    )
    

    Create a model definition file

    Create a model definition file for your API server.

    models:
      - name: "KoAlpaca-5.8B-model"
        model_path: "/models/KoAlpaca-Polyglot-5.8B"
        service:
          pre_start_actions:
            - action: run_command
              args:
                command: ["pip3", "install", "-r", "/models/requirements.txt"]
          start_command:
            - uvicorn
            - --app-dir
            - /models
            - chatbot-api:app
            - --port
            - "8000"
            - --host
            - "0.0.0.0"
          port: 8000
          health_check:
            path: /health
            max_retries: 10
    

    In a session of the model service, model storage is always mounted under the /models path.

    Prepare model storage

    Add the model API server code you wrote, the model definition file, and the KoAlpaca model to your model storage.

    Create a model service

    With both the model file and the model definition file ready, you can now start the Backend.AI Model Service. The Model Service can be created using the backend.ai service create command in the Backend.AI CLI. The arguments accepted by service create are almost identical to the backend.ai session create command. After the image to use, you pass the ID of the model storage and the number of inference sessions to initially create.

    Using backend.ai service info, you can check the status of the model service and the inference sessions belonging to the service. You can see that one inference session has been successfully created.

    Use the Inference API

    You can use the backend.ai service get-endpoint command to see the inference endpoint of a created model service. An inference endpoint keeps a unique, stable value from the moment its model service is created until the service is removed. If a model service is backed by multiple inference sessions, AppProxy distributes requests across those sessions.

    Restricting access to the Inference API

    If you want to restrict who can access the inference API, you can enable authentication for the inference API by starting the model service with the --public option removed. Authentication tokens can be issued with the backend.ai service generate-token command.

    Scaling inference sessions

    The backend.ai service scale command allows you to change the scale of inference sessions belonging to the model service.

    Closing thoughts

    So far, we've learned about Backend.AI Model Service and how to actually deploy a model service with the Model Service feature. Backend.AI Model Service is targeted for general availability in Backend.AI 23.03. We're working hard to make the Model Service feature publicly available in the near future, so stay tuned.

    ---

    [^1]: Available from Backend.AI Enterprise.

    This post is automatically translated from Korean

    30 May 2023

  • 23.03: March 2023 Update

    By Lablup

    We're excited to announce version 23.03.0, the first major release of Backend.AI for 2023. Some features will continue to be rolled out in subsequent updates.

    Specifically in this update:

    • Support for the 'inference' service with a new computation session type.
    • Support for 'model' management with a new storage folder type.
    • Support for managing storage capacity on a per-user and per-project basis.
    • Significant improvements to FastTrack's pipeline versioning and UI.

    Backend.AI Core & UI (23.03)

    • Added model management and inference session management capabilities.
      • More advanced inference endpoint management and network routing layers will be added in subsequent updates.
    • The codebase has been updated to be based on Python 3.11.
    • Introduced React components to the frontend and leveraged Relay to introduce a faster and more responsive UI.
    • Full support for cgroup v2 as an installation environment, starting with Ubuntu 22.04.
    • Updated the vfolder structure to v3 for storage capacity management on a per-user and per-project basis.
    • Kernel and sessions are now treated as separate database tables, and the state transition tracking process has been improved to work with less database load overall.
    • Improved the way the agent displays the progress of the image download process when running a session.
    • Improved the display of GPU usage per container in CUDA 11.7 and later environments.
    • Scheduling priority can be specified by user and project within each resource group.
    • Supports two-factor authentication (2FA) login based on one-time password (TOTP) to protect user accounts.
    • Support for users to register their own SSH keypair for session access.
    • Supports user interfaces for Graphcore IPUs and Rebellions ATOM devices.

    Backend.AI Forklift (23.03)

    • Added Dockerfile templates and advanced editing capabilities.
    • Support for creating container images for inference.
    • Extended image management capabilities to work with the Harbor registry.

    Backend.AI FastTrack (23.03)

    • Storage folder contents can be viewed directly from the FastTrack UI.
    • Improved session state synchronisation with Core to event-based.
    • You can set the maximum number of iterations for a pipeline schedule.
    • If a task fails to execute, the pipeline job is automatically cancelled instead of waiting.
    • Added pipeline versioning. You can track the shape history of your pipeline, and you can recall the contents at a specific point in time to continue working on it.
    • You can modify pipelines in YAML format directly through the code editor.

    Development and Research Framework Support

    • Supports TensorFlow 2.12, PyTorch 1.13
    • Support for NGC (NVIDIA GPU Cloud) TensorFlow 22.12 (tf2), NGC PyTorch 22.12, NGC Triton 22.08
    • Added python-ff:23.01 image, which provides the same libraries and packages as Google Colab

    In addition to what we've listed above, we've included many bug fixes and internal improvements.
    Stay tuned for more to come!

    This post is automatically translated from Korean

    31 March 2023

  • Concurrent React Changed Everything: Distinguishing Renders That Aren't Rushed

    By Jongeun Lee

    FastTrack, Backend.AI's MLOps platform, uses React 18. We will explore the differences between rushed and non-rushed renders enabled by the Concurrent Renderer in React 18.

    The Concurrent feature in React, initially introduced as Async Rendering at JSConf Iceland in 2018, was not fully integrated into React 18 until 2022. As you might expect from this time period, the Concurrent Renderer is the biggest and most significant change in React 18. Even though the renderer has been updated, React developers can still run code written for versions before React 18 with minimal changes. It is possible to build user interfaces with React without knowledge of React's Concurrent Renderer. Understanding the Concurrent Renderer and its applications can simplify the complexities in your mind during React development, enabling you to create user interfaces (UI) that provide an enhanced user experience (UX). This article will not delve into the inner workings of the Concurrent Renderer. Instead, it will focus on defining what the Concurrent Renderer is and how it can transform the mindset of React developers, which is crucial for those creating applications using React.

    To summarize the content of this article, here's what you need to know:

    Because of the Concurrent Renderer,

    • Component rendering can be interrupted.
    • Parts of the tree can be rendered even when they are not visible on the screen.
    • This allows React developers to distinguish non-rushed renders like never before.

    “React components are abstractly pure functions.”
    React components are actually created as JavaScript functions. (You can also create it as a class, although this is generally not advised in most situations.) A function generates an output based on the provided input. Changing the input can alter the output, hence it is necessary to execute the function again to generate a new result. (A pure function consistently returns the same output for identical inputs.)

    What are the inputs and outputs of a React component?
    The inputs of a React component are known as properties, or 'props', which the component receives as a function. The outputs are the React elements that are returned by the function.

    Is state via hooks also an input?
    'hooks' can be conceptually understood as inputs to a function. Similar to React props, they act as triggers that prompt a re-render when their values change, leading to variations in the output of our React component.

    Now, back to the topic of rendering.

    Component rendering can be interrupted.

    The essence of Concurrent React lies in the ability to interrupt rendering, a feature unavailable before React 18 (except in experimental form). In previous versions, when a React component began rendering, all JavaScript operations were blocked until the rendering completed. This means that if the rendering function takes a long time, the event handler function that handles the user's click cannot be executed until the element is returned. However, with React 18, rendering can now be interrupted.

    const A = ({ count }) => {
      return (
        <div>
          <span>{count}</span>
          <B/>
          <C/>
        </div>
      );
    };
    
    const B = () => {
      const [text, setText] = useState("");
      return (
        <div>
          <input value={text} onChange={(e) => setText(e.target.value)} />
          <D text={text}/>
        </div>
      );
    };
    
    const C = () => {
      return <span>C</span>;
    };
    
    const D = ({ text }) => {
      verySlowFunction(text); // Consider this function that takes a few seconds to compute.
      return <span>D</span>;
    };
    

    In versions before React 18, rendering component A necessitated the rendering of components B and C, and component D had to be rendered as part of B. No other JavaScript operations could be performed until A's return value, a React element, was returned. The component tree that A returned was rendered as a single block, and it was not possible to interrupt A's rendering once it had begun.

    In Concurrent React, it is possible to interrupt the rendering process. Why is it necessary to interrupt rendering? You can think of the following:

    • When the current render in progress is no longer valid(stale)
      • For example, consider the situation in the code above where A is rendering with a count prop of 1. Before this render completes, count changes to 2, producing a new render request for A with 2. The render result for 1 is now stale, as it does not reflect the most recent value. By halting the render for 1 and promptly beginning the render for 2, you can show the user the most recent value more quickly.
    • When you have something you want to do before the ongoing render updates the screen you want to show.
      • When a user event occurs during rendering, it's possible to halt the ongoing render and give precedence to the event handler for an immediate response.

    These are all cases where you're improving the UX by stopping rendering that component so it can do something else.

    It is possible to render sections of the tree that are not visible on the display.

    Concurrent React enables you to render components corresponding to specific screen areas separately, in addition to what is currently visible on the screen. This feature allows the existing render to remain visible and functional while independently rendering a future screen update in advance, swapping it in once rendering is complete. Concerns may arise about rendering more than necessary and reducing usability. Yet, thanks to the Concurrent Renderer, this separate rendering process can be halted at any moment, ensuring it does not disrupt user interactions. Ultimately, this capability can enhance the user experience.

    So far, we've seen two features of the Concurrent Renderer, and now we'll see how they are utilized to “distinguish between non-rush renders”.

    Distinguish between non-rush renders

    Examples of rushed and non-rushed renders
     
    Consider the experience of visiting your website for the first time via a browser. What's the most urgent thing you need to do when you're faced with a white, blank screen? The most critical action is to display your site's content promptly. If the screen remains white for an extended period, users may not wait and leave. Therefore, it's essential to prioritize rendering the initial content quickly.
     

    On the left sidebar of your homepage, there is a set of menus for navigation. If a user intends to select Menu A but accidentally selects Menu B, and then attempts to select Menu A again while Menu B is still loading, the screen for Menu B will complete rendering before the screen for Menu A appears.
     

    Consider a user who pressed Menu B and then immediately pressed Menu A. Rendering the screen for Menu A is now more urgent than rendering the screen for Menu B, because the screen for B has become invalid.

    As a React developer, you can inform React about non-rushed renders by specifying which input changes that trigger a render are not urgent. The hooks that facilitate this for developers are useDeferredValue and useTransition. Both APIs, introduced in React 18, serve to defer non-rushed rendering. We will examine these two hooks individually to grasp the distinctions between them.

    useDeferredValue: Separate using a changed input value

    It is used by components that use a specific value and want to handle changes to that specific value in a non-rush manner.

    function App() {
      const [text, setText] = useState('');
      return (
        <>
          <input value={text} onChange={e => setText(e.target.value)} />
          <SlowList text={text} />
        </>
      );
    }
    

    The example code above is one of the useDeferredValue examples from beta.reactjs.org.

    In this scenario, text serves as a state variable; thus, any changes to text will cause the App to re-render. The same text is also passed as a prop to both <input> and <SlowList>. Consequently, when text is modified, it initiates a re-render of App, and as part of this process, both input and SlowList will update to reflect the new text. However, if SlowList has a lengthy render time, the user's input will not appear until the rendering is fully completed, regardless of how fast the user types.

    In this scenario, input reflects the user's keyboard input and renders quickly, while SlowList is derived from the user's input and renders more slowly than input. We can utilize useDeferredValue to generate a deferredText, which drives the non-rushed render, while text initiates the rushed render.

    function App() {
      const [text, setText] = useState('');
      const deferredText = useDeferredValue(text);
      return (
        <>
          <input value={text} onChange={e => setText(e.target.value)} />
          <SlowList text={deferredText} />
        </>
      );
    }
    

    In this manner, when the text value changes, deferredText retains the previous text value for the moment. Concurrently, a separate offscreen render of deferredText begins with the new value. Only after that render completes do both text and deferredText show the latest value. The render driven by deferredText is non-rushed and can be halted.

    If there are successive non-rushed render requests for the same component, the earlier non-rushed render stops, provided it has not yet finished, and rendering of the most recent change begins. For instance, if a user types 'A' followed quickly by 'B' into an empty input field, the render for 'A' starts first. If 'B' is entered before the render for 'A' finishes, that render stops, and the render for 'AB' begins.

    useTransition: Separate using a function that changes the input

    Previously, we discussed how both useTransition and useDeferredValue help manage non-urgent renderings. Now, let's explore the distinctions between the two and delve into useTransition.

    :warning: CAUTION

    To clarify the distinction, the example of useDeferredValue has been altered to demonstrate useTransition. It's important to note that useTransition is not compatible with input, as it necessitates synchronous updates. For an explanation of this limitation, refer to the Troubleshooting section on the useDeferredValue page at beta.reactjs.org.

    function App() {
      const [text, setText] = useState("");
      const [isPending, startTransition] = useTransition();
      return (
        <>
          <button
            onClick={(e) => {
              startTransition(() => setText((v) => v + "a"));
            }}
          >
            a key
          </button>
          <SlowList text={text} />
        </>
      );
    }
    

    Different things:

    Whereas useDeferredValue marks a non-urgent render using the value itself, text, useTransition uses setText, the function that changes the value and triggers the render. In cases where text itself is not accessible, having setText alone is sufficient.

    The change to text cannot be displayed instantly because it occurs within startTransition. A distinct render begins for the updated text, but since it is a separate process, the render currently on screen does not see the updated value; it only knows, through isPending, that the separate render is underway. In short, the useTransition hook delays the state change itself, while the useDeferredValue hook postpones certain renders based on the already-changed state.

    Common things:

    If multiple non-rush render requests for the same component are made through startTransition, the initial render—similar to useDeferredValue—will be canceled if it's still ongoing, and a new render with the latest value will commence.

    Wrapping

    React 18's Concurrent Renderer introduces the ability to "distinguish between non-rush renders." Utilizing useTransition and useDeferredValue, it allows for updates to complex structures without compromising usability. Prior to React 18, achieving such seamless usability demanded significant development work. Now, with the streamlined process of "distinguishing between non-rush renders," developers can offer users a smooth user experience.

    29 January 2023

  • Introducing FastTrack: Backend.AI MLOps Platform

    By Jihyun Kang

    Introducing FastTrack, the MLOps platform of Backend.AI. FastTrack allows you to organize each step of data preprocessing, training, validation, deployment, and inference into a single pipeline, and makes it easy to customize each step as you build it. In this article, we'll explain why you need an MLOps platform, how Backend.AI FastTrack came to be, and what makes it unique.

    Rise of MLOps Platforms

    Over the past few years, the IT industry, as well as most industries undergoing digital transformation, has been working hard to adopt AI to make meaningful predictions from scattered data and respond to rapidly changing markets. To make good use of AI in this process, it is necessary to handle various stages such as model training and optimization, hardware selection that accounts for data I/O, and model version management. The concept of MLOps (Machine Learning Operations) emerged from this. If you are unfamiliar with the concept, we recommend skimming our 'MLOps series' before reading this article.

    FastTrack: History

    In 2019, we added the Backend.AI pipeline as a beta release to address the demand for DevOps pipelines. We developed and tested the ability to simplify the process of creating and managing complex pipelines, and to operate unidirectional pipelines that split into two or more paths in the middle. However, with the rise of the MLOps concept and the proliferation of pipeline solutions such as Airflow, MLflow, and Kubeflow, we shifted our development direction to integrating and supporting open-source pipeline tools instead of developing our pipeline feature into a full-fledged product.

    Meanwhile, AI development pipelines became increasingly complex, and it became clear that open-source MLOps pipeline tools could not meet the diverse needs of users. At that point, we decided to revive the pipeline feature of Backend.AI. While revitalizing and prototyping the Backend.AI pipeline, we changed direction toward an MLOps pipeline solution that works with a Backend.AI cluster yet stands independently, so that we could directly address our users' requests.

    With such a colorful history, Lablup's AI/MLOps solution is called 'FastTrack'. The name comes from the fast-track lanes at airports and logistics hubs that expedite passenger or customs clearance. FastTrack became available with Backend.AI 22.09 and is still being refined to meet our customers' standards.

    FastTrack: What it is

    FastTrack is a machine learning workflow platform that enables users to tailor multiple work units based on Backend.AI clusters and execute them as a Directed Acyclic Graph (DAG). Users can run sessions for each stage of the machine learning pipeline, linked through pre- and post-relationships, allowing them to integrate steps like data preprocessing, training, validation, deployment, monitoring, and optimization into a unified workflow as needed. This means users can more efficiently build and reuse models by structuring sessions into workflows and automatically scheduling them after each phase, rather than manually crafting them in a conventional Backend.AI cluster.
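Because a pipeline is a DAG, the order in which its tasks can run is simply a topological sort of their dependencies. A minimal sketch of that scheduling idea using only the Python standard library (the task names are made up for illustration):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on.
pipeline = {
    "train": {"preprocess"},
    "validate": {"train"},
    "deploy": {"validate"},
}

# static_order() yields tasks so that every task appears after its dependencies.
order = list(TopologicalSorter(pipeline).static_order())
print(order)  # ['preprocess', 'train', 'validate', 'deploy']
```

FastTrack's scheduler does far more (sessions, resources, retries), but this is the core ordering constraint a DAG expresses.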

    FastTrack: Structure and features

    FastTrack categorizes workflow templates as pipelines, executed workflows as pipeline jobs, the units of work within a workflow as tasks, and the executed units of work as task instances. The following flowchart outlines the step-by-step progression of work within FastTrack.

    Pipeline

    A pipeline is a structured collection of data and tasks, represented by a Directed Acyclic Graph (DAG). In setting up an AI workflow, constructing a pipeline prompts FastTrack to create a specific folder in your Backend.AI cluster dedicated to pipelines. This setup facilitates the monitoring of training progress via artifacts. FastTrack streamlines the modification of task relationships with an intuitive drag-and-drop interface, allowing for immediate visual feedback in the form of a schematic flow and verification through a YAML file. Moreover, managing pipelines in YAML format allows for easy export, import, and sharing among users.

    Pipeline Job

    A pipeline job is a single execution of a pipeline. Within the FastTrack GUI, the progress of each unit of work is indicated by the color of its node. As with pipelines, the information and relationships of the constituent task instances are managed in YAML. Once all task instances have finished, the pipeline job's status is displayed as either successful or failed.

    Task

    A task is the smallest unit of execution in a pipeline, and resources can be allocated to each task according to its purpose. For example, a model-training task can be given a large share of GPU resources while a preprocessing task receives only what it needs, making overall resource usage more efficient. You can also specify the execution environment for each task: based on the images supported by the Backend.AI cluster, you can use environments such as TensorFlow, PyTorch, Python 3.x, NGC TensorFlow, and NGC PyTorch without a separate Docker build process. Virtual folders created in the Backend.AI cluster can also be mounted on a per-task basis as needed.
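    As a rough illustration of per-task resource and environment settings, a pipeline definition in YAML might look like the sketch below. The field names and image tags here are hypothetical, shown only to convey the shape of such a definition, not FastTrack's actual schema:

```yaml
# Hypothetical pipeline YAML sketch; field names and image tags are
# illustrative assumptions, not FastTrack's actual schema.
name: image-classifier
tasks:
  - name: preprocess
    image: python:3.9            # lightweight environment, modest resources
    resources:
      cpu: 4
      mem: 8g
  - name: train
    depends_on: [preprocess]     # DAG edge: runs after preprocessing
    image: pytorch:1.12          # GPU-capable environment
    resources:
      cpu: 8
      mem: 32g
      gpu: 2                     # GPU-heavy stage gets more accelerator share
    mounts:
      - model-storage            # per-task virtual folder mount
```

    Because pipelines are managed in YAML, a definition like this can be exported, imported, and shared among users, as described above.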

    Task Instance

    Task instances are the physical objects created when a pipeline job is created, based on the task definitions that make up the pipeline. Executing an AI workflow means executing the task instances that make up the pipeline job according to the specified preceding and following relationships. Task instances currently have a 1:1 correspondence with sessions in the Backend.AI cluster, so the state of a task instance is equated with the state of its session, but we plan to expand beyond sessions to other units of execution in the near future.

    Wrap up

    So far, we've covered MLOps with an introduction to FastTrack, the Backend.AI MLOps platform. The latest release of Backend.AI FastTrack is version 22.09 (as of November 2022). Our development plans include a range of user-friendly features, such as debugging pipelines, creating dependencies between pipelines, optimizing resource usage for tasks, and supporting GitHub-based model and data repositories. True to Lablup's vision of empowering anyone to develop and use AI models from anywhere, FastTrack will simplify the process of building automated models. We look forward to your interest in our future endeavors.

    29 November 2022
