Tag : Event

  • 2024 GTC Event Live Rankings: How to Utilize GraphQL Subscription

    By Sujin Kim

    Lablup celebrated the 2024 GTC event by hosting a special event of its own. Participants used the LLM model provided by Lablup to generate images resembling a given reference image, and an NVIDIA RTX 4090 graphics card was raffled off among the highest scorers. 🫢
    In this post, we highlight GraphQL's subscription feature, which was used on the event's leaderboard page so that participants could monitor their scores in real time.

    GTC24 event page

    What is a Subscription?

    A subscription is a mechanism that lets a client receive data in response to a server-side event stream. When data changes in real time, for example in real-time logs or chat applications, updates pushed from the server can be reflected immediately.

    A subscription sends data only when the requested information changes on the server. When data changes are infrequent, this reduces data traffic, which can also lead to cost savings.

    A related approach is setting the fetchPolicy of a GraphQL query to network-only so that the latest results are always fetched, but this differs from what subscriptions offer. It guarantees fresh data by sending a request to the server every time the client needs data, so every request carries a network cost. Setting fetchPolicy to network-only is fine when the latest result only needs to be guaranteed on an explicit action such as a button click, but using it to fetch frequently updated data, such as a stock trading window, would make the network cost significant.
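
    For comparison, here is a minimal sketch of fetching the same leaderboard data with an ordinary query through Relay's useLazyLoadQuery and fetchPolicy: 'network-only'; the query name and component are hypothetical and not part of the event code.

    import React from 'react';
    import { graphql, useLazyLoadQuery } from 'react-relay';

    // Hypothetical query, shown only for comparison; the event page itself uses
    // the subscription defined in the next section.
    const leaderboardQuery = graphql`
      query RankingSnapshot_leaderboardQuery {
        leaderboard {
          submissions {
            id
            name
            score
          }
        }
      }
    `;

    const LeaderboardSnapshot: React.FC = () => {
      // 'network-only' bypasses the Relay store and always asks the server,
      // so every time this query is issued it costs a network round trip.
      const data = useLazyLoadQuery<any>(leaderboardQuery, {}, {
        fetchPolicy: 'network-only',
      });
      return <div>{data.leaderboard.submissions.length} submissions</div>;
    };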

    How to Use

    Defining Subscription

      The usage is similar to a query; just use the subscription keyword.

      const leaderboardSubscriptions = graphql`
        subscription Ranking_leaderboardSubscription {
          leaderboard {
            submissions {
              id
              name
              score
              imageUrl
            }
            lastUpdatedAt
          }
        }
      `;
    

    When an event occurs in the leaderboard stream, a notification is sent to the application, and the client can get the updated result.

    Then the following result can be obtained.

    leaderboard: {
      submissions: [
        {
          id: "76293167-e369-4610-b7ac-4c0f6aa8f699",
          name: "test",
          score: 0.5910864472389221,
          imageUrl: "<IMAGE_URL>"
        },
      ],
      lastUpdatedAt: 1710176566.493705
    }
    

    subscribe

    To display real-time rankings, we call subscribe when the leaderboard page is entered and call dispose to unsubscribe when the user navigates to another page, using useEffect.

    import { useEffect } from 'react';
    import { requestSubscription } from 'react-relay';
    
    useEffect(() => {
      const subscriptionConfig = {
        subscription: leaderboardSubscriptions,
        variables: {},
        onNext: (response: any) => {
          setLeaderboard(response.leaderboard.submissions); // a pre-defined state
        },
        onError: (error: any) => {
          console.error('Leaderboard subscription error', error);
        },
      };
      const { dispose } = requestSubscription(
        RelayEnvironment, // see 'How to set up (+Relay)' below
        subscriptionConfig,
      );
      return () => {
        dispose();
      };
    }, []); // run only on mount/unmount by passing an empty dependency array
    

    requestSubscription

    • Provides a Disposable object as a return value.
    • This Disposable object includes a dispose method to cancel the subscription.

    onNext

    • As data is updated through subscription, it updates the pre-defined state to display real-time rankings.
    • Besides onNext and onError, there are other options such as onCompleted, called when the subscription ends, and updater, which updates the in-memory Relay store based on the server response. For detailed descriptions, refer to the Relay documentation listed in the references.
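
    As a rough sketch of those options, the config below extends the earlier subscriptionConfig with onCompleted and updater; the updater logic is illustrative only and makes assumptions about the store layout rather than reproducing the actual event code.

    // A variant of the earlier subscriptionConfig with the extra callbacks.
    const subscriptionConfigWithExtras = {
      subscription: leaderboardSubscriptions,
      variables: {},
      onNext: (response: any) => {
        setLeaderboard(response.leaderboard.submissions);
      },
      onCompleted: () => {
        // called when the server closes the subscription stream
        console.log('Leaderboard subscription completed');
      },
      onError: (error: any) => {
        console.error('Leaderboard subscription error', error);
      },
      updater: (store: any) => {
        // write the pushed leaderboard payload into the in-memory Relay store
        // so that other components reading it also see the fresh data
        const leaderboard = store.getRootField('leaderboard');
        if (leaderboard) {
          store.getRoot().setLinkedRecord(leaderboard, 'leaderboard');
        }
      },
    };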

    dispose

    • A cleanup function is returned in the useEffect hook and the dispose method is called to end the subscription when the component is unmounted.

    How to set up (+Relay)

    According to the Relay documentation, GraphQL subscriptions communicate with WebSockets, and you can set up a network using graphql-ws. (There is also a way to use subscriptions-transport-ws, but it's deprecated, so we'll pass on that).

    import { ExecutionResult, Sink, createClient } from 'graphql-ws';
    import {
      Environment,
      Network,
      RecordSource,
      Store,
      SubscribeFunction,
      RelayFeatureFlags,
      FetchFunction,
      Observable,
      GraphQLResponse,
    } from 'relay-runtime';
    import { RelayObservable } from 'relay-runtime/lib/network/RelayObservable';
    
    const wsClient = createClient({
      url: GRAPHQL_SUBSCRIPTION_ENDPOINT,
      connectionParams: () => {
        return {
          mode: 'cors',
          credentials: 'include',
        };
      },
    });
    
    const subscribeFn: SubscribeFunction = (operation, variables) => {
      return Observable.create((sink: Sink<ExecutionResult<GraphQLResponse>>) => {
        if (!operation.text) {
          return sink.error(new Error('Operation text cannot be empty'));
        }
        return wsClient.subscribe(
          {
            operationName: operation.name,
            query: operation.text,
            variables,
          },
          sink,
        );
      }) as RelayObservable<GraphQLResponse>;
    };
    
    // Export a singleton instance of Relay Environment
    // configured with our network function:
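    // (fetchFn, which handles query/mutation requests, is assumed to be defined
    // elsewhere in this file; a possible sketch is given further below.)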
    export const createRelayEnvironment = () => {
      return new Environment({
        network: Network.create(fetchFn, subscribeFn),
        store: new Store(new RecordSource()),
      });
    };
    
    export const RelayEnvironment = createRelayEnvironment();
    

    wsClient

    • For url, enter the websocket URL of the GraphQL server.
    • credentials can be set via connectionParams.
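
    If the server expects a token instead of cookie credentials, connectionParams can carry that as well; the sketch below assumes a hypothetical getAuthToken helper and is not part of the event code.

    import { createClient } from 'graphql-ws';

    // Token-based variant of the wsClient above; getAuthToken is a hypothetical
    // helper that returns the current user's access token.
    const wsClientWithToken = createClient({
      url: GRAPHQL_SUBSCRIPTION_ENDPOINT,
      connectionParams: async () => ({
        Authorization: `Bearer ${await getAuthToken()}`,
      }),
    });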

    subscribeFn

    • Defines the subscription behavior of the Observable.
    • The query string is validated in if (!operation.text) { ... }; if it is empty, an error is passed to the sink and execution stops.
    • Finally, return wsClient.subscribe( ... ) performs the actual subscription through the WebSocket client and pushes the payload of the GraphQL operation to the sink (i.e., the Observer).
    • In short, this function handles the GraphQL subscription request and pushes results into the Observable stream whenever a subscription event occurs.

    createRelayEnvironment

    • Creates and returns a new Relay Environment.
    • A Relay Environment is a container that manages the other high-level Relay objects: the network layer, the cache, and so on.
    • Network.create is given fetchFn to handle GraphQL query/mutation requests and subscribeFn to handle subscription requests.
    • The Relay Store, which stores and manages cached data, is created from a RecordSource.
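
    For reference, a minimal fetchFn might look like the sketch below; the endpoint constant and headers are assumptions rather than the exact code used for the event.

    import { FetchFunction } from 'relay-runtime';

    // Hypothetical HTTP endpoint; '/graphql' matches the proxy path configured below.
    const GRAPHQL_ENDPOINT = '/graphql';

    const fetchFn: FetchFunction = async (operation, variables) => {
      const response = await fetch(GRAPHQL_ENDPOINT, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        credentials: 'include',
        body: JSON.stringify({ query: operation.text, variables }),
      });
      // Relay expects a GraphQLResponse-shaped JSON payload here.
      return await response.json();
    };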

    RelayEnvironment

    • The createRelayEnvironment function is called to initialize the RelayEnvironment and export it for later import and use elsewhere.
    • This configured RelayEnvironment is mainly used by QueryRenderer, useLazyLoadQuery, commitMutation, etc.
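
    With the hook-based APIs, the exported environment is usually supplied once near the top of the component tree; the Root component below is only a placeholder sketch, and the import path is an assumption.

    import React from 'react';
    import { RelayEnvironmentProvider } from 'react-relay';
    import { RelayEnvironment } from './RelayEnvironment'; // file path is an assumption

    // Components rendered inside the provider (useLazyLoadQuery, useMutation, ...)
    // resolve this environment automatically; requestSubscription still receives it
    // explicitly, as in the subscribe example above.
    export const Root: React.FC = () => (
      <RelayEnvironmentProvider environment={RelayEnvironment}>
        <div>{/* leaderboard page components go here */}</div>
      </RelayEnvironmentProvider>
    );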

    CORS error

    Initially, I read the config.toml file used on the server side to find the GraphQL server's websocket URL and set that address. However, I kept getting CORS errors and Unauthorized responses every time I sent a request. After a lot of trial and error, and with the help of a colleague, I was able to solve it. (Thank you so much 🥹🙏)

    The solution is to use http-proxy-middleware to set up setupProxy!

    As described in the create-react-app manual, setupProxy lets you proxy requests from your development server to a specific path on the real server, typically to avoid CORS issues in development environments where the frontend and backend are separated.

    The code looks like this:

    const { createProxyMiddleware } = require('http-proxy-middleware');
    
    module.exports = function (app) {
      app.use(
        createProxyMiddleware('/graphql', {
          target: 'http://127.0.0.1:9220',
          changeOrigin: true,
          followRedirects: true,
          ws: true,
        }),
      );
    };
    

    createProxyMiddleware('/graphql', { ... })

    • Registers middleware that handles all requests whose path starts with '/graphql'.

    target: 'http://127.0.0.1:9220'

    • Set the address of the server to which proxied requests will be forwarded. Here we set it to port 9220.

    changeOrigin: true

    • Change the host header of the request to the host of the target. Use this to work around CORS issues.

    followRedirects: true

    • This setting causes the proxy to follow redirects when the server sends a redirect response to a request.

    ws: true

    • Enables WebSocket proxying. The WebSocket connection between the client and the server also goes through this proxy; we set it to true so that subscriptions work.

    Leaderboard page

    After a lot of digging, we've finally finished the leaderboard page! 🎉 A big thank you to everyone who participated. 🙇🏻‍♀️

    Conclusion

    Using GraphQL subscriptions, we were able to implement features like real-time rankings. Although I struggled with how to set it up because of CORS, it was not difficult to use because it is not much different from writing a query.

    I think the biggest advantages of subscriptions are real-time updates and efficiency. Because it receives data from the server in real time, users always see the latest status, and because it only gets updates when the data it needs changes, it can minimize server requests for data that doesn't change often.

    However, it is complex as it requires an implementation of websockets or similar real-time protocols, as well as logic to manage the connection state between the client and server. Although not covered in this article, subscription requires additional work on the server side. And because it requires a real-time connection, it can consume server resources and client resources.

    Therefore, which method is more cost or performance efficient depends on many factors, including the nature of your application, the frequency of data updates, and the number of concurrent users, so use your best judgment.

    references

    • https://relay.dev/docs/v10.1.3/subscriptions/
    • https://relay.dev/docs/guided-tour/updating-data/graphql-subscriptions/#configuring-the-network-layer
    • https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API
    • https://github.com/enisdenjo/graphql-ws
    • https://github.com/apollographql/subscriptions-transport-ws
    • https://graphql.org/blog/subscriptions-in-graphql-and-relay
    • https://create-react-app.dev/docs/proxying-api-requests-in-development

    This post is automatically translated from Korean

    28 March 2024

  • Meet Lablup at NVIDIA GTC 2024: Pushing the Frontiers of AI Technology

    By Lablup

    Greetings from Lablup! We are thrilled to announce our participation in the upcoming NVIDIA GTC 2024 conference, taking place from March 18th to 21st in San Jose, California. As a Silver Sponsor, Lablup is gearing up to showcase our cutting-edge AI technologies and products at this premier event, which is making a comeback as an in-person gathering after a five-year hiatus.

    About GTC 2024

    GTC is the world's largest AI conference, hosted by NVIDIA. With over 300,000 attendees expected to join both online and in-person, this year's event promises an unparalleled opportunity to explore the latest AI tech trends. From the highly anticipated keynote by NVIDIA CEO Jensen Huang to more than 900 sessions, 300+ exhibits, and 20+ technical workshops covering generative AI and beyond, GTC 2024 is set to be a game-changer for anyone interested in the future of AI.

    Lablup at GTC 2024

    At GTC, Lablup will be running an exhibition booth (#1233) where we will demonstrate Backend.AI Enterprise, the only NVIDIA DGX-Ready software in the APAC region. Backend.AI is an AI infrastructure management platform that maximizes the performance of NVIDIA DGX systems and other GPU infrastructures while enhancing usability.

    We will also be introducing FastTrack, our MLOps solution that streamlines and automates the entire development process for generative AI models. Prepare to be amazed by our demo showcasing how FastTrack can automatically fine-tune foundation models for various industries and transform them into chatbots and other practical applications.

    Sessions at GTC

    Lablup will be presenting two sessions at GTC.

    The first session, titled "Idea to Crowd: Manipulating Local LLMs at Scale," will delve into the techniques and use cases for fine-tuning and operating local LLMs across various scales, from personal GPUs to large-scale data centers. We will share how we optimize resource usage through quantization and lightweight techniques, and illustrate the expansion process of personalized LLMs through concrete examples.

    Our second session, "Personalized Generative AI," will explore how to effortlessly run and personalize generative AI models on small-scale hardware such as personal GPUs, PCs, or home servers. We will introduce automated methods for operating and fine-tuning generative AI in compact form factors, offering a glimpse into a future where personalized AI assistants become an integral part of our daily lives.

    Hope to meet you soon!

    We've given you a sneak peek into the exciting technologies and vision Lablup will be presenting at GTC 2024. If you're attending the event in San Jose this March, be sure to visit our booth (#1233) to experience the latest AI tech firsthand and engage with the Lablup team.

    For those joining online, our session presentations will provide valuable insights into the present and future of local LLMs and personalized generative AI. Lablup remains committed to pushing the boundaries of AI technology, making it more accessible and user-friendly for businesses and individuals alike.

    Don't miss this incredible opportunity to witness the power of AI and its potential to revolutionize our world. Join us at GTC 2024 and let's embark on this exciting journey together. See you there!

    15 March 2024

  • 2023 Lablup DevOps Summer Retrospect

    By Gyubong Lee

    In this post, I'll share my experience as a developer at Lablup over the past 9 months.

    Table of Contents

    • Motivation to apply
    • From Intern to DevOps!
    • rraft-py Development
    • Open Source Contribution Academy Regional Sprint Backend.AI Mentoring
    • Attending various conferences
    • 2023 Open Source Contribution Academy
    • Presenting at PyCon
    • Conclusion

    Motivation to apply

    Even before I joined Lablup, I knew that I wanted to have a career where I could continue to help others through the programs I develop, whether as a hobby or during work hours.

    Open source was particularly appealing to me because it meant that not only could my code help others, but that they could freely modify and utilize it if they wanted to.

    One of the things I realized after working on my own project, Arvis, for my graduation project, is that it's not really easy to keep a project going simply because it's something I love to do, as it keeps growing in size. I tried to plan and execute the project carefully from the beginning, but in the end, I realized that I underestimated the time and effort required to maintain the project.

    In that regard, Lablup, which actively encourages and supports open source-related activities and even develops core parts of its source code as open source, was the company of my dreams.

    From Intern to DevOps!

    I spent the last three weeks of my internship at OSSCA Lablup studying and researching distributed systems, specifically implementing the Raft algorithm. Although my job title changed from intern to DevOps, I still felt like I was expanding on what I had learned during the internship, including Raft, to solve the issues I had worked on back then.

    I've been involved in a variety of other activities that I'll mention below, but my main work at the company to date has been writing a Python binding of the Raft algorithm implementation to replace the existing distributed lock structure, including writing rraft-py, and thinking about how to integrate it with Backend.AI.

    rraft-py Development

    rraft-py is a Python binding implementation of tikv/raft-rs, and you can read more about it in the GitHub Readme / Wiki. I'll also be presenting some technical details on the topic in my PyCon 2023 KR talk next month, if you're interested.

    For now, I'm going to focus on my experience as a Lablup developer, leaving aside the technical details of what I learned while developing rraft-py.

    I had to think a lot about rraft-py because it was not just about fixing an issue in Backend.AI, but also about creating a separate project and integrating that project with Backend.AI.

    Overall, there were several milestones in the project, and I feel like I was able to move forward with a little more stability after each one. There was a real sense of accomplishment each time, but there were also many moments of frustration when I realized later that code I had initially written didn't work the way I intended. But Lablup gave me the time for that trial and error, and I think I've gotten to where I am today because of the things I learned along the way that I might otherwise have dismissed as wasted effort.

    Results of running the rraft-py example code

    There's still a long way to go to integrate rraft-py into Backend.AI, but the bottom line is that it's great to have the experience of thinking for yourself and making your own decisions as you continue to evolve your project, and for developers who like this kind of experience, Lablup could be one of the best options out there.

    Open Source Contribution Academy Regional Sprints Backend.AI Mentoring

    While rraft-py development was my main focus, as it required more time than I had anticipated, I also had the opportunity to work on a variety of other projects.

    One of the most memorable experiences was participating in the 1st Daegu Open Source Contribution Academy Regional Sprint as a Backend.AI mentor.

    In fact, I participated as a mentor without a deep understanding of Backend.AI, and to make matters worse, the sprint period was only 2 days, so I was worried about many things.

    To make sure the mentees would learn at least one thing and go home as satisfied as possible, I had to think about how to explain Backend.AI to people who had never heard of it, and how to set up a development environment on different platforms (I usually develop only on macOS with Docker Desktop, but some mentees were working on Windows, so getting their development environments running took a fair amount of trial and error). There was a lot to think about and prepare.

    In the end, I learned far more than expected precisely because these processes were unfamiliar to me, and the mentees kept up better than I expected, so I think it was a meaningful time in which everyone managed to create at least one PR.

    The 1st Daegu Open Source Contribution Academy Regional Sprint

    Participation in various conferences

    We had the opportunity to participate in various conferences and exhibitions such as AI Expo, AWS Summit, and Next Rise. It was great to learn how to explain Backend.AI to different types of people, and it was also interesting to see the different technologies of other companies.

    AI EXPO KOREA 2023

    2023 Open Source Contribution Academy

    As a company with an open source culture, Lablup actively takes part in the Open Source Contribution Academy every year. I participated again this year; since the company also encourages joining projects other than Backend.AI, I have been working on GlueSQL as a mentee.

    I think this culture of freedom is very attractive to developers with a strong desire to grow.

    (In addition to myself, there are two other people involved in other projects in the 2023 Contribution Academy).

    Presenting at PyCon

    Based on my experience in developing rraft-py at my company, I was also given the opportunity to present at 2023 PyCon KR.

    Personally, I'm a bit nervous because it's my first time presenting in public, but I'm doing my best to prepare. For those interested in the talk, I look forward to sharing not only the presentation materials but also the source code and work history on GitHub.

    Conclusion

    Lablup is a company with a strong open source culture, encouraging participation in various open source and community-related events such as the Open Source Contribution Academy (https://www.oss.kr/contribution_academy) and PyCon, and giving developers the opportunity to take initiative in their work.

    I hope to continue to participate, learn, grow, and contribute to open source activities of various nature at Lablup.

    This post is automatically translated from Korean

    18 July 2023
