Tag : Event

  • Lablup with PyCon Korea 2024: lambda submit: Starbucks if submit == "duck" else None

    By Jinho Heo

    Hello, I'm Jinho Heo, a Technical Writer at Lablup.

    Lablup participated in PyCon Korea 2024 as a platinum sponsor from October 26th to October 27th, at the Suwon Convention Center.

    Lablup's founding philosophy is strongly tied to open source; it's not a stretch to say that open source is in Lablup's blood. Among the many open source ecosystems, Lablup has a particularly strong relationship with Python. Lablup is an active contributor to open source projects such as aiohttp, which is built on asyncio, and also contributes to Python itself. Our deep ties extend beyond Python to PyCon: members of Lablup have delivered presentations on diverse topics at PyCons around the world, and Lablup has sponsored PyCon Korea five times.

    This year's PyCon was a big one for Lablup, as CEO Jeongkyu Shin and CTO Joongi Kim delivered the keynotes on both days. Jeongkyu Shin gave a talk titled 'Python, PyCon, the Dinosaur Age, and the Planet of Chickens'. The title is confusing at first glance, but he likened two qualities of Python to the dinosaurs, the first evolutionary victors: its rapid growth into the world's number one language, and its status as a key language in the era of massive AI, which creates innovation through massive computation. This leads to the 'chicken', the most consumed meat in the modern world, a metaphor for Python's accessibility and universality, which make it easy for anyone to learn and utilize. The presentation was well received, as Shin entertained the audience with his witty title, verbal skill, and various AI-generated illustrations.

    Joongi Kim, the CTO, delivered a presentation entitled "10 Years of PyCon and Me." Spanning four chapters, he reflected on his PyCon presentations worldwide, covering asyncio's early development, the company's expansion, managing code at scale, and motivating fellow Python enthusiasts. His clear explanation of the challenges developers encounter, or may yet encounter, garnered considerable applause from the audience.

    Kyujin Cho, Senior Software Engineer | Sergey Leksikov, Researcher | Joongi Kim, Chief Technology Officer

    In addition, Kyujin Cho, Senior Software Engineer, Sergey Leksikov, Researcher, and Joongi Kim, CTO, each presented a session. Kyujin presented 'Automated Python Web Framework API Schema Creator: The long way around', where he shared the trial and error he went through to overcome the challenges of automatically generating API documentation with aiohttp. Sergey presented 'Automating CLI commands execution with LLM and LangGraph: A new frontier in Python automation', where he talked about how complex CLIs can be transformed into user-friendly tools using LLMs and the LangGraph framework to improve user experience and operational efficiency. Joongi presented 'Engineering Python for enterprise delivery', where he shared his experience developing packaging and installers for Python app delivery. All three talks were well attended and received great interest from the audience.

    👨‍🏫 Jinho: We'd love to hear your PyCon session recaps.

    👨‍💻 Sergey: Although my presentation was in English, the PyCon Korea organizers offered a real-time translation service, enabling me to deliver my talk smoothly. The highlight of the session was the interaction with eager audience members who approached me with questions about my presentation topic afterwards.

    'AI Score Reader' event (Drawing a duck and swan)

    Lablup organized an 'AI Score Reader' event at PyCon Korea 2024. Attendees could join the event by scanning a QR code on their mobile devices, tablets, or laptops and were invited to draw "ducks and swans" for a chance to win prizes. Every participant had the opportunity to receive a Lablup Folding Pouch, while the grand prize for the best drawing was a Starbucks gift card.

    The stickers Lablup carries to every exhibition

    Why ducks and why swans? They are the characters on the stickers we always bring to exhibitions. While they appear to glide smoothly across the water, they are paddling hard beneath the surface. This imagery serves as a metaphor for our desire: customers enjoy seamless AI services on the front end while Backend.AI handles the complex processes in the background.

    The concept of the event is straightforward. If you've ever used generative AI tools such as Stable Diffusion or DALL-E for image creation, you know that generative AI models are highly sensitive to their input. The output may vary with each attempt, even with identical inputs, or change drastically with minor input modifications. The practice of instructing generative AI to yield specific results is known as 'prompt engineering.' All participants at our booth had the opportunity to engage in some prompt engineering themselves.

    We asked Kyujin Cho, a senior software engineer who was responsible for developing the backend of our “AI Score Reader” event, to give us an overview.

    👨‍🏫 Jinho: Can you explain how the "AI Score Reader" event page was created?

    👨‍💻 Kyujin: The "AI Score Reader" event page's backend was built on three microservices: the Web Application Server (WAS), the image generation pipeline, and the image similarity pipeline. The WAS includes a user database and an API to manage image generation requests. These microservices were all deployed on Backend.AI using the 'Model Service' feature. Similar to the Visutale demo by Sergey from our research team, this serves as a testament to the versatility of Backend.AI in developing AI services.

    👨‍🏫 Jinho: Specifically, how is the similarity between a user's drawing and the given image determined?

    👨‍💻 Kyujin: When a user submits a prompt through the page opened by scanning the QR code, the Web Application Server (WAS) forwards the text to the image generation service and retrieves the created image. The WAS then sends this image to the similarity discriminator, receives a similarity score as a percentage, and delivers both the score and the generated image to the user.
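    To make that flow concrete, here is a minimal sketch of such an orchestration step. This is not Lablup's actual implementation: the endpoint URLs, payload shapes, and function names below are assumptions for illustration only.

    // Hypothetical sketch of the WAS flow described above; the endpoints
    // and payload shapes are placeholders, not the real services.
    const IMAGE_GEN_URL = 'http://image-gen.internal/generate'; // placeholder
    const SIMILARITY_URL = 'http://similarity.internal/score'; // placeholder

    interface SubmissionResult {
      image: string; // generated image (e.g. base64 or a URL)
      similarity: number; // similarity score as a percentage
    }

    async function handleSubmission(prompt: string): Promise<SubmissionResult> {
      // 1. Forward the user's prompt to the image generation service.
      const genRes = await fetch(IMAGE_GEN_URL, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ prompt }),
      });
      const { image } = await genRes.json();

      // 2. Send the generated image to the similarity discriminator.
      const simRes = await fetch(SIMILARITY_URL, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ image }),
      });
      const { similarity } = await simRes.json();

      // 3. Deliver both the score and the image back to the user.
      return { image, similarity };
    }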

    👨‍🏫 Jinho: What should we aim for in our drawings to get a high score?

    👨‍💻 Kyujin: I have no idea how the image similarity pipeline determines similarity. Perhaps asking the AI directly would be a better way to get an answer.

    'Massive' engagement

    Our members perceived it as a "minor event." However, it took less than an hour for this perception to be completely overturned.

    Our booth started to fill up, and the front of the booth became a scene of people staring hungrily at the leaderboard, eager to see where they stood and defend their top placement.

    There were even reports of people sitting in the lounges, staring at their phones to draw swans.

    Let's take a look at the stats.

    On the 26th and 27th of October, the event saw the participation of 428 individuals. Throughout this period, we received 11,639 image creation requests, with the most surprising contribution from a single participant who submitted over 1,000 images, a detail we'll delve into later.

    Unintended Side Effects

    Seated in the booth and observing the leaderboard, I noticed two familiar nicknames: "cloudshin" and "achimnol," which belong to CEO Jeongkyu Shin and CTO Joongi Kim, respectively. They were on a streak, achieving similarity scores above 90%, and kept drawing ducks non-stop, even while heading to lunch.

    DevRel Lead WooYoung Yoo attempted to stop them, yet their enthusiasm proved difficult to diminish. (P.S.1: Of course, we did remove their data when finalizing the scores.)
    (P.S.2: In fact, the participants performed so well that they effortlessly exceeded the two executives' scores, resulting in a dominant podium presence...)

    In addition to the internal side effects, there were also external issues. While we were running the booth on Day 1, a developer screamed in the distance.

    "I believe someone is executing a macro."

    We discovered that duplicate submission requests with the same prompt were being sent every 1-5 seconds: a macro was exploiting the fact that generative AI yields varied results even with identical prompts. However, addressing the issue at the moment of detection was challenging, because the backend developer was slated to present at PyCon on the second day. The situation was deteriorating.

    "I can't submit." "The button doesn't work." "I get a gray blank space instead of an image."

    An attendee opened our event website on their laptop, opened the developer tools, and pointed out that no requests were going through. We found the developer tucked away in a corner preparing for his stage presentation, and discovered that the earlier macro was bombarding our NVIDIA H100 with concurrent requests.

    Day 1 concluded with a trickle of image submissions. While tidying up the booth and reflecting on the day, we recognized the need to avert this issue on Day 2.

    Our poor developer pulled an all-nighter to prepare for Day 2, putting off his presentation prep and adding a couple of new features to prevent a repeat of the incident.

    First, we added same-prompt submission protection

    We added server-side validation of submitted prompts. If a prompt had been submitted previously, the response was changed to return the initially generated image and its similarity score rather than creating a new image.
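    A minimal sketch of this kind of server-side deduplication, assuming an in-memory cache keyed by the normalized prompt (the real service presumably stored results in its user database; generateAndScore stands in for the actual pipeline call):

    // Hypothetical sketch: if a prompt was seen before, return the first
    // result instead of generating and scoring a new image.
    interface CachedResult {
      image: string;
      similarity: number;
    }

    const promptCache = new Map<string, CachedResult>();

    async function submitPrompt(prompt: string): Promise<CachedResult> {
      const key = prompt.trim().toLowerCase(); // normalize before lookup
      const cached = promptCache.get(key);
      if (cached) {
        return cached; // previously submitted: reuse the original result
      }
      const result = await generateAndScore(prompt); // assumed pipeline call
      promptCache.set(key, result);
      return result;
    }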

    Second, we added a captcha for image submissions

    To deter macros, we implemented a captcha for submitting responses. Although this may have been slightly inconvenient for participants, the introduction of the captcha successfully prevented random macros on the second day of the event.

    Other minor improvements

    The scores we received from the backend had quite a few decimal places, but the GUI truncated them to one decimal place for readability. As the competition heated up, however, participants began showing up with scores identical to the first decimal place, so we patched the GUI to display two decimal places and reduce confusion.
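    The GUI-side fix itself is tiny; a sketch, assuming the score arrives as a high-precision percentage:

    // Show two decimal places so near-ties like 91.24 and 91.21
    // no longer render identically as "91.2".
    const formatScore = (score: number): string => `${score.toFixed(2)}%`;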

    Thanks to Kyujin Cho, Senior Engineer, and Soojin Kim, Frontend Engineer, who worked tirelessly behind the scenes to make the event a success 🙂

    Wrapping up PyCon Korea 2024

    Numerous Python enthusiasts attended PyCon Korea 2024, where Lablup engaged with many attendees. Lablup is committed to ongoing contributions and growth within the open source community. Echoing Chef Jeong Ji-sun's words from the hit show "Culinary Class War": "Opening up recipes leads to more recipes. With many contributing ideas, it creates a larger tapestry."

    Here at Lablup, we will continue to think big and never lose sight of the open source spirit that has always been the cornerstone of our foundation.

    20 November 2024

  • 2024 GTC Event Live Rankings: How to Utilize GraphQL Subscription

    By Sujin Kim

    Lablup hosted a special event to commemorate GTC 2024. Participants used the LLM provided by Lablup to create images similar to a given image, and an NVIDIA RTX 4090 graphics card was raffled off among the high scorers. 🫢
    In this post, we highlight GraphQL's subscription feature, which the event's leaderboard page used to let participants monitor their scores in real time.

    GTC24 event page

    What is a Subscription?

    A subscription is a mechanism that lets the client receive data in response to a server-side event stream. When data changes in real time, for example in real-time logs or chat applications, updates pushed from the server can be reflected immediately.

    A subscription sends data only when the relevant information changes on the server. When data changes are infrequent, subscriptions can therefore reduce data traffic, which can also lead to cost savings.

    A similar concept is setting a GraphQL Query's fetchPolicy to network-only to always get the latest results, but it works differently from a subscription. It guarantees fresh data by requesting the server every time the client needs the data, so every request carries a network cost. Setting fetchPolicy to network-only is fine for guaranteeing the latest results whenever, say, a button is clicked; but using it to fetch frequently updated data, like a stock trading window, would incur significant network costs.
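    For contrast, this is roughly what the network-only approach looks like in Relay; the query and hook names here are illustrative, not from the actual event code:

    import { graphql, useLazyLoadQuery } from 'react-relay';

    // Illustrative query: with fetchPolicy 'network-only', every execution
    // bypasses the Relay store and asks the server, guaranteeing fresh data
    // at the cost of one network round trip per call.
    const latestLeaderboardQuery = graphql`
      query RankingLatestQuery {
        leaderboard {
          lastUpdatedAt
        }
      }
    `;

    function useLatestLeaderboard() {
      return useLazyLoadQuery(
        latestLeaderboardQuery,
        {},
        { fetchPolicy: 'network-only' },
      );
    }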

    How to Use

    Defining Subscription

    The usage is similar to a Query; just use the subscription keyword.

      const leaderboardSubscriptions = graphql`
        subscription Ranking_leaderboardSubscription {
          leaderboard {
            submissions {
              id
              name
              score
              imageUrl
            }
            lastUpdatedAt
          }
        }
      `;
    

    When an event occurs in the leaderboard stream, a notification is sent to the application, and the client can get the updated result.

    Then the following result can be obtained.

    leaderboard: {
      submissions: [
        {
          id: "76293167-e369-4610-b7ac-4c0f6aa8f699",
          name: "test",
          score: 0.5910864472389221,
          imageUrl: "<IMAGE_URL>"
        },
      ],
      lastUpdatedAt: 1710176566.493705
    }
    

    subscribe

    To display real-time rankings, we call subscribe when entering the relevant page and call dispose to unsubscribe when navigating to other pages, using useEffect.

    import { useEffect } from 'react';
    import { requestSubscription } from 'react-relay';
    
    useEffect(() => {
      const subscriptionConfig = {
        subscription: leaderboardSubscriptions,
        variables: {},
        onNext: (response: any) => {
          setLeaderboard(response.leaderboard.submissions); // a predefined state
        },
        onError: (error: any) => {
          console.error('Leaderboard subscription error', error);
        },
      };
      const { dispose } = requestSubscription(
        RelayEnvironment, // see 'How to set up (+Relay)' below
        subscriptionConfig,
      );
      return () => {
        dispose();
      };
    }, []); // empty dependency array: run only on mount and unmount

    requestSubscription

    • Provides a Disposable object as a return value.
    • This Disposable object includes a `dispose` method to cancel the subscription.

    onNext

    • As data is updated through subscription, it updates the pre-defined state to display real-time rankings.
    • In addition to onNext and onError, there are other configuration options, such as onCompleted, which is called when the subscription ends, and updater, which updates Relay's in-memory store based on the server response (see the sketch below). For detailed descriptions, refer to the Relay documentation.
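    For example, a config combining these options might look like the following sketch; the updater body is illustrative, not the event's actual store-update logic:

    const subscriptionConfig = {
      subscription: leaderboardSubscriptions,
      variables: {},
      onNext: (response: any) => {
        setLeaderboard(response.leaderboard.submissions);
      },
      onCompleted: () => {
        // Called when the server ends the subscription stream.
        console.log('Leaderboard subscription completed');
      },
      onError: (error: any) => {
        console.error('Leaderboard subscription error', error);
      },
      updater: (store: any) => {
        // Illustrative: mutate Relay's in-memory store from the payload,
        // e.g. via store.getRootField('leaderboard'), when components read
        // data through fragments instead of component state.
      },
    };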

    dispose

    • A cleanup function is returned in the useEffect hook and the dispose method is called to end the subscription when the component is unmounted.

    How to set up (+Relay)

    According to the Relay documentation, GraphQL subscriptions communicate with WebSockets, and you can set up a network using graphql-ws. (There is also a way to use subscriptions-transport-ws, but it's deprecated, so we'll pass on that).

    import { ExecutionResult, Sink, createClient } from 'graphql-ws';
    import {
      Environment,
      Network,
      RecordSource,
      Store,
      SubscribeFunction,
      FetchFunction,
      Observable,
      GraphQLResponse,
    } from 'relay-runtime';
    import { RelayObservable } from 'relay-runtime/lib/network/RelayObservable';
    
    const wsClient = createClient({
      url: GRAPHQL_SUBSCRIPTION_ENDPOINT, // the websocket URL of the GraphQL server
      connectionParams: () => {
        return {
          mode: 'cors',
          credentials: 'include',
        };
      },
    });
    
    const subscribeFn: SubscribeFunction = (operation, variables) => {
      return Observable.create((sink: Sink<ExecutionResult<GraphQLResponse>>) => {
        if (!operation.text) {
          return sink.error(new Error('Operation text cannot be empty'));
        }
        return wsClient.subscribe(
          {
            operationName: operation.name,
            query: operation.text,
            variables,
          },
          sink,
        );
      }) as RelayObservable<GraphQLResponse>;
    };
    
    // A minimal fetch function for queries/mutations over HTTP;
    // the actual fetchFn used in the project is defined elsewhere.
    const fetchFn: FetchFunction = async (operation, variables) => {
      const response = await fetch('/graphql', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ query: operation.text, variables }),
      });
      return await response.json();
    };
    
    // Export a singleton instance of Relay Environment
    // configured with our network functions:
    export const createRelayEnvironment = () => {
      return new Environment({
        network: Network.create(fetchFn, subscribeFn),
        store: new Store(new RecordSource()),
      });
    };
    
    export const RelayEnvironment = createRelayEnvironment();

    wsClient

    • For url, enter the websocket URL of the GraphQL server.
    • credentials can be set via connectionParams.

    subscribeFn

    • Defines the subscription behavior of the Observable.
    • Validate the query string in if (!operation.text) { ... } and if it is invalid, raise an error and abort the execution.
    • Finally, the return wsClient.subscribe( ... ) code actually subscribes to the subscription using the WebSocket client and passes the payload of the GraphQL operation to the sink (i.e., the Observer).
    • In short, this function is responsible for handling the GraphQL subscription request and pushing the result to the Observable stream whenever a subscription event occurs.

    createRelayEnvironment

    • Create and return a new Relay Environment.
    • A Relay environment is a container that manages other high-level Relay objects, network layer, cache, etc.
    • We have assigned functions to fetchFn to handle GraphQL query/mutation requests and subscribeFn to handle subscription requests.
    • To create a Relay Store to store and manage cache data, we used the RecordSource store.

    RelayEnvironment

    • The createRelayEnvironment function is called to initialize the RelayEnvironment and export it for later import and use elsewhere.
    • This configured RelayEnvironment is mainly used by QueryRenderer, useLazyLoadQuery, commitMutation, etc.

    CORS error

    Initially, I read the config.toml file used on the server side to find the websocket URL of the GraphQL server and set the address. However, I kept getting CORS errors and Unauthorized responses every time I sent a request. After a lot of trial and error, and with the help of a colleague, I was able to solve it. (Thank you so much 🥹🙏)

    The solution is to set up a setupProxy using http-proxy-middleware!

    As described in the create-react-app manual, you can set up a setupProxy to forward requests from your development server to a specific path on your real server, typically to avoid CORS issues in development environments where the frontend and backend are separated.

    The code looks like this:

    const { createProxyMiddleware } = require('http-proxy-middleware');
    
    module.exports = function (app) {
      app.use(
        createProxyMiddleware('/graphql', {
          target: 'http://127.0.0.1:9220',
          changeOrigin: true,
          followRedirects: true,
          ws: true,
        }),
      );
    };
    

    createProxyMiddleware('/graphql', { ... })

    • Sets the middleware to handle all HTTP requests originating from '/graphql'.

    target: 'http://127.0.0.1:9220'

    • Set the address of the server to which proxied requests will be forwarded. Here we set it to port 9220.

    changeOrigin: true

    • Change the host header of the request to the host of the target. Use this to work around CORS issues.

    followRedirects: true

    • This setting causes the proxy to follow redirects when the server sends a redirect response to a request.

    ws: true

    • This setting enables the WebSocket proxy. The WebSocket connection between the client and server also passes through this proxy; we set it to true so subscriptions work.

    Leaderboard page

    After a lot of digging, we've finally finished the leaderboard page! 🎉 A big thank you to everyone who participated. 🙇🏻‍♀️

    Conclusion

    Using GraphQL subscriptions, we were able to implement features like real-time rankings. Although I struggled with the setup because of CORS, using subscriptions was not difficult, since it is not much different from writing a query.

    I think the biggest advantages of subscriptions are real-time updates and efficiency. Because it receives data from the server in real time, users always see the latest status, and because it only gets updates when the data it needs changes, it can minimize server requests for data that doesn't change often.

    However, it is complex: it requires WebSockets or a similar real-time protocol, as well as logic to manage the connection state between the client and server. Although not covered in this article, subscriptions also require additional work on the server side. And because they need a persistent connection, they consume both server and client resources.

    Therefore, which method is more cost or performance efficient depends on many factors, including the nature of your application, the frequency of data updates, and the number of concurrent users, so use your best judgment.

    References

    • https://relay.dev/docs/v10.1.3/subscriptions/
    • https://relay.dev/docs/guided-tour/updating-data/graphql-subscriptions/#configuring-the-network-layer
    • https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API
    • https://github.com/enisdenjo/graphql-ws
    • https://github.com/apollographql/subscriptions-transport-ws
    • https://graphql.org/blog/subscriptions-in-graphql-and-relay
    • https://create-react-app.dev/docs/proxying-api-requests-in-development

    This post is automatically translated from Korean

    28 March 2024

  • Meet Lablup at NVIDIA GTC 2024: Pushing the Frontiers of AI Technology

    By Lablup

    Greetings from Lablup! We are thrilled to announce our participation in the upcoming NVIDIA GTC 2024 conference, taking place from March 18th to 21st in San Jose, California. As a Silver Sponsor, Lablup is gearing up to showcase our cutting-edge AI technologies and products at this premier event, which is making a comeback as an in-person gathering after a five-year hiatus.

    About GTC 2024

    GTC is the world's largest AI conference, hosted by NVIDIA. With over 300,000 attendees expected to join both online and in-person, this year's event promises an unparalleled opportunity to explore the latest AI tech trends. From the highly anticipated keynote by NVIDIA CEO Jensen Huang to more than 900 sessions, 300+ exhibits, and 20+ technical workshops covering generative AI and beyond, GTC 2024 is set to be a game-changer for anyone interested in the future of AI.

    Lablup at GTC 2024

    At GTC, Lablup will be running an exhibition booth (#1233) where we will demonstrate Backend.AI Enterprise, the only NVIDIA DGX-Ready software in the APAC region. Backend.AI is an AI infrastructure management platform that maximizes the performance of NVIDIA DGX systems and other GPU infrastructures while enhancing usability.

    We will also be introducing FastTrack, our MLOps solution that streamlines and automates the entire development process for generative AI models. Prepare to be amazed by our demo showcasing how FastTrack can automatically fine-tune foundation models for various industries and transform them into chatbots and other practical applications.

    Sessions at GTC

    Lablup will be presenting two sessions at GTC.

    The first session, titled "Idea to Crowd: Manipulating Local LLMs at Scale," will delve into the techniques and use cases for fine-tuning and operating local LLMs across various scales, from personal GPUs to large-scale data centers. We will share how we optimize resource usage through quantization and lightweight techniques, and illustrate the expansion process of personalized LLMs through concrete examples.

    Our second session, "Personalized Generative AI," will explore how to effortlessly run and personalize generative AI models on small-scale hardware such as personal GPUs, PCs, or home servers. We will introduce automated methods for operating and fine-tuning generative AI in compact form factors, offering a glimpse into a future where personalized AI assistants become an integral part of our daily lives.

    Hope to meet you soon!

    We've given you a sneak peek into the exciting technologies and vision Lablup will be presenting at GTC 2024. If you're attending the event in San Jose this March, be sure to visit our booth (#1233) to experience the latest AI tech firsthand and engage with the Lablup team.

    For those joining online, our session presentations will provide valuable insights into the present and future of local LLMs and personalized generative AI. Lablup remains committed to pushing the boundaries of AI technology, making it more accessible and user-friendly for businesses and individuals alike.

    Don't miss this incredible opportunity to witness the power of AI and its potential to revolutionize our world. Join us at GTC 2024 and let's embark on this exciting journey together. See you there!

    15 March 2024

  • 2023 Lablup DevOps Summer Retrospect

    By Gyubong Lee

    In this post, I'll share my experience as a developer at Lablup over the past 9 months.

    Table of Contents

    • Motivation to apply
    • From Intern to DevOps!
    • rraft-py Development
    • Open Source Contribution Academy Regional Sprint Backend.AI Mentoring
    • Attending various conferences
    • 2023 Open Source Contribution Academy
    • Presenting at PyCon
    • Conclusion

    Motivation to apply

    Even before I joined Lablup, I knew that I wanted to have a career where I could continue to help others through the programs I develop, whether as a hobby or during work hours.

    Open source was particularly appealing to me because it meant that not only could my code help others, but that they could freely modify and utilize it if they wanted to.

    One thing I realized while working on Arvis, my graduation project, is that it's not easy to keep a project going simply because it's something I love to do, as it keeps growing in size. I tried to plan and execute the project carefully from the beginning, but in the end I realized I had underestimated the time and effort required to maintain it.

    In that regard, Lablup, which actively encourages and supports open source-related activities and even develops core parts of its source code as open source, was the company of my dreams.

    From Intern to DevOps!

    The last three weeks of my internship at OSSCA Lablup were spent studying and researching distributed systems, specifically implementing the Raft algorithm. Although my job title changed from intern to DevOps, I still felt like I was expanding on what I learned during my internship, including Raft, to solve the issues I had worked on then.

    I've been involved in a variety of other activities that I'll mention below, but my main work at the company to date has been writing rraft-py, a Python binding of a Raft algorithm implementation intended to replace the existing distributed lock structure, and thinking about how to integrate it with Backend.AI.

    rraft-py Development

    rraft-py is a Python binding implementation of tikv/raft-rs, and you can read more about it in the GitHub Readme / Wiki. I'll also be presenting some technical details on the topic in my PyCon 2023 KR talk next month, if you're interested.

    For now, I'm going to focus on my experience as a Lablup developer, leaving aside the technical details of what I learned while developing rraft-py.

    I had to think a lot about rraft-py because it was not just about fixing an issue in Backend.AI, but also about creating a separate project and integrating that project with Backend.AI.

    Overall, the project had several milestones, and I feel I was able to move forward with a little more stability after each one. There was a strong sense of accomplishment each time, but there were also many moments of frustration when I realized later that the code I had initially written didn't work the way I intended. But Lablup allowed me the time for this trial and error, and I think I've gotten to where I am today because of the things I learned from what I might otherwise have dismissed as wasted effort.

    Results of running the rraft-py example code

    There's still a long way to go to integrate rraft-py into Backend.AI, but the bottom line is that it's a great experience to think for yourself and make your own decisions as you keep evolving a project. For developers who enjoy this kind of experience, Lablup could be one of the best options out there.

    Open Source Contribution Academy Regional Sprint Backend.AI Mentoring

    While rraft-py development was my main focus, as it required more time than I had anticipated, I also had the opportunity to work on a variety of other projects.

    One of the most memorable experiences was participating in the 1st Daegu Open Source Contribution Academy Regional Sprint as a Backend.AI mentor.

    In fact, I participated as a mentor without a deep understanding of Backend.AI, and to make matters worse, the sprint period was only 2 days, so I was worried about many things.

    To make sure the mentees learned at least one thing and went home as satisfied as possible, I had to think about how to explain Backend.AI to people who didn't know it at all, and how to build a development environment on different platforms. (Personally, I usually develop only in a macOS + Docker Desktop environment, but some of the mentees worked on Windows, so setting up their development environments took plenty of trial and error.) There was a lot to think about and prepare.

    In the end, I learned much more than I expected precisely because these processes were unfamiliar to me, and the mentees followed along better than I anticipated, so I think it was a meaningful time, with everyone creating at least one PR.

    The 1st Daegu Open Source Contribution Academy Regional Sprint

    Attending various conferences

    We had the opportunity to participate in various conferences and exhibitions such as AI Expo, AWS Summit, and Next Rise. It was great to learn how to explain Backend.AI to different types of people, and it was also interesting to see the different technologies of other companies.

    AI EXPO KOREA 2023

    2023 Open Source Contribution Academy

    As a company with an open source culture, Lablup actively participates in the Open Source Contribution Academy every year. This year I participated again; since Lablup encourages involvement in various projects besides the Backend.AI team's own, I have been working on GlueSQL as a mentee.

    I think this culture of freedom is very attractive to developers with a strong desire to grow.

    (In addition to myself, there are two other people involved in other projects in the 2023 Contribution Academy).

    Presenting at PyCon

    Based on my experience in developing rraft-py at my company, I was also given the opportunity to present at 2023 PyCon KR.

    Personally, I'm a bit nervous because it's my first time presenting in public, but I'm doing my best to prepare. For those who are interested, I look forward to sharing not only the presentation materials but also the source code and work history on GitHub.

    Conclusion

    Lablup is a company with a strong open source culture, encouraging participation in various open source and community-related events such as the Open Source Contribution Academy (https://www.oss.kr/contribution_academy) and PyCon, and giving developers the opportunity to take initiative in their work.

    I hope to continue to participate, learn, grow, and contribute to open source activities of various nature at Lablup.

    This post is automatically translated from Korean

    18 July 2023
