Tag: Internship

  • 2023 Winter Intern in Lablup

    By Byeongjo Kim

    Overview

    I applied to the Open Source Contribution Academy (hereinafter referred to as the Contribution Academy) hosted by OpenUp, and worked as a fall intern at Lablup Inc. (hereinafter referred to as Lablup) from November to December for 8 weeks. Afterwards, I extended it for an additional 8 weeks from January to February, working a total of 16 weeks.

    In this post, I share the experiences I had at Lablup, my first company as a developer after being discharged from the military.

    Motivation for Applying

    Even before the Contribution Academy, I was interested in Lablup, and coincidentally, I had an opportunity to contribute through the Contribution Academy.

    During the Contribution Academy period, I worked on resolving issues and refactoring the webui of Backend.AI.

    While participating in the Contribution Academy, I felt a lot of affection, interest, and enjoyment towards Backend.AI, and I began to think that I wanted to continue contributing after the program ended.

    Lablup happened to provide an opportunity to work there in conjunction with the Contribution Academy, so I applied without hesitation.

    Onboarding

    For the first 3 weeks of the internship, I underwent an onboarding process.

    I went through implementing a RealTime Chat, setting up the Backend.AI environment, and then the Pebble Seminar in that order.

    RealTime Chat

    This was the first assignment, intended to familiarize me with the core side of Backend.AI's code. I implemented a real-time chat app in Python using the aiohttp, aioredis, and asyncio libraries.

    Since there was no requirement to store the chat contents, I used Redis, an in-memory database.

    I made it so that when a user enters a chat room, they subscribe to that room's channel, and when they send a message, it is published to the subscribers, that is, the other users in the same chat room.
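    Below is a minimal sketch of how such a subscribe/publish flow can be wired up with aiohttp WebSockets and Redis pub/sub. It uses redis.asyncio (the successor to the aioredis package mentioned above), and names such as chat_handler and the chat:{room} channel naming are illustrative rather than the actual assignment code.

    ```python
    import asyncio

    from aiohttp import WSMsgType, web
    import redis.asyncio as redis  # the original assignment used aioredis


    async def chat_handler(request: web.Request) -> web.WebSocketResponse:
        """Each WebSocket connection subscribes to its room's channel and
        relays anything published there; messages typed by the user are
        published to the same channel."""
        room = request.match_info["room"]
        ws = web.WebSocketResponse()
        await ws.prepare(request)

        r: redis.Redis = request.app["redis"]
        pubsub = r.pubsub()
        await pubsub.subscribe(f"chat:{room}")

        async def relay() -> None:
            # Forward every message published to this room to the client.
            async for msg in pubsub.listen():
                if msg["type"] == "message":
                    await ws.send_str(msg["data"].decode())

        relay_task = asyncio.create_task(relay())
        try:
            async for msg in ws:
                if msg.type == WSMsgType.TEXT:
                    # Publish to everyone subscribed to this room.
                    await r.publish(f"chat:{room}", msg.data)
        finally:
            relay_task.cancel()
            await pubsub.unsubscribe(f"chat:{room}")
        return ws


    async def init_app() -> web.Application:
        app = web.Application()
        app["redis"] = redis.from_url("redis://localhost:6379")
        app.add_routes([web.get("/chat/{room}", chat_handler)])
        return app


    if __name__ == "__main__":
        web.run_app(init_app())
    ```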

    RealTime Chat in action

    While I could handle Python at a basic level from preparing for coding tests, I had no experience with libraries like aiohttp, asyncio, and aioredis, so it took me some time to understand and grasp the concepts.

    However, this assignment helped me a lot in understanding the core side of Backend.AI's code, and it was good to be able to study new libraries.

    Setting up the Backend.AI environment

    Since I had already installed Backend.AI during the Contribution Academy, setting up the environment during the internship period wasn't too difficult.

    However, I was well aware that installing Backend.AI is not easy. I had encountered many errors and failures while trying to install it during the Contribution Academy, and the other intern working with me also had a lot of difficulties during the installation process.

    Since I had already experienced those failures and knew the solutions, I was able to help, and we were able to install it quickly and move on to other tasks.

    While setting up the environment, I also configured a virtual machine and a VPN, and set up the environment on the virtual machine as well, so that I could keep working even if something went wrong on my local machine. After that, I mainly used my local machine for development and the virtual machine as a test server. The company's VM Farm, which makes it easy to create and manage virtual machines, was great for setting up development and test environments.

    Pebble Seminar

    After completing the RealTime Chat and setting up the Backend.AI environment, I prepared a short seminar based on understanding the structure and code of Backend.AI. I was tasked with presenting on GraphQL and Relay, which are used in the Backend.AI WebUI.

    While I had experience with GraphQL, I felt that my knowledge was lacking for presenting in front of others, and Relay was a new library to me, so I was quite worried about preparing for the Pebble Seminar and read through many documents to prepare. First, I familiarized myself with the concepts by reading the official documentation for GraphQL and Relay, and then analyzed the Backend.AI code one by one to understand how they were applied and functioning in Backend.AI.

    Pebble Seminar preparation materials

    By analyzing the code while preparing for the Pebble Seminar, I naturally came to understand the code running in the WebUI, and this greatly helped me in finding and resolving issues during the subsequent work.

    Resolving Backend.AI issues and implementing features

    After completing the onboarding, I finally joined the frontend team and started resolving Backend.AI issues and implementing features. I had a coffee chat with the frontend lead to define the categories of work for this internship period:

    1. Creating a Table Column Setting component
    2. Researching E2E Testing
    3. Daily tasks

    During the 8-week internship period from November to December, I created a total of 19 Pull Requests, of which 18 were merged and 1 is still under review. Since I had experience finding and taking on issues during the Contribution Academy, I had less difficulty with that part, and because I enjoyed resolving issues, I was able to resolve more of them than the others.

    Feature Addition PRs

    1. Implementing Table Columns Setting

    https://github.com/lablup/backend.ai-webui/pull/2071

    This was one of the issues I aimed to work on during the internship period. It was the only component that I conceived and implemented from scratch during the fall internship, rather than refactoring an existing component. Before implementing this feature, I thought it was a simple task that I could finish quickly, but things turned out differently from my expectations.

    First, I realized that I had been thinking too simplistically about creating new components. In the past, I had only considered which props a component would receive before building it, but through this issue I learned that when creating a new component, I should invest more time and effort in designing it for extensibility. I also realized that I should pay more attention to how other sites design similar features and what functionality they provide.

    Table Columns Setting

    2. Adding service endpoint and owner columns to the Model Serving page table

    https://github.com/lablup/backend.ai-webui/pull/2047

    Previously, when creating a model service, users had to go to the detail page to check the endpoint, even though it is frequently needed. So there was a request to add the endpoint as a table column. Additionally, since an admin account can see the services of users in the same group, there was a suggestion to add a column showing the service owner. Since the GraphQL fields for retrieving this data were already implemented, I added the fields to the query to fetch the endpoint and service owner data, and then added columns to the table to display them. The owner column is only shown for admin accounts.

    Implementation view. Screen for admin account (left) and user account (right)

    3. Disabling log button for sessions in CANCELLED state

    https://github.com/lablup/backend.ai-webui/pull/2045

    The CANCELLED state means that the container has never been created or failed to be created. Previously, the log button was enabled even for sessions in the CANCELLED state, and if a user clicked the log button, the agent could not find the container information, resulting in a 500 error. In this PR, I made it so that the log button is disabled for sessions in the CANCELLED state, preventing users from clicking it.

    Session in TERMINATED state (session 1) and CANCELLED state (session 2)

    4. Testing and creating a custom hook for dark mode

    https://github.com/lablup/backend.ai-webui/pull/2120

    Before implementing dark mode, I found the components with hardcoded colors and implemented a custom hook named useThemeMode for applying the theme. When creating it, I tried to use the useLocalStorageState hook from ahooks, but contrary to my expectation that states sharing the same key would stay in sync automatically, each hook instance operated independently. To make states with the same key update together when the value changes, I added a custom hook named useLocalStorageGlobalState, and then used it to build the useThemeMode custom hook for setting dark mode.

    Bug fix PR

    1. Allowing signup without invitation token

    https://github.com/lablup/backend.ai-webui/pull/2046

    In the config.toml, when the allowSignupWithoutConfirmation option is set to true, users can sign up without an invitation token. However, when a user clicked the sign up button, an error occurred because the token value was undefined. In this PR, I modified it so that if allowSignupWithoutConfirmation is true, the token variable is not used. Additionally, previously users could modify other input values while the core was processing the data after clicking the sign up button, and the previous data remained when the dialog was closed and reopened. In this PR, I made it so that other input values cannot be entered while data is being processed, and the previous input values are cleared when the dialog is closed.

    2. Displaying the correct screen for the selected sub-tab on the user management page

    https://github.com/lablup/backend.ai-webui/pull/2055

    On the user management page, there are sub-tabs for displaying active users and deactivated users. However, if a user navigated to another page and then returned to the user management page, even though the sub-tab was set to inactive, the screen displayed the list of active users, causing confusion. This PR resolved that issue by remembering the current sub-tab when navigating to another page, and displaying the appropriate screen for that sub-tab when returning to the user management page.

    Before (left) and after (right) the fix

    Extending the internship

    As I resolved issues, the 8-week period flew by, and it was time to wrap up the fall internship.

    Working at Lablup as my first job after being discharged from the military made this an important period for me. During the internship, I was able to see my strengths and weaknesses, the skills I still needed to prepare, and how other developers work. The 2-month period felt very short, and since I had enjoyed the work so much, I wanted to continue. So I told the lead that I wanted to extend the internship, and we agreed to extend it for another 8 weeks, until February. During the fall internship I had thought a lot about my weaknesses, but I had not found my strengths, so I started the extended period with these 3 personal goals:

    1. Find my strengths during this period
    2. Read the documentation whenever I have time
    3. Work even harder, leaving no regrets

    Resolving issues and implementing features during the extended period

    The work during the extended period did not differ much from before. Without the onboarding process and installation, I could focus more on resolving issues.

    Feature Addition PRs

    1. Refactoring ErrorLogList

    https://github.com/lablup/backend.ai-webui/pull/2131

    I refactored the ErrorLog List, which was previously implemented with Lit elements, into React. This was the most satisfying issue for me, as I use the feature frequently myself since the refactoring.

    Before (left) and after (right) refactoring

    During the refactoring, new Search and Error filter features were added.

    Added Search feature (left) and Filter feature (right)

    2. Modal drag functionality

    https://github.com/lablup/backend.ai-webui/pull/2179

    I used the React-draggable library to make modals draggable. By adding the Draggable prop, the behavior can be applied to any modal that needs it.

    Draggable Modal

    By clicking the icon on the left side of the modal title and moving the mouse, the modal can be moved to the desired position on the screen.

    Currently, it is applied to the modal for viewing user information and the modal for changing user settings on the user management page, where you can try it out.

    While it is not being used in many places yet, I think this PR will be useful as more components and features are added.

    Bug fix PR

    1. Modifying Vfolder invitation permissions

    https://github.com/lablup/backend.ai-webui/pull/2143

    There was an issue where the user permissions for group vfolders could not be updated: when trying to modify the permissions, the items were not displayed or selectable properly in the select. Previously the items were rendered with option tags; I changed them to mwc-list-item and adjusted the overflow option to resolve the issue.

    Before (left) and after (right) the PR

    2. ResourceGroupSelect extending outside the card

    https://github.com/lablup/backend.ai-webui/pull/2166

    There was an issue where the ResourceGroupSelect value would be displayed outside the card if it was too large.

    Symptoms of the issue

    To resolve this issue, I set the max-width CSS on the Select component so that it cannot exceed the width of the card.

    Additionally, in this PR, I added a Search feature to the Select component, for which I used the useControllableValue hook from ahooks. The useControllableValue hook helps a component manage its value either through props from the parent or through its own internal state. While it was a simple PR, it took longer than expected since it was my first time using useControllableValue. I was able to resolve the issue with the help of the lead and another intern.

    3. Key pair list not showing when clicking the generate & manage key pair button on the summary page

    https://github.com/lablup/backend.ai-webui/pull/2194

    On the summary page, there are buttons for "Generate New Key Pair" and "Manage Key Pairs." However, when clicking these buttons, instead of showing the key pair list, it simply navigated to the user management page, displaying the user list.

    "Generate New Key Pair" and "Manage Key Pairs" buttons on the summary page

    When clicking the "Generate New Key Pair" button (left) and when clicking "Manage Key Pairs" (right)

    While this issue was not critical, I resolved it because I had experienced a lot of confusion when I first used Backend.AI and didn't fully understand the key pair feature.

    After resolving this issue, I could confirm that the key pair list was displayed on the screen as intended.

    After resolving the issue, when clicking the "Generate New Key Pair" button (left) and when clicking "Manage Key Pairs" (right)

    Completing the Internship

    Thanks to the Contribution Academy, which I joined on a friend's recommendation after being discharged from the military, I was able to contribute at Lablup for an extended period. Since I had no previous internship or project experience at other companies, this was a very important time for me as I started over after my discharge. It was great to discover my strengths and weaknesses, the skills I still lack, and the culture of an open-source company at Lablup. How many companies make you want to go to work every day with their horizontal structure, free atmosphere, pleasant working environment, and good equipment? Although I worked at Lablup for only 4 months, I genuinely wanted to go to work every day, and I felt that at Lablup I could do interesting, meaningful work for a long time. Over those 4 months I also developed a fondness for Backend.AI, the service Lablup provides, and I plan to attend the conference hosted by Lablup every year whenever possible to follow its progress and technologies.

    Lablup Office

    This post was also published on the author's personal blog: https://gee05053.tistory.com/32

    This post is automatically translated from Korean.

    11 March 2024

  • 2023 Summer Intern in Lablup

    By Dongjin Park

    Overview

    I applied to CUop, a collaboration between universities specializing in science and technology, and worked as an intern at Lablup for 8 weeks.

    I wrote about my experiences through onboarding, developing Backend.AI, and attending PyCon.

    Motivation for applying

    I first learned about Lablup through a PyCon presentation session I stumbled upon. I could tell that the company had a lot of technically deep and passionate members. I applied to Lablup because I was interested in Python and asynchronous programming.

    Onboarding

    During the first two weeks, we went through an onboarding process.

    It consisted of implementing a Realtime Web Chat, setting up the Backend.AI development environment, and a code base seminar.

    Realtime Web Chat

    This is an assignment to familiarize yourself with Python asyncio. The task is to develop a real-time chat app using aiohttp, an asynchronous web framework, and Redis, an in-memory database, and it also includes setting up Docker Compose to bring up Python and Redis at once. For more information, see the GitHub README.

    To broadcast messages, we used Redis pub/sub, which delivers messages to subscribers without storing them; since we had no requirement to persist messages, it was a good fit. We also registered the Redis pub/sub listener as a task using asyncio.create_task() so that it runs on the event loop.

    Realtime Web Chat launch screen
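    As a rough sketch of that idea (assuming redis.asyncio in place of aioredis, and with illustrative names such as listen_to_redis and the "chat" channel), the pub/sub listener can be registered on the event loop for the application's whole lifetime like this:

    ```python
    import asyncio
    from typing import AsyncIterator

    from aiohttp import web
    import redis.asyncio as redis  # the assignment used aioredis


    async def listen_to_redis(app: web.Application) -> None:
        """Background task: receive every message published to the chat channel
        and fan it out to the WebSocket clients tracked in app["websockets"]
        (the handlers that add/remove clients are omitted here)."""
        pubsub = app["redis"].pubsub()
        await pubsub.subscribe("chat")
        async for msg in pubsub.listen():
            if msg["type"] != "message":
                continue
            for ws in set(app["websockets"]):
                await ws.send_str(msg["data"].decode())


    async def redis_listener_ctx(app: web.Application) -> AsyncIterator[None]:
        # Register the pub/sub reader on the event loop with create_task();
        # it keeps running for the application's whole lifetime.
        task = asyncio.create_task(listen_to_redis(app))
        yield
        task.cancel()


    def make_app() -> web.Application:
        app = web.Application()
        app["redis"] = redis.from_url("redis://localhost:6379")
        app["websockets"] = set()
        app.cleanup_ctx.append(redis_listener_ctx)
        return app
    ```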

    Through this assignment, I came to understand the basic behavior of asyncio, and I could ask questions to work through the difficult parts. I think Lablup has a great environment for interns and junior developers to grow, because they can freely ask questions through Microsoft Teams.

    Build the Backend.AI development environment

    I installed Backend.AI on a VM Farm and a local VM and tried to run it myself. I read the official documentation, but the process was not smooth. I encountered various errors, which I solved by sharing with my fellow interns and asking questions on Teams.

    💡 A VM farm is an environment where virtual machines are managed and run. Lablup has its own VM Farm.

    This was my first experience developing on a VM Farm with VSCode connected via SSH. To develop Backend.AI, you need to run multiple processes (Manager, Agent, Storage Proxy, Web Server, etc.) and Docker containers, which drains a laptop battery quickly. With the VM Farm, all you need is an SSH connection, so development stays lightweight. In fact, I was able to develop for a long time using the VM Farm when I was out of the office and couldn't charge my laptop.

    Code Base Seminar

    After looking at the code, focusing on the difficult parts of Backend.AI, I prepared a seminar based on my understanding. I was in charge of presenting the Manager, Agent, and GraphQL parts.

    Since Backend.AI is open source, the official documentation is well written. I studied the architecture of Backend.AI by looking at the official documentation to understand the overall structure and asking the employees directly if I had any questions. Since the topic of the presentation was session/kernel creation and scheduling control of Backend.AI Manager, I studied the manager code and analyzed the logs of the manager process.

    A sequence diagram I drew in preparation for the seminar presentation

    Analyzing the logs, I found a bug that caused the session state to change from Preparing back to Pulling. It was rewarding to analyze the logs one by one. It was difficult to analyze the logs in order because of the asynchronous code base, but drawing a call graph and a sequence diagram was very helpful.

    Develop Backend.AI

    After onboarding, I started working on Backend.AI. I worked on GitHub issues and volunteered to help or found and resolved issues myself.

    I created 9 pull requests in the Backend.AI repository and 2 pull requests in the Backend.AI-WebUI repository, and they were all merged!

    I chose high-priority issues that I was confident in addressing. I wanted to make as many contributions as I could in the short time frame of two months.

    First PR

    https://github.com/lablup/backend.ai/pull/1395

    I created a PR to fix a bug I found while preparing for the seminar. It was an easy fix to an API parameter. Still, it was a good experience: I learned about the branch name convention, commit convention, and news fragments, experienced the CI (Continuous Integration) process with GitHub Actions, and worked through some Git-related trial and error along the way.

    💡 A news fragment is a one-sentence Markdown description of what the branch created by the PR is trying to do. You want to keep it simple and clear so that if you see this PR again in the future, you'll know what it was trying to do.

    PRs for vfolder

    When an issue for interns was posted on Teams, I jumped at the chance to take it. I had to learn a new concept called vfolder, but I knew getting to know the product would be important.

    PR (1)

    https://github.com/lablup/backend.ai/pull/1397

    Only admins can create a project-type vfolder, and it should be possible to do so regardless of the max_vfolder_count in the keypair resource policy. However, there was a bug where a project-type vfolder could not be created if the number of user-type vfolders exceeded max_vfolder_count. At first I was confused by the terminology, but by analyzing the code and asking questions, I was able to make sense of it.

    PR (2)

    https://github.com/lablup/backend.ai/pull/1400

    Fixed new bugs discovered while addressing PR (1).

    PR (3)

    https://github.com/lablup/backend.ai/pull/1417

    I found an issue related to the PR (1) issue. DB migration and GraphQL were new to me, but I wanted to try them, so I volunteered. I used a DB migration tool called Alembic, studied GraphQL schema concepts, and modified the query and mutation code to keep backward compatibility. I tried to use cURL to test the modified code, but GraphQL requests are much more verbose than REST API calls, which was cumbersome, so I wrote the tests with help from interns and employees who are familiar with GraphQL. To make testing easier, I wrote a small Python CLI to exercise the modified query and mutation.
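    The sketch below shows what such a helper might look like, assuming a plain aiohttp POST to a GraphQL endpoint; the URL, query, and field names are placeholders rather than Backend.AI's actual API.

    ```python
    """Tiny CLI for exercising a GraphQL query or mutation during development.
    The URL and example query below are placeholders, not Backend.AI's
    actual admin API."""
    import argparse
    import asyncio
    import json

    import aiohttp

    EXAMPLE_QUERY = """
    query ($limit: Int!) {
      vfolders(limit: $limit) { name }
    }
    """  # hypothetical fields for illustration only


    async def run(url: str, query: str, variables: dict) -> None:
        async with aiohttp.ClientSession() as sess:
            payload = {"query": query, "variables": variables}
            async with sess.post(url, json=payload) as resp:
                print(json.dumps(await resp.json(), indent=2))


    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description="Send a GraphQL request")
        parser.add_argument("--url", default="http://127.0.0.1:8081/admin/graphql")
        parser.add_argument("--query", default=EXAMPLE_QUERY)
        parser.add_argument("--variables", default='{"limit": 5}')
        args = parser.parse_args()
        asyncio.run(run(args.url, args.query, json.loads(args.variables)))
    ```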

    WSProxy related PRs

    I volunteered for an issue posted in Teams. There was a bug in the WebUI where it was not possible to delete a session if the wsproxy address of the resource group was invalid. I also wanted to get some experience with WebUI development.

    PR (1)

    https://github.com/lablup/backend.ai/pull/1423

    I read through the WebUI code to troubleshoot the issue, but I couldn't quite grasp the concept of wsproxy. I learned that wsproxy has v1 and v2, but it was not easy to understand the difference between the two, so I asked the employees. The main difference is the path of the traffic: v1 communicates with the container through the manager, while v2 communicates with the container directly, without going through the manager, which is faster. Once I understood what wsproxy does and the difference between v1 and v2, it was much easier to follow the code. I also realized that many people didn't know the difference, and that questions that seem too easy to ask may never have actually been asked within the organization.

    PR (2)

    https://github.com/lablup/backend.ai-webui/pull/1819

    I also modified the WebUI code to fix the issue. To work with the JavaScript code, I learned about callback functions, promise objects, and async/await. I handled errors so that they didn't affect other logic, and extracted duplicated code into functions.

    PR (3)

    https://github.com/lablup/backend.ai-webui/pull/1833

    Since the WebUI needs to remain backwards compatible with Backend.AI 22.09, the CEO pointed out in review that it should also handle HTTP status 404, so I made it handle both 404 and 500.

    PR (4)

    https://github.com/lablup/backend.ai/pull/1466

    However, after the code was merged, a bug occurred: when setting up v1 wsproxy, the return value for wsproxy-version disappeared. This happened because I was modifying the core code and didn't handle all the branches. I fixed the code in a hurry, but it was a simple mistake, and I realized that I should write test code to prevent such mistakes.

    PRs related to Manager

    https://github.com/lablup/backend.ai/pull/1444

    While preparing for the seminar, I had come across an issue with the Manager, the part I had been studying. With my internship coming to an end, I thought I could contribute by resolving an issue in the code I knew best.

    This PR was heavily revised through code review. Initially, I designed the scheduler health check and the scheduler trigger as a single API; after code review, I split them into separate APIs to separate their responsibilities. I had originally stored health information only for the schedule function, but to get a complete picture of the scheduler's health, I also stored it for the prepare and scale_services functions, since these are the three functions that run periodically on the scheduler's global timer, and the trigger API could only be built once all three were tracked. We also changed the design to store the scheduler's health keyed by manager ID, because there may be multiple manager processes.

    The storage for the scheduler's state was also reviewed. Initially, I followed the existing manager state API, which stores manager state in etcd, and did the same for the scheduler state. However, etcd is good for configuration data that must be kept consistent but is slow to write, while Redis is a volatile store that performs well even under frequent reads and writes. Since the scheduler state is read and written periodically and does not require strong consistency, we switched to storing it in Redis.
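    As a rough illustration of that design (not Backend.AI's actual schema; the key layout and field names here are made up), recording each periodic function's last run per manager ID in Redis could look like this:

    ```python
    """Illustrative sketch: record when each periodic scheduler function last
    ran, keyed by manager ID, in Redis. Key names and fields are hypothetical
    and do not reflect Backend.AI's actual implementation."""
    import datetime
    import json

    import redis.asyncio as redis


    async def mark_scheduler_run(r: redis.Redis, manager_id: str, fn_name: str) -> None:
        # fn_name would be one of the periodic functions, e.g. "schedule",
        # "prepare", or "scale_services".
        payload = json.dumps({
            "last_run": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        await r.hset(f"manager.{manager_id}.scheduler", fn_name, payload)


    async def read_scheduler_health(r: redis.Redis, manager_id: str) -> dict:
        # A health-check API can read back all entries for one manager at once.
        raw = await r.hgetall(f"manager.{manager_id}.scheduler")
        return {k.decode(): json.loads(v) for k, v in raw.items()}
    ```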

    Agent-related PRs

    https://github.com/lablup/backend.ai/pull/1472

    Now that I had a good understanding of the Manager part of Backend.AI, I wanted to understand another important component: the Agent. I came across an issue about the Agent, so I took a look.

    While Backend.AI was running, there was a bug where the internal state of the Agent did not match the state of the actual working container. As a result, when creating a session, an InsufficientResource Error was thrown during the resource allocation process, even though there were actually enough resources. When the error occurred, we needed to improve the logging to understand what went wrong during the resource allocation process.

    It took a while to figure out the resource allocation process. The concurrency issues were difficult, and it took a lot of Q&A with the CTO to get a general idea of the flow and what to log.

    A few weeks after my internship ended, the CTO merged it with more than 10 additional commits, adding refactoring and test code. What impressed me was that he wrote test code to reproduce the error. I had to go through a complex manual process (see the PR) to reproduce the error myself, which took a lot of development time, and I could see the difference in productivity there. Of course, I had thought about writing test code, but I assumed the implementation would be too complicated and that writing it would take up the rest of my internship. In the future, I shouldn't be intimidated by writing test code; I should just try it and learn as I go. The refactoring also focused on code readability: the functions in the part I modified had been too long and hard to read, but afterwards they were shorter and the logging was cleaner. I realized that I shouldn't stop at making something work, but should try to write good code.

    Attending PyCon

    On August 12 and 13, Lablup ran a booth at PyCon; companies that sponsor PyCon are given the opportunity to have one. Although I was an intern, I wanted to join the booth activities and listen to some of the talks, and since the company had some PyCon tickets left over, I was able to participate.

    At the Lablup booth, we ran an event challenging attendees to get Llama2 to print a 10-line pyramid with a prompt. The challenge wasn't as easy as it sounds, and the key was to explain it in a way Llama2 could understand. Two lucky people who submitted correct answers were drawn to win a Nintendo Switch and a Logitech mouse. My role at the booth was to help direct PyCon attendees to the event, and since there were plenty of employees at PyCon, I was free to attend any talks I wanted to hear. Since Lablup is an open-source company, it encourages people to contribute to open source and participate in conferences; in fact, four members gave presentations at this PyCon, which shows how much the company values conference participation.

    Lablup Inc. booth

    During the RustPython session, a tool called ruff was introduced as a replacement for the Python lint tools flake8 and isort. Because ruff is written in Rust, it is around 100x faster than flake8. At Backend.AI, we were using flake8 and isort for linting, but after reviewing ruff, the CTO adopted it for the Backend.AI project right there on the stairs of COEX. I was impressed by how fluent he was with the whole process around coding, applying a new tool to the project in a short time and even updating the official documentation, and it made me want to become that kind of proficient developer someday. After PyCon, I read the updated documentation, applied ruff to my Backend.AI development environment, and experienced linting 100x faster. If I hadn't participated in PyCon, I wouldn't have gotten up to speed on such a great tool so quickly, and I hope to keep participating in developer conferences in the future.

    Group photo with members of Lablup Inc.

    End of internship

    During my internship, I tried to get as much experience as possible, and I wanted to contribute a lot. In the end, I was able to experience a lot because I tried to contribute a lot. I was quick to volunteer for issues that came up in Teams, so I was able to understand the core components of Backend.AI: vfolder, wsproxy, web-ui, manager, and agent. I also learned new concepts like DB Migration, GraphQL, and etcd. Although it was a bit physically demanding to attend the conference from morning to evening on the weekend, it was fun to listen to more than 10 presentation sessions, get inspired, and meet various people through booth activities.

    During my internship, I actively asked questions about things I didn't understand, which helped me resolve issues quickly. I think I was able to ask so many questions because of Lablup's horizontal culture and the many people who kindly answered them. I would like to take this opportunity to thank the members for their support.

    I was able to experience a variety of things, including asynchronous programming experience, GitHub collaboration, presenting English seminars, and attending conferences. I feel like I've grown a lot as a developer through this program. I recommend the Lablup internship to anyone who is thirsty for growth.

    This post is automatically translated from Korean

    22 November 2023

  • Recap of my Lablup Internship, Summer 2022

    By Sion Kang

    Introduction

    My first encounter with Lablup was in the summer of 2019, when I attended a GDG Seoul 'Everyone's Toy Story' event because someone I knew was presenting there. I happened to sit in on Lablup's presentation about their GPU virtualization tools for machine learning, which piqued my interest. My fascination with machine learning was at its peak at the time, and the technical depth of the presentation was impressive; it was my introduction to a company pioneering in this field.

    Subsequently, I reconnected with the company at the 42 Seoul open-source hackathon, an event focused on building a product with a designated open-source project within a limited timeframe, where I joined the Backend.AI team. Despite three years having passed since that GDG Seoul presentation, the impression it left was so strong that the company's name was instantly familiar. Throughout the contest, the guidance provided by Jungkyu (CEO of Lablup) was instrumental and contributed significantly to our second-place finish.

    In May 2022, I was working on a school-based internship at a company called SATREC INITIATIVE. As my internship was drawing to a close, I came across an announcement for Lablup's summer internship on Facebook. Recalling the valuable mentorship from Mr. Shin during a competition, and with a keen interest in the developer community and open source, I chose Lablup as my next destination.

    At that period, I was engaged in a project named 42 World, which was a significant phase of learning and growth for me. In my interview with Lablup, I detailed my experiences with the 42 World project, particularly the challenges I faced while implementing a MonoRepo. Interestingly, Lablup had encountered similar issues with MonoRepo in Backend.AI, allowing us to exchange a sense of empathy as developers during our conversation.

    After being accepted into the Lablup internship, I started my internship with four other interns. I joined the company about a week later than the others to give myself time to finish my existing internship and relocate. My first week was dedicated to orientation, familiarizing myself with Backend.AI, and acclimating to the company's culture. The onboarding documentation was comprehensive, contributing to a welcoming environment for newcomers to quickly adapt. A significant portion of my orientation involved installing and configuring Backend.AI. Since I began a week after the other interns, they were able to assist me, which made the orientation process relatively smooth.

    Getting to my work

    During the second week, I began tackling actual tasks. I could choose among the DevOps, Frontend, and Research teams, and each chapter leader offered me a 'good first issue' to work on; I chose the DevOps team. My initial task involved refactoring the 'run' command, which creates a session and runs the specified code, by integrating 'start', which launches the session, and 'exec', which executes the code, thereby minimizing redundant code.

    I faced challenges with the first issue assigned to me, as it required a thorough understanding of Backend.AI's repository structure regardless of the implementation difficulty. I realized that while it's important to know why an issue exists, it's just as important to understand exactly what Backend.AI is trying to accomplish and how the issue should be solved to serve that goal, so that I can solve it correctly.

    After addressing the initial issue, I took on the task of testing a feature in development known as vfolder clone. My work until then had been focused solely on DevOps, so this was my first exposure to Backend.AI-WebUI, a Frontend chapter project. I didn't limit myself to the requested testing; I also used the feature myself, identifying bugs and areas for improvement, which I then documented as issues. This made me feel somewhat guilty, as it seemed I was generating work for other teams, but the Frontend chapter's encouragement to keep contributing eased my concerns. The experience reaffirmed that open-source contributions extend beyond code; there are numerous ways to contribute.

    Improving CI/CD

    As I've always been interested in CI/CD, I was intrigued by the GitHub Actions used in Backend.AI. At the time, Backend.AI could bypass CI with the skip:ci tag, but neither the skip:ci nor the skip:changelog tag took effect if the label was added after the PR was created, which forced an extra commit. Since external contributors lack label permissions and Backend.AI is open source, resolving this seemed important. After exploring GitHub Actions, I discovered a trigger for labeling events, which allowed me to fix the issue. The company appreciated that I found and fixed the problem on my own initiative rather than through an assigned task.

    This experience deepened my interest in Actions and led to further improvements. Noticing that assignees were frequently left unset, I proposed adopting the auto-auth-assign action, which I had used before, to automate assignment. Next, I considered automating labeling as well. Although the labeler action was new to me, I ran multiple tests in a test repository before applying it to Backend.AI, successfully automating the labeling of the various components integrated into the mono-repository. In the process, I saw the benefit of carrying labels over from issues to PRs; unable to find an existing action for this, I created the auto-label-in-issue action myself, learning the GitHub API and action scripting along the way.

    Finishing my Internship

    This internship has been a significant learning experience for me. Although it's my second internship, it's my first within an IT company, as my previous employer wasn't in this sector. What draws me to Lablup is its commitment to open-source products and the active contributions to the community. The company's horizontal structure allowed me to freely share my thoughts, making me wonder if a company could really operate so collaboratively. One of the greatest aspects of Lablup is the autonomy it offers, allowing you to pursue what you're passionate about rather than what you're obliged to do.

    Having developed a strong understanding of the project, I was disappointed when the internship neared its conclusion. Fortunately, Lablup offered me the opportunity to extend it, which I accepted, and I continued working on GitHub Actions issues. As few developers specialize in this area, I recently presented on the subject at GDG Daejeon, which earned me the amusing nickname "Action Mask" from my colleagues.

    I have since shared my internship experience with the Open Source Contribution Academy, where I performed well, and I believe my experience at the company was the foundation for that.

    19 December 2022
