Webinar
[TheElec] Backend.AI - Lablup's Strategy to Change the Future of AI Infrastructure
By Jeongkyu Shin
- 00:00 Introduction
- 01:35 Introduction to Lablup, a company that develops and provides an AI infrastructure platform
- 03:27 Major customers are large AI developers, including cloud companies
- 07:12 Increasing complexity of AI infrastructure management
- 11:10 Discussion on Backend.AI's management capabilities and AI infrastructure optimization
- 14:10 Distribution of Lablup's main customers, domestic and international sales ratio
- 16:51 Key advantages of Lablup solutions
- 19:25 Explanation of GPU virtualization technology advantages and competitiveness
- 22:10 Explanation of Lablup's ongoing work to support and optimize for new hardware
- 25:00 Discussion on Lablup's overseas competitors and competitiveness in the market
- 28:07 Lablup's future goals
- 31:15 Discussion on the AI farm project and the importance of promoting global competitiveness
3 September 2024
[TheElec] AI Gold Rush, Levi’s, and Lablup
By Joongi Kim
- 00:00 Introducing Joongi Kim, CTO of Lablup
- 07:55 How does open source relate to the company's business direction?
- 10:25 Lablup's revenue, funding, etc.
- 12:17 Backend.AI, a solution used by customers
- 18:33 What makes Backend.AI different?
- 22:58 About churned customers and reasons for churning
- 24:58 What is the new business structure?
- 34:43 Lablup's overseas sales performance
- 36:35 Why was Lablup selected for the semiconductor sector to foster new industry startups?
- 39:57 What is the status and future of NPU-based work?
- 47:46 Lablup's shareholding structure and revenue situation
27 October 2023
[TalkIT] LLM Evolution and Generative AI Enterprise Use Issues and Alternatives
By Lablup
Recommended for these people!
- AI-related departments, companies interested in utilizing LLMs, companies considering AI adoption
Premium Webinar Key Points
- 01 Trends in the development of LLM in the last 5 years
- 02 Issues and alternatives for LLM commercialization in general companies
- 03 Responding to accelerating AI changes: developers, operators, and executives
- 04 New opportunities triggered by ChatGPT
- 05 The identity of AI models that are good at language: digital intelligence vs. the brain
Since ChatGPT launched last November, LLM-related technologies have been announced almost daily. Jeongkyu Shin, CEO of Lablup, the company behind Backend.AI, the AI operation platform highlighted at the NVIDIA GTC conference, explains as simply as possible how LLMs are evolving and how companies should prepare to utilize them.
20 October 2023
[allshow TV][AI/DX] - FastTrack: AIOps for Hyperscale
By Joongi Kim
2 June 2023
[Naver Cloud] Hyperscale Enterprise AIOps Platform, Backend.AI
By Joongi Kim
Backend.AI is a deep learning platform that supports AI model development and high-performance computing by efficiently utilizing cloud and data center computing resources.
It uses GPU partitioning virtualization for containers to create an orchestration layer optimized for high-density, highly integrated workloads, enabling the execution of large-scale computational workloads.
This presentation will guide you through the development background of Backend.AI, its technical features and advantages, and how you can utilize Backend.AI on the Naver Cloud Platform.
13 July 2022
[TalkIT] Why is MLOps Necessary, and What Does it Consist of? feat. 2022 MLOps Ecosystem Trends
By Joongi Kim, Jeongkyu Shin
- 00:00 Participation in NVIDIA GTC 2022 MLOps Panel Talk
- 01:53 Lablup's Competitiveness in MLOps
- 03:59 Why MLOps is Necessary
- 07:04 Orchestrator
- 08:43 Distributed/Parallel Processing Tools
- 09:57 MLOps Modules (General, Serving)
- 11:23 Open Source MLOps Operation Tools
- 12:14 MLOps Issues in 2022
8 April 2022
[TalkIT] MLOps Ecosystem 2022 Outlook and Practical Roadmap for Hyperscale AI Acceleration with Backend.AI
By Joongi Kim, Jeongkyu Shin
The key to accelerating AI development and deployment processes is MLOps. We invite you to explore the MLOps software platform ecosystem that manages the entire process from data collection and processing, AI model training, to AI services. Additionally, we'll introduce you to a more evolved hyperscale AI practical roadmap through the MLOps ecosystem integration of Backend.AI, the first NVIDIA DGX-Ready Software in the Asia-Pacific region.
[Session Guide]
- MLOps Ecosystem and 2022 Outlook / Jeongkyu Shin, CEO (Lablup)
- Accelerating Hyperscale AIOps with Backend.AI / Joongi Kim, CTO (Lablup)
7 April 2022
[MS Dev Korea] - Exploring GitHub Codespaces and Devcontainers | Azure One Step
By Kyujin Cho
Instead of setting up and configuring a development environment for application development each time, you can use GitHub Codespaces to access the same Visual Studio Code-based development environment in your web browser. Devcontainers then let developers easily set up that development environment exactly as intended. We explore GitHub Codespaces and Devcontainers with a demo by Kyujin Cho, a developer at the startup Lablup.
6 April 2022
[allshow TV] Backend.AI MLOps Platform for NetApp ONTAP AI, the Platform for NVIDIA DGX Foundry, and Hyperscale AI Infrastructure
By Jeongmook Kim
(1) We'll examine why NetApp ONTAP AI was chosen as the foundational platform from NVIDIA DGX POD certification to DGX Foundry. We'll also take a brief look at successful cases, including TOP500 #72 and public cloud services.
(2) Through Backend.AI, the only NVIDIA DGX-Ready Software in the Asia-Pacific region, we'll explore how to optimally operate hyperscale AI infrastructure that includes computational resources like GPUs, storage, and networks.
Additionally, we'll discover how to effectively build and manage AI services at the enterprise level, along with customer case studies.
17 March 2022
[TalkIT] Parallel Processing Storage and Container-based GPU Virtualization for AI Infrastructure
By Jeongmook Kim
PureStorage and Lablup delve deeply into case studies and A-to-Z strategies of companies that have already experienced AI platform optimization and project success.
The strategic choice for successful enterprise AI implementation has become a new challenge and a daily concern for many companies and decision-makers. Many companies are investing resources in artificial intelligence projects and striving to build the best infrastructure, yet many still find the approach difficult or lack confidence in the data solutions available to them.
So, how about finding the answers by examining companies that are one step ahead in pursuing, investing in, and succeeding with AI projects? By closely examining real company cases, we aim to share insights into questions such as: What common difficulties do companies face when pursuing AI projects? What strategies did they use to overcome them and build the best platform? How did they validate their AI architecture and design, and with which solutions did they implement them?
Joining us are PureStorage and Lablup Inc., who have provided excellent AI consulting support and led projects to success for large enterprises, startups, universities, hospitals, and public institutions.
28 July 2021
[SW-Centered Society] - Interview with Jeongkyu Shin of Lablup
By Jeongkyu Shin
21 February 2021