Tag: NVIDIA GTC

  • Personalized Generative AI: Operation and Fine-Tuning in Household Form Factors

    By 신정규

    The advent of personal computers has fundamentally transformed our lives over the past 40 years. To name just a few changes, we witnessed the digitization of life through the internet and smartphones. Now we're on the cusp of a new era, moving beyond the age of PCs to the age of personalized agents (PAs), or personalized artificial intelligence. At the heart of this transformation is the rapid progress of large language models (LLMs) and multimodal AI. It's no longer about generating smart replies from somewhere in the cloud; with the advent of powerful consumer GPUs, it's now possible to run a personalized generative AI at home.

    We'll introduce automated methods for running generative AI models on compact form factors like PCs or home servers, and for personalizing them via fine-tuning. We'll show how PAs can be more closely integrated into our daily lives. Furthermore, we'll showcase the results obtained through these methods in a live demo, inviting you to contemplate the future of personalized AI with us.
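    The fine-tuning side of such personalization typically rests on parameter-efficient methods. As a rough illustration of why it fits on a home GPU, here is a minimal pure-Python sketch of the low-rank adapter (LoRA) idea, where a frozen weight matrix W is augmented by two small trainable matrices B and A. All names and numbers here are illustrative; real setups use frameworks such as PyTorch with PEFT.

    ```python
    def matmul(x, y):
        """Naive matrix multiply for small illustrative matrices."""
        return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
                 for j in range(len(y[0]))] for i in range(len(x))]

    def add(x, y):
        """Element-wise matrix addition."""
        return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(x, y)]

    d, r = 4, 1                                  # model dim 4, adapter rank 1
    W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen
    B = [[0.1] for _ in range(d)]                # d x r, trainable
    A = [[0.2, 0.0, 0.0, 0.0]]                   # r x d, trainable

    W_adapted = add(W, matmul(B, A))             # effective weight: W + B @ A
    trainable = d * r + r * d                    # 8 adapter parameters
    full = d * d                                 # 16 for full fine-tuning
    print(trainable, full)
    ```

    The savings grow quadratically with model dimension: for a 4096-wide layer at rank 8, the adapter holds roughly 65K parameters against 16M for the full matrix.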

    1 March 2024

  • From Idea To Crowd: Manipulating Local LLMs At Scale

    By 신정규, 김준기

    Large language models (LLMs) are the pinnacle of generative AI. While cloud-based LLMs have enabled mass adoption, local on-premises LLMs are garnering attention for personalization, security, and air-gapped setups. Ranging from personal hobbies to professional domains, both open-source foundation LLMs and fine-tuned models are utilized across diverse fields. We'll introduce the technology and use cases for fine-tuning and running LLMs at scales ranging from a single PC GPU to data centers serving mass users. We combine resource-saving and model-compression techniques like quantization and QLoRA with vLLM and TensorRT-LLM. Additionally, we illustrate the scaling-up process of such generative AI models through the fine-tuning pipeline, with concrete and empirical examples. You'll gain a deep understanding of how to operate and scale personalized LLMs, along with inspiration for the possibilities this opens up.
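    As a back-of-the-envelope illustration of the quantization idea mentioned above (the "Q" in QLoRA), here is a minimal sketch of symmetric integer quantization with a per-tensor scale. The function names and values are illustrative, not the bitsandbytes or TensorRT-LLM API; production quantizers use per-block scales and formats like NF4.

    ```python
    def quantize(weights, bits=8):
        """Map float weights to signed integers with a per-tensor scale."""
        qmax = 2 ** (bits - 1) - 1                       # e.g. 127 for 8-bit
        scale = max(abs(w) for w in weights) / qmax or 1.0
        return [round(w / scale) for w in weights], scale

    def dequantize(q, scale):
        """Recover approximate float weights from the integers."""
        return [v * scale for v in q]

    weights = [0.42, -1.27, 0.05, 0.99]
    q, scale = quantize(weights)
    restored = dequantize(q, scale)
    # Round-trip error is bounded by half a quantization step (scale / 2).
    print(max(abs(a - b) for a, b in zip(weights, restored)) <= scale / 2)
    ```

    Storing 8-bit (or 4-bit) integers plus one scale instead of 32-bit floats is what shrinks a model's memory footprint by roughly 4x (or 8x), which is why quantized LLMs fit on consumer GPUs.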

    1 March 2024

  • MLOps Platforms, NVIDIA Tech Integrations to Scale for AI and ML Development

    By 김준기

    Lablup CTO 김준기 participated in the joint webinar below to introduce Backend.AI.

    MLOps is the key to accelerating AI deployment. In this session, you'll hear interviews with three ISVs and their customers about how they successfully implemented MLOps solutions to accelerate their respective AI deployments. We'll cover the most common deployment challenges enterprise customers face and focus on how the MLOps partner ecosystem can help solve them.

    You'll hear how Run.AI and Wayve led the scaling of AI/ML in autonomous vehicles, and how Weights & Biases worked with John Deere/Blue River Technology to advance AI in agriculture. Finally, you'll hear how Backend.AI helped make LG Electronics' smart factories more efficient.

    This session will highlight specific use cases and best practices across the MLOps lifecycle. Learn about NVIDIA MLOps partners and how they have deployed with enterprise customers. The session will showcase real-world solution examples you won't want to miss.

    1 March 2022

  • Leveraging Heterogeneous GPU Nodes for AI

    By 신정규

    In this session, Lablup Inc. will present three solutions for achieving optimal performance when combining various GPUs into a single AI/high-performance computing cluster. The solutions are based on Backend.AI, an open-source container-based resource management platform specialized for AI and high-performance computing. The session includes real-world examples that provide AI developers and researchers with an optimal scientific computing environment and massive cost savings.
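    The talk's abstract doesn't publish scheduling internals, but the core placement problem in a heterogeneous cluster can be sketched abstractly: match jobs to GPU nodes under capacity constraints. The node and job structures below are hypothetical, not the Backend.AI API; a minimal best-fit heuristic is shown purely for illustration.

    ```python
    nodes = [
        {"name": "node-a", "gpu": "V100", "free_mem_gb": 32},
        {"name": "node-b", "gpu": "T4",   "free_mem_gb": 16},
        {"name": "node-c", "gpu": "P40",  "free_mem_gb": 24},
    ]

    def place(job, nodes):
        """Best-fit placement: choose the node with the least free memory
        that still satisfies the job, keeping large GPUs free for big jobs."""
        candidates = [n for n in nodes if n["free_mem_gb"] >= job["mem_gb"]]
        if not candidates:
            return None                      # no node can host this job
        chosen = min(candidates, key=lambda n: n["free_mem_gb"])
        chosen["free_mem_gb"] -= job["mem_gb"]
        return chosen["name"]

    print(place({"mem_gb": 20}, nodes))      # the 24 GB node is the tightest fit
    ```

    Best-fit is one of several reasonable heuristics here; a production resource manager also has to weigh GPU model capabilities, interconnects, and fair sharing, not just free memory.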

    1 October 2020

  • Accelerating Hyperparameter Tuning with Container-Level GPU Virtualization

    By 신정규, 김준기

    It's commonly believed that hyperparameter tuning requires a large number of GPUs to get quick, optimal results. It's generally true that higher computation power delivers more accurate results quickly, but to what extent? We'll present our work and empirical results on finding a sweet spot that balances both cost and accuracy by exploiting partitioned GPUs with Backend.AI's container-level GPU virtualization. Our benchmark includes distributed MNIST, CIFAR-10 transfer learning, and TGS salt identification cases using AutoML with network morphism and the ENAS tuner with NNI running on Backend.AI's NGC-optimized containers. Attendees will get a tangible guide to deploying their GPU infrastructure capacity in a more cost-effective way.
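    The "sweet spot" the talk searches for can be sketched as a stopping rule on diminishing returns: keep adding fractional GPU partitions until the marginal accuracy gain per unit of cost falls below a threshold. The accuracy curve below is a made-up stand-in for the talk's real benchmark numbers, used only to make the trade-off concrete.

    ```python
    def accuracy(units):
        """Hypothetical diminishing-returns curve: more compute, smaller gains."""
        return 0.95 - 0.2 / units

    def sweet_spot(max_units, cost_per_unit=1.0, min_gain_per_cost=0.005):
        """Add fractional GPU partitions until the marginal accuracy gain
        per unit of cost drops below the threshold, then stop."""
        best = 1
        for units in range(2, max_units + 1):
            gain = accuracy(units) - accuracy(units - 1)
            if gain / cost_per_unit < min_gain_per_cost:
                break
            best = units
        return best

    print(sweet_spot(16))
    ```

    In practice the accuracy curve is measured empirically per workload, which is exactly what the MNIST, CIFAR-10, and TGS benchmarks in the talk provide.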

    1 October 2020
