
    Learning from history

    By Sanghyeon Seo

    ChatGPT and the other large language models that have gained traction over the last half-decade didn't just fall out of the sky. We've seen it many times throughout history: the cumulative development of a technology reaches an inflection point and rapidly transforms society. Sometimes, the path to that inflection point is strikingly similar for technologies that evolved in very different times and contexts.


    Mankind has long wanted to fly; it's no coincidence that the theme song of Civilization VI is "Sogno di Volare" ("The Dream of Flight").

    How did the Wright brothers realize their dream? In 1899, Wilbur Wright wrote to the Smithsonian Institution and asked them to send him what was known about flight. After three months of reviewing the material he received, Wilbur concluded that little about flight was reliably known. Plausible theories had turned out to be false and improbable theories had turned out to be true, he wrote, so he could not believe anything he had not seen for himself.

    What Wilbur wanted to know from his literature review was this: What do we need to know in order to fly? How much of that is already known? What problems remain to be solved? Surprisingly, Wilbur was able to answer all of these questions from his review, something his competitors were unable to do.

    In a 1901 lecture, Wilbur summarized his conclusion: "There are three problems in flying: you need wings to sustain the machine in the air, an engine to propel it, and a means of balancing and steering it."[^1]

    [^1]: Some Aeronautical Experiments (Wilbur Wright, 1901).

    Wilbur saw that the wing problem and the engine problem had been solved to a reasonable extent, so the remaining problem was control. But to solve the control problem he needed an airplane, and to build an airplane he needed to solve the control problem. Wilbur broke this circularity by deciding to solve the control problem on a glider first.

    To test a glider, he needed high hills and strong winds, and the ground had to be soft sand for the safety of the experimenters. In 1900, Wilbur requested data from the Weather Bureau to find the windiest places in the United States. The staff at the Kitty Hawk weather station wrote back that the beach next to the station was free of obstructions and would be suitable for the experiments.[^2]

    [^2]: Letter from J. J. Dosher, Weather Bureau, to Wilbur Wright, August 16, 1900.

    The 1901 experiments were a disappointment: the wings didn't generate enough lift. The Wright brothers had used Otto Lilienthal's data to calculate the wing area, and they began to suspect its accuracy.

    After analyzing their experimental data, they concluded that John Smeaton's value of the proportionality constant, which had been used without question for over 100 years, including by Otto Lilienthal, was incorrect.

    To analyze the lift of wings systematically, without the time-consuming and laborious glider experiments, the Wright brothers built a wind tunnel. Their analysis showed that Otto Lilienthal's data was in fact accurate once the proportionality constant was corrected, but that the wing Lilienthal had used was inefficient.

    The 1902 glider that resulted from this analysis had a larger wing area to compensate for the revised value of Smeaton's constant and a flatter profile to increase efficiency. (They reduced the camber of the wing from 1/12 to 1/24.) The new glider flew very well.

    That's how the Wright brothers were able to make their historic first flight at Kitty Hawk in 1903.


    Humans have long wanted to talk to machines; countless science fiction novels and films bear witness to it.

    To create AI, OpenAI had to solve three problems: computing infrastructure, models, and data. You can think of the computing infrastructure as the engine, the models as the wings, and the data as the controls.

    To manage the compute infrastructure, OpenAI used Kubernetes, but it wasn't something they could simply use off the shelf. At 2,500 nodes, they ran into the Linux kernel's ARP cache overflowing,[^3] and at 7,500 nodes, they had to fix a bug in Kubernetes to make anti-affinity usable.[^4]

    [^3]: Scaling Kubernetes to 2,500 nodes, January 18, 2018.

    [^4]: Scaling Kubernetes to 7,500 nodes, January 25, 2021.
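    For readers unfamiliar with the term: anti-affinity lets you tell the Kubernetes scheduler to keep certain pods apart, for example spreading the replicas of a job across physical nodes. Below is a minimal sketch of what such a rule looks like in a pod spec, written as a Python dict for illustration; the "app: trainer" label is a hypothetical placeholder, not OpenAI's configuration.

    ```python
    import json

    # A minimal sketch of a Kubernetes pod anti-affinity rule, mirroring
    # the pod-spec YAML as a Python dict. Pods matching the label selector
    # are kept on different nodes ("kubernetes.io/hostname" is the
    # per-node topology key).
    pod_spec = {
        "affinity": {
            "podAntiAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": [
                    {
                        "labelSelector": {"matchLabels": {"app": "trainer"}},
                        "topologyKey": "kubernetes.io/hostname",
                    }
                ]
            }
        }
    }

    print(json.dumps(pod_spec, indent=2))
    ```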

    (Advertisement: Lablup's Backend.AI has already been used in practice on large clusters for AI training and inference, solving a number of scaling problems and implementing its own scheduling algorithms to support features like affinity and anti-affinity.)

    The scaling law for AI is the equivalent of the lift equation for an airplane. Just as the lift equation describes the lift of a wing in terms of the wing area, the lift coefficient, and a proportionality constant, the scaling law describes the loss of an AI model in terms of the model size, the data size, and power-law exponents.
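    Side by side, and assuming the standard forms (the lift equation as the Wrights used it, and the Chinchilla-style parameterization of the scaling law from the second paper cited below), the two formulas look like this:

    ```latex
    % Lift equation: k is Smeaton's coefficient, S the wing area,
    % V the airspeed, and C_L the lift coefficient.
    \[ L_{\mathrm{lift}} = k \, S \, V^{2} \, C_{L} \]

    % Scaling law: N is the model size (parameters), D the data size
    % (tokens), \alpha and \beta the power-law exponents, and E the
    % irreducible loss.
    \[ L_{\mathrm{loss}}(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} \]
    ```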

    Just as the Wright brothers discovered that John Smeaton's proportionality constant was 0.003 rather than 0.005, the power-law exponent of the scaling law was initially thought to be 0.73[^5] but was later found to be about 0.50.[^6] The incorrect value had been calculated because the learning rate schedule was not adjusted to the size of the data.

    [^5]: Scaling Laws for Neural Language Models, January 23, 2020.

    [^6]: Training Compute-Optimal Large Language Models, March 29, 2022.
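    A quick worked example of why this correction matters. Under the reading used in both papers, the exponent governs how the compute-optimal model size N grows with the compute budget C, i.e. N ∝ C^a (the 1,000x budget below is an arbitrary illustration):

    ```python
    # How much bigger should a compute-optimal model get when the compute
    # budget grows 1,000x? Assumes N_opt scales as C**a.
    def optimal_model_growth(compute_ratio: float, exponent: float) -> float:
        return compute_ratio ** exponent

    for a in (0.73, 0.50):
        growth = optimal_model_growth(1000, a)
        print(f"exponent {a}: grow the model ~{growth:.0f}x")

    # exponent 0.73: grow the model ~155x
    # exponent 0.5: grow the model ~32x
    ```

    The corrected value implies that, for a fixed budget, model size and data should be scaled up in roughly equal proportion; models built on the old exponent were too large for the amount of data they were trained on.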

    OpenAI knew that controlling the model would be an important problem: before training its first GPT, it was already working on reinforcement learning from human preferences,[^7] which it first applied to controlling robots. This is reminiscent of the Wright brothers studying bird flight to understand control and applying what they learned to gliders before airplanes.

    [^7]: Deep reinforcement learning from human preferences, June 12, 2017.
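    The core idea of learning from human preferences fits in a few lines: train a reward model so that the option a human preferred scores higher than the one they rejected. Below is a minimal sketch of that objective (the Bradley-Terry-style loss used in the cited paper); the network, feature sizes, and random data are placeholders, not OpenAI's implementation.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # A tiny reward model: maps a feature vector describing a behavior
    # segment (or a model response) to a scalar reward.
    reward_model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
    optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

    # Stand-ins for a batch of human comparisons: features of the segment
    # the labeler preferred and of the one they rejected.
    preferred = torch.randn(32, 16)
    rejected = torch.randn(32, 16)

    # P(preferred beats rejected) = sigmoid(r_preferred - r_rejected);
    # minimize the negative log-likelihood of the human choices.
    r_pref = reward_model(preferred).squeeze(-1)
    r_rej = reward_model(rejected).squeeze(-1)
    loss = -F.logsigmoid(r_pref - r_rej).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ```

    A reward model trained this way then stands in for a human when optimizing the policy, whether that policy drives a robot or generates text.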

    To apply this research to language models, OpenAI collected human preference data, resulting in InstructGPT.[^8] It's hard to know exactly what came next, because OpenAI hasn't published details of its research since GPT-4, but there is research showing that models can learn not only from explicit feedback but also from implicit feedback, such as users retrying a generation or continuing versus abandoning a conversation.[^9] If so, OpenAI could create a positive feedback loop: better models attract more users, and those users' feedback makes the models better still.

    [^8]: Training language models to follow instructions with human feedback, March 4, 2022.

    [^9]: Rewarding Chatbots for Real-World Engagement with Millions of Users, March 10, 2023.


    In this article, we've compared how humans learned to fly with how we are learning to talk to machines, and we've seen once again how similar the paths of technologies can be throughout history.

    What other parallels will we see as AI technology advances and we work to make it accessible to more people? Can Lablup and Backend.AI help accelerate that process, letting people experiment and apply the lessons of history more quickly? We're in the middle of this inflection point.

    This post was automatically translated from Korean

    12 July 2023
