A Letter from Andrew Ng: The Bright Future of LLMs


Andrew Ng, a global leader in AI education and research and founder of DeepLearning.AI, published this letter on Zhihu on April 21.

Dear friends,

The competitive landscape of large language models (LLMs) is evolving quickly. The ultimate winners are yet to be determined, but the current dynamics are already exciting. Let me share a few observations, focusing on direct-to-consumer chat interfaces and the LLM infrastructure and application layers.

First, ChatGPT is a new category of product. It’s not just a better search engine, auto-complete, or something else we already knew. It overlaps with other categories, but people also use it for entirely different purposes such as writing and brainstorming. Companies like Google and Microsoft that are integrating LLMs into existing products may find that the complexity of switching not only technologies but also product categories raises unique challenges.

OpenAI is clearly in the lead in offering this new product category, and ChatGPT is a compelling direct-to-consumer product. While competitors are emerging, OpenAI’s recent move to have ChatGPT support third-party plugins, if widely adopted, could make its business much more defensible, much like the app stores for iOS and Android helped make those platforms very defensible businesses.

Second, the LLM infrastructure layer, which enables developers to interact with LLMs via an API, looks extremely competitive. OpenAI/Microsoft leads in this area as well, but Google and Amazon have announced their own offerings, and players such as Hugging Face, Meta, Stability AI, and many academic institutions are busy training and releasing open source models. It remains to be seen how many applications will need the power of the largest models, such as GPT-4, versus smaller (and cheaper) models offered by cloud providers or even hosted locally, like gpt4all, which runs on a desktop.
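
To make the infrastructure layer concrete, here is a minimal sketch of what "interacting with LLMs via an API" looks like for a developer, contrasting a large hosted model with a smaller model hosted locally. The model names, the 0.x-era openai client calls, and the gpt4all bindings are illustrative assumptions and may not match the exact library versions you have installed.

```python
# Minimal sketch: the same prompt sent to a large hosted model over an API
# versus a smaller model running locally. Model names and the gpt4all call
# are illustrative assumptions, not a definitive recipe.

import openai  # pip install openai (0.x-era ChatCompletion API shown)

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys


def ask_hosted_model(prompt: str) -> str:
    """Call a large hosted model (e.g., GPT-4) through a cloud API."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]


def ask_local_model(prompt: str) -> str:
    """Run the same prompt on a smaller model hosted on your own machine."""
    from gpt4all import GPT4All  # pip install gpt4all

    model = GPT4All("ggml-gpt4all-j-v1.3-groovy")  # downloads weights locally
    return model.generate(prompt, max_tokens=200)


if __name__ == "__main__":
    question = "In one sentence, what is a large language model?"
    print(ask_hosted_model(question))  # most capable, metered per token
    print(ask_local_model(question))   # smaller, cheaper, runs on a desktop
```

For many applications, the open question above reduces to which of these two calls is good enough for the task at hand.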

Finally, the application layer, in which teams build on top of LLMs, looks less competitive and full of creativity. While many teams are piling onto “obvious” ideas — say, building question-answering bots or summarizers on top of online content — the sheer diversity of potential LLM-powered applications leaves many ideas relatively unexplored in verticals including specialized coaching and robotic process automation. AI Fund, the venture studio I lead, is working with entrepreneurs to build applications like this. Competition feels less intense when you can identify a meaningful use case and go deep to solve it.
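
As a concrete illustration of one of those "obvious" application-layer ideas, here is a minimal summarizer sketch built on top of an LLM API. The prompt, model name, and 0.x-era openai call are assumptions for illustration; a real product would add chunking for long documents, retries, and evaluation of output quality.

```python
# Minimal sketch of an "obvious" application-layer idea: a summarizer on
# top of an LLM API. Prompt and model name are illustrative assumptions.

import openai  # pip install openai (0.x-era ChatCompletion API shown)


def summarize(text: str, max_words: int = 100) -> str:
    """Ask the model for a short summary of the given text."""
    prompt = f"Summarize the following text in at most {max_words} words:\n\n{text}"
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # keep summaries relatively stable across runs
    )
    return response["choices"][0]["message"]["content"].strip()
```

That the whole demo fits in a dozen lines is exactly the point: the defensible opportunity lies in going deep on a specific vertical, not in the thin wrapper itself.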

LLMs are a general-purpose technology that’s making many new applications possible. Here’s a lesson from an earlier era of tech: after the iPhone came out, I paid $1.99 for an app that turned my phone into a flashlight. It was a good idea, but that business didn’t last: The app was easy for others to replicate and sell for less, and eventually Apple integrated a flashlight into iOS. In contrast, other entrepreneurs built highly valuable and hard-to-build businesses such as AirBnB, Snapchat, Tinder, and Uber, and those apps are still with us. We may already have seen this phenomenon in generative AI: Lensa, a popular photo-editing app, grew rapidly through last December, but its revenue appears to have collapsed since then.

Today, in a weekend hackathon, you can build a shallow app that does amazing things by taking advantage of amazing APIs. But over the long term, what excites me are the valuable solutions to hard problems that LLMs make possible. Who will build generative AI’s lasting successes? Maybe you!

One challenge is that the know-how for building LLM products is still evolving. While academic studies are important, current research offers a limited view of how to use LLMs. As the InstructGPT paper says, “Public NLP datasets are not reflective of how our language models are used. . . . [They] are designed to capture tasks that are easy to evaluate with automatic metrics.”

In light of this, community is more important than ever. Talking to friends who are working on LLM products often teaches me non-intuitive tricks for improving how I use them. I will continue trying to help others wherever I can.

Keep learning!

Andrew

Author: Andrew Ng, global leader in AI education and research and founder of DeepLearning.AI

Original post: https://zhuanlan.zhihu.com/p/623672319

This article is reposted from Andrew Ng's Zhihu column (@吴恩达); please credit the original author and source when republishing.


Header image from Unsplash, under the CC0 license.
