The Killer Use Case for LLMs Is Summarization
The killer use case for large language models (LLMs) is clearly summarization. At least today, in my limited experience, LLMs are incapable of generating unique insights. While LLMs are good at creatively remixing text based on certain inputs or at writing generally about a topic, they're unlikely to "think" of something unique. However, LLMs appear to be quite good at knowing what they do and don't know, and this is especially true when they are given a clear chunk of information or text to summarize.
Much of the world's information would benefit from clear summarization. Unfortunately, summarization is not easy work: to summarize, one must read through and understand source material, and then distill a condensed list of insights from it. This is an expensive endeavor, especially when one is unsure whether the material is worth the time to comprehend. In today's era of ever more information, people yearn for a simple way to find information that is valuable to them and discard information that is not. Fortunately, LLMs are highly capable at summarization, and can effectively condense large volumes of information.
The recent advent of Bing Chat heralds a new era in LLM usage. Although often inconsistent, occasionally aggressive, and riddled with inaccuracies, Bing Chat could be the first step towards a world where information is primarily found using specialty search engines that leverage LLMs to index troves of books, blog posts, or academic publications.
But search is not the only area where LLMs are valuable as summarization machines. A recent post on Hacker News about categorizing the entire BBC In Our Time podcast series using the Dewey Decimal System and GPT-3 generated a flurry of insightful discussion. One user of the forum,
mattlondon, wrote in response: “The idea of [LLMs] as a ‘universal coupler’ is fascinating, and I think I agree with the author that we are probably standing at an early-90s-web moment with LLMs as a function call (the technology is kinda-there and mostly-works, and people are trying out a lot of ideas … some work, some don’t). My mind is racing. Thanks for the epiphany moment.”
The above comment is spot on. Just as the web revolutionized how we find, consume, and digest information, so will LLMs. However, unlike the web, LLMs will actually interpret our information, instead of just providing a way to disseminate, index, and discover it.
Take, for example, the corporate setting. It is not easy to identify actionable insights and pull valuable information out of the fluff that corporations are usually full of. Perhaps this is why good corporate CEOs tend to share one trait: a strong ability to operate at multiple levels of abstraction, prioritizing and summarizing based on many forms of input from all levels of the organization. Good executive functioning requires the ability to quickly zoom between different abstraction layers—from the minute to the 30,000-foot view—and use the insights from each layer to prioritize and delegate tasks as necessary.
Thus, in the future, I would imagine that competent executives will rely on LLMs to condense and summarize information from across their organizations. Soon, I imagine that every meeting within a given corporation will be recorded, transcribed, and then summarized using LLM condensation. Further levels of abstraction (summaries of summaries) will then be pulled together at each level of the corporate hierarchy to be presented to leadership one level up.
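The "summaries of summaries" idea can be sketched as a simple recursive pipeline. The sketch below is illustrative only: `llm_summarize` is a hypothetical stand-in for a real LLM API call (here it is a trivial extractive placeholder so the example is self-contained), and the batching and recursion mirror the corporate-hierarchy rollup described above.

```python
# Sketch of hierarchical ("summaries of summaries") condensation.
# llm_summarize is a hypothetical placeholder: a real system would call
# an LLM API here; this stub just keeps the first sentence of its input.

def llm_summarize(text: str) -> str:
    """Placeholder for an LLM call; keeps only the first sentence."""
    return text.split(". ")[0].rstrip(".") + "."

def summarize_level(documents: list[str], batch_size: int = 3) -> list[str]:
    """Summarize documents in batches, producing one summary per batch."""
    summaries = []
    for i in range(0, len(documents), batch_size):
        batch = " ".join(documents[i:i + batch_size])
        summaries.append(llm_summarize(batch))
    return summaries

def executive_summary(documents: list[str]) -> str:
    """Recursively condense level by level until one summary remains."""
    while len(documents) > 1:
        documents = summarize_level(documents)
    return documents[0]

# Usage: meeting transcripts feed the bottom level; each pass produces
# the next level up, ending with a single executive-level summary.
transcripts = [
    "Team A shipped the login feature. Minor bugs remain.",
    "Team B is blocked on the database migration. ETA next week.",
    "Team C finished Q3 planning. Budget review is pending.",
    "Team D onboarded two new engineers. Training is ongoing.",
]
print(executive_summary(transcripts))
```

With a real model in place of the stub, each call would be prompted to compress its batch while preserving the decisions and blockers that leadership one level up actually needs.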
Corporations that start to do this sooner will appear to have superpowers. Having executive-level summaries of everything happening within an organization at one's fingertips is a power that no Fortune 100 CEO could have dreamed of just a few years ago.
LLMs will also be useful for liposuctioning the dark underbelly of white-collar industry: unstructured or semi-structured data (though perhaps here they will function less as summarization machines and more as organization machines). Enter a modern health system or enterprise, and you will find that many critical functions in the organization rely on complex and messy spreadsheet-driven workflows. As LLMs develop, they will surely be used to clean up and index tabulated data far faster than would otherwise be possible. Initial steps are already being made in this direction, with GPT-powered projects like YoBulk. In this way, all companies and organizations dependent on managing large volumes of unstructured or messy data (including science and academia) will be radically changed by LLMs.
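The "organization machine" role can be sketched in the same spirit. In the hypothetical example below, `llm_extract` stands in for an LLM prompted to return structured JSON for each messy row; the stub uses a trivial comma-splitting rule so the sketch runs on its own, but the interface—free text in, clean records out—is the point.

```python
import json

# Sketch of using an LLM to turn messy spreadsheet rows into clean,
# indexable records. llm_extract is a hypothetical placeholder for a
# real LLM API call prompted to emit JSON with fixed field names.

def llm_extract(row: str) -> dict:
    """Placeholder: pretend the LLM parsed the row into name/role/dept."""
    name, role, dept = [field.strip() for field in row.split(",")]
    return {"name": name, "role": role, "department": dept}

def structure_rows(messy_rows: list[str]) -> str:
    """Convert inconsistently formatted rows into a clean JSON table."""
    return json.dumps([llm_extract(row) for row in messy_rows], indent=2)

# Usage: rows with inconsistent spacing become uniform records.
rows = [
    "Ada Lovelace , analyst,  Research",
    "Grace Hopper,engineer, Systems",
]
print(structure_rows(rows))
```

A real LLM would of course handle far messier input than a comma split can—misspelled headers, merged cells dumped as text, free-form notes—which is exactly where the value over conventional parsers lies.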
Other generative use cases for LLMs, such as writing code or emails, will doubtless be valuable. But, in the short term, I am betting that the greatest value of LLMs will be their ability to summarize and organize information.