I often drink matcha. Matcha is a bright green powder ground from green tea leaves. Swirling the powder into water yields a luxuriously smooth tea.
A cup of matcha, as imagined by Midjourney.
This tea is something of substance: a suspension that would not be the same without its constituent parts, a solid and a liquid blended into something entirely new.
This is a bundle of somewhat unstructured notes from a Twitter Spaces event about value investing. The Spaces event was put on by a good friend of mine, Jason Wong, almost a year ago (May 21st, 2022). While going through my notes recently, I realized that these bullets might be worth publishing if I could turn them into a slightly more readable document. This is about the best I could do.
The killer use case for large language models (LLMs) is clearly summarization. At least today, in my limited experience, LLMs are incapable of generating unique insights. While they are good at creatively regurgitating text based on their inputs or writing generally about a topic, they're unlikely to "think" of something genuinely new. However, LLMs appear to be quite good at knowing what they do and don't know, especially when they're given a clear chunk of information or text to summarize.
Begin.
I'm going to experiment with free writing for the first time in a long while. I'm in the back of an Uber right now; it's not that I don't want to talk to the driver, I'm just not in a talking mood. So instead, I'll free write in this altered state of consciousness… We'll see how it goes.