Back from Iceland!
I've been thinking about the idea of outliers and means. Recent books by Kahneman and Taleb describe a general idea: wherever there is a distribution of outcomes, extreme results tend, over time, to be followed by movement back toward the mean. Kahneman says "It took Galton several years to work his way from his discovery of filial regression in size to the broader notion that regression inevitably occurs when the correlation between two measures is less than perfect" (Thinking, Fast and Slow, Kindle loc. 2998). This is a bit self-evident, since the mean emerges as outcomes are observed. But it is meaningful if there is something like a normal distribution of outcomes. Taleb mentions this in Fooled By Randomness, which is no surprise really, since he is very interested in tail events. Gregory Clark even mentions it in his book The Son Also Rises, noting that endogamy (marrying within one's social class) can be a way of delaying or preventing the rich from sinking back down into the mass of humanity.
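Just to see Kahneman's point about imperfect correlation for myself, here's a quick simulation (mine, not from any of those books; the correlation of 0.5 is an arbitrary illustrative choice) of Galton's filial regression: the children of the very tallest parents are still tall on average, but noticeably closer to the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.5  # assumed parent-child correlation (illustrative, not measured)

# Standardized "heights": child = rho * parent + independent noise,
# which gives correlation rho and keeps both variables at variance 1.
parent = rng.standard_normal(100_000)
child = rho * parent + np.sqrt(1 - rho**2) * rng.standard_normal(100_000)

# Look at the children of extreme parents (top 1% of parent heights).
tall = parent > np.quantile(parent, 0.99)
print(f"mean parent z-score (top 1%):     {parent[tall].mean():.2f}")
print(f"mean child z-score (same group):  {child[tall].mean():.2f}")
# The children average roughly rho times the parents' average:
# extreme values regress toward the mean whenever rho < 1.
```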
But what about the world of ideas? Imagine that previous outcomes exert an influence, perceptible or subtle, on future ones. This seems to happen naturally when my thinking about a topic is shaped by things I have read, by what I observe other people believing, or by what I have been taught or socialized into. It seems somewhat connected with the power of paradigms described by Kuhn and the stickiness of memes described by Dawkins. It seems to me that original, creative thinking is most often an outcome that diverges from the mean. But now we're looking at a new entrant on the knowledge playing-field: large language models. How do the statistical ways they generate responses relate to this idea of regression?
I asked ChatGPT (May 24 Version) a couple of questions about this, and these were the results:
Dan: How does the output of ChatGPT resemble regression to the mean?
ChatGPT:
The concept of regression to the mean refers to the tendency for extreme or unusual observations to move closer to the average in subsequent measurements. While regression to the mean primarily applies to statistical phenomena, it can also have some parallels with the output of ChatGPT.
ChatGPT generates responses based on patterns and information it has learned from its training data. It aims to provide coherent and contextually relevant replies to user queries. However, it's important to note that ChatGPT's responses are not a direct reflection of reality or a perfect representation of the training data. Instead, they are an approximation based on statistical patterns learned during training.
Similar to regression to the mean, the output of ChatGPT can exhibit some tendencies towards moderation or moving closer to the average response. If a user asks a question that requires a highly specific or nuanced answer, ChatGPT might not always provide an accurate or precise response. Instead, it may generate a more general or commonly observed answer that aligns with the patterns it has learned from training data.
In some cases, ChatGPT may provide outputs that are less extreme or less certain than the user's input or previous statements. This moderation or tempering of extreme statements can resemble the concept of regression to the mean. However, it's important to note that ChatGPT's output is influenced by a variety of factors, including the training data, the prompt or query, and the model's architecture, and it doesn't precisely adhere to the statistical concept of regression to the mean.
Dan: How does this similarity between the process ChatGPT uses to generate responses and regression to the mean compare to human creativity?
ChatGPT:
The similarity between ChatGPT's response generation process and regression to the mean differs from human creativity in several important ways.
1. Statistical patterns vs. human imagination: ChatGPT's responses are primarily based on statistical patterns it has learned from training data. It generates outputs by identifying common patterns and associations in the data, which may include regression to the mean-like tendencies. In contrast, human creativity involves the ability to imagine and generate novel ideas, concepts, and solutions that may not be directly derived from statistical patterns. Human creativity can encompass originality, inventiveness, and the ability to think beyond conventional boundaries.
2. Context and understanding: While ChatGPT can analyze the context of a conversation to some extent, its understanding is limited to patterns it has learned during training. Human creativity, on the other hand, is driven by a deeper understanding of the world, personal experiences, emotions, and diverse perspectives. Human creative outputs often draw upon a wide range of knowledge, intuition, and contextual understanding that goes beyond statistical patterns.
3. Purpose and intention: ChatGPT's purpose is to generate responses that are contextually relevant and coherent based on its training data. It aims to provide informative or helpful answers, but it lacks intentional creativity or original thought. Human creativity is driven by conscious intention and purpose. Creativity involves actively seeking new perspectives, challenging assumptions, and generating ideas or solutions that are novel, meaningful, and purposeful.
4. Emotional and aesthetic dimensions: Human creativity often involves the expression of emotions, artistic expression, and aesthetic judgments. It encompasses various forms of art, music, literature, design, and more. ChatGPT, while capable of generating text that may seem creative in some cases, lacks true emotional depth and subjective aesthetic judgment.
So the AI's responses are statistically determined by the training data (with some randomness injected at sampling time, so the model occasionally picks a lower-ranked next word instead of grabbing the #1 choice every time). There is a HUGE amount of data readily available, but there really isn't an opportunity to discard boring or conventional answers and home in on innovative or original ones, because there isn't a goal involved other than answering a question accurately (#3 above).
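A toy sketch of that sampling step (my illustration, with made-up probabilities; not how ChatGPT or any particular model is actually implemented): greedy decoding always returns the #1 next word, while temperature sampling redistributes the odds and sometimes picks a runner-up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up next-word probabilities for a prompt like "The sky is ..."
words = ["blue", "clear", "falling", "burning"]
probs = np.array([0.70, 0.20, 0.07, 0.03])

def greedy(probs):
    """Always take the single most likely next word."""
    return words[int(np.argmax(probs))]

def sample(probs, temperature=1.0):
    """Rescale the distribution by temperature, then draw from it.
    Lower temperature sharpens toward the #1 word; higher flattens."""
    logits = np.log(probs) / temperature
    p = np.exp(logits) / np.exp(logits).sum()
    return words[rng.choice(len(words), p=p)]

print(greedy(probs))                            # always "blue"
print([sample(probs, 0.8) for _ in range(5)])   # mostly "blue", occasionally not
```

Either way, the candidates and their rankings come entirely from the training data; the randomness shuffles among likely words rather than producing a genuinely divergent answer.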
So it seems like I need to think more deeply about novelty and insight, specifically when I'm designing content for students. I want them to experience the human elements of learning, not just reproduce the data accurately. They can get that info anytime (until the machine stops), but what will they do with it? That will be the key to their relevance in the world as humans (not to mention their value-add vs. AI agents in the workplace).
One way of doing this is to curate content that juxtaposes ideas and perspectives in ways that don't represent the "mean" or the dominant paradigm, then ask students to engage with it critically. Primary sources in history are probably a good way to introduce this variation in perspectives. So my instinct that these are more "interesting" than textbook narratives and historians' interpretations is probably useful.
Along these lines, this weekend I listened to a REALLY good America This Week episode (I think it was #41). Matt Taibbi and Walter Kirn talked very intelligently about AI and then discussed Ursula K. Le Guin’s short story, “The Ones Who Walk Away from Omelas.” Taibbi’s main issue with AI is that it can easily be used to automate the tasks of surveillance and social control. He mentioned recent statements by representatives of the US military that drone targets are being chosen algorithmically. Kirn responded that the ultimate goal might be murder without guilt, plus plausible deniability and a scapegoat the next time US drones wipe out a wedding party: “Oh, the AI screwed up. Oops!”
Kirn’s issues with AI seem to be more general: it will make some human labor obsolete, and it will speed the culture’s rush to conformity and mediocrity. Conformity because we will be socially pressured to pretend (or even convince ourselves) that we like the new Wonder Bread and Tang that AI will produce better than artisan loaves and freshly squeezed citrus juice. Or, more important, better than doing things for ourselves (last week they discussed the Forster story “The Machine Stops”). And mediocrity because that’s what convergence to the mean is going to produce. NOT originality, at least at first. If we get originality, then we’ve got much bigger problems.
Along the way, they did agree that the formulaic ways a lot of intellectual property is currently produced (including a lot of Hollywood scripts, Writers Guild!) are already well on their way to that convergence. How problematic, really, will it be if the final step is taken and a couple of scriptwriters lose their jobs? The ones who write crappy scripts, that is. I agree with Walter that great scriptwriters will be safe, especially those who can write dialogue. Somehow I don’t see a David Mamet or an Elmore Leonard being disintermediated by an app. If an app specifically steals their work and repurposes it, then I think they might have a legitimate legal claim. But I think we’re far from seeing an app understand what makes Mamet or Leonard dialogue compelling. To a great extent, I think we’re still doing what Jaron Lanier observed several years ago: treating the little successes of the AI the way we treat a baby’s first words. Isn’t that precious! We ignore the fact that what ChatGPT produces really isn’t that insightful; it’s just the same info you could get out of a mediocre human-produced article. If, as Kirn speculates, financial and sports news is already being taken over by AI, that’s no surprise.