Storythinking with Tracy and Dan (Ch 5)
The Limits of Logic--or Why We Still Need Storythinking
Storythinking: The New Science of Narrative Intelligence, Angus Fletcher, 2023
Table of Contents (evolving)
Chapter 1 (hosted by Dan)
Chapter 2 (hosted by Tracy)
Chapter 3 (hosted by Dan)
Chapter 4 (hosted by Tracy)
Chapter 5 (this post, where Dan writes the main post and Tracy comments in italics)
Fletcher grabbed my attention with the opening paragraphs of this chapter. He asserts, in italics, that "mental illness was a breakdown of logic," without saying whose view this was beyond situating the idea in Nazi Germany. Then he equates this vision with the contemporary idea of the Singularity. Is he suggesting that the Silicon Valley utopians who pursue this goal are Nazis?
This chapter is once again based on Fletcher's assertion that "logic, powerful as it is, can only compute what is, not what could happen". It's the same point he has made several times about the difference between nouns and verbs and the complexity of causality. Along the way, though, he arrives (by tortuous means) at a very interesting and relevant question. Given the ability of our tools (computers and AI) to do calculation and logic better, cheaper, and faster than we ever will, "why devote the bulk of school time to setting up future generations to be second-class algorithms?"
Tracy’s comment. The educational implications of the argument are huge. Whether he’s going after STEM, worrying about math phobes, or attacking the Common Core — or chastising semiotics-based close reading and critical thinking — Fletcher’s aiming to hit at multiple levels. He’s also, arguably, going after what has made America a preeminent world power, i.e. our science, technology, medicine, and the ability to harness natural resources. But he has a point. Now that computers and AI are this advanced, why indeed try to train humans to compete with them? Why not let humans do what they’re good at — and flip the table on Silicon Valley while we’re at it?
In the second section of the chapter, Fletcher tells an interesting story about Eilhard von Domarus, who seems to have had a fascinating life and to have interacted with several important figures in both Europe and America (such as novelist Neal Stephenson's favorite philosopher, Edmund Husserl). All of Fletcher's name-dropping suggests connections between these figures' ideas and the progress of Domarus's thinking, but he could have been more explicit and added some detail. Domarus's summation, "sanity = logic," is interesting, especially when Fletcher follows it in the next paragraph with a description of how Warren McCulloch apparently spent considerable time and effort helping Domarus render his dissertation into logical, sane prose.
Fletcher continues with the story of Walter Pitts, an autodidact genius who he says was the inspiration for Good Will Hunting. Fletcher describes Pitts and McCulloch excitedly concluding that they
had proved what Domarus had intuited: sane minds thought in pure symbolic logic. This meant that all the brain’s healthy operations took the form of induction, deduction, interpretation, and other syllogistic processes. Which meant, in turn, that all the intelligent things that humans had done—in science, technology, art, business, politics, literature—could be reduced to AND-OR-NOT formulas.
He says the hope that "human minds (or at least the minds of trained philosophers) were general logic engines" had been anticipated in the 17th-century works of Thomas Hobbes and Gottfried Leibniz and in the 19th-century work of George Boole. My mind had already jumped to Neal Stephenson's depiction of Leibniz and his "universal language" in the Baroque Cycle, a narrative that, among other things, describes why (apart from being an interesting way to organize a library) this project went nowhere.
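As a concrete aside on what "AND-OR-NOT formulas" amounts to: those three connectives are functionally complete, meaning any truth table can be rebuilt from them alone. Here is a minimal Python sketch (my illustration, not Fletcher's or McCulloch and Pitts's actual notation) that composes XOR and IMPLIES out of nothing but AND, OR, and NOT, and verifies them by brute force:

```python
# Minimal sketch: AND, OR, NOT are functionally complete, so other
# connectives (here XOR and IMPLIES) can be composed from them alone.

def AND(a: bool, b: bool) -> bool:
    return a and b

def OR(a: bool, b: bool) -> bool:
    return a or b

def NOT(a: bool) -> bool:
    return not a

def XOR(a: bool, b: bool) -> bool:
    # "a or b, but not both" -- composed purely from AND, OR, NOT
    return AND(OR(a, b), NOT(AND(a, b)))

def IMPLIES(a: bool, b: bool) -> bool:
    # "if a then b" -- just OR and NOT
    return OR(NOT(a), b)

# Brute-force check against Python's native operators.
for a in (False, True):
    for b in (False, True):
        assert XOR(a, b) == (a != b)
        assert IMPLIES(a, b) == ((not a) or b)
```

Notice what the sketch lacks: time. Every formula is evaluated all at once, in what Fletcher will shortly call logic's "eternal is," which is exactly the limitation the rest of the chapter turns on.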
Tracy’s comment. I’d heard of Turing, and Bertrand Russell, of course, and I knew the Enlightenment philosophers were inspired by scientific inquiries as much as by political thought, but Fletcher makes the connections international and cross-cultural in ways I’d never heard of, and he singles out a handful of highly influential (?) crackpots whose ideas apparently became mainstream. This idea of finally being able to test philosophy’s claims about the value of logic in human thinking sounds like dull methodology, but the psychological implications (to go along with the educational ones) are really huge. What is the best model of mind? What does it mean for a human to be a rational or reasonable being? Is that even the goal? Why are the biggest advocates of sanity = logic the brilliant crackpots of the world!?
Fletcher continues with the "rise of computational theories of mind" and their spread in cognitive science and evolutionary biology. He largely ignores the undercurrent of debate between materialism and spiritualism that must have been a large part of this development (although at the very end of the chapter he does briefly come down on the side of materialism, describing the human brain as "reductively mechanical"). Fletcher imagines that this logic-only perspective treats anything happening in our minds that isn't logical as a glitch, saying, "Our neural operating system sputters with biological quirks, erratic emotions, and local data." Although I doubt it was this binary (or snarky) for everyone pursuing these ideas, he has identified three issues: the biological interactions that make up thinking, the relationship between emotion and "pure" thought, and the difference between the particular and the general. I think these are all extremely important elements of how we actually think, and probably all features rather than bugs.
Tracy’s comment. It’s hard to imagine the substrate (flesh and wet brains vs. chips and electrodes) doesn’t matter. The problems of qualia, emotions, consciousness… I’m particularly exercised by the problem of the general versus the particular. It could be that this underlies a major East-West distinction. Might philosophy be better done through poetry? Or through the hermeneutics of classic texts (is allegory really a generalist method)? Or through science fiction, as Dan suggests? What do abstractions and “rules of thought” (AND-OR-NOT) really buy us?
In a section titled "The Failed Experiment", Fletcher returns to his discussion of temporality and causality, which he says only story can handle. He says,
Because logic can do what story can’t, it can’t do what story can: process actions. Actions include, at minimum, a cause and its effect, and those two elements cannot exist concurrently in logic’s eternal is. A cause must precede its effect in time, necessitating either a past or a future. Which means that when actions are fed into a logical system, the system is confronted with an insoluble problem: Render a cause and its effect into a single, present-tense instant. Or in other words: Take two things that can’t coexist—and make them simultaneous.
Fletcher makes an interesting point here: this process (which he says is required in natural language processing) "converts action verbs (e.g., Jane runs) into linking verbs and participles (e.g., Jane is running)". Strictly speaking, "Jane is running" is a stative construction rather than the passive voice, but it struck me as a fascinating new angle on why we discourage students from such constructions (the passive voice included): they tend to remove agency and causality from writing. If a writer describes Jane as being in the condition of running, we don't learn anything about why or for how long she runs. Surely her motivation is relevant.
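To make the contrast concrete, here's a toy Python sketch (my own illustration, not anything from the book or from any actual NLP system): a timeless fact set in logic's present-tense "eternal is" next to an event record that keeps the agent, the action verb, and an explicit cause that precedes its effect in time:

```python
# Toy contrast: logic's timeless "linking verb" rendering vs. a
# story-like event rendering that preserves cause, effect, and order.
from __future__ import annotations
from dataclasses import dataclass

# Logic's "eternal is": present-tense states with no before or after.
facts = {("running", "jane"), ("barking", "dog")}

@dataclass
class Event:
    t: int                          # when it happens; order is meaningful
    agent: str
    action: str                     # an action verb, not a state
    caused_by: Event | None = None  # explicit cause-effect link

dog_barks = Event(t=0, agent="dog", action="barks")
jane_runs = Event(t=1, agent="jane", action="runs", caused_by=dog_barks)

# A cause must precede its effect; the fact set cannot even say this.
assert jane_runs.caused_by is not None
assert jane_runs.caused_by.t < jane_runs.t
```

The fact set can tell you only that Jane is running; the event record can tell you that she runs because the dog barked, which is exactly the "why" that motivation-hungry readers and writers care about.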
I mentioned this last week: for me, motivation is huge. And it's one of the things I wonder about with respect to artificial intelligences. We don't really understand what drives us, in either our actions or our trains of thought. But I suppose our motivations are mostly biological needs and the psychological urges derived from them. What do we imagine is going to motivate AI?
Tracy’s comment. Fletcher does keep returning to this noun-verb distinction, saying that the collapse of action and agency, of cause and effect, of verbs into nouns and adjectives, ends in “magical thinking” and the wiping out of time. A preference for symbol and eternal propositional truth is deadening. I think I get the intuition, but if the purpose of writing is to evoke human thought, which literally cannot function without action, agency, causality, and time (and we do tend to anthropomorphize everything, attributing motivation and psychology even where they don’t exist, as with AI?), then I’m not sure I understand the danger exactly. Isn’t the logic meant to be a corrective to rampant human imagination? Probably we could heed a good warning here against making everything into a problem of classification (AND-OR-NOT identity politics?) or a solution based on applying the laws of physics to human behavior (technocracy, social engineering?).