This morning I watched a YouTube video of Brian Greene's World Science Festival interview with Jaron Lanier:
Around minute 14, Lanier says Generative AI is just a collaboration of people, not a new intelligence. A lot of the thinking about this, he says, began with the question: can you get a program to tell the difference between a cat and a dog? He describes "gradient descent" training and says it is trying to be anti-viral, which he finds hopeful for the network as a whole, not just for training the AI.
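For anyone who hasn't met the term, here is a minimal sketch of gradient descent on a toy cat-vs-dog classifier. The features, data, and learning rate are invented for illustration; this is not anything from the interview, just the general technique Lanier is naming.

```python
# Minimal sketch of gradient descent: a logistic-regression "cat vs. dog"
# classifier trained on made-up two-feature examples (illustrative only).
import math

# Toy data: (ear_pointiness, snout_length) -> label (0 = cat, 1 = dog).
# These feature values are invented for this example.
data = [
    ((0.9, 0.2), 0), ((0.8, 0.3), 0), ((0.7, 0.1), 0),
    ((0.2, 0.9), 1), ((0.3, 0.8), 1), ((0.1, 0.7), 1),
]

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.5        # learning rate (how big a step to take downhill)

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> probability of "dog"

for epoch in range(1000):
    # Accumulate the gradient of the cross-entropy loss over the dataset.
    gw, gb = [0.0, 0.0], 0.0
    for x, y in data:
        err = predict(x) - y          # derivative of the loss w.r.t. z
        gw[0] += err * x[0]
        gw[1] += err * x[1]
        gb += err
    # Nudge each parameter a small step down the gradient.
    w[0] -= lr * gw[0] / len(data)
    w[1] -= lr * gw[1] / len(data)
    b -= lr * gb / len(data)

print(predict((0.85, 0.2)))  # near 0: cat-like
print(predict((0.15, 0.9)))  # near 1: dog-like
```

The whole trick is that last loop: the program never "understands" cats or dogs; it just repeatedly adjusts numbers to reduce its error on examples that people labeled, which is part of why Lanier calls the result a collaboration of people.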
It's interesting to me how deeply Jaron's approach to the issue is rooted in history. He describes Ada Lovelace, Norbert Wiener, Marvin Minsky, and others, and he seems well-versed in their work. I'm also impressed by how he keeps bringing the conversation back to human responsibility. We grab increasingly powerful tools to control our environment, he says around minute 32, and so we are obligated to use them responsibly.
He also talks, around minute 35, about tracing source content (the inputs to the language model that were most relevant in generating a given output). This seems to build on ideas he expressed years ago about Ted Nelson's vision of the web as a two-way linking mechanism, where, for example, micropayments could compensate creators for their contribution to the reader's experience and, potentially, to derivative works. The implication, of course, is that these Generative AI programs keep (or could keep) track of all their sources. I don't think most people have considered this before; the typical image of a language model is that it indiscriminately scoops up everything. If Jaron is hinting that these apps could be used to fix the web's anonymity problem, that's a HUGE potential fix to the existing WWW and a path to a new creator economy.
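To make the idea concrete, here is a sketch of what a per-output attribution record and micropayment split might look like. Everything here is hypothetical: the field names, the URLs, and the payout rule are my invention, not anything Lanier described, and the hard part (actually estimating which sources influenced an output) is assumed away.

```python
# Hypothetical sketch: if a model could report which sources most influenced
# an output, a two-way-link ledger could route micropayments back to creators.
# All names, URLs, and the payout rule below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Attribution:
    source_url: str   # where the contributing content lives
    creator: str      # who to compensate
    weight: float     # estimated share of influence on the output (sums to 1)

def settle(attributions: list[Attribution], fee: float) -> dict[str, float]:
    """Split a per-query fee among creators in proportion to influence."""
    payouts: dict[str, float] = {}
    for a in attributions:
        payouts[a.creator] = payouts.get(a.creator, 0.0) + fee * a.weight
    return {creator: round(amount, 6) for creator, amount in payouts.items()}

# Example: one generated answer traced (hypothetically) to two sources.
attributions = [
    Attribution("https://example.org/essay", "alice", 0.7),
    Attribution("https://example.org/photo", "bob", 0.3),
]
print(settle(attributions, fee=0.05))  # {'alice': 0.035, 'bob': 0.015}
```

The ledger part is trivial, which is the point: the open research problem is producing trustworthy attribution weights in the first place, not moving the money once you have them.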
Tags: #AI, #JaronLanier