2023.09.24
But that lack of a model is weird! The case study they give (which I duplicated via ChatGPT/GPT-4) is that it can't tell you who Mary Lee Pfeiffer is the parent of... but it can tell you that one of Tom Cruise's parents is Mary Lee Pfeiffer. And this kind of gap was predicted in discussions of earlier forms of neural networks - which may indicate it's a fundamental problem, a shortcoming that can't readily be bridged.
It reminds me of Marvin Minsky's late-60s work on Perceptrons. ChatGPT was able to remind me of the details -
Minsky and Papert proved that a single-layer perceptron cannot solve problems that aren't linearly separable, like the XOR problem. Specifically, a single-layer perceptron cannot compute the XOR (exclusive or) function.
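That limit is easy to see in action. Here's a minimal sketch (my own toy code, not from Minsky and Papert) that trains a single-layer perceptron with the classic learning rule: it nails AND, which is linearly separable, but can never get all four XOR cases right no matter how long it trains, because no single line separates XOR's classes.

```python
# Toy single-layer perceptron: learns AND, provably can't learn XOR.
def step(x):
    return 1 if x > 0 else 0

def train_perceptron(data, epochs=100, lr=0.1):
    # Classic perceptron learning rule, starting from zero weights.
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (a, c), target in data:
            err = target - step(w0 * a + w1 * c + b)
            w0 += lr * err * a
            w1 += lr * err * c
            b += lr * err
    return w0, w1, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def correct(data):
    w0, w1, b = train_perceptron(data)
    return sum(step(w0 * a + w1 * c + b) == t for (a, c), t in data)

print(correct(AND))  # 4 - linearly separable, learned perfectly
print(correct(XOR))  # at most 3, regardless of epochs or learning rate
```

Stacking even one hidden layer fixes this, which is exactly the multilayer story below.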
Of course, later multilayer networks (trained with backpropagation, and later transformers - not the cartoon toys) overcame these limits, and gave us the LLMs we are now establishing our wary relationships with. So who knows if there could be another breakthrough.
But the results we get with LLMs are astounding - they are a type of "slippery thinking" that is phenomenally powerful... Hofstadter and Sander called their book "Surfaces and Essences: Analogy as the Fuel and Fire of Thinking", and I would argue that so much of intelligence is analogy or metaphor - far branches off the human situation of having to build a model of the world with ourselves as a bodily agent in it.
And as we find more uses for LLMs, we need to be careful of adversaries thinking one step ahead. Like how the once seemingly unstoppable, alien intelligence of AlphaGo-derived Go players can be beaten by amateur players - once other machine learning figured out what AlphaGo can't see on the board.
Suddenly, Captain Kirk tricking intelligent computers with emotional logic doesn't seem as farfetched as it once did...
Had a lovely bit of wine and cheese with Dylan and his mom Linda last night; they introduced me to the NY Times' game Connections - I don't know if there are fellow folks still Wordle-ing out there, but "find 4 groups that match 4 at a time" is a stronger concept IMO - I have a lot more love for games that treat words as concepts and not just "an ordered collection of scrabble tiles", and there's some lateral thinking involved I dig - it's not just what the words mean universally; sometimes there's a specific context (even a pop-culture one) you have to notice.
(It's a little bit like an easier and more human version of Semantle)
Heh, ChatGPT Plays Zork
Had a dream that Taylor Swift announced she was doing a "pronoun reveal" and all the annoying swifties were losing their shit for weeks and saying "I told you so" and then Taylor just tweeted "she/her"
Gonna say something that will definitely get screen capped and used to doxx me someday but like having a fetish isn't. It isn't evil. You know? People have fetishes. It's part of the human condition. You're not a serial killer just because you're unusually and offputtingly hype about women's shoes. Thought crime isn't real and it especially shouldn't be applied to fetishes. Every human brain is a diy project built by unlicensed electricians.
The curious case of Chat GPT and weaponized confirmation bias