February 22, 2021

I've been reading AI researcher Janelle Shane's "You Look Like a Thing and I Love You" - she runs the AI Weirdness blog and has a lot of fun seeing what oddities algorithms can come up with, like having a neural net try to figure out what ice cream flavors sound good to a human.

Shane's view of the current state of AI seems more in line with what I thought I knew of where things were, vs my former coworker Slater (who is probably significantly smarter and more versed and practiced in the field than I am, but OH WELL.) Shane puts a modern AI's brainpower at about the level of a worm's - I don't want to paraphrase badly, but I think Slater uses the term "human level" more freely. And that's certainly true for some tasks. (In trying to understand Slater's perspective, I did notice how unfair the question is in reverse... from the CPU's point of view, typical human-level performance at calculation isn't even at the pocket calculator level.) Slater has expressed that it's a bit daft to try and get a computer to be smart in just the same way a human is smart - we already have humans to be smart in that way, and humans and computers can work together to much greater effect. (Again, apologies for oversimplifying his views.)

One problem with AI can lurk in edge cases you didn't think of. An algorithm will happily exploit any glitches in your physics simulation to maximize the result you tell it to look for - happily evolving, say, a tiny, fast critter that can glitch behind a wall and then get shot back out at tremendous speed in order to maximize velocity. One quote:
Sometimes I think the surest sign that we're not living in a simulation is that if we were, some organism would have learned to exploit its glitches.
My first thought was... huh, that might be the kind of thing a warp drive maker would be hunting for. Then I realized that, in a way, nuclear weapons exploit a glitch like that, or at least use the same kind of logic as a game glitch exploiter. (Oh, what happens if we put ENOUGH of this one kind of material all in one place at once? BOOOM!)

The book goes over some examples of AIs exploiting deficiencies in their win conditions, the same stuff as in the "Specification gaming examples in AI" list. (For instance, a robot meant to travel far with minimum energy expenditure might build itself very, very tall, so that it can just fall over and end up with a center of gravity far from its starting position without using any energy. Weirdly, this might not be as fake-y a solution as it sounds - prairie grasses and walking palms might use the same trick!)
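Being a programmer, I couldn't resist sketching the fall-over trick as code. This is just my own toy illustration (not from the book): a naive hill-climber whose fitness function only rewards "center of mass far from start, minus energy spent" - it never says you have to walk, so the degenerate solution wins.

```python
import random

# Toy "robot" genome: (body height, number of walking steps).
# The fitness function only rewards ending up far from the start while
# spending little energy - it never says the robot has to actually walk.
def fitness(genome):
    height, steps = genome
    distance = steps * 0.5 + height / 2   # toppling over moves the center
                                          # of mass half the height, free
    energy = steps * 1.0                  # walking costs energy; growing
                                          # tall costs nothing in this toy
    return distance - energy

def mutate(genome):
    height, steps = genome
    return (max(0.0, height + random.uniform(-1, 1)),
            max(0, steps + random.choice([-1, 0, 1])))

best = (1.0, 0)   # start as a short robot that goes nowhere
for _ in range(5000):
    candidate = mutate(best)
    if fitness(candidate) > fitness(best):
        best = candidate

print(best)   # height balloons, steps stay at zero: the spec got gamed
```

The optimizer never "cheats" in any meaningful sense - it maximizes exactly what it was told to, and that's the whole problem.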

So to a layreader like myself, the solution seems to be a "who watches the watchmen" kind of thing - making an evaluation-function evaluator that would spot violations of real-world physics (or human morality) that would make a found solution untenable.
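In code, that watchman could be nothing fancier than a second function that vetoes suspicious winners before the optimizer gets credit for them. A hedged sketch, extending the toy above (the height limit is a made-up placeholder for "real-world physics"):

```python
def plausible(genome):
    """The watchman: reject solutions that exploit known holes in fitness()."""
    height, _steps = genome
    return height <= 3.0   # made-up limit: real robots can't grow arbitrarily tall

def guarded_fitness(genome):
    # Untenable solutions score negative infinity, so the hill-climber
    # above can't win by gaming fitness() itself.
    return fitness(genome) if plausible(genome) else float("-inf")
```

Of course this just moves the problem up a level - you have to anticipate the exploit to write the check, which is exactly the "watches the watchmen" regress.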

But suddenly I combined this idea with another recent reading interest of mine, lateralization of human brains - trying to crack the mystery of why the brain - a machine designed for connections - has found it expedient to divide into two quasi-independent halves. My new "just so" story hypothesis is... maybe that's partially so one part can monitor the other for this kind of exploitation. You can still have one part optimizing for simple things ("I like cookies! I'm going to grab this one!") and another that contextualizes and evaluates the likely results in a wider context ("that cookie belongs to my friend and they will be mad at me if I steal it"). Given that so many of the connections between the hemispheres seem inhibitory, this idea - while grossly incomplete - might not be totally off base.

Asked to describe what happened during the assault on the Capitol, 58% of Trump voters call it 'mostly an antifa-inspired attack that only involved a few Trump supporters.' That's more than double the 28% who call it 'a rally of Trump supporters, some of whom attacked the Capitol.'

USA Today + Suffolk University
Damn. What are these people smoking? What lies are they listening to?
Interesting piece on 5/4 time. I didn't realize that the "long long, short short" of the Mission: Impossible theme also spells out "MI" in Morse code (dash dash = M, dot dot = I)...