December 5, 2017

"The GOP is basically GoFundMe for corporations, rich people and the morally corrupt."
The Last of the Iron Lungs Hate to admit it, but reading about these reminds me of how stupid my love of Weird Al's "Mr. Frump in the Iron Lung" was; it was my go-to goofy a cappella song as a young teenager. (Yeah, I know it's goofy to pick on a novelty song, but man, that's just not how the contraptions work...)

I guess for it to work, there has to be a subtext of A. Mr. Frump is also in a coma (otherwise he could talk on the outbreath) and B. Mr. Frump's iron lung has a terrible mechanical breakdown. (I suspect if you die in an iron lung, you just keep being "breathed for"; the whole point is that it keeps things steady no matter what heart-attacky thing your body might be doing...)
A big part of managing our fear of mortality is getting a grip on what regrets we might have. Travis Bradberry's answer to "What are the most common regrets that people have once they grow old?" is worth reading. A summary of it is:
1. They wish they hadn't made decisions based on what other people think. (Especially about their careers and moral decisions)
2. They wish they hadn't worked so hard.
3. They wish they had expressed their feelings.
4. They wish they had stayed in touch with their friends.
5. They wish they had let themselves be happy.
Good stuff. How are you dealing with your future regrets?
Artificially Intelligent Robot Predicts Its Own Future by Learning Like a Baby In "On Intelligence", Jeff Hawkins (he made the Palm Pilot, but his other love is neuroscience) argues that intelligence and consciousness are a big game of "predict and test" - that relatively few researchers back then had noticed that we have about as many connections running down the hierarchy of abstraction as up. So we don't just see light and dark, resolved into a border, resolved into a line, resolved into a face; a higher system probably remembers there's a face there, and tells the lower systems to look for face-ish parts and only report back if there's something surprising... i.e. predict and test, predict and test, all the time. (This failure to really "take in" the world as it actually appears at a low level explains a lot of things, like why it's hard to draw realistically vs. from your expectations of what the object looks like, and many other illusions, and also political misthinks - tons of confirmation bias sneaks in there.)

Anyway, it sounds like this robot is designed to really live out that kind of theory, predicting what the scene should look like in X seconds if it does action Y - like a baby exploring the world.
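Just to make the "predict and test" idea concrete, here's a minimal toy sketch - entirely made up by me, not how the actual robot or Hawkins's model is implemented. The agent predicts the next observation given its action, compares the prediction to what actually happens, and only bothers updating its model when it's surprised (the error exceeds a tolerance), which is the "only report back if there's something surprising" part:

```python
# Toy "predict and test" learner: all names and numbers here are
# illustrative assumptions, not from the article or the book.

class PredictAndTestAgent:
    def __init__(self):
        # learned guess for how much each action shifts the world state
        self.effect = {"left": 0.0, "right": 0.0}
        self.lr = 0.5          # learning rate
        self.surprises = 0     # how often reality defied the prediction

    def predict(self, state, action):
        return state + self.effect[action]

    def observe(self, state, action, new_state, tolerance=0.05):
        # predict, test against reality, and learn only from surprise
        error = new_state - self.predict(state, action)
        if abs(error) > tolerance:
            self.surprises += 1
            self.effect[action] += self.lr * error


def true_world(state, action):
    # hidden dynamics the agent has to discover: right = +1, left = -1
    return state + (1.0 if action == "right" else -1.0)


agent = PredictAndTestAgent()
state = 0.0
for step in range(40):
    action = "right" if step % 2 == 0 else "left"
    new_state = true_world(state, action)
    agent.observe(state, action, new_state)
    state = new_state
# early on every step is a surprise; once the model is roughly right,
# predictions match and the updates stop - steady state is "no news"
```

The baby-ish part is that the agent generates its own training signal just by acting and watching, no labels needed.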

My hunch is that this style of learning - and safe sandboxes to foster it - will be critical if we ever get true thinking AI, something with the ability to shape its own thinking at a macro level, vs. algorithms that kind of "learn" but only within the meta-patterns programmed into them at the outset. (Another theory says certain types of squid seem to have the same level of horsepower humans do, but maybe it never reaches fruition because A. the undersea environment of objects isn't as rich and B. there are enough predators that a prolonged period of protected, experimental, learning childhood isn't possible.)