"Freedom is the right of all sentient beings."

November 16, 2019
The website Lost in Mobile has a corresponding WhatsApp chat I greatly enjoy - admittedly a bunch of blokes, but from all around the world, and we enjoy chatting about techie and sometimes philosophical things... the top bar of the site has a link to join the chat if you think it's something you might dig... anyway, here is some stuff from yesterday and today... I realized my rambling was more of a blog post, so here it is...
Kirk:
The USPTO wants to know if artificial intelligence can own the content it creates... What a weird question! My first thought is no - since this is one step away from personhood, and most AIs like this don't seem that autonomous.

But we grant virtual personhood to corporations, and they can hold copyrights, right? Is an AI less of a person than a corporation is?

Also shades of the issue of whether a monkey can hold a copyright on a photo it clicked the shutter for, or whether it fairly belongs to the human who set up the situation and owns the equipment...

Bob:
Doesn't the patent thing depend on the definition of "person"? Is part of that being self-aware? In the case of the simian selfie, my question would be did the simian in question initiate the copyright. Now one could argue that just because the copyright might not belong to the simian, that doesn't mean it belongs to the photographer.

As for corporations, don't they represent either the owners or the shareholders?

Andrew:
It's just lawyerly bullsh*t. If you created the AI you own the works it creates. An AI is just a machine, albeit a complex one.

Bob:
What happens if we get to the point where an AI is a separate, self-aware entity? Something that we would consider sentient if it weren't a "machine". Further, let's say it's mobile and has human-like appendages?

And going further, what about an AI that creates an AI? Is the second AI considered a derivative work? At what point is the AI no longer "just a machine"? Think Data from Star Trek: The Next Generation.

Andrew:
It's tough luck. Data was still created artificially and he's still a machine.

Bob:
Not to be picky, or too pedantic, but I could argue that we're all machines in a way, but I know what you mean. I'm waiting for the dolphins to tell us that we're ruining their oceans.

Kirk:
"Sì, abbiamo un'anima. Ma è fatta di tanti piccoli robot."
--Italian philosopher Giulio Giorello
"Yes, we have a soul. But it is made up of many small robots."

Andrew:
I do get that we're bio-chemical machines but I do think there's something special about biological machines. Yes, there's a spectrum from us to amoebas, and some animals probably should have more rights than they do, but machines don't feature.

And for all I'm a geek, I think we should strongly resist making machines that are too human-like. I think it's dangerous on many levels.

Kirk:
The machines we have now don't feature (hadn't heard that phrase before) but I don't think I buy there's anything eternally unique about biology -- and if there was, at some point we'd figure out how to make "wetware" robots.

The book "Mind in Motion" suggests one idea: that much of biologicals' cognition is based on development and moving in space (and as humans, so many of our ways of modeling the world are fundamentally physical).

Andrew:
Maybe we just shouldn't....
We're not ready to be gods.

Shaun:
I am :-)

Kirk:
I kind of agree (with Andrew, not Shaun) but mostly just because I worry we'd be bad at it - that if we create something that has its own agenda, that agenda might not be well aligned with our own.

Famously, folks keep moving the bar on "well, if a computer can do THIS it must be intelligent" - playing chess, vision recognition, etc. Computers can do that stuff and still we see that, well, it's still mostly a well-crafted tool - it "thinks" in the same way birds "know how" to fly, designed into the system so to speak (not designed per se in the case of birds, but you know what I mean).

Of course, AlphaZero changes that scene a little bit - I've always said "chess programs, ho hum, wake me when a chess-playing program is also good at playing backgammon" and that's kind of what we have here - AlphaZero starts with no knowledge, plays against itself, and in a matter of hours or days becomes a world-beater, with a way of playing its games that often seems uncanny and alien to experts in the game's usual progression.
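(The real AlphaZero pairs deep neural networks with Monte Carlo tree search, which is well beyond a blog aside - but the basic "start with zero knowledge and learn by playing yourself" loop can be sketched in miniature. Here's a toy tabular self-play value learner for tic-tac-toe; everything in it - the names, the parameters - is my own illustration, not anything from AlphaZero's actual code:)

```python
import random

# the 8 winning lines on a 3x3 board indexed 0..8
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_train(episodes=2000, alpha=0.3, epsilon=0.1, seed=0):
    """Learn a value table for X from scratch, purely by self-play."""
    rng = random.Random(seed)
    V = {}  # board-string -> estimated value for X: +1 win, -1 loss, 0 draw
    for _ in range(episodes):
        board, player, history = [" "] * 9, "X", []
        while True:
            moves = [i for i, s in enumerate(board) if s == " "]
            if not moves:                 # board full: draw
                result = 0.0
                break
            if rng.random() < epsilon:    # occasionally explore a random move
                move = rng.choice(moves)
            else:                         # otherwise exploit current estimates
                def score(m):
                    board[m] = player
                    v = V.get("".join(board), 0.0)
                    board[m] = " "
                    return v if player == "X" else -v
                move = max(moves, key=score)
            board[move] = player
            history.append("".join(board))
            if winner(board):
                result = 1.0 if player == "X" else -1.0
                break
            player = "O" if player == "X" else "X"
        # back the final result up through every state visited this game
        for state in history:
            V[state] = V.get(state, 0.0) + alpha * (result - V.get(state, 0.0))
    return V
```

(Run enough episodes and the value table starts steering play away from losing lines - the same bootstrap idea, minus the neural network and the tree search.)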

But still.... I guess now I want to say "wake me when it's a program that WANTS to start up chess or some other game of its own volition" And we're not there yet - or if we are, that system simultaneously figured out how to keep a low profile since humans might not appreciate the competition :-D

All this gets quickly into "what's humanity all about, anyway" - like, is there a secular purpose or universal goal that many people could agree on? There might not be - but one proposal would be "to keep humanity alive for as long as possible" - this could be a means to other ends, or an end unto itself.

(Of course, not everyone agrees with that - some think humanity is nature's current primary experiment with memetic intelligence, and it's certainly taking its toll on the biodiversity of the planet... biodiversity being a pretty good other candidate for what the best universal goals might be.)

So I do think a decent mission for humanity is the creation of new categories of things... ideas and concepts that wouldn't exist if we weren't here, but not just novelty in the way a list of random numbers is novel: novelty in meaning, I guess. Which means, technically, if our robot or virtual children could do that after us - like, if they could be made to explore the cosmos and so would survive an asteroid strike or solar flare that made the earth uninhabitable for us... I dunno, I guess I'm for that! But not at the cost or risk of humanity... but maybe if we had a really nice retirement...

Of course, say we could make real AI, true virtual people - we'd be in an odd state. Like, it would seem morally wrong to not give the AIs rights - to treat them as intelligent, feeling, thinking slaves. Let them vote. But what happens to democracy when you can make all the clones you want, legions to swamp any popular vote?

(Of course, when you apply too much of that same thinking to humans of the real world rather than this still very hypothetical example, you get into some ugly eugenics and fascist places real quick)

But coming back to the democracy idea - should a virtual person get to vote, and why or why not? The argument against, at least for the clones, would be in part "because they were too easy to make, once we grew the original in a somewhat more organic way, teaching it, etc." So that suggests a model where the value of a person is somehow tied to the effort and expense and time and resources that went into making them? Or maybe a better model is that the value of a person is somehow tied to the guesstimated quality and uniqueness of what's likely to come out of them... like, I think many omnivores would feel worse about the death of, say, a black bear in the woods than of a cow or pig that's lived in a controlled farm environment all its life... maybe partially because that bear has had a more unique life?

Or I dunno. This might all be morally suspect territory - any line of reasoning that suggests devaluing some group of humans because of some constructed measure of "human value" is deeply suspicious! So maybe it's not worth going there for the sake of still-very-hypothetical questions about AI and virtual people...

Heh, I remember the Optimus Prime toy from the original Transformers line... every robot had a biographical 'tech spec' with a tagline, and his was "Freedom is the right of all sentient beings."
So for me this all brings up the question: "Am I on 'team human', or am I on 'team sentient being'"?

Just saw Trevor Noah at the Chevalier, a week after seeing Nick Kroll at the same venue. (Part of a birthday triumvirate of comedy for Melissa with Maria Bamford at the Wilbur next week.)

Not sure I'll ever get used to fairly big-time celebrities going "Helllllloooooo MEDFORD!"