YouTube has removed videos of robots fighting, citing “animal cruelty”

This is not clickbait. You can read the article from MIT Technology Review here:

https://www.technologyreview.com/f/614204/youtube-has-removed-videos-of-robots-fighting-citing-animal-cruelty/

I alluded briefly the other day to questions about our relationship with robots and A.I. It wasn’t a matter of conjecture. We have a visceral reaction to seeing animals hurt. That’s natural. No question. No-brainer. More recent studies have shown that we respond to CGI “interviewers” in videos much as we respond when sitting in front of another human. This is despite the fact that designers have taken special care to make it obvious that we are not talking to a real human but to an A.I.-enabled image. Guests on the now-defunct Muppet Show have admitted that during filming they would ‘forget’ they were talking to a puppet. Hell, I have even started talking to mannequins in department stores. So we need only a fraction of human cues to anthropomorphize non-humans.

Research has shown that we even empathize with robots (that are obviously robots) if we see them being treated poorly.

This brings me back to what I alluded to the other day regarding A.I. How close to consciousness does a machine have to get before it’s too much like us… too human… to treat differently than we treat humans? (Not that all humans have equal rights, but in a moral sense we are supposed to believe that we should!) Is this a valid question at all? What exactly is it that makes us human? And do our innate reactions translate to moral code? Watch the video above. This is all going to become a much bigger discussion as technology moves forward and becomes more autonomous.

YouTube, as it turns out, admitted that the fighting-robot videos were removed mistakenly by its automated systems, but… it will, as I said, become a much broader discussion. It’s already happening, having transcended science fiction and bubbled up just below the surface of public discourse. So I’m going to ask the question now. At what point does a mechanical being we created become too close to human to treat the way we would treat a car, or a truck… or simply a tool? Do they have to be as close to us as the replicant slaves in Blade Runner?

My observations suggest that we’re not anywhere near ready to have these conversations. We have a whole hierarchy of beliefs about what life in its various forms is worth, beginning with mankind’s supposed dominion over animals, and that breaks down by which ones treat us best, or taste the best, and which are too damn ugly to eat, etc. Then we have a whole list of rationalizations for when, why, and where it’s okay to kill other humans, either on purpose or by “unfortunate accident.” Consider the response to the Black Lives Matter movement. Statistics have long since proven that we, as a society, don’t hold the lives of non-white people in the same regard, and it’s evidenced in a thousand different ways. A group of people starts saying, “Hey, we matter too,” and half the nation is up in arms. The first person who looked at me and asked, “But don’t all lives matter?” would consider herself a liberal (which is another goofy tangent for another time), but it struck me as the dumbest question I’ve ever heard. My response?

Hey, you might believe that, and I might believe that, but all the evidence is to the contrary. If we, as a whole society, truly believed that, then everyone would be treated equally. They’re not, so no, we don’t believe that for a minute, or we’d do something to change things.

No, it doesn’t seem to me that we are ready to have a discussion on whether a robot or android has rights.
