David Gunkel, a professor at Northern Illinois University, has written an interesting new book in which he argues that our moral considerations are too restrictive. Gunkel, who holds a Ph.D. in philosophy, told the college newspaper NIU Today that we should expand our ethical circle to include, well … robots.
“Historically, we have excluded many entities from moral consideration and these exclusions have had devastating effects for others,” Gunkel says. “Just as the animal has been successfully extended moral consideration in the second-half of the 20th century, I conclude that we will, in the 21st century, need to consider doing something similar for the intelligent machines and robots that are increasingly part of our world.”
Well, that’s interesting, but what does it mean? How should we “consider” robots? Should we not eat them? Should we not deconstruct and destroy them? Do they have workers’ rights? Should I feel bad when I harm them?
No, says Gunkel. He is merely arguing that we should consider the influence of robots and technology on our moral beliefs and actions. He doesn’t want to propose specific positions so much as he wants to get a conversation going.
Gunkel says he was inspired to write “The Machine Question” because engineers and scientists are increasingly bumping up against important ethical questions related to machines.
“Engineers are smart people but are not necessarily trained in ethics,” Gunkel says. “In a way, this book aims to connect the dots across the disciplinary divide, to get the scientists and engineers talking to the humanists, who bring 2,500 years of ethical thinking to bear on these problems posed by new technology.
“The real danger,” Gunkel adds, “is if we don’t have these conversations.”
With that, I agree. Which is good, because I’m not about to start feeling empathy for robots.
Note: You can read an excerpt from “The Machine Question” here.