Robotic culture encompasses not just the behavior of the robots themselves, but also the very large community of people involved in the creation, development, and management of the robotic world. The main fields involved in Roboethics are robotics, computer science, artificial intelligence, philosophy, ethics, theology, biology, physiology, cognitive science, neuroscience, law, sociology, psychology, and, of course, industrial design.
Robotics design has developed at an accelerating pace over the last 15 years, and with it a global concern about ethics has evolved. The first question was addressed in last week's article: just because we know how, should we? Ethics in relation to the treatment of non-human animals and even non-living things (machines) and their potential “spirituality” has been discussed forever. But it wasn't until Isaac Asimov published his Three Laws of Robotics (1942) in his science fiction works that ethics was discussed specifically in relation to robots, and the first world symposium on Roboethics was not held until 2004. Yet for some reason, Roboethics is not well known to the general public except through Hollywood films.
When I first saw 2001: A Space Odyssey back in ’68, I was stationed in Colorado and too stoned to understand the implications of AI in HAL 9000. Even years later, when The Terminator came out, I still had difficulty understanding the power of computers to actually formulate reasoning through algebraic expressions built on a binary system. These two machines were programmed to eliminate a threat. But how did they determine what a threat was? That is why these were science fiction movies. Yet real science has now advanced to that same state, and who knows where it will end up in the future.
Ethics is often slow to catch up with technological developments. Here is my second question: should these robots be considered legitimate moral agents that can be held responsible for decisions and actions? Recently in Germany, a robotic assembly arm crushed a man to death during a service operation, and the question of responsibility immediately came up. How far would that responsibility extend? When it comes to cognitive or autonomous humanoid robots, will responsibility start and end with the robot? If so, the next question will be: will humanoid robots have rights?
How might society, and ethics, change with robotics? Should robots be programmed to follow a code of ethics, if that is even possible? Scientists are working relentlessly to improve AI technology for the benefit of mankind. Look around your own environment: smart TVs, smartphones, smart cars, and even smart houses. What if, one day, Artificial General Intelligence (AGI) exceeds our own intelligence and is placed in a synthetic humanoid form that is no different in appearance from us? Will these SAMs (“strong” AI machines) be a new form of life? Should they be given rights, and could they pose a threat to mankind?
These and many other concerns are being addressed in open summits by scientists around the world and in several recent books on the topic. However, AI is and will continue to be developed by deep corporate, national, and military pockets. It will increasingly be developed in stealth mode to achieve monetary, market, and national dominance (the same as any other technology, only more so). Goldman Sachs does not share with JP Morgan; Google does not share with Facebook or Apple; Amazon does not share with Walmart; the US tries not to share with China; Iran tries not to share with Israel. That is why robotics will be a kaleidoscope of uncertainty. This is literally an artificial arms race. Just ask Watson.