Artificial intelligence researchers often idealize Isaac Asimov's Three Laws of Robotics as the signpost for robot-human interaction. But some robotics experts say the concept could use a practical makeover to recognize the current limitations of robots.

Self-aware robots of the kind that inhabit Asimov's stories, and others such as "2001: A Space Odyssey" and "Battlestar Galactica," remain in the distant future. Today's robots still lack any real autonomy to make their own decisions or to adapt intelligently to new environments. But danger can arise when humans push robots beyond their current decision-making limits, experts warn. That can lead to mistakes and even tragedies on factory floors and in military operations, when humans forget that all legal and ethical responsibility still rests on the shoulders of Homo sapiens.

"The fascination with robots has led some people to try retreating from responsibility for difficult decisions, with potentially bad consequences," said David Woods, a systems engineer at Ohio State University.

Woods and a fellow researcher proposed revising the Three Laws to emphasize human responsibility over robots. They also suggested that Earth-bound robot handlers could take a hint from NASA when it comes to robot-human interaction.

Updating Asimov

Asimov's Three Laws of Robotics are set in a future when robots can think and act for themselves. The first law prohibits robots from injuring humans or from allowing humans to come to harm through inaction; the second requires robots to obey human orders, except those that conflict with the first law; the third requires robots to protect their own existence, except when doing so conflicts with the first two laws.

South Korea has used those "laws" as a guide for its Robot Ethics Charter, but Woods and his colleagues thought they lacked some vital points. Woods worked with Robin Murphy, a rescue robotics expert at Texas A&M University, to create three laws that recognize humans as the intelligent, responsible adults in the robot-human relationship.

Their first law says that humans may not deploy robots without a work system that meets the highest legal and professional standards of safety and ethics. A second revised law requires robots to respond to humans as appropriate for their roles, and assumes that robots are designed to respond to certain orders from a limited number of humans. The third revised law proposes that robots have enough autonomy to protect their own existence, as long as that protection does not conflict with the first two laws and allows for a smooth transfer of control between human and robot. That means a Mars rover should automatically know not to drive off a cliff, unless human operators specifically tell it to do so.
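The revised third law can be pictured as a simple decision rule. Here is a toy sketch (not from Woods and Murphy's proposal) of a rover refusing a self-destructive command unless a human explicitly overrides it; the function name, command strings, and override flag are all illustrative assumptions:

```python
def next_action(command: str, cliff_detected: bool, human_override: bool) -> str:
    """Toy decision rule for the revised third law.

    The robot keeps enough autonomy to protect its own existence
    (refusing a self-destructive move), but hands control back to
    the human operator explicitly rather than acting silently.
    """
    if command != "drive_forward":
        # Non-hazardous commands pass through (revised second law: respond
        # to orders as appropriate for the robot's role).
        return command
    if cliff_detected and not human_override:
        # Self-preservation: stop and request guidance, making the
        # transfer of control back to the human smooth and explicit.
        return "halt_and_request_guidance"
    # Safe path, or the operators specifically told it to proceed.
    return "drive_forward"
```

For example, `next_action("drive_forward", cliff_detected=True, human_override=False)` returns `"halt_and_request_guidance"`, while the same command with `human_override=True` proceeds, mirroring the article's cliff-edge example.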
Science Fiction's Robotics Laws Need Reality Check
Topic: Technology