2.21.2009

Experts Warn: Military Robot Rebellion Inevitable - By “Experts” We Assume They Are Referring To Will Smith And McG


The long-standing sci-fi conceit that informs the plot lines of films like I, Robot and the Terminator franchise - androids and robots turning against their human creators and running amok - continues to become less the stuff of popcorn movies and more the fodder for documentaries and full-blown government warnings.
To wit: a report commissioned by the Office of Naval Research and compiled by the Ethics and Emerging Technology department of the California State Polytechnic University (Cal Poly) - one warning that autonomous military robots created to fight future wars could turn on their human masters - has begun circulating around the internet.
And by “circulating around the internet” we mean: please help yourself to a copy.
The report, prepared by Patrick Lin, Ph.D., is about as long as a Melville novel, except in this case the cautionary tale swaps man’s preoccupation with hunting down a giant, malevolent white whale for man’s preoccupation with building a giant, malevolent army of killer robots.
Lin’s concerns are precipitated by the complex reality that the millions of lines of code that will have to be written to run the robots’ software could potentially get “jumbled up,” and then, well... you know.
In his report, the good doctor emphasizes that the bellicose-’bots need to be programmed to adhere to the Laws of War.
One might argue that some big-brained geek outfit like Google could probably come in and grok this cybernetic conundrum, but we have our reservations regarding that approach.
In any event, we’re “old school” and we can only hope that whoever has to crack this algorithmic egg will subscribe to Isaac Asimov’s Three Laws of Robotics (a rough sketch of how that pecking order might look in code follows the list):
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
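For the pedants in the audience, here is a back-of-the-napkin sketch - ours, not Lin's report - of the Three Laws treated as a strict priority check a hypothetical robot runs before acting. Every class, field, and function name below is made up purely for illustration.

# A toy sketch (not from the report): Asimov's Three Laws as a strict
# priority check on a proposed action. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    harms_human: bool = False              # would the action injure a human?
    lets_human_come_to_harm: bool = False  # would it, through inaction, let a human be harmed?
    countermands_human_order: bool = False # does it disobey an order from a human?
    endangers_robot: bool = False          # does it put the robot itself at risk?
    ordered_by_human: bool = False         # did a human explicitly order it?

def permitted(action: ProposedAction) -> bool:
    """Check the laws strictly in order; a lower law never overrides a higher one."""
    # First Law: a robot may not injure a human or, through inaction, allow harm.
    if action.harms_human or action.lets_human_come_to_harm:
        return False
    # Second Law: a robot must obey human orders (the First Law was already cleared above).
    if action.countermands_human_order:
        return False
    # Third Law: self-preservation counts only after the first two laws are satisfied,
    # so a self-endangering action is still permitted when a human ordered it.
    if action.endangers_robot and not action.ordered_by_human:
        return False
    return True

if __name__ == "__main__":
    # A human-ordered action that risks the robot but harms no one: allowed (Law 2 beats Law 3).
    print(permitted(ProposedAction(ordered_by_human=True, endangers_robot=True)))  # True
    # An ordered action that would injure a human: the First Law wins, so it is refused.
    print(permitted(ProposedAction(ordered_by_human=True, harms_human=True)))      # False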
To read more, click HERE.
