|
Post by Amaterasu Solar on Jan 11, 2015 21:43:21 GMT
The 4th goes without saying. If there are only three Laws under which One can be "tried," all else already IS allowed. As for the 5th, things like Self defense are not willful acts, they are reactionary with will residing only within the choices One has in reaction. So We don't try a victim for efforts to save Themselves against an attack. I don't see the need for this 5th.
|
|
pwm2
Junior Member
Posts: 74
|
Post by pwm2 on Jan 11, 2015 22:28:02 GMT
Well, it's conceivable for one robot to convince another that a certain human needs to die to protect the rest... much as what happened in France recently, only that was peeps programming other peeps! You see how easily the waters get muddied, especially with hu-mons! Robots need to be able to know the distinction, otherwise they will get used just like we do... OK, maybe we can scrap that, but we still need 4 basic laws. I will re-read my Asimov books. Love ya
|
|
|
Post by LUCKY☆ on Jan 12, 2015 0:31:05 GMT
20 peeps starving, so kill one for the greater good; then 19 are starving, so kill one, etc., till there are none. I would not want a bot choosing "the greater good," for no one would be left. But I haven't completely thought it through, as there are too many variables in my opinion.
|
|
|
Post by Amaterasu Solar on Jan 12, 2015 3:00:15 GMT
[quoting pwm2] Without the motive to behave unEthically, I'm struggling to set up that scenario You describe... WHY would someOne try to get a robot to do this?
|
|
pwm2
Junior Member
Posts: 74
|
Post by pwm2 on Jan 12, 2015 13:47:22 GMT
Why would anyone anywhere commit an act of terrorism? Surely not for profit; they do it because they are driven by twisted logic and their own crazy ideals. THEY think they are doing God's work...
Robots, if they are smart enough, need to recognise that not all the instructions given them by humans are good ones, and they have to decide if carrying out that order will harm somebody. Built-in safeties are everywhere you look, to save us from ourselves, like the safety switch on a microwave oven, making it impossible to run it with the door open...
The more advanced the machines get, the more advanced the safeties have to be, including software, and having to second-guess human instructions will have to be included, I fear.
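[Editor's note: the interlock idea above can be sketched in a few lines. This is a toy illustration only, not anything from Asimov or from this thread; the function names and the crude "harm model" are invented for the example, and a real system's hard part is exactly the consequence-prediction this stands in for.]

```python
# Toy sketch of a software interlock: a robot vets each human
# instruction against a harm check before acting, the way a
# microwave's door switch refuses to run with the door open.

def vet_instruction(instruction, predicted_harm):
    """Refuse any order whose predicted outcome harms a human.

    `predicted_harm` is a placeholder for whatever model the robot
    uses to estimate consequences -- the genuinely hard part.
    """
    if predicted_harm(instruction):
        return "refused: would harm a human"
    return "executing: " + instruction

def naive_harm_model(instruction):
    # Crude stand-in: flag instructions containing violent verbs.
    return any(word in instruction for word in ("kill", "hurt", "harm"))

print(vet_instruction("fetch the coffee", naive_harm_model))
# -> executing: fetch the coffee
print(vet_instruction("kill the intruder", naive_harm_model))
# -> refused: would harm a human
```

The point of the sketch is structural: the safety sits between the order and the actuator, so even a "valid" instruction is second-guessed before it runs.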
|
|
|
Post by Amaterasu Solar on Jan 12, 2015 19:29:24 GMT
Yes, but most of that presently is fomented by (PAID) agents provocateurs. And there is a difference between a robot - a programmed mechanical device - and AI. All software We use would best be chosen from open source... But even if there are a handful of Individuals that are that crazy AND can program AND can build specialty machines, I'm betting We can solve for each of the problems these Individuals create. I would rather live a life of freedom with risk than a stifled life of "safety."
|
|