Post by unkleE on Apr 24, 2013 2:48:39 GMT
I came across this at Victor Reppert's Dangerous Idea blog.
In a recent blog post titled Ten Years to the Robot Apocalypse, Richard Carrier references some developments in robot AI that seem very interesting and futuristic - robots that evolve in their understanding, learn to understand themselves and their environment, discover laws of nature for themselves, etc. Never mind for now that two computer science engineer commenters on Dangerous Idea suggested that Richard didn't understand what was going on in this field; let's stick with him for the moment.
Richard went on to suggest that if the capabilities of several of these robots were combined, they might present some threats. He uses the phrase "robot apocalypse", but this appears to be just a headline, for he points out that he doesn't take the threat all that seriously - though seriously enough to raise it.
He then embarks on a "Digression on the triumph of atheism" in which he says these robots prove his refutation of Victor Reppert's argument for God from reason (AFR), because: "Their operations can be reduced to nothing but purely physical components interacting causally according to known physics (the operation of logic gates and registers exchanging electrons), yet they do everything that Christian apologist Victor Reppert insisted can’t be done by a purely physical system."
But, as one commenter pointed out, to achieve this the "machines are intelligently designed! (by humans)" - which might seem to confirm Reppert's argument rather than Carrier's refutation of it, though doubtless that is arguable.
Carrier goes on to make the interesting point that: "they had better not forget to frontload some morality into any machine they try making self-sentient", believing (I think rightly) that a powerful AI robot might not make choices humans would want without this.
But this raises another interesting philosophical point. If these AI robots evolve to be so logical, and if, as I understand Carrier to believe, morality evolves logically (he refers in the blog to "the physically reductive reality of objective moral facts"), why would he need to frontload morality? Here again, as with reason, he seems to be relying on intelligent human design to give him what he claims doesn't require design.
Richard's blog and opinions of himself are as definite as ever (he is still a "renowned" author with "avid" fans all over the world; he has made an "extensive refutation" of the AFR and he seems to believe he is always right), but it seems to me that he has shot himself in the foot this time. If anything, his thoughts on these robots seem to demonstrate the opposite of what he claims. Or have I completely misunderstood?