|
Post by mike on Sept 28, 2017 11:55:07 GMT -6
I have been in touch with Beloved a bit via e-mail. Last night he wrote me this asking for feedback:
So, I've got a question for the board and/or you. I don't know if this would be a good hobby or not, because I'm going to learn to create programs that can learn: neural networks. They can teach themselves a rule based on examples you give them. They 'think'. I feel like this is a bit like playing God, though, making 'thinking' machines. Is this playing God, or is it okay to pursue?
-Beloved
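For concreteness, here is a minimal sketch of what "teaching itself a rule from examples" can mean in the simplest case: a single perceptron, the most basic building block of a neural network, nudging its own weights until it reproduces the logical AND rule from four examples. (This is an illustrative toy sketch only, not something from Beloved's coursework.)

```python
# Four examples of a rule: inputs -> desired output (logical AND).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, one per input
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    # Weighted sum of the inputs, thresholded at zero.
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

# Repeatedly show the machine the examples; it adjusts its own
# weights whenever it gets one wrong -- this is the "learning".
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # -> [0, 0, 0, 1]
```

Note that the machine is never told the AND rule; it infers it from the examples alone, which is why the question of *which examples (and whose)* go in matters so much.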
|
|
|
Post by mike on Sept 28, 2017 11:55:26 GMT -6
My response:
My thoughts on this, wow! This topic has so much tied to it, and I've seen so much on it, that I'm afraid my response will be too strong. So take what I say with a grain of salt.
While many believe this type of thinking is the way of the future, many others see it as a possible destroyer of humanity. Search for Elon Musk on the topic if you haven't heard his take already. So AI is just that: artificial intelligence, with artificial being the main point. What goes into the algorithm, who inputs the 'theory', and where the AI draws its 'knowledge' from are likely the key factors. Does the AI have access to all things, or only what programmers tell it? It would seem to me that, taking these things together, this is playing God: the ability to consider all possible outcomes and 'logically' derive the best one. However, that is where it is flawed, as faith defies logic!
As for neural networks, this is possibly the mark of the beast. Again, Musk is a proponent of neural lace, which would enhance our brains to keep up or compete with the AI. Again, a dangerous game for sure.
Is it possible for you to involve yourself in this for good (a positive outcome)? Will you be able to work alongside people who are ignorant of faith and humanistic? Perhaps they need someone in that arena to help them, although it's possible they drag you down. Lots to prayerfully consider and weigh.
|
|
|
Post by mike on Sept 28, 2017 11:56:33 GMT -6
And his reply:
|
|
|
Post by whatif on Sept 29, 2017 17:16:33 GMT -6
Thank you for posting this question from Beloved, mike! I agree with your response. While I can't say I completely understand what would be involved in such work, it sounds like a frightening thing to me. There will eventually be some type of "image" of the Beast that will be given the ability to speak and to kill. Would a learning machine perhaps fill the bill for the prophecy about the beast from the earth of Revelation 13:14-15?
It ordered them to set up an image in honor of the beast who was wounded by the sword and yet lived. The second beast was given power to give breath to the image of the first beast, so that the image could speak and cause all who refused to worship the image to be killed.
|
|
|
Post by mike on Sept 29, 2017 17:59:58 GMT -6
whatif I'll be sending your response and any others back to him. We miss B
|
|
|
Post by whatif on Sept 29, 2017 18:26:40 GMT -6
Thank you, mike! Tell Beloved hello for us all!
|
|
|
Post by yardstick on Oct 9, 2017 10:18:11 GMT -6
Dear Beloved,
The concept you are considering must by necessity walk a fine line. Isaac Asimov discussed a number of moral considerations dealing with robots/androids/AI in his books (the Robot series). Here are some things to consider:
1. Asimov's Three Laws of Robotics:
a. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
b. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
c. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
2. Intent: are the devices using the software going to be tools, or people? Can the AI be used to harm people? Consider the Three Laws if applied to A.I.:
a. A.I. may not injure a human being or, through inaction, allow a human being to come to harm.
b. A.I. must obey the orders given it by human beings except where such orders would conflict with the First Law.
c. A.I. must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Are the people spearheading this interested at all in those three (modified) laws? If not, run away.
I also recommend reading Immanuel Kant regarding morality with respect to this. What will be the morality of this A.I.? The moral code Google is developing for A.I. is amoral. Please consider: for any kind of A.I. morality apart from Asimov's Laws, the worst case is an A.I. that does nothing when something bad occurs, despite any capacity it might have had for addressing the situation (sociopathic). You really do not want to go down the road of an A.I. actively taking a moral position toward the elimination of an individual or group (psychopathic).
Additional things to consider:
1. A robot, by modern definition, has no sentience. It follows pre-programmed code and cannot change its own code. It may also have a non-humanoid appearance.
2. An android, by modern definition, may or may not be sentient, or may be semi-sentient: it may or may not be able to partially or completely re-write its own coding, to learn, or to become self-aware. Androids generally have a humanoid appearance, though not always.
3. A.I., by modern definition, may not be initially sentient, but has the ability to partially or completely re-write its own coding and learn, and may become self-aware (i.e., sentient) over time.
Robots cannot be A.I., but androids may be A.I. Asimov labeled both A.I. and androids as robots.
I hope you find this information helpful.
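One way to see the priority ordering built into the Three Laws is as a chain of checks where each lower law yields to the ones above it. Here is a toy encoding (my own illustrative sketch with made-up field names, not anything Asimov or this thread specifies):

```python
# Each proposed action is a dict of yes/no facts about its consequences.
# The laws are checked in priority order; a lower law applies only
# when it does not conflict with a higher one.

def allowed(action):
    # First Law: may not injure a human, or through inaction allow harm.
    if action["harms_human"]:
        return False
    # Second Law: obey human orders, except where obeying would
    # conflict with the First Law (a harmful order may be refused).
    if action["disobeys_order"] and not action["obeying_would_harm_human"]:
        return False
    # Third Law: protect its own existence, so long as that does not
    # conflict with the First or Second Laws.
    if action["endangers_self"] and not action["self_protection_would_break_laws_1_or_2"]:
        return False
    return True

# A robot refusing an order because obeying would harm someone:
print(allowed({
    "harms_human": False,
    "disobeys_order": True,
    "obeying_would_harm_human": True,
    "endangers_self": False,
    "self_protection_would_break_laws_1_or_2": False,
}))  # -> True: refusal is lawful when the order itself would cause harm
```

The ordering is the whole point: disobedience and self-sacrifice are not absolute violations but are judged relative to the laws above them.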
|
|
|
Post by mike on Oct 9, 2017 10:40:04 GMT -6
yardstick...sent to him. He misses being on here, and it sounds like he will be back (his words) "never, maybe two years". Anyone else replying, please tag me so I can forward the info to him. I'll also add: continue to pray for him (and each other, of course). A strong young Christian in the midst of teens. I remember how unruly I was at that age, and now, with the times so much more vile, he needs all the help he can get.
|
|
|
Post by yardstick on Oct 9, 2017 10:51:25 GMT -6
mike, I edited the post, so you might want to resend. He is in high school?
|
|
|
Post by mike on Oct 9, 2017 11:17:48 GMT -6
mike, I edited the post, so you might want to resend. He is in high school? I caught it with the edit; it said *edited 1 minute ago* when I copied it. Yes, for sure... although I don't recall verifying his actual age, I think 16.
|
|