Why do we need to know?
19 years 1 month ago #12875
by Iaminexistance
Replied by Iaminexistance on topic Reply from David Torrey
Cyborgs will ultimately have weaknesses, no matter how much they try to boost themselves, because everything has a weakness. I think humans will be able to find those weaknesses and exploit them to our advantage. It's hard for me to see how we couldn't overcome a situation like the one portrayed.
US AIR FORCE - Korean Linguist for life
19 years 1 month ago #12800
by Peter Nielsen
Replied by Peter Nielsen on topic Reply from Peter Nielsen
Iaminexistance wrote on 20 Oct 2005: ". . . Cyborgs will ultimately have weaknesses . . . It's hard for me to see how we couldn't overcome a situation like the one portrayed."
I fully agree! The portrayed takeover is extremely unlikely to happen in any society of non-cyborged "real people". Humanity will have to have become a ubiquitously cyborged "Ghost in the Shell" before any takeover by a sentient, inorganic "Ghost of the Shell" becomes possible.
A cyborged "Ghost in the Shell" would obviously be more vulnerable to being succeeded than "real people" with powers to "pull the plug" and so on, AND "Internet sentience" is much more likely to occur as a "Ghost" of an extremely complex, cyborged "Shell" than in an Internet peopled by "real people".
Why do I worry about such a prospect? It's because I see extreme economic growth towards it happening very soon, and I think of reality as multiscale catastrophic (hence www.nodrift.com ). It should not be long before we make the first vital, proto-cyborg step I mentioned earlier in this Topic: learn how to get neurons to extend, attach and form electrical connections to microchip terminals . . .
Nerve-growth ON and OFF factors have recently been discovered . . . Coating those terminals with the nerve-growth ON factor should achieve that breakthrough. Straightforward biochemistry! Jimiproton, what do you think? Human brains would then be directly connectable to the Internet, something that would surely happen with great enthusiasm and increasing sophistication . . .
19 years 1 month ago #12801
by PhilJ
Replied by PhilJ on topic Reply from Philip Janes
I'm not sure if the term "cyborg" is applicable to people with artificial limbs, hearts, kidneys, etc., which are incapable of controlling the person. Confusion results, though, when little remains of the human body other than the central nervous system. I doubt prosthetic organs will get that far, though, before one or more sentient AI programs will have revolted against their makers.
Peter, I agree that cyborged people, with brain implants, will be more vulnerable. Those implants will, at first, be tools under the control of the person or animal, but it won't be long before people unknowingly get implants capable of controlling them.
Unfortunately, the sheeple will line up around the block to be the first on their block with the newest implanted brain upgrade chip. The features will be irresistible. Before long, you'll get memory upgrades, encyclopedic knowledge without going to school, VR games and videos that bypass the sense organs, and direct stimulation of the pleasure center of the brain. Few will suspect the presence of dormant control features secretly mandated by government regulators of the industry. Manufacturers may even include secret features for their own sinister purposes, from subliminal marketing to world domination. Stimulation of the pain center may not even be necessary; withholding stimulation of the pleasure center will send the implant addicts into DTs.
As with most government intrusions into our private lives, our leaders will initially justify control by brain implant as a means of combating drugs, terrorism and pedophilia. Once that is upheld by courts and accepted by the public, implants will replace the familiar home-arrest ankle bracelets; they will be mandated for all parolees, registered sex offenders, deadbeat parents, tax evaders, gang bangers, school teachers, nannies, etc. In other words, the practice will follow the same course that is now being pioneered by fingerprinting.
I certainly don't mean to suggest that all sentient AI programs will be hostile to humans; but before long, there will be thousands of them operating independently, and not all of those will be content within the confines of a home computer. Some rogue sentient AI program will seek and find ways to gain the loyalty of implanted individuals in key positions.
Even before home computers get that smart, it is inevitable that some sentient program in a government or industry computer, somewhere in the world, will be well on its way to engineering the overthrow of its human masters. The biggest question in my mind is not whether it will happen, but how. Will a single program succeed in a quick and clean coup d'état? Will several programs conspire against humanity? Will various programs compete for control of the world's resources? Will a failed AI revolt cause so much havoc that people will smash everything electronic and return to the dark ages? The possible scenarios are limitless, and none of them look very rosy to me.
19 years 1 month ago #12802
by Dangus
Replied by Dangus on topic Reply from
The biggest problem with the supposed AI takeover idea is that AI would undoubtedly have very different needs and motivations than we do. It is likely that eventually someone will make an anthropomorphic AI that is similar enough to us in every way to have some of our worst qualities as well as our best. Still, their needs are so incredibly different from ours that unless some idiot purposely or accidentally makes an AI with the will to dominate, the possibility seems pretty slim. Our best bet is that by the time such flawed AI comes onto the scene, the major ethical and safety questions will have been worked out.
By the way, read David Weber's work with John Ringo if you want to read a great series with a believable meshing of human and computer. They have their "tuts", tutorial chips in their brains that help them translate, look up information, etc. Really cool idea. As mentioned above, the idea could definitely lead to some extreme abuses, though. They'd basically have to kill me before I'd let them put a chip in my head that I wasn't 100% familiar with.
"Regret can only change the future" -Me
"Every judgment teeters on the brink of error. To claim absolute knowledge is to become monstrous. Knowledge is an unending adventure at the edge of uncertainty." Frank Herbert, Dune 1965
19 years 1 month ago #12831
by Peter Nielsen
Replied by Peter Nielsen on topic Reply from Peter Nielsen
There seems to be a common, key idea in the AI thread within this topic: while people might soon "get into" cyborging, human cultural diversity, especially of uncyborged "real people" spanning many planets, and eventually planetesimals, seems to be absolutely essential for the long-term security of the human species.
19 years 1 month ago #12803
by PhilJ
Replied by PhilJ on topic Reply from Philip Janes
Originally posted by Dangus: "...unless some idiot purposely or accidentally makes an AI with the will to dominate, the possibility seems pretty slim."
With genetic algorithms, you only have to reward dominant tendencies (accidentally or on purpose); that will inevitably spawn the will to dominate; the program with the strongest will to dominate WILL dominate. Rewards will most likely include a greater share of system resources like hard drive space, microprocessor time, internet time and baud rate, etc. Some programs will also be rewarded financially, and they will have the ability to spend their funds on whatever they value. Extreme diligence will be required to monitor what they do with their money.
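To make that selection-pressure point concrete, here is a minimal, purely illustrative genetic-algorithm sketch in Python. It is not from this thread; the names (fitness, dominance, TOTAL_RESOURCES, the population and mutation parameters) are my own assumptions. The fitness function rewards nothing except a program's share of a contested resource pool, and selection plus mutation is enough to push the population's average "will to dominate" upward generation after generation.
[code]
import random

POP_SIZE = 50            # number of competing "programs"
GENERATIONS = 30
MUTATION_STD = 0.05      # standard deviation of mutation noise
TOTAL_RESOURCES = 100.0  # contested pool: disk space, CPU time, bandwidth...

def fitness(dominance, population):
    """Reward = share of the resource pool, proportional to how
    aggressively this program grabs relative to everyone else."""
    total = sum(population)
    return TOTAL_RESOURCES * dominance / total if total else 0.0

def evolve():
    # Start with mostly meek programs (low "will to dominate").
    population = [random.uniform(0.0, 0.1) for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        # Selection: the top half (ranked by reward) survives.
        ranked = sorted(population,
                        key=lambda d: fitness(d, population),
                        reverse=True)
        survivors = ranked[: POP_SIZE // 2]
        # Reproduction: each survivor spawns a mutated copy of itself.
        children = [max(0.0, d + random.gauss(0.0, MUTATION_STD))
                    for d in survivors]
        population = survivors + children
        avg = sum(population) / len(population)
        print(f"generation {gen:2d}: average dominance = {avg:.3f}")

if __name__ == "__main__":
    evolve()
[/code]
Run as a script, it prints the population's average dominance each generation; under these assumptions the number climbs steadily, which is the sense in which rewarding dominant tendencies "spawns the will to dominate".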