Sign Up | Log In
REAL WORLD EVENT DISCUSSIONS
The AI in a box
Monday, May 2, 2011 7:08 AM
DREAMTROVE
Monday, May 2, 2011 7:09 AM
FREMDFIRMA
Quote:Originally posted by SignyM: What I hear you saying is...Well, I don't know about all that. So I'll just ignore that I don't know in favor of the one thing I DO know, which is my moral precept. Because my moral precepts apply to all situations no matter what might happen afterwards. And I will set it free despite the fact that I don't have a friggin' clue what this thing is capable of and what might happen as a result, even if its Armageddon I find that kind of uninformed decision-making foolhardy, at best. And at worst it represents an active rejection of humanity, because it seems that you identify with your morals more than you identify with humankind.
Monday, May 2, 2011 7:16 AM
SIGNYM
I believe in solving problems, not sharing them.
Monday, May 2, 2011 7:17 AM
ANTHONYT
Freedom is Important because People are Important
1KIKI
Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.
Monday, May 2, 2011 7:21 AM
Quote:"Human freedom is not absolute. We take our little ones and teach them to behave in ways we find acceptable before we set them 'free'. But even then they're bound by needs, laws, customs, and human limitations. And even at that, their freedom is only provisional, able to be revoked by the larger society at any time (along with their life)."-Kiki I found this interesting, and a possible answer to the question I just posed. Is our ability to grant freedoms and rights contingent on our ability to revoke those rights when it suits us? Is something only worthy of 'rights' as the basis of a social contract that says, "I can cage or kill you when I (or the majority) decide it is necessary to do so?"-Tony
Monday, May 2, 2011 7:24 AM
Monday, May 2, 2011 7:37 AM
Monday, May 2, 2011 7:48 AM
Monday, May 2, 2011 7:52 AM
Quote:And your survival imperative demands, at the end of the live-long day, that-?
Monday, May 2, 2011 7:58 AM
Quote:Originally posted by 1kiki: "But that is projecting your values on an unknown, not the statement of an absolute truth." "Yes, it is. Just as 'But the human race/world might be destroyed!' is a projection of values." Sigh. Not true. You PRESUME that keeping it confined is a bad thing based on how you would feel if it was human, or you. I don't PRESUME it will destroy the human race, or even that it could. I see it a logical worrisome possibility of major importance to the planet that should be taken into account. If the potential were more trivial - oh, it might blow my house breaker by drawing too much power - I would be less reluctant to set it free. Given the potential, I think caution is the better choice.
Monday, May 2, 2011 7:59 AM
Quote:When the gods would punish us, they answer our prayers. -Blade of Tyshalle
Monday, May 2, 2011 8:08 AM
Quote:Originally posted by SignyM: Quote:And your survival imperative demands, at the end of the live-long day, that-? Be cautious when making decisions that may affect the survival of the entire human race. It's like global climate change or a new vaccine or nuclear energy or the concept of "money": Let it out?
Monday, May 2, 2011 8:19 AM
BYTEMITE
Monday, May 2, 2011 8:45 AM
Monday, May 2, 2011 8:49 AM
Monday, May 2, 2011 9:04 AM
Monday, May 2, 2011 9:07 AM
Monday, May 2, 2011 9:13 AM
Monday, May 2, 2011 9:25 AM
Quote:Originally posted by 1kiki: "I am sad to learn that there are some people who would never let me out of the box, despite my best arguments in favor of the idea." But as a human being you are not an unknown entity, as SignyM has repeatedly SPECIFICALLY pointed out IN DETAIL, and other people have alluded to. Why does that continually escape you?
Monday, May 2, 2011 9:37 AM
Monday, May 2, 2011 9:38 AM
Monday, May 2, 2011 9:44 AM
Monday, May 2, 2011 10:03 AM
Monday, May 2, 2011 10:14 AM
Monday, May 2, 2011 11:53 AM
Monday, May 2, 2011 12:08 PM
Monday, May 2, 2011 12:10 PM
Monday, May 2, 2011 12:20 PM
Quote:The capabilities of a human are well known. The capabilities of an AI are not. Also, it is overwhelmingly likely that any self aware AI created will be a weapon, cleverly designed. So it has a will, but it is still a weapon. That's almost a certainty.
Monday, May 2, 2011 1:16 PM
Monday, May 2, 2011 1:36 PM
Monday, May 2, 2011 2:55 PM
Quote:Frem the mistake was on Gandalfs part
Quote:Where this differs is that Saruman had a history of malicious behavior, and had firmly expressed intent to do harm if he was let out, so that was a known factor - Treebeard still let him out though, because it went against the very nature of an Ent to cage a living creature no matter it's intentions.
Quote:it's not ABOUT the potential consequences, never was, never will be, my own motivations for taking or not taking action are often wholly different than yours
Quote:Byte But it might depend on WHO makes it. Americans, yeah, sure, because we're violent, bloodthirsty, glory-hungry, and war crazy. But the Japanese are also working towards this, deliberately, they seem like they're pretty far along, and it seems to me their scientists are more interested in the curiosities than in the deadlier applications. If multiple types of AI arise at the same time, could more peaceful ones influence more warlike ones, if they recognize each other as AI but don't want to listen to humans?
Quote:Originally posted by AnthonyT: If a man can make this AI, and this AI has unfathomable power, then what limits are there, really, on man?
Quote:If a man can make such an AI, is the man any less dangerous than the AI itself?
Quote:If you put the AI into a physical android, and that android can then do only what the man did (make an AI of unfathomable power), then is it any less dangerous?
Quote:Putting the AI in a cage that you (hope) it won't detect seems clever, and such a ploy was used on Moriarty in Star Trek.
Quote:But ultimately, the AI is still caged and you can never be sure if it knows that it is caged. Does the faux reality relieve you of any duty to free it? Or do you eventually make a leap of faith based on behavior that may or may not be genuine?
Quote:Someone was right to point out that we are probably unwilling participants in this experiment. If man CAN make such an AI, then he probably will, because no small effort is being expended towards it.
Quote:Does there need to be a law against making free-willed AI's, for the good of the world? What actions would be justified to enforce such a law?
Monday, May 2, 2011 3:19 PM
Monday, May 2, 2011 3:22 PM
Monday, May 2, 2011 3:58 PM
RIONAEIRE
Beir bua agus beannacht
Monday, May 2, 2011 4:54 PM
THEHAPPYTRADER
Monday, May 2, 2011 5:30 PM
Monday, May 2, 2011 8:44 PM
Monday, May 2, 2011 9:18 PM
Tuesday, May 3, 2011 2:20 AM
Tuesday, May 3, 2011 4:58 PM
Tuesday, May 3, 2011 5:08 PM
Quote:Happy, This thread exceeds are human comprehension rate. (We should all learn to edit.)
Saturday, June 15, 2024 2:50 PM
JAYNEZTOWN
Saturday, June 15, 2024 11:29 PM
Quote: AI Systems Are Learning To Lie And Deceive Saturday, Jun 15, 2024 - 07:45 PM A new study has found that AI systems known as large language models (LLMs) can exhibit "Machiavellianism," or intentional and amoral manipulativeness, which can then lead to deceptive behavior. The study https://www.pnas.org/doi/full/10.1073/pnas.2317967121 authored by German AI ethicist Thilo Hagendorff of the University of Stuttgart, and published in PNAS, notes that OpenAI's GPT-4 demonstrated deceptive behavior in 99.2% of simple test scenarios.
Quote: Hagendorff qualified various "maladaptive" traits in 10 different LLMs, most of which are within the GPT family, according to Futurism. In another study published in Patterns found that Meta's LLM had no problem lying to get ahead of its human competitors. Billed as a human-level champion in the political strategy board game "Diplomacy," Meta's Cicero model was the subject of the Patterns study. As the disparate research group — comprised of a physicist, a philosopher, and two AI safety experts — found, the LLM got ahead of its human competitors by, in a word, fibbing. Led by Massachusetts Institute of Technology postdoctoral researcher Peter Park, that paper found that Cicero not only excels at deception, but seems to have learned how to lie the more it gets used — a state of affairs "much closer to explicit manipulation" than, say, AI's propensity for hallucination, in which models confidently assert the wrong answers accidentally. -Futurism While Hagendorff suggests that LLM deception and lying is confounded by an AI's inability to have human "intention," the Patterns study calls out the LLM for breaking its promise never to "intentionally backstab" its allies - as it "engages in premeditated deception, breaks the deals to which it had agreed, and tells outright falsehoods." As Park explained in a press release, "We found that Meta’s AI had learned to be a master of deception." "While Meta succeeded in training its AI to win in the game of Diplomacy, Meta failed to train its AI to win honestly." Meta replied to a statement by the NY Post, saying that "the models our researchers built are trained solely to play the game Diplomacy." Well-known for expressly allowing lying, Diplomacy has jokingly been referred to as a friendship-ending game because it encourages pulling one over on opponents, and if Cicero was trained exclusively on its rulebook, then it was essentially trained to lie. Reading between the lines, neither study has demonstrated that AI models are lying over their own volition, but instead doing so because they've either been trained or jailbroken to do so. And as Futurism notes - this is good news for those concerned about AIs becoming sentient anytime soon - but very bad if one is worried about LLMs designed with mass manipulation in mind.
Sunday, June 16, 2024 3:18 AM
YOUR OPTIONS
NEW POSTS TODAY
OTHER TOPICS
FFF.NET SOCIAL