Elon Musk, Apple co-founder Steve Wozniak and 1,000 other tech leaders call for pause on AI development which poses a 'profound risk to society and humanity'
Thursday, March 30, 2023 12:24 AM
6IXSTRINGJACK
Thursday, March 30, 2023 4:48 AM
JAYNEZTOWN
Thursday, March 30, 2023 6:29 AM
SECOND
The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/folder/1uwh75oa407q8/Firefly
Thursday, March 30, 2023 9:36 AM
Quote: Would your rationale for this pause have applied to basically any nascent technology — the printing press, radio, airplanes, the Internet? “We don’t yet know the implications, but there’s an excellent chance terrible people will misuse this, ergo the only responsible choice is to pause until we’re confident that they won’t”?
Quote:Why six months? Why not six weeks or six years?
Quote:When, by your lights, would we ever know that it was safe to resume scaling AI—or at least that the risks of pausing exceeded the risks of scaling? Why won’t the precautionary principle continue to apply forever?
Quote:Were you, until approximately last week, ridiculing GPT as unimpressive, a stochastic parrot, lacking common sense, piffle, a scam, etc. — before turning around and declaring that it could be existentially dangerous?
Quote:How can you have it both ways?
Quote:If the problem, in your view, is that GPT-4 is too stupid, then shouldn’t GPT-5 be smarter and therefore safer? Thus, shouldn’t we keep scaling AI as quickly as we can … for safety reasons? If, on the other hand, the problem is that GPT-4 is too smart, then why can’t you bring yourself to say so?
Thursday, March 30, 2023 10:08 AM
Quote:Originally posted by 6IXSTRINGJACK: There is possibly a race of beings on another planet in our near-infinite universe that is capable of being responsible shepherds of AI technology. Human beings are not capable of that. Not even close.
Thursday, March 30, 2023 10:16 AM
Thursday, March 30, 2023 10:31 AM
SIGNYM
I believe in solving problems, not sharing them.
Thursday, March 30, 2023 10:54 AM
Quote:Originally posted by SIGNYM: The problem with AI is that, unlike nuclear power/weapons (another potentially world-ending technology), we don't have a prayer of understanding what it's doing, or why. In fact, people are dicking around with it in the hopes that it can "understand" or "see" things that mere mortals can't.
Quote:Letting the genie out of the bottle is an apt metaphor. Just don't let it reside on the internet (in the "wild") where it might presumably copy itself onto every server in the world (or, worse, exist in transit everywhere) or control its own power supply.
Thursday, March 30, 2023 3:25 PM
Quote:Originally posted by JAYNEZTOWN: In the history of panicked calls for a pause, this usually means they already screwed up and the genie is out of the bottle.
Thursday, March 30, 2023 3:51 PM
Thursday, March 30, 2023 4:15 PM
Thursday, March 30, 2023 4:23 PM
Quote:Originally plagiarized by SECOND: So, one idea that people have had—this is actually Yudkowsky’s term—is “Coherent Extrapolated Volition.” This basically means that you’d tell the AI: “I’ve given you all this training data about human morality in the year 2022. Now simulate the humans being in a discussion seminar for 10,000 years, trying to refine all of their moral intuitions, and whatever you predict they’d end up with, those should be your values right now.” There have already been some demonstrations of this principle: with GPT, for example, you can just feed in a lot of raw data from a neural net and say, “explain to me what this is doing.” One of GPT’s big advantages over humans is its unlimited patience for tedium, so it can just go through all of the data and give you useful hypotheses about what’s going on.
Thursday, March 30, 2023 4:30 PM
Quote:Originally posted by SIGNYM: Also, there are innate social instincts in most people. (SECOND seems to lack them, which might be why he aligns with AI.)
Thursday, March 30, 2023 4:32 PM
Quote:Originally posted by 6IXSTRINGJACK: Scott needs to shut the fuck up and sit down.
Friday, March 31, 2023 7:58 AM
Friday, March 31, 2023 8:08 AM
Friday, March 31, 2023 10:29 AM
Quote:Originally posted by SECOND:
Quote:Originally posted by 6IXSTRINGJACK: Scott needs to shut the fuck up and sit down.

. . . no matter what AI safety proposal anyone comes up with, Eliezer has ready a completely general counterargument. Namely: “yes, but the AI will be smarter than that.” In other words, no matter what you try to do to make AI safer—interpretability, backdoors, sandboxing, you name it—the AI will have already foreseen it, and will have devised a countermeasure that your primate brain can’t even conceive of because it’s that much smarter than you.

I confess that, after seeing enough examples of this “fully general counterargument,” at some point I’m like, “OK, what game are we even playing anymore?” If this is just a general refutation of any safety measure, then I suppose that yes, by hypothesis, we’re screwed. Yes, in a world where this counterargument is valid, we might as well give up and try to enjoy the time we have left.

But you could also say: for that very reason, it seems more useful to make the methodological assumption that we’re not in that world! If we were, then what could we do, right? So we might as well focus on the possible futures where AI emerges a little more gradually, where we have time to see how it’s going, learn from experience, improve our understanding, correct as we go—in other words, the things that have always been the prerequisites to scientific progress, and that have luckily always obtained, even if philosophically we never really had any right to expect them. We might as well focus on the worlds where, for example, before we get an AI that successfully plots to kill all humans in a matter of seconds, we’ll probably first get an AI that tries to kill all humans but is really inept at it.

Now fortunately, I personally also regard the latter scenarios as the more plausible ones anyway. But even if you didn’t—again, methodologically, it seems to me that it’d still make sense to focus on them.

https://scottaaronson.blog/?p=6823

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/folder/1uwh75oa407q8/Firefly