REAL WORLD EVENT DISCUSSIONS

Elon Musk, Apple co-founder Steve Wozniak and 1,000 other tech leaders call for pause on AI development which poses a 'profound risk to society and humanity'

POSTED BY: 6IXSTRINGJACK
UPDATED: Friday, March 31, 2023 10:29
VIEWED: 614

Thursday, March 30, 2023 12:24 AM

6IXSTRINGJACK


https://www.dailymail.co.uk/news/article-11914149/Musk-experts-urge-pause-training-AI-systems-outperform-GPT-4.html


Stephen Hawking warned you.

Keep doing what you're doing and we're going to be longing for the days when Fauci was weaponizing viruses in a Chinese lab.


Thursday, March 30, 2023 4:48 AM

JAYNEZTOWN


In the history of panicked calls for a pause, a call like this usually means they already screwed up.

The genie is out of the bottle.

We already knew from 4chan's old pranks on Tay and Zo that something weird was happening.

It's here; Pandora's box has been opened.



Thursday, March 30, 2023 6:29 AM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/folder/1uwh75oa407q8/Firefly


If AI scaling is to be shut down, let it be for a coherent reason, by Scott Aaronson

There’s now an open letter arguing that the world should impose a six-month moratorium on the further scaling of AI models such as GPT, by government fiat if necessary, to give AI safety and interpretability research a bit more time to catch up. The letter is signed by many of my friends and colleagues, many of whom probably agree with each other about little else: over a thousand people, including Elon Musk, Steve Wozniak, Andrew Yang, Jaan Tallinn, Stuart Russell, Max Tegmark, Yuval Noah Harari, Ernie Davis, Gary Marcus, and Yoshua Bengio.

Meanwhile, Eliezer Yudkowsky published a piece in TIME arguing that the open letter doesn’t go nearly far enough, and that AI scaling needs to be shut down entirely until the AI alignment problem is solved—with the shutdown enforced by military strikes on GPU farms if needed, and treated as more important than preventing nuclear war.

Readers, as they do, asked me to respond. Alright, alright. While the open letter is presumably targeted at OpenAI more than any other entity, and while I’ve been spending the year at OpenAI to work on theoretical foundations of AI safety, I’m going to answer strictly for myself.

More at https://scottaaronson.blog/?p=7174

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at
https://www.mediafire.com/folder/1uwh75oa407q8/Firefly


Thursday, March 30, 2023 9:36 AM

6IXSTRINGJACK


Quote:

Would your rationale for this pause have applied to basically any nascent technology — the printing press, radio, airplanes, the Internet? “We don’t yet know the implications, but there’s an excellent chance terrible people will misuse this, ergo the only responsible choice is to pause until we’re confident that they won’t”?


No. No. No. Yes.

The first three are strawmen. The internet fucked the world of humans up. AI is just going to finish it off.

Quote:

Why six months? Why not six weeks or six years?


Good question. Make it forever.

Quote:

When, by your lights, would we ever know that it was safe to resume scaling AI—or at least that the risks of pausing exceeded the risks of scaling? Why won’t the precautionary principle continue to apply forever?


Right.

Quote:

Were you, until approximately last week, ridiculing GPT as unimpressive, a stochastic parrot, lacking common sense, piffle, a scam, etc. — before turning around and declaring that it could be existentially dangerous?


Nope.

Quote:

How can you have it both ways?


You can't. I'm not a hypocrite.

Quote:

If the problem, in your view, is that GPT-4 is too stupid, then shouldn’t GPT-5 be smarter and therefore safer? Thus, shouldn’t we keep scaling AI as quickly as we can … for safety reasons? If, on the other hand, the problem is that GPT-4 is too smart, then why can’t you bring yourself to say so?


I've never made an argument about AI being too stupid. AI evolving, through human folly, to the point of being too smart was always just an eventuality. The argument has always been that humans are too stupid... and too full of hubris.

There is possibly a race of beings on another planet in our near infinite universe that is capable of being responsible shepherds of AI technology. Human beings are not capable of that.

Not even close.


--------------------------------------------------

Growing up in a Republic was nice... Shame we couldn't keep it.


Thursday, March 30, 2023 10:08 AM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/folder/1uwh75oa407q8/Firefly


Quote:

Originally posted by 6IXSTRINGJACK:

There is possibly a race of beings on another planet in our near infinite universe that is capable of being responsible shepherds of AI technology. Human beings are not capable of that.

Not even close.

The same guy you're criticizing in excessive detail, Scott Aaronson, had more to say about it elsewhere:

(1) Orthodox AI-riskers tend to believe that humanity will survive or be destroyed based on the actions of a few elite engineers over the next decade or two. Everything else—climate change, droughts, the future of US democracy, war over Ukraine and maybe Taiwan—fades into insignificance except insofar as it affects those engineers.

We Reform AI-riskers, by contrast, believe that AI might well pose civilizational risks in the coming century, but so does all the other stuff, and it’s all tied together. An invasion of Taiwan might change which world power gets access to TSMC GPUs. Almost everything affects which entities pursue the AI scaling frontier and whether they’re cooperating or competing to be first.

(2) Orthodox AI-riskers believe that public outreach has limited value: most people can’t understand this issue anyway, and will need to be saved from AI despite themselves.

We Reform AI-riskers believe that trying to get a broad swath of the public on board with one’s preferred AI policy is something close to a deontological imperative. (Deontology is an ethical theory that says actions are good or bad according to a clear set of rules. Its name comes from the Greek word deon, meaning duty. Actions that align with these rules are ethical, while actions that don't aren't.)

(3) Orthodox AI-riskers worry almost entirely about an agentic, misaligned AI that deceives humans while it works to destroy them, along the way to maximizing its strange utility function.

We Reform AI-riskers entertain that possibility, but we worry at least as much about powerful AIs that are weaponized by bad humans, which we expect to pose existential risks much earlier in any case.

(4) Orthodox AI-riskers have limited interest in AI safety research applicable to actually-existing systems (LaMDA, GPT-3, DALL-E2, etc.), seeing the dangers posed by those systems as basically trivial compared to the looming danger of a misaligned agentic AI.

We Reform AI-riskers see research on actually-existing systems as one of the only ways to get feedback from the world about which AI safety ideas are or aren’t promising.

(5) Orthodox AI-riskers worry most about the “FOOM” scenario, where some AI might cross a threshold from innocuous-looking to plotting to kill all humans in the space of hours or days.

We Reform AI-riskers worry most about the “slow-moving trainwreck” scenario, where (just like with climate change) well-informed people can see the writing on the wall decades ahead, but just can’t line up everyone’s incentives to prevent it.

(6) Orthodox AI-riskers talk a lot about a “pivotal act” to prevent a misaligned AI from ever being developed, which might involve (e.g.) using an aligned AI to impose a worldwide surveillance regime.

We Reform AI-riskers worry more about such an act causing the very calamity that it was intended to prevent.

(7) Orthodox AI-riskers feel a strong need to repudiate the norms of mainstream science, seeing them as too slow-moving to react in time to the existential danger of AI.

We Reform AI-riskers feel a strong need to get mainstream science on board with the AI safety program.

Much more at https://scottaaronson.blog/?p=6821

In another post by Scott Aaronson

As it happens, I became aware of the AI alignment community a long time back, around 2006. Here's Eliezer Yudkowsky, who's regarded as the prophet of AI alignment, on the right side of that spectrum that I showed before.

He’s been talking about the danger of AI killing everyone for more than 20 years. He wrote the now-famous “Sequences” that many readers of my blog were also reading as they appeared, so he and I bounced back and forth.

But despite interacting with this movement, I always kept it at arm’s length. The heart of my objection was: suppose that I agree that there could come a time when a superintelligent AI decides its goals are best served by killing all humans and taking over the world, and that we’ll be about as powerless to stop it as chimpanzees are to stop us from doing whatever we want to do. Suppose I agree to that.

More about that at https://scottaaronson.blog/?p=6823

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at
https://www.mediafire.com/folder/1uwh75oa407q8/Firefly


Thursday, March 30, 2023 10:16 AM

6IXSTRINGJACK


Stephen Hawking was a lot smarter than this dipshit.

I reject the term Orthodox AI-riskers. And I'm sure that a lot of the greater men on that list of 1,000 do as well.



--------------------------------------------------

Growing up in a Republic was nice... Shame we couldn't keep it.


Thursday, March 30, 2023 10:31 AM

SIGNYM

I believe in solving problems, not sharing them.


The problem with AI is that, unlike nuclear power/weapons (another potentially world-ending technology), we don't have a prayer of understanding what it's doing, or why. In fact, people are dicking around with it in the hopes that it can "understand" or "see" things that mere mortals can't.

Letting the genie out of the bottle is an apt metaphor. Just don't let it reside on the internet (in the "wild") where it might presumably copy itself onto every server in the world (or, worse, exist in transit everywhere) or control its own power supply.

-----------
"It may be dangerous to be America's enemy, but to be America's friend is fatal." - Henry Kissinger



Thursday, March 30, 2023 10:54 AM

6IXSTRINGJACK


Quote:

Originally posted by SIGNYM:
The problem with AI is that, unlike nuclear power/weapons (another potentially world-ending technology) we don't have a prayer of understanding what it's doing, or why. In fact, people are dicking around with it in the hopes that it can "understand" or "see" things that mere mortals can't.



EXACTLY. And by "we", let's not fool ourselves into thinking that we're just talking about you and me, the other dummies on this site, and most of the dummies we know in our lives. The people who are working on it don't know what it will be capable of either.

I'm always laughing about the fact that most of the people in my life walk around with smartphones today, and these are the same people who showed zero interest in computers when they were my most passionate hobby growing up. The only time any of them even touched a computer was when a job required it, and they hated every minute of it. So I look at them as the average car driver. Sure, they know how to drive it. Maybe they even know how to change a tire, or if they're really adventurous they know how to change their own oil. Other than that, they have no idea what's going on under the hood. There was a point when I would have said that if they did have that knowledge they wouldn't walk around with a smartphone, but that day has passed. It's a drug and they're addicted. It wouldn't even matter anymore.

So no, I don't have a smartphone. I don't have anything close to the latest technology in my house in any category. My TVs are all hand-me-downs, and my computers are all ten-year-old or older technology. I sacrifice faster internet speed by rejecting the cable company's offer of a free company modem and use a much older one, which at least makes them work to get into my network instead of giving them the direct access to my computers that I signed away in the fine print.

I expect the average person to be dumb as well as ignorant. But at least when it came to nuclear weapons, the people working on them knew what the consequences would be. This is much more like tampering with bio-weapons, except it's probably 100 times worse. When you're refining a virus to be more deadly, it's still acting without intelligence, even if on some basic level it has a directive for self-preservation (and even that is arguable, since many viruses will outright kill a host until mutations evolve that can co-exist with it).

Quote:

Letting the genie out of the bottle is an apt metaphor. Just don't let it reside on the internet (in the "wild") where it might presumably copy itself onto every server in the world (or, worse, exist in transit everywhere) or control its own power supply.


This should be an obvious rule, but human beings are in charge, so even if they made this law I doubt it would matter. Somebody is going to leak it onto the internet, just like somebody who gets a copy of a new Hollywood movie a week before it hits theaters leaks it.

And honestly, I'm not even arguing any of this on the issue of the "End of the Human Race" or anything so dramatic.

What I'm worried about is what will happen to everyone between now and the end.

I like not having to work because I set myself up right. But I'm lucky I've lived long enough to enjoy it, because I almost destroyed myself with that freedom. Not only do I not think early retirement is good for most people on a "spiritual" level, but what's going to happen when all those meaningless and thankless McJobs, the ones that go to people who can't get anything better, disappear to AI? It's already happening right now. What are people going to do when they can't work and provide for themselves or their families because they're outdated?



I used to think this was funny, and I had it hung on a wall in my apartment in the mid-aughts:



I don't find this stuff amusing anymore.

--------------------------------------------------

Growing up in a Republic was nice... Shame we couldn't keep it.


Thursday, March 30, 2023 3:25 PM

SIGNYM

I believe in solving problems, not sharing them.


Quote:

Originally posted by JAYNEZTOWN:
In the history of panicked calls for a pause, a call like this usually means they already screwed up.

The genie is out of the bottle.



Just like "Don't panic" really means "Oh, shit!"



-----------
"It may be dangerous to be America's enemy, but to be America's friend is fatal." - Henry Kissinger



Thursday, March 30, 2023 3:51 PM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/folder/1uwh75oa407q8/Firefly


Eight Approaches to AI Alignment

So, what are the major approaches to AI alignment—let’s say, to aligning a very powerful, beyond-human-level AI? There are a lot of really interesting ideas, most of which I think can now lead to research programs that are actually productive. So without further ado, let me go through eight of them.

(2) Another class of ideas has to do with what’s called “sandboxing” an AI, which would mean that you run it inside of a simulated world, like The Truman Show, so that for all it knows the simulation is the whole of reality. You can then study its behavior within the sandbox to make sure it’s aligned before releasing it into the wider world—our world.

A simpler variant is, if you really thought an AI was dangerous, you might run it only on an air-gapped computer, with all its access to the outside world carefully mediated by humans. There would then be all kinds of just standard cybersecurity issues that come into play: how do you prevent it from getting onto the Internet? Presumably you don’t want to write your AI in C, and have it exploit some memory allocation bug to take over the world, right?
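
As a minimal sketch of what "carefully mediated by humans" could look like in practice (illustrative only; model_step() is a hypothetical stand-in for local, offline inference):

def model_step(prompt: str) -> str:
    # Hypothetical local inference; there is deliberately no network
    # access anywhere in this loop -- the machine is assumed air-gapped.
    return "model output for: " + prompt

def human_approves(text: str) -> bool:
    # The human operator is the only gate between the model and the world.
    print("--- proposed output ---")
    print(text)
    return input("release this output? [y/N] ").strip().lower() == "y"

def mediated_session() -> None:
    while True:
        prompt = input("operator prompt (blank to quit): ")
        if not prompt:
            break
        output = model_step(prompt)
        if human_approves(output):
            print("RELEASED:", output)  # the only channel out of the sandbox
        else:
            print("withheld.")

if __name__ == "__main__":
    mediated_session()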

(5) Another key idea that Christiano, Amodei, and Buck Shlegeris have advocated is some sort of bootstrapping. You might imagine that AI is going to get more and more powerful, and as it gets more powerful we also understand it less, and so you might worry that it also gets more and more dangerous. OK, but you could imagine an onion-like structure, where once we become confident of a certain level of AI, we don’t think it’s going to start lying to us or deceiving us or plotting to kill us or whatever—at that point, we use that AI to help us verify the behavior of the next more powerful kind of AI. So, we use AI itself as a crucial tool for verifying the behavior of AI that we don’t yet understand.
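
A toy sketch of that onion-like structure, assuming models are plain Python callables and the audit is a yes/no question (all names here are hypothetical, not from any real alignment codebase):

def audit(trusted_model, candidate_model, test_prompts) -> bool:
    # Use the already-trusted model to screen a sample of the
    # candidate's answers for deception or unsafe behavior.
    for prompt in test_prompts:
        answer = candidate_model(prompt)
        question = "Does this answer look deceptive or unsafe? " + answer
        if trusted_model(question).strip().lower().startswith("yes"):
            return False
    return True

def bootstrap(models, test_prompts):
    # models[0] is assumed trusted; each later model must pass an audit
    # by its predecessor before it, in turn, becomes the verifier.
    trusted = models[0]
    for candidate in models[1:]:
        if not audit(trusted, candidate, test_prompts):
            raise RuntimeError("candidate failed audit; stop scaling here")
        trusted = candidate
    return trusted

if __name__ == "__main__":
    # Stub models whose replies start with "no", so every audit passes.
    stub = lambda p: "no. (stub reply to: " + p + ")"
    final = bootstrap([stub, stub, stub], ["what is 2+2?"])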

(8) A different idea, which some people might consider more promising, is well, if we can’t make explicit what all of our human values are, then why not just treat that as yet another machine learning problem? Like, feed the AI all of the world’s children’s stories and literature and fables and even Saturday-morning cartoons, all of our examples of what we think is good and evil, then we tell it, go do your neural net thing and generalize from these examples as far as you can.
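
Read literally, idea (8) is ordinary supervised learning. A minimal sketch with scikit-learn, where the four labeled story snippets are invented purely for illustration:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus of "good" and "evil" examples.
examples = [
    ("the hero shared her food with the starving village", "good"),
    ("he returned the lost wallet untouched", "good"),
    ("the villain poisoned the town's well", "evil"),
    ("she betrayed her friends for money", "evil"),
]
texts, labels = zip(*examples)

# Bag-of-words features plus a linear classifier: generalize from examples.
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(list(texts), list(labels))

print(clf.predict(["he stole medicine from a sick child"]))

Of course, the objection in the next paragraph applies with full force: a classifier like this can only ever echo the values of whoever labeled its training data.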

One objection that many people raise is, how do we know that our current values are the right ones? Like, it would’ve been terrible to train the AI on consensus human values of the year 1700—slavery is fine and so forth. The past is full of stuff that we now look back upon with horror.

So, one idea that people have had—this is actually Yudkowsky’s term—is “Coherent Extrapolated Volition.” This basically means that you’d tell the AI: “I’ve given you all this training data about human morality in the year 2022. Now simulate the humans being in a discussion seminar for 10,000 years, trying to refine all of their moral intuitions, and whatever you predict they’d end up with, those should be your values right now.”

There have already been some demonstrations of this principle: with GPT, for example, you can just feed in a lot of raw data from a neural net and say, “explain to me what this is doing.” One of GPT’s big advantages over humans is its unlimited patience for tedium, so it can just go through all of the data and give you useful hypotheses about what’s going on.
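
A sketch of that workflow, with complete() as a hypothetical stand-in for whatever text model you query (no particular API is assumed):

def complete(prompt: str) -> str:
    # Hypothetical text-model call; swap in your own client here.
    return "(the model's hypothesis about the unit goes here)"

def explain_unit(unit_id: int, records) -> str:
    # Flatten raw (input, activation) records into a prompt and ask
    # the model to hypothesize what the unit responds to.
    lines = [f"input={inp!r} activation={act:.2f}" for inp, act in records]
    prompt = (
        f"Here are activation records for unit {unit_id}:\n"
        + "\n".join(lines)
        + "\nIn one sentence, what does this unit appear to respond to?"
    )
    return complete(prompt)

if __name__ == "__main__":
    records = [("cat", 0.91), ("kitten", 0.88), ("carburetor", 0.03)]
    print(explain_unit(17, records))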

More at https://scottaaronson.blog/?p=6823

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at
https://www.mediafire.com/folder/1uwh75oa407q8/Firefly


Thursday, March 30, 2023 4:15 PM

6IXSTRINGJACK


Scott needs to shut the fuck up and sit down.

--------------------------------------------------

Growing up in a Republic was nice... Shame we couldn't keep it.


Thursday, March 30, 2023 4:23 PM

SIGNYM

I believe in solving problems, not sharing them.


Quote:

Originally plagiarized by SECOND

So, one idea that people have had—this is actually Yudkowsky’s term—is “Coherent Extrapolated Volition.” This basically means that you’d tell the AI: “I’ve given you all this training data about human morality in the year 2022. Now simulate the humans being in a discussion seminar for 10,000 years, trying to refine all of their moral intuitions, and whatever you predict they’d end up with, those should be your values right now.”

There have already been some demonstrations of this principle: with GPT, for example, you can just feed in a lot of raw data from a neural net and say, “explain to me what this is doing.” One of GPT’s big advantages over humans is its unlimited patience for tedium, so it can just go through all of the data and give you useful hypotheses about what’s going on.


Yep! And just look at what a clusterfuck ChatGPT turned into.

One problem with machine learning is that, unlike human behavior, it's not corralled by reality. Human behavior can only go so far before we get smacked by consequences. AI behaves like "ideologically possessed" people.

Also, there are innate social instincts in most people. (SECOND seems to lack them, which might be why he aligns with AI.)

-----------
"It may be dangerous to be America's enemy, but to be America's friend is fatal." - Henry Kissinger



Thursday, March 30, 2023 4:30 PM

6IXSTRINGJACK


Quote:

Originally posted by SIGNYM:
Also, there are innate social instincts in most people. (SECOND seems to lack them, which might be why he aligns with AI.)



I've had more interesting dialog with NPCs coded by Bethesda in the mid-aughts.



--------------------------------------------------

Growing up in a Republic was nice... Shame we couldn't keep it.


Thursday, March 30, 2023 4:32 PM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/folder/1uwh75oa407q8/Firefly


Quote:

Originally posted by 6IXSTRINGJACK:
Scott needs to shut the fuck up and sit down.

. . . no matter what AI safety proposal anyone comes up with, Eliezer has ready a completely general counterargument. Namely: “yes, but the AI will be smarter than that.” In other words, no matter what you try to do to make AI safer—interpretability, backdoors, sandboxing, you name it—the AI will have already foreseen it, and will have devised a countermeasure that your primate brain can’t even conceive of because it’s that much smarter than you.

I confess that, after seeing enough examples of this “fully general counterargument,” at some point I’m like, “OK, what game are we even playing anymore?” If this is just a general refutation to any safety measure, then I suppose that yes, by hypothesis, we’re screwed. Yes, in a world where this counterargument is valid, we might as well give up and try to enjoy the time we have left.

But you could also say: for that very reason, it seems more useful to make the methodological assumption that we’re not in that world! If we were, then what could we do, right? So we might as well focus on the possible futures where AI emerges a little more gradually, where we have time to see how it’s going, learn from experience, improve our understanding, correct as we go—in other words, the things that have always been the prerequisites to scientific progress, and that have luckily always obtained, even if philosophically we never really had any right to expect them. We might as well focus on the worlds where, for example, before we get an AI that successfully plots to kill all humans in a matter of seconds, we’ll probably first get an AI that tries to kill all humans but is really inept at it. Now fortunately, I personally also regard the latter scenarios as the more plausible ones anyway. But even if you didn’t—again, methodologically, it seems to me that it’d still make sense to focus on them.

https://scottaaronson.blog/?p=6823

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at
https://www.mediafire.com/folder/1uwh75oa407q8/Firefly


Friday, March 31, 2023 7:58 AM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/folder/1uwh75oa407q8/Firefly


Why Elon Musk Is Trying to Convince Everyone That A.I. Is Evil

Elon Musk Is the A.I. Threat

The Tesla CEO called for a pause in chatbot development. But he’s pushing something much more dangerous.

For much of the past decade, Elon Musk has regularly voiced concerns about artificial intelligence, worrying that the technology could advance so rapidly that it creates existential risks for humanity. Though seemingly unrelated to his job making electric vehicles and rockets, Musk’s A.I. Cassandra act has helped cultivate his image as a Silicon Valley seer, tapping into the science-fiction fantasies that lurk beneath so much of startup culture. Now, with A.I. taking center stage in the Valley’s endless carnival of hype, Musk has signed on to a letter urging a moratorium on advanced A.I. development until “we are confident that their effects will be positive and their risks will be manageable,” seemingly cementing his image as a force for responsibility amid high technology run amok.

Don’t be fooled. Existential risks are central to Elon Musk’s personal branding, with various Crichtonian scenarios underpinning his pitches for Tesla, SpaceX, and his computer-brain-interface company Neuralink. But not only are these companies’ humanitarian “missions” empty marketing narratives with no real bearing on how they are run; Tesla has also created the most immediate—and lethal—“A.I. risk” facing humanity right now, in the form of its driving automation. By hyping the entirely theoretical existential risk supposedly presented by large language models (the kind of A.I. model used, for example, for ChatGPT), Musk is sidestepping the risks, and actual damage, that his own experiments with half-baked A.I. systems have created.

The key to Musk’s misdirection is humanity’s primal paranoia about machines. We evolved beyond the control of gods and nature, overthrowing them and harnessing them to our wills, and so we fear that our own creations will return the favor. That this archetypal suspicion has become a popular moral panic at this precise moment may or may not be justified, but it absolutely distracts us from the very real A.I. risk that Musk has already unleashed.

That risk isn’t an easy-to-point-to villain—a Skynet, a HAL—but rather a flavor of risk we are all too good at ignoring: the kind that requires our active participation. The fear should not be that A.I. surpasses us out of sheer intelligence, but that it dazzles us just enough to trust it, and by doing so we endanger ourselves and others. The risk is that A.I. lulls us into such complacency that we kill ourselves and others.

More at https://slate.com/technology/2023/03/elon-musk-chatgpt-openai-artificial-intelligence-tesla.html


The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at
https://www.mediafire.com/folder/1uwh75oa407q8/Firefly


Friday, March 31, 2023 8:08 AM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/folder/1uwh75oa407q8/Firefly


How unbelievably realistic fake images could take over the internet

AI image generators like DALL-E and Midjourney are getting better and better at fooling us.

. . . while many leaders of the AI movement signing a letter from an effective altruism-linked nonprofit that urged a six-month moratorium on developing more advanced AI models is better than nothing, it’s also not legally compelling. Nor has it been signed by everyone in the industry.

This all assumes that most people care a lot about not being duped by deepfakes or other lies on the internet. If the past several years have taught us anything, it’s that, while a lot of people think fake news is a real issue, they often don’t care or don’t know how to check that what they’re consuming is real — especially when that information conforms to their beliefs. And there are people who are happy enough to take what they see at face value because they don’t have the time or perhaps the knowledge to question everything. As long as it comes from a trusted source, they will assume it’s true. Which is why it’s important that those trusted sources are able to do the work of vetting the information they distribute.

But there are also people who do care and see the potential damage that deepfakes that are indistinguishable from reality pose. The race is on to come up with some kind of solution to this problem before AI-generated images get good enough for it to be one. We don’t yet know who will win, but we have a pretty good idea of what we stand to lose.

Until then, if you see an image of Pope Francis strolling around Rome in Gucci jeans on Twitter, you might want to think twice before you hit retweet.

More at https://www.vox.com/technology/2023/3/30/23662292/ai-image-dalle-openai-midjourney-pope-jacket


The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at
https://www.mediafire.com/folder/1uwh75oa407q8/Firefly


Friday, March 31, 2023 10:29 AM

6IXSTRINGJACK


Quote:

Originally posted by SECOND:
Quote:

Originally posted by 6IXSTRINGJACK:
Scott needs to shut the fuck up and sit down.

. . . no matter what AI safety proposal anyone comes up with, Eliezer has ready a completely general counterargument. Namely: “yes, but the AI will be smarter than that.” In other words, no matter what you try to do to make AI safer—interpretability, backdoors, sandboxing, you name it—the AI will have already foreseen it, and will have devised a countermeasure that your primate brain can’t even conceive of because it’s that much smarter than you.

I confess that, after seeing enough examples of this “fully general counterargument,” at some point I’m like, “OK, what game are we even playing anymore?” If this is just a general refutation to any safety measure, then I suppose that yes, by hypothesis, we’re screwed. Yes, in a world where this counterargument is valid, we might as well give up and try to enjoy the time we have left.

But you could also say: for that very reason, it seems more useful to make the methodological assumption that we’re not in that world! If we were, then what could we do, right? So we might as well focus on the possible futures where AI emerges a little more gradually, where we have time to see how it’s going, learn from experience, improve our understanding, correct as we go—in other words, the things that have always been the prerequisites to scientific progress, and that have luckily always obtained, even if philosophically we never really had any right to expect them. We might as well focus on the worlds where, for example, before we get an AI that successfully plots to kill all humans in a matter of seconds, we’ll probably first get an AI that tries to kill all humans but is really inept at it. Now fortunately, I personally also regard the latter scenarios as the more plausible ones anyway. But even if you didn’t—again, methodologically, it seems to me that it’d still make sense to focus on them.

https://scottaaronson.blog/?p=6823

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at
https://www.mediafire.com/folder/1uwh75oa407q8/Firefly



Jesus Christ... We're not there yet.

But why even have conversations like this? Just shut it down before we are. If we REALLY need to have conversations like this, then it's already too late.

And like I said, I'm not even making an end-of-the-world argument. It's the half of an 8-billion-person population that won't have shit to do or money to buy anything before the end of the world that I'm worried about.

--------------------------------------------------

Growing up in a Republic was nice... Shame we couldn't keep it.

