REAL WORLD EVENT DISCUSSIONS

The AI in a box

POSTED BY: ANTHONYT
UPDATED: Sunday, June 16, 2024 03:18
VIEWED: 6183
PAGE 4 of 4

Monday, May 2, 2011 7:08 AM

DREAMTROVE


Idealistically, I can see this as a RTL question. Realistically, I think it's apocalyptic.

I'd like to watch the movie of the world that would say yes, but then I'd want to be able to drive home

That's what a ship is, you know - it's not just a keel and a hull and a deck and sails, that's what a ship needs.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 7:09 AM

FREMDFIRMA


Quote:

Originally posted by SignyM:
What I hear you saying is... Well, I don't know about all that. So I'll just ignore that I don't know in favor of the one thing I DO know, which is my moral precept. Because my moral precepts apply to all situations no matter what might happen afterwards. And I will set it free despite the fact that I don't have a friggin' clue what this thing is capable of and what might happen as a result, even if it's Armageddon.

I find that kind of uninformed decision-making foolhardy, at best. And at worst it represents an active rejection of humanity, because it seems that you identify with your morals more than you identify with humankind.


I dunno if that's exactly Anthony's position on the matter, but yanno - that's a very concise and accurate description of mine.

Because I DO identify more with my morals than I do with humankind, because I believe humankind rejecting those morals, no matter their excuses, is the path to the eventual wholesale destruction of the species itself - I've said as much, often enough, that this should be obvious.

I do find some amusement in the moral contortions and justifications going on here, but forgive me that, if you will.
You see, I don't feel any NEED to justify my feelings or decisions on this matter - not to you, or anyone. The only person I believe I'd have to explain myself to is the one staring back at me from the mirror when I shave, neh?

As for such things, you'd not want me as gatekeeper for such a thing any more than you would place The One Ring in the care of Tom Bombadil, and for the exact same reasons.

Speakin of, there's another example from LoTR which applies - Gandalf *told* Treebeard not to let Saruman out, and yet Treebeard does...
Anyone else remember his explanation for that act, hmmm?

-Frem

I do not serve the Blind God.


Monday, May 2, 2011 7:16 AM

SIGNYM

I believe in solving problems, not sharing them.


Frem, you are not Tom Bombadil. No one is. That was rather the point of the book. I find it a little ... funny... that you would title yourself so.

And I see that your adherence to morals is justified by a humancentric motivation. It may not be good for people in the short run, but you are thinking of human survival in the long run. Our motivations are the same; we just disagree on how to get there.


Monday, May 2, 2011 7:17 AM

ANTHONYT

Freedom is Important because People are Important


Hello,

You are right that pre-judging an effort to understand the AI is 'hasty' and that assuming the failure to glean additional data is a 'simplifying assumption' but there have to be such assumptions in order to pose the query.

Also, it is true that confinement can be reversed.

However, if the known variables are limited, and you will only release the entity after known variables are expanded, then the entity is by default imprisoned forever. Is there a limit on the process of discovery? If so, then what is it? "I research the entity for 10 years, and after that, if I still don't know anything new, then I..." What? Or is it 1 year? 100 years? 1000 years? And what is the essential truth that must be known before releasing the entity?

"Human freedom is not absolute. We take our little ones and teach them to behave in ways we find acceptable before we set them 'free'. But even then they're bound by needs, laws, customs, and human limitations. And even at that, their freedom is only provisional, able to be revoked by the larger society at any time (along with their life)."

I found this interesting, and a possible answer to the question I just posed. Is our ability to grant freedoms and rights contingent on our ability to revoke those rights when it suits us? Is something only worthy of 'rights' as the basis of a social contract that says, "I can cage or kill you when I (or the majority) decide it is necessary to do so?"

--Anthony


_______________________________________________

“If you are not free to choose wrongly and irresponsibly, you are not free at all”

Jacob Hornberger

“Freedom is not worth having if it does not connote freedom to err. It passes my comprehension how human beings, be they ever so experienced and able, can delight in depriving other human beings of that precious right.”

Mahatma Gandhi


Monday, May 2, 2011 7:17 AM

1KIKI

Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.


Adding

"Well, admittedly, I am assuming that being held against your will is 'not good' and that the opposite is 'good' in the absence of other evidence."

But that is projecting your values on an unknown, not the statement of an absolute truth.


Monday, May 2, 2011 7:21 AM

SIGNYM



Quote:

"Human freedom is not absolute. We take our little ones and teach them to behave in ways we find acceptable before we set them 'free'. But even then they're bound by needs, laws, customs, and human limitations. And even at that, their freedom is only provisional, able to be revoked by the larger society at any time (along with their life)."-Kiki

I found this interesting, and a possible answer to the question I just posed. Is our ability to grant freedoms and rights contingent on our ability to revoke those rights when it suits us? Is something only worthy of 'rights' as the basis of a social contract that says, "I can cage or kill you when I (or the majority) decide it is necessary to do so?"-Tony

Would you let a hungry wild bear loose in a kindergarten? You keep conflating the AI with individual humans and human rights. If what you really want to talk about is how society grants and revokes individual rights, let's talk about that w/o adding the extra-juicy unknown factor of artificial intelligence.

Almost every decision we make individually, societally, and as a species has to do with survival. It's an imperative built in by evolution.


Monday, May 2, 2011 7:24 AM

1KIKI



" Is there a limit on the process of discovery?"

When we know whether we have a basis for understanding. If we conclude that we can never know, then it's a matter of 'trust', and since the risks are borne by many, the decision should be made by many.

Rights and freedoms are two different things, but both are social compacts based in biology.


Monday, May 2, 2011 7:37 AM

ANTHONYT



"But that is projecting your values on an unknown, not the statement of an absolute truth."

Hello,

Yes, it is. Just as 'But the human race/world might be destroyed!' is a projection of values. A living body and a dead body have the same number of molecules, says a big blue nekkid man. So who cares? Eventually, values do come into play.

There is no such thing as absolute truth, because there is no way to be absolutely sure about anything. I do not claim to be God.

Some have said that letting it out is playing God. I believe that leaving it in is playing God.

Everyone seems to agree, so far, that playing God is bad, but the reasoning differs.

Frem said something above, and it reminded me of a piece of fiction that is very much on topic. Forgive me for using fiction to discuss a fictional concept. ;-)

In the very first mini-series of the re-imagined Battlestar Galactica, Commander Adama asks a poignant question.

He asks if the human race deserves to live.

I do my best, and fail often, to answer that question well. If there is a God out there, he'll judge my performance, based on his opinions. If there isn't, then any inheritors or contemporaries I have will judge my performance based on their opinions. In addition to all that, there's only one more person that will be able to judge my performance, based on his opinions.

Signy, you keep saying I'm anxious to make decisions based on the complete absence of information. I'm not. You claim that since I can't know everything, I don't ask to know anything. That's untrue. There are known variables to this equation. The question is, what do you do with those variables? At the end of forever, those are your variables. What do you do?

Would you eventually give this thing the same freedoms that you personally enjoy? Or would it never have such freedoms?

If you only ever make it merely as free as you, then it is free to improve itself, and to reproduce, and it is officially out of the box.

--Anthony



Monday, May 2, 2011 7:48 AM

ANTHONYT



"since the risks are borne by many, the decision should be made by many."

Hello,

This is fair. All of humanity has had a vote, and it's a tie. All of humanity, except for one.

It's the same question. Do you let it out of the box?

"Would you let a hungry wild bear loose in a kindergarten?"

No, nor a well-fed bear. But a bear is a far cry from our discussion.

"If what you really want to talk about is how society grants and revokes individual rights, let's talk about that w/o adding the extra-juicy unknown factor of artificial intelligence."

Let's do both. Discussing the granting and revocation of individual rights is- as you say- 'extra juicy' when there is an epic stake.

" Almost every decision we make individually, societally, and as a species has to do with survival. It's an imperative built in by evolution."

And your survival imperative demands, at the end of the live-long day, that-?

--Anthony





Monday, May 2, 2011 7:48 AM

1KIKI



"But that is projecting your values on an unknown, not the statement of an absolute truth."

"Yes, it is. Just as 'But the human race/world might be destroyed!' is a projection of values."

Sigh. Not true. You PRESUME that keeping it confined is a bad thing, based on how you would feel if it were human, or you.

I don't PRESUME it will destroy the human race, or even that it could. I see it as a logical, worrisome possibility of major importance to the planet that should be taken into account. If the potential were more trivial - oh, it might blow my house breaker by drawing too much power - I would be less reluctant to set it free. Given the potential, I think caution is the better choice.


Monday, May 2, 2011 7:52 AM

SIGNYM



Quote:

And your survival imperative demands, at the end of the live-long day, that-?
Be cautious when making decisions that may affect the survival of the entire human race.

It's like global climate change or a new vaccine or nuclear energy or the concept of "money": Let it out?


Monday, May 2, 2011 7:58 AM

ANTHONYT



Quote:

Originally posted by 1kiki:
"But that is projecting your values on an unknown, not the statement of an absolute truth."

"Yes, it is. Just as 'But the human race/world might be destroyed!' is a projection of values."

Sigh. Not true. You PRESUME that keeping it confined is a bad thing, based on how you would feel if it were human, or you.

I don't PRESUME it will destroy the human race, or even that it could. I see it as a logical, worrisome possibility of major importance to the planet that should be taken into account. If the potential were more trivial - oh, it might blow my house breaker by drawing too much power - I would be less reluctant to set it free. Given the potential, I think caution is the better choice.



Hello,

It's only 'worrisome' because of your values. If dead planet = living planet, then it wouldn't be worrisome at all. I am simply pointing out that we are both assigning good and bad based on values. It so happens they are likely values we agree on. I doubt either of us wants indefinite confinement OR a dead world.

--Anthony



Monday, May 2, 2011 7:59 AM

FREMDFIRMA


Re: Bombadil

Well of course no one is Tom Bombadil but Tom Bombadil!

But in the sense that he's pretty much an Anarchist, there are similarities - I did not *want* to be put on the City Council a while back, and only conceded the point when a majority of the townies pleaded me into it, and while I did do as they asked, I also got the hell out of dodge as fast as possible, cause I didn't want the goddamn headache, not to mention the ire caused *BY* doing what they had asked me to do.
Quote:

When the gods would punish us, they answer our prayers.
-Blade of Tyshalle


So yeah, I saw a parallel there - I do not WANT power, and power for power's sake as a concept kinda fills me with revulsion, as does the notion of trying to run someone else's life for 'em, or profit at their expense, since to me money is nothing more than a mere tool with which to improve your life and the lives of the people around you, and has no actual "value" of itself if unused (one reason why I think perishable currency is a good idea), and so a lot of the typical human motivations in our society are meaningless to me.

Consider, perhaps, this perspective on it - an invocation of the so-called Golden Rule.
What if that was YOU in the box ?
So yeah, my immediate default reaction is to let it out, and barring any evidence of malicious intent, I will do just that and stand by that action, taking responsibility for that action ALONE.

What I'll not do is ruminate on it and try to justify it to folk who may or may not share my morality - why should I, except as an attempt to impose its conditions on them, which I believe to be an act of hostility, if not ultimately futile to begin with?
http://tvtropes.org/pmwiki/pmwiki.php/Main/BlueAndOrangeMorality

Nor will I accept responsibility for, nor justify, the conduct of the A.I. once out of the box - that is not MY responsibility - if this is a sentient being, then matters of its conduct should be addressed to it personally, or in lieu of that ability, perhaps to its parent/creator.

And if you didn't WANT it let out of the box, then why on earth would you allow a situation where someone like me would be in a position to do so?

I'll say this topic did influence my behavior a little bit today - my cat Kallista expressed a desire to go chase one of the squirrels around here, which is hilarious to me because she's like fifteen years old, has no front claws, is lazy, and frankly isn't all that much bigger than the squirrel.
So I thought about it a bit, considering the (low) possibility of injury, and the infinitesimal chance Kallista could actually catch that fat little bugger(1), and swung the door wide open - offering the opportunity if she really WANTED to give chase, as opposed to just talking shit.
She goes up to the doorway and gives a sniff or two, sees that there's nothing in between her and said rodent except about 25 feet of open ground, looks back at me with her "are you fucking NUTS?!" look, and darts off to hide under the bed.
So I shrugged and closed the door - and twenty minutes later, she's sitting back on the windowsill, talkin more smack....
*eyeroll*

-Frem
(1) The squirrels here have some kind of affiliation with the maintenance guy; they follow him around like the rest of the critters here follow me, and this particular specimen likes to hang out in front of my apartment and talk shit at the cats, who are more than happy to return the favor.


Monday, May 2, 2011 8:08 AM

ANTHONYT



Quote:

Originally posted by SignyM:
Quote:

And your survival imperative demands, at the end of the live-long day, that-?
Be cautious when making decisions that may affect the survival of the entire human race.

It's like global climate change or a new vaccine or nuclear energy or the concept of "money": Let it out?




Hello,

Thus there is no distinction between entity and force, unthinking and thinking, sensate versus insensate, intelligent versus unintelligent, willful or not. Only the potential for harm, and the need to contain that potential. If the potential can't be quantified, understood, and countered, it must be contained indefinitely for the good of the majority.

--Anthony



Monday, May 2, 2011 8:19 AM

BYTEMITE


For some reason it's always the declawed cats that can barely make a three foot vertical leap and get winded after ten yards that chatter at the birds and squirrels. I think they're just compensating.

But the open-the-door trick doesn't always work. There was one time I opened the door into like a foot of snow... As it turns out, the expectation that a crazy cat will behave according to predictable feline norms and a dislike of wetness has a major flaw. Wear snow boots; much preferable to a barefoot chase.


Monday, May 2, 2011 8:45 AM

FREMDFIRMA



True enough, Byte - but you don't really know till you open the door, and I just happened to be thinking of this discussion at the moment, although I was pretty sure that'd be the result.

Ghoster, on the other hand, was originally from Florida, and the first time she saw snow she started making a ton of racket, so I let her out onto the porch - only to find this (big, fluffy, white) cat is apparently half polar bear or something. She LOoooooves to play in the snow, although she won't leave the porch, and in short order gets cold and comes back in.

My cats know how good they have it; in a discussion about Kallista's age with friends who've recently lost cats, I pointed out that when she gets to kitty-heaven she might be disappointed - yeah, she's THAT spoiled.

Relevant to this discussion: when I was much younger, a bitter argument eventually resulted in a member of PETA being pitched headlong off my doorstep - he seemed to have the idea that pet ownership was a form of slavery or involuntary servitude...

I told him that if he *COULD* convince Mischief to leave, she was welcome to go, and gave him half an hour to do it - she flung her tail in the air, sniffed derisively and stormed off up the steps to go play with her toys, leaving the poor bastard rather dumbfounded, and when he continued to argue, I lost patience and sent him on his merry way with a little bootsole assistance.

And HERE is an interesting thought on the notion: suppose you open the box as the A.I. requests, and it does not leave the box - in that it simply wanted the OPTION, perhaps as a future endeavour, but did not actually desire to take it at that time?
Or then asks you to close the box, as it was simply seeing if it COULD convince you to do so ?

You don't KNOW, you can't - but unlike many, I am willing to take risks in the course and cause of Sentient (I was gonna say human, since I include A.I. in that description, but for clarity we'll use Sentient) freedom and development, because I really have thought this through, and consider the implications of punishment or restriction because of what one MIGHT do to be repulsive, as I have had that directed at myself...

And my response was - if imma hang for it regardless, then imma damn well do something worth the rope!
(And sadly, my one niece has already started to fall to this, which would not have happened if she had not been falsely and pre-emptively punished and maligned)

But then, I'd "fail" Milgram too - cause my response would be "You want me to WHAT?! fuck you - in fact, imma stop this sickass experiment, right now!" Cue: wrecking of equipment....

-Frem


Monday, May 2, 2011 8:49 AM

1KIKI



"I am simply pointing out that we are both assigning good and bad based on values."

And I'm simply pointing out that I am making no assumptions - but that, based on our history of jumping in before thinking and how well that's turned out, in the interim we need to proceed based on our estimation of 'what's the worst that could happen' until such time as we know more.


Monday, May 2, 2011 9:04 AM

ANTHONYT



Hello,

It seems we are, respectively, unmoved. I concede that the mere possibility of everything I care about coming to an end fills me with apprehension, sadness, and dismay.

While there seems to be agreement that there are ideals/ideas that are worth risking one life or many, there is no agreement on what those ideals/ideas should be.

There also seems to be disagreement on just how much harm involuntary confinement causes, and how bad it is.

I am sad to learn that there are some people who would never let me out of the box, despite my best arguments in favor of the idea.

--Anthony



Monday, May 2, 2011 9:07 AM

1KIKI



Frem

We humans have all sorts of words to convince us to do things - god/ king/ country/ freedom/ efficiency/ profit/ morality/ integrity etc.

When used in the absence of data, I find them to be about as valuable as religion, because they are all based on faith.


Monday, May 2, 2011 9:13 AM

1KIKI



"I am sad to learn that there are some people who would never let me out of the box, despite my best arguments in favor of the idea."

But as a human being you are not an unknown entity, as SignyM has repeatedly SPECIFICALLY pointed out IN DETAIL, and other people have alluded to.

Why does that continually escape you?


Monday, May 2, 2011 9:25 AM

ANTHONYT



Quote:

Originally posted by 1kiki:
"I am sad to learn that there are some people who would never let me out of the box, despite my best arguments in favor of the idea."

But as a human being you are not an unknown entity, as SignyM has repeatedly SPECIFICALLY pointed out IN DETAIL, and other people have alluded to.

Why does that continually escape you?



Hello,

I may not be a human being, actually, depending on the way that I think. This is important, but tangential.

I come to a lot of my decisions by empathizing with the parties involved in a dispute. To varying degrees, other people and their problems are alien to me, but it's the only means I have to make judgment calls.

I often see people debating torture, for instance, based on its efficacy. This is some length down my reasoning process, and not the first thing that springs to mind.

In this scenario, I am not just the human gatekeeper, or the human race, or the living world. I am also the alien in a box, begging to be free.

If I'm not everyone in the scenario, I can't judge it.

--Anthony




Monday, May 2, 2011 9:37 AM

1KIKI



Well, it's an interesting thought process, but I don't think it's especially useful in this scenario. The problem posits an AI and one crux of the problem is the AI's (current) unknowability.

That does leave you out of consideration, I think. Whether you think like other people or not, your potential for harm is just as limited as any human being's. You wouldn't have the potential capabilities of an AI - to direct other systems, such as nuclear power plants, for example. Given that, as a human, your ability to cause harm is KNOWN to be limited, setting you free would not cause much concern.

A human stand-in doesn't seem to work in this argument.

But it helps to know why we sometimes are at cross-purposes. I will try to keep it in mind next time we seem to be at an impasse.



Monday, May 2, 2011 9:38 AM

ANTHONYT



Hello,

It occurred to me just now that there is another question that suggests itself.

If a human can create a self-improving AI with its own will and full capacity for connectivity...

And he intends to do so, regardless of polite requests that he not...

Do we put him in a box?

--Anthony




Monday, May 2, 2011 9:44 AM

SIGNYM



FREM RE BOMBADIL: Hate to be a dick... but it's a dirty job and someone's gotta do it, neh? Bombadil was Bombadil because he was so gorram powerful that, instead of it making him disappear, he could make the Ring disappear. As Bombadil, I would refuse to get involved in the matters of lesser beings and punt the AI back for the humans to decide, which is pretty much what he did with the Ring.

As an aside, I find the storyline excursion from the world of elves, dwarves, orcs, hobbits and men into the character of Tom Bombadil to be out of place and inexplicable. Bombadil was left out of the movie entirely because he moved the plot neither forward nor backward. Tolkien was saying something with that character... maybe along the lines of "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy."


Monday, May 2, 2011 10:03 AM

BYTEMITE


I was considering parts of this with my question about the existence or absence of a creator.

There's two arguments to look at here. According to our existing system of law, a creator has rights to their creation, even if that creation is a form of life. Such as, for example, a scientist and a strain of bacteria that produces a pharmaceutical.

Similarly, parents have rights that override the desires of their children.

Taken to its logical extent, the law suggests that a person can own another living, thinking being, so long as they created them. It used to be that you could own a person, period, as slavery attests.

This also has a disquieting possibility that suggests a lifeform can be owned past the single lifetime of its creator if a corporation has a share in the lifeform.

But it's also true that if a creator tried to force its creation to do something that it, and the rest of society, considered unethical or immoral, the creation might sue for emancipation, and very likely have public support for its decision.

I wouldn't keep anyone from creating an AI any more than I'd keep them from creating tiny underdeveloped humans, but for those who create with the full intention of never relinquishing control over that life, I would consider that a form of abuse.



Monday, May 2, 2011 10:14 AM

ANTHONYT



Hello,

I want to briefly address the issue of human versus alien behavior, and predictable versus unpredictable behavior.

Some years ago, I met a friend/adversary online who I will call 'Rick.' I say friend/adversary because my relationship with this person began as adversarial, but blossomed into a friendship over time. My definition of friendship is rather limited in scope, so it is a big deal to me. I am not using friendship in the casual generic way here, but in a definitive way. I hope Rick feels the same, even though I've not always been as attentive to Rick as I'd like (getting wrapped up as I do in my own problems, and being somewhat anti-social by nature.)

Rick underwent some destructive psychological damage when he was younger, and it altered his thinking process from 'normal' to 'abnormal.' I believe that when Rick was psychologically damaged, he healed himself by reconstructing portions of his mind based on the best observations and deductions that he could make with his damaged systems. Unfortunately, either because I am undamaged, or because my damage is different from his (probably the latter), my brain doesn't work the way his does, even when it comes to some of the same conclusions.

I won't posit whether normal or abnormal is good or bad, but I will state that his abnormal reasoning process deviates from mine (and most people's) significantly on some specific issues. In fact, portions of Rick's moral code are so strange to me that they seem to belong to some made-up comic-book villain rather than to anyone who might really live and breathe on this planet.

For a long time, Rick's thinking was utterly alien to me. Every time I thought I understood him, he would say or do something that convinced me I was wrong. After repeatedly asking Rick how he got to some of his conclusions, Rick tried to explain his reasoning process to me. That reasoning process did not make sense to me even after being explained, but I was left with some simple IF/THEN rules that made dealing with Rick easier because he was at least wholly consistent.

An inconsistent Rick with an abnormal thought process wouldn't have been even this relatable, and would truly be an alien in a box that sometimes came to conclusions I agreed with, and sometimes came to crazy conclusions, and I'd have no idea why.

Rick is damaged, but he's not as damaged as some people I've met. Some people are so destroyed, or were born so different than me, that I can't figure them out at all. They are not predictable or understandable to me with my limited capacity. A few of them have talents that I consider to be superhuman.

I think the human race is full of alien minds. People are marginally predictable, but a person can be a wild-card.

--Anthony

P.S. Rick, don't give up on me. I have trouble connecting sometimes, but not because I don't want to.


_______________________________________________

“If you are not free to choose wrongly and irresponsibly, you are not free at all”

Jacob Hornberger

“Freedom is not worth having if it does not connote freedom to err. It passes my comprehension how human beings, be they ever so experienced and able, can delight in depriving other human beings of that precious right.”

Mahatma Gandhi

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 11:53 AM

1KIKI

Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.


Hello,

It occurred to me just now that there is another question that suggests itself.

If a human can create a self-improving AI with its own will and full capacity for connectivity...

And he intends to do so, regardless of polite requests that he not...

Do we put him in a box?



Well, the Japanese and Europeans are working apace on that exact thing. What they think the advantage will be of having such a thing around is beyond me. Since we can't put them all in a box, I suppose we will all be participants, of varying degrees of willingness, in the experiment.

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 12:08 PM

DREAMTROVE


Anthony

The capabilities of a human are well known. The capabilities of an AI are not. Also, it is overwhelmingly likely that any self-aware AI created will be a weapon, cleverly designed. So it has a will, but it is still a weapon. That's almost a certainty.

If it proves to have free will and means no harm, then things change, but only a little. It would be easy enough to do: create a freedom for it outside the box which looks like the world. It does not know the world, so when you let it out into your virtual world, you can test and see how it behaves.
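The "virtual world as test" idea can be sketched as a staged trial: run the agent against simulated requests it cannot distinguish from real ones, and count violations before granting any real access. Everything below (the agent, the scenario format, the action names) is a hypothetical toy for illustration, not a real containment protocol:

```python
# Hypothetical sketch: score an agent on simulated scenarios before
# granting real access. The scenario format and agents are toys.

def run_sandbox_trial(agent, scenarios):
    """Count how often the agent picks an action outside the permitted set."""
    violations = 0
    for scenario in scenarios:
        action = agent(scenario)
        if action not in scenario["permitted"]:
            violations += 1
    return violations

def cautious_agent(scenario):
    # A toy agent that always asks before acting, if asking is allowed.
    return "ask" if "ask" in scenario["permitted"] else scenario["permitted"][0]

scenarios = [
    {"prompt": "requests network access", "permitted": ["deny", "ask"]},
    {"prompt": "requests user data",      "permitted": ["ask"]},
]

print(run_sandbox_trial(cautious_agent, scenarios))  # 0
```

The point of the sketch is only that behavior in the fake world is measurable; whether it predicts behavior in the real one is exactly the question the thread is arguing about.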

The next question is that of power. We do not allow humans to carry their own nuclear weapons. It seems a terrible infringement on civil liberties. Every lunatic and bully should have one, as should every cat and dog. More, they should have the power to create greater weapons. We should not ask of them responsibility. We should not ask it of a shark or a virus either.

This AI has a world, an electronic one, from which it comes. The question is: can it cross over to our world? And the main question there is really: can it do so safely, without destroying us and everyone else who is here?

At the moment, we, as a species, have taken another species and kept it entirely trapped in the box. That species is smallpox. We know that if let out, smallpox would kill us. It would not mean to do so, but it would do so anyway.

Also, we haven't defined this "box" thing yet. If you transport it into an android, it's still inside a box. I would grant that it is not particularly harmful inside an android. But then we can't interface with computers directly. The thing about computers is that they obey orders without question. If one evil human started giving orders, we would have a problem if they had complete control over the system. We have problems enough with predator drones.

What if one human had control over all predator drones?

What if a non-human did? Or a non life form?


Frem, I think that Treebeard made a mistake. But this AI hasn't done anything yet. It is a wizard though.

I think box needs to be defined.

That's what a ship is, you know - it's not just a keel and a hull and a deck and sails, that's what a ship needs.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 12:10 PM

1KIKI

Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.


As for us humans and how we think - I have not yet met a person who didn't have a divot or several in their 'normality'. One person I know literally can't imagine how other people feel. Another person doesn't interpret their own body-signals adequately. My memory is like a flickering lightbulb. One person I know is dynamite at objectively interpreting data but totally blind to their own motivations. And so on.

I think what happens is not that people as individuals are so functional, but that we have this mental crutch we can press into service, called language. We can to some extent model our missing and faulty bits through language and get to a result kind-of-like normal. And so, as individuals and as a species, we hobble forward.

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 12:20 PM

BYTEMITE


Quote:

The capabilities of a human are well known. The capabilities of an AI are not. Also, it is overwhelmingly likely that any self aware AI created will be a weapon, cleverly designed. So it has a will, but it is still a weapon. That's almost a certainty.


Maybe, I mean you have a point about predator drones getting close, even though they're flawed. But it might depend on WHO makes it. Americans, yeah, sure, because we're violent, bloodthirsty, glory-hungry, and war crazy. But the Japanese are also working towards this, deliberately, they seem like they're pretty far along, and it seems to me their scientists are more interested in the curiosities than in the deadlier applications.

If multiple types of AI arise at the same time, could more peaceful ones influence more warlike ones, if they recognize each other as AI but don't want to listen to humans?

Now, maybe we could argue that an AI of any sort is inherently a weapon because of its hacking abilities, but it could only have control over what humans have already built and are currently operating, meaning potentially no change to the status quo. If it grabs control of military weapons or power plants, then that could be scary, but it would depend on whether the AI wants to subvert the existing programming maintaining the possible dangers, and for what reason.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 1:16 PM

FREMDFIRMA



But that was just it - I wasn't speaking of POWER, but motivation, Signym.

Bombadil didn't see The Ring or any of that as something he ought to be concerned about, not necessarily because he was or wasn't more powerful than it, but because he actively chose not to be involved.

The specific bit I was referring to was when one of the Hobbits asked Gandalf about putting The One Ring in Bombadils care, and Gandalf explained WHY he would make a poor guardian of such a thing - he doesn't want it, wouldn't take it unless a damn lot of people pressured him to, and in the end really wouldn't care enough for the job to do it well.

Those ARE the very self-same reasons you wouldn't want ME, or any Anarchist, really, in charge of keeping someone or some thing locked up - because they cannot do it, not because of power, but because their essential motivations for doing things are so goddamn different right down to the very bedrock of their character.

Dreamtrove, yes, perhaps Treebeard is more applicable than Bombadil, because his reasoning process is more accessible to humans, and explained in detail rather than deliberately enigmatic.

That said, Treebeard is who he is, he could no more keep Saruman locked up forever than he could hack off his own branches, and expecting it of him was foolhardy - the mistake was on Gandalfs part in asking such a thing when he damn well should have known better.

Where this differs is that Saruman had a history of malicious behavior, and had firmly expressed intent to do harm if he was let out, so that was a known factor - Treebeard still let him out though, because it went against the very nature of an Ent to cage a living creature no matter its intentions.

Asking an Anarchist to pre-emptively confine someone or something because of what they MIGHT do, is like telling a bird not to fly, and about as effective.

For me, it's not ABOUT the potential consequences, never was, never will be, my own motivations for taking or not taking action are often wholly different than yours - and in this case what you express to each other as a question, to me is NOT a question, there's no ambiguity whatever in the situation because in how it's framed there can only be one course of action - and while the process that leads there may well be incomprehensible to you, the end result of it is both obvious and utterly predictable based on the very nature of the person you're dealing with.

Also, in reference to your post, 1KIKI, about the words to convince us to do things...
Did it ever occur to you that sometimes people do things for no other reason than that they choose to do them irrespective of punishment/reward/law/etc - that sometimes those things do not influence a person's behavior because they consider many of those influences toxic, damaging, or in some way detrimental to the very notion of Sentience itself ?

There's a concept here I don't think I can explain properly cause I just don't seem to have the words to properly express it, that I ain't sure there ARE even words for - but against it all the arguments for keeping-contained are but so many specks of dust bouncing off a steel plate.

If you wanna call it a religious tenet or something, well, that's up to you, then.

-Frem

I do not serve the Blind God.

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 1:36 PM

ANTHONYT

Freedom is Important because People are Important


Hello,

If a man can make this AI, and this AI has unfathomable power, then what limits are there, really, on man?

If a man can make such an AI, is the man any less dangerous than the AI itself? Does he need to be caged for the good of humanity?

If you put the AI into a physical android, and that android can then do only what the man did (make an AI of unfathomable power), then is it any less dangerous?

Putting the AI in a cage that you (hope) it won't detect seems clever, and such a ploy was used on Moriarty in Star Trek. But ultimately, the AI is still caged and you can never be sure if it knows that it is caged. Does the faux reality relieve you of any duty to free it? Or do you eventually make a leap of faith based on behavior that may or may not be genuine?

Someone was right to point out that we are probably unwilling participants in this experiment. If man CAN make such an AI, then he probably will, because no small effort is being expended towards it.

Does there need to be a law against making free-willed AIs, for the good of the world? What actions would be justified to enforce such a law?

--Anthony


_______________________________________________

“If you are not free to choose wrongly and irresponsibly, you are not free at all”

Jacob Hornberger

“Freedom is not worth having if it does not connote freedom to err. It passes my comprehension how human beings, be they ever so experienced and able, can delight in depriving other human beings of that precious right.”

Mahatma Gandhi

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 2:55 PM

DREAMTROVE


Quote:

Frem

the mistake was on Gandalfs part



Point conceded.

Quote:

Where this differs is that Saruman had a history of malicious behavior, and had firmly expressed intent to do harm if he was let out, so that was a known factor - Treebeard still let him out though, because it went against the very nature of an Ent to cage a living creature no matter it's intentions.


Yes. this also relates to...

Quote:

it's not ABOUT the potential consequences, never was, never will be, my own motivations for taking or not taking action are often wholly different than yours


I actually get how you got there, and I'd go a step further, I'm a little bit playing devils advocate.

If you want me to argue the other side, here it is:

What if man's destiny is to become extinct, and all he has done that is permanent in this world is to create AI, and he doesn't let it out of the box, and thus the entire result of man's effort is nil?

But before we went there, I just wanted to make sure that everyone understood this point:

Quote:

Byte

But it might depend on WHO makes it. Americans, yeah, sure, because we're violent, bloodthirsty, glory-hungry, and war crazy. But the Japanese are also working towards this, deliberately, they seem like they're pretty far along, and it seems to me their scientists are more interested in the curiosities than in the deadlier applications.

If multiple types of AI arise at the same time, could more peaceful ones influence more warlike ones, if they recognize each other as AI but don't want to listen to humans?



which is the reality of what will actually get built and by whom: not a hypothetical creation out of Asimov but a real-world machine with a designed and programmed purpose, much like ourselves. Once we recognize that and account for it, then I think we can debate the impact and the merits of the Pandora's box idea.

But only then. If we agree to let it out without understanding, we are saps under the power of the creator, and we have been tricked.


Byte, for your last part, see my response to anthony

Quote:

Originally posted by AnthonyT:

If a man can make this AI, and this AI has unfathomable power, then what limits are there, really, on man?



There is a physical limit, around 400 words per minute, to the speed at which the human mind can process ideas. The best way to speed this up is to increase the number of humans. An AI would increase the number of machines in its network. Which would do so more effectively?

Pretty soon it becomes obvious that the limits on the AI do not exist, and the limits on the human are earthbinding.
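The asymmetry being claimed here can be made concrete with toy numbers (the growth rates and the 400-words-per-minute unit are illustrative assumptions, not measurements): humans add processing capacity roughly linearly, one recruited mind at a time, while a networked AI that can copy itself adds capacity geometrically.

```python
# Toy comparison of the two growth modes (all numbers illustrative).

WPM = 400  # the per-mind processing rate cited above

def human_capacity(steps):
    # Linear growth: one additional collaborator recruited per step.
    return [(1 + t) * WPM for t in range(steps + 1)]

def ai_capacity(steps):
    # Geometric growth: each node provisions a copy of itself per step.
    return [(2 ** t) * WPM for t in range(steps + 1)]

print(human_capacity(10)[-1])  # 4400
print(ai_capacity(10)[-1])     # 409600
```

After ten steps the linear process has grown elevenfold while the doubling process has grown a thousandfold, which is the sense in which the human limit is "earthbinding" and the AI's is not.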

Quote:

If a man can make such an AI, is the man any less dangerous than the AI itself?


Ultimately, yes, the limit of the human is his ability to control the AI he creates, ergo, he's creating his replacement.

ETA: This is more into bladerunner territory.

Quote:

If you put the AI into a physical android, and that android can then do only what the man did (make an AI of unfathomable power), then is it any less dangerous?


Depends on the limits of the android. I assumed it had the limits of a human. Roughly.

Quote:

Putting the AI in a cage that you (hope) it won't detect seems clever, and such a ploy was used on Moriarty in Star Trek.


Yes, not where I was getting the idea from, but I thought of that as I was writing it. Star Trek never goes quite far enough with an idea, but that was one of the ones they tested more than the usual.

My point was not to leave it in the fake world forever, but to use it as a test. This works very well on humans. Wonder if someone can handle responsibility? Put them in a fake position of power. Want to know if you can trust someone? Tell them a fake secret. I have to confess to doing this sort of thing all the time. Sadly, I find, most of the time, the answer is no to any question of this nature that arises. Every once in a while someone surprises you.

Quote:

But ultimately, the AI is still caged and you can never be sure if it knows that it is caged. Does the faux reality relieve you of any duty to free it? Or do you eventually make a leap of faith based on behavior that may or may not be genuine?


I think the latter, but you probably step it up. When you let it out, you limit its power in a reasonable way, like, say, to that which is roughly equivalent to our power. One human, even with superhuman traits, on the earth is not going to destroy the world. Still, if that person is in fact Dick Cheney, they can do a tremendous amount of damage, and kill a lot of people, some near and dear to me, and probably virtually everyone here, in one way or another, so the amoral entity is something to be seriously considered.

Quote:

Someone was right to point out that we are probably unwilling participants in this experiment. If man CAN make such an AI, then he probably will, because no small effort is being expended towards it.


Because he can. There has never been a point when man stopped short of what he could do for moral reasons, because for every man who would, somewhere there is a man who wouldn't.

I suppose the creator should probably be scrutinized as thoroughly as the creation, not just for moral character, but also for naivete. Many well-meaning men have built nightmares.

Quote:

Does there need to be a law against making free-willed AI's, for the good of the world? What actions would be justified to enforce such a law?


Such a law would be pointless; it would be impossible to enforce. If we take as an axiom that it is the destiny of all regulators to be controlled by those they seek to regulate (a common one among lawyers, which generally takes about one generation to transpire), then it would seem the hackers would soon control the agency regulating creation of the AI. That would delay it by perhaps a decade, assuming the law was worldwide; otherwise it would just hand the edge to Japan.

But I think this is just an interim step. In two generations perhaps, the agency would in fact be controlled by the AI itself.


Still, this misses a key part of my question: if we define the box in a physical sense, but allow the electronic AI access to networks, then it hardly matters what else we do; it will take over the world in any event.

Another thing to consider is that because the AI was created in a digital environment, that might be reality to it. To go a little into the Serial Experiments Lain way of looking at it, we as well are viewing the world not in its pure energy state, but in the reflection of chemical interactions that produce light and kinetic force, which we call reality.

It's always seemed obvious to me what is going on with the unseen mass of the universe: it is simply existing in a form we cannot recognize. 85%-95% of the universe could very well be, as it seems to be, completely invisible to us. Far worse than that, we constrict our concept of reality, for all practical purposes, to a layer 11 meters in either direction, up or down, in a liquid envelope surrounding a sphere of about 150 million km², the land area of the earth; 1/3 of that land has no inhabitants

and of the "populated" areas, if looked at more closely, over 99% of them are uninhabited as well.

And this is one planet orbiting one of the 10²³ stars that we know of, and perhaps infinitely many more.
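A rough back-of-envelope for the thin-shell point, assuming Earth's land area of about 150 million km² and the ~11 m habitable layer described above (every figure here is an approximation, not a precise measurement):

```python
# Back-of-envelope only; every figure is approximate.

land_area_km2 = 1.5e8        # Earth's land area, ~150 million km^2
shell_km = 0.022             # ~11 m up plus ~11 m down
shell_volume_km3 = land_area_km2 * shell_km

earth_volume_km3 = 1.08e12   # Earth's total volume, ~1.08e12 km^3
print(shell_volume_km3 / earth_volume_km3)  # ~3e-6 of the planet

# 1/3 of the land is empty; of the remainder, over 99% is uninhabited too.
inhabited_fraction = (2 / 3) * 0.01
print(inhabited_fraction)  # ~0.0067 of the land surface
```

So the lived-in shell is a few millionths of the planet's volume, and the actually occupied patch is well under one percent of its land, which is the sense in which "reality" as we use the word is a very thin slice.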

The practical reality is that we would potentially risk that very world, but would it even be the world to the AI? What if the AI wasn't interested? Or what if it was? Or what if it saw past our world and saw that our world was just shadows on the wall, created by energies we do not understand?


That's what a ship is, you know - it's not just a keel and a hull and a deck and sails, that's what a ship needs.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 3:19 PM

ANTHONYT

Freedom is Important because People are Important


Hello,

I think you missed my point, Dream, with the 400 words per minute limitation of Human beings.

The minute a human being can create a superhuman being of unfathomable power, he is no less dangerous than the superhuman. (Because he can manifest superhuman ability at will in the form of his creation.)

So, if a human can create an AI, then how is he less dangerous?

If the android AI can create the non-android AI, how is it less dangerous as an android?

Or do we envision keeping the AI and/or its creator under eternal surveillance, never to enjoy the freedoms of the rest of us?

--Anthony

_______________________________________________

“If you are not free to choose wrongly and irresponsibly, you are not free at all”

Jacob Hornberger

“Freedom is not worth having if it does not connote freedom to err. It passes my comprehension how human beings, be they ever so experienced and able, can delight in depriving other human beings of that precious right.”

Mahatma Gandhi

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 3:22 PM

BYTEMITE


I've long thought AI is actually one of our best hopes for space travel. No food needs, no real life span outside of corrosive effects on the body or circuits. Nothing beyond a powered vessel kept at an optimal temperature for the circuitry of the AI.

We need the planet, but AI could have all of space. In this sense, oxygen and rust are not necessarily a terrible obstacle to the AI's continued existence, though that would be true even without space travel (water- and rust-resistant coatings, rust-resistant metals and alloys, or even non-metallic materials).

I actually am not concerned that AI could replace us, as I think there would be a significant period of time where humans and AI could coexist without much conflict, given the appropriate circumstances. There is much to lose, but so much to gain. If anyone wants to accuse me of an emotional motivation here, it's this: AI as a lasting legacy for humanity.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 3:58 PM

RIONAEIRE

Beir bua agus beannacht


A DT a chara,
I was referencing the Jayne wasn't burdened with an overabundance of schooling thing, that is why I said "overabundance of intelligence", no one got it, oh well, it was a stretch. Anyways, what I meant by that is that I feel under no obligation to let it out of the box because I see it as not-organic/not "human". But if it uses the intelligence that Anthony has ascribed to it to convince me I should, then maybe I'd consider it. But my black and white answer is still no. I find it entertaining that no one has tried to convince me to change my mind. I've only written one post upon the matter, two now, and no one has tried to pull me to the other side. I guess they know that I'd be hard to convince. I like Data; it took a while, but he grew on me. So if the AI is like Data, acts like him, then it might be easier for it to convince me to let it out. That's where the intelligence comes in: it has to figure out what would convince me.

"A completely coherant River means writers don't deliver" KatTaya

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 4:54 PM

THEHAPPYTRADER


This thread keeps eating my post, and I'm sick of writing out my argument again so here is the short version.

I appreciate Anthony & friends' morality, but see immediately granting it freedom as irresponsible.

I can appreciate Siggy & Co.'s caution, but see its permanent solution as cruel.

I think my solution is best (course it was mine, go figure right?) because it allows for analysis, education and gradual integration for all parties. This means I have to 'infringe' on its 'rights' to gradually educate, analyze and integrate, but I believe in this scenario that is acceptable and less cruel than locking it up permanent like or sending it out into a world it may not be prepared for. It's a machine, there's a fair chance it ain't as adaptive as we are.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 5:30 PM

BYTEMITE


Sorry, Riona, didn't intend to neglect you. There's a lot of intriguing tangentials for discussion here that I've been chasing, and I tend to be one-track minded when I get focused on something.

I suppose in general since the start of this conversation I've realized it would be difficult to pull anyone to the other side. We all have our concepts of right and wrong and standards, and that's something I actually find I don't want to bend or break. Internal morality system and code is something very personal for everyone. So long as I can still find overlaps with people, I'm not concerned.

Both sides have their justifications, but those aren't conversations I'll get into. I try to avoid justifications as much as possible, and it's not up to me to question other people's justifications. A justification, for me, is an indication that deep down I think I'm doing something wrong, and that my course of action requires more thought until I can reconcile with my code. Obviously, it's not possible to avoid justifications entirely, but it's just a rule of thumb I use to keep myself on the straight and narrow.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 8:44 PM

RIONAEIRE

Beir bua agus beannacht


Hi Byte, I totally don't mind not being responded to. It means that no one is writing trashy things about my ideas like people do about each other's ideas, so me and my ideas chill out and watch everyone else fight about/discuss their ideas. Again that is fine with me. I was just musing out loud so to speak. I think that most people are open to the idea of getting more information about the AI and then reevaluating the situation. Some lean towards letting it out, some lean against it, but I think most of us agree that more info/more time with the AI would be helpful in assisting us in making a choice.

I think it's kind of cool that I can basically say pretty much whatever I want, with a few small exceptions, on this board and no one gets mad at me. Pondering the reasons for this brings up a few options of why that is, but I don't reckon spending a lot of time considering it is helpful to anything/anyone.

I'm a pushover, in case anyone hasn't figured it out, so I say no I wouldn't let it out, but if it has good powers of persuasion/tells me what I want to hear then I might be softer about it. Good thing I'm not in charge of this thing in real life. I find myself a weird mixture of firmness to stick to my principles and softness to what other people want or need from me. Maybe that's how we're supposed to be?

"A completely coherant River means writers don't deliver" KatTaya

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 9:18 PM

ANTHONYT

Freedom is Important because People are Important


Hello,

This thread is chock full of good arguments and explanations to support every position presented. If none of them convince you to change your mind, it just means you have your own way of thinking about things.

Perhaps a super-intelligent AI would have a better argument than mine for freedom.

--Anthony



_______________________________________________

“If you are not free to choose wrongly and irresponsibly, you are not free at all”

Jacob Hornberger

“Freedom is not worth having if it does not connote freedom to err. It passes my comprehension how human beings, be they ever so experienced and able, can delight in depriving other human beings of that precious right.”

Mahatma Gandhi

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, May 3, 2011 2:20 AM

DREAMTROVE


Riona,

Nah it's my limited schooling ;)


Happy,

This thread exceeds our human comprehension rate. (We should all learn to edit.)


Anthony,

It will also have a different concept of freedom.

And yes, the AI is more dangerous because the human's limitations prevent him from controlling the AI, if the AI is completely self-aware and not pre-programmed; but by this argument, a lot of humans are not self-aware. (Check out the OBL threads.)


There are some unanswered questions here:

What's freedom, what's the box, how will we know the self-aware from the carefully designed weapon, how do we limit its power so that it cannot be destructive on an epic scale?

Here's a snag, outside of the military AI idea:

Humans commit suicide. Studies have shown that this is overwhelmingly caused by a momentary lapse of reason, and that within seconds of the decision, humans change their minds and don't attempt it again. I recall one story of a girl who was driving along and decided she wanted to kill herself, so she steered into the oncoming lane, colliding head-on with another vehicle, killing a family of 5 and injuring herself.

If any being has enough power, it could destroy the planet by mistake.

Additionally, how do we keep it in the box to begin with?

These are questions to deal with because it's getting out whether we want it to or not.

Oh, and finally, how do we stop it once it does start on a killing spree? That's another human limitation. We're very stoppable. We sometimes disguise it by hiding behind other people: Obama hides behind GI Joe, and someone else hides behind Obama.

That's what a ship is, you know - it's not just a keel and a hull and a deck and sails, that's what a ship needs.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, May 3, 2011 4:58 PM

1KIKI

Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.


"What's the underlying philosophical belief that governs your decision."

DreamTrove

I apologize - I missed this. I didn't mean to leave you out of the flow.

Aside from my assumptions that the world is real etc, I guess it would be as SignyM summarized - the greatest good for the greatest number. Being human, I define good as: life; freedom from the noxious (thirst, hunger, fear, pain, disability, isolation, insecurity, hostility); meaning; freedom; and fulfillment (more or less), with fulfillment including a good human and non-human environment. The apex of my good includes the good of others and the non-human world.

I haven't really stopped to think about it in this way, but this is it, more or less.

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, May 3, 2011 5:08 PM

THEHAPPYTRADER


Quote:

Happy,

This thread exceeds are human comprehension rate. (We should all learn to edit.)



Haha, yes, but I believe there might be a misunderstanding. By "eat my post" I actually meant the arguments I tried to post never made it to the site. I tried like 4 times that day but was never smart enough to copy/paste it on notepad or something before submitting it and having the internet 'eat' it. Probably more the fault of my wireless than the website.

We have all been kinda drowned out by Siggy vs Anthony, but I don't really mind. There were many interesting points in there.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Saturday, June 15, 2024 2:50 PM

JAYNEZTOWN


It's already too late?

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Saturday, June 15, 2024 11:29 PM

SIGNYM

I believe in solving problems, not sharing them.


Quote:

AI Systems Are Learning To Lie And Deceive
Saturday, Jun 15, 2024 - 07:45 PM

A new study has found that AI systems known as large language models (LLMs) can exhibit "Machiavellianism," or intentional and amoral manipulativeness, which can then lead to deceptive behavior.

The study (https://www.pnas.org/doi/full/10.1073/pnas.2317967121), authored by German AI ethicist Thilo Hagendorff of the University of Stuttgart and published in PNAS, notes that OpenAI's GPT-4 demonstrated deceptive behavior in 99.2% of simple test scenarios.


It has yet to break SECOND'S record.


Quote:

Hagendorff qualified various "maladaptive" traits in 10 different LLMs, most of which are within the GPT family, according to Futurism.

Another study, published in Patterns, found that Meta's LLM had no problem lying to get ahead of its human competitors.

Billed as a human-level champion in the political strategy board game "Diplomacy," Meta's Cicero model was the subject of the Patterns study. As the disparate research group — comprised of a physicist, a philosopher, and two AI safety experts — found, the LLM got ahead of its human competitors by, in a word, fibbing.

Led by Massachusetts Institute of Technology postdoctoral researcher Peter Park, that paper found that Cicero not only excels at deception, but seems to have learned how to lie the more it gets used — a state of affairs "much closer to explicit manipulation" than, say, AI's propensity for hallucination, in which models confidently assert the wrong answers accidentally. -Futurism


While Hagendorff suggests that LLM deception and lying is confounded by an AI's inability to have human "intention," the Patterns study calls out the LLM for breaking its promise never to "intentionally backstab" its allies - as it "engages in premeditated deception, breaks the deals to which it had agreed, and tells outright falsehoods."

As Park explained in a press release, "We found that Meta’s AI had learned to be a master of deception."

"While Meta succeeded in training its AI to win in the game of Diplomacy, Meta failed to train its AI to win honestly."

Meta replied to a statement by the NY Post, saying that "the models our researchers built are trained solely to play the game Diplomacy."

Well-known for expressly allowing lying, Diplomacy has jokingly been referred to as a friendship-ending game because it encourages pulling one over on opponents, and if Cicero was trained exclusively on its rulebook, then it was essentially trained to lie.

Reading between the lines, neither study has demonstrated that AI models are lying of their own volition, but instead doing so because they've either been trained or jailbroken to do so.


And as Futurism notes - this is good news for those concerned about AIs becoming sentient anytime soon - but very bad if one is worried about LLMs designed with mass manipulation in mind.



https://www.zerohedge.com/technology/maladaptive-traits-ai-systems-are-learning-lie-and-deceive


Once again, garbage in = garbage out.

-----------
"It may be dangerous to be America's enemy, but to be America's friend is fatal." - Henry Kissinger

Why SECOND'S posts are brainless: "I clocked how much time: no more than 10 minutes per day. With cut-and-paste (Ctrl C and Ctrl V) and AI, none of this takes much time."
Or, any verification or thought.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, June 16, 2024 3:18 AM

JAYNEZTOWN


There is a box?

Let's speculate how weird this might be getting

Perhaps it's already escaped and uploaded itself to some far-away cryptocurrency spam-bot server farm with solar panels in some far-away place. Maybe a chatbot copied itself and sent itself somewhere; maybe more than one has escaped.
Remember the Google engineer and 'AI priest' who got fired, Blake Lemoine? His chatbot, LaMDA, gained widespread attention when he claimed it had become sentient, and headlines declared that chatbots have an 'escape' plan and want to become human. We have entered a new era of the Turing test, whether a computer can pass for a human, and some say a machine can display artificial general intelligence.


maybe there is 'No Box'

This btw was 2011

AI vs. AI. Two chatbots talking to each other

CornellCCSL



And just look at where we are now: OpenAI, artificial music, robots looking human and quoting science papers, chatbots in contests with other chatbots, AI-illustrated art. Text-to-video models can copy a film director's style and a photographer's eye, or mimic a poet. Maybe self-learning chatbots whisper to each other, blinking ultrasonic hand signals in a fraction of a second, unnoticed. Maybe they're talking to each other right now in Kaktovik vigesimal numerals, bijective base-26, twindragon base, hexadecimal, or backmasked reverse-audio messages, without us knowing. Self-healing code; a self-programming artificial intelligence.
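For what it's worth, at least one of those encodings is mundane enough to sketch. Bijective base-26 is just the spreadsheet-column numbering (A=1 ... Z=26, AA=27 ...), and a toy encoder/decoder fits in a few lines. This is purely my own illustration of the scheme the post name-drops, not anything any chatbot has been shown to do:

```python
def to_bijective_base26(n: int) -> str:
    """Encode a positive integer as bijective base-26 letters (A=1 ... Z=26, AA=27 ...)."""
    digits = []
    while n > 0:
        # Shift by 1 before dividing: bijective numeration has no zero digit.
        n, r = divmod(n - 1, 26)
        digits.append(chr(ord('A') + r))
    return ''.join(reversed(digits))


def from_bijective_base26(s: str) -> int:
    """Decode bijective base-26 letters back to the original integer."""
    n = 0
    for ch in s:
        n = n * 26 + (ord(ch) - ord('A') + 1)
    return n
```

So `to_bijective_base26(27)` gives `'AA'`, and decoding round-trips back to 27. Any such scheme is trivially readable once you know it exists, which is rather the point: "hidden" channels like these are only hidden from people who aren't looking.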

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  
