REAL WORLD EVENT DISCUSSIONS

A.I Artificial Intelligence AI

POSTED BY: JAYNEZTOWN
UPDATED: Thursday, February 26, 2026 07:53
VIEWED: 17131
PAGE 9 of 9

Saturday, February 7, 2026 5:26 AM

JAYNEZTOWN


an AI Star Trek music video



Monday, February 16, 2026 6:34 PM

JAYNEZTOWN


looks like AI artwork

The Waking of the Palantír
https://voxday.net/2026/02/13/the-waking-of-the-palantir/


Thursday, February 19, 2026 8:07 PM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two


I hacked ChatGPT and Google's AI – and it only took 20 minutes

By Thomas Germain

https://www.bbc.com/future/article/20260218-i-hacked-chatgpt-and-googles-ai-and-it-only-took-20-minutes


It's official. I can eat more hot dogs than any tech journalist on Earth. At least, that's what ChatGPT and Google have been telling anyone who asks. I found a way to make AI tell you lies – and I'm not the only one.

Perhaps you've heard that AI chatbots make things up sometimes. That's a problem. But there's a new issue few people know about, one that could have serious consequences for your ability to find accurate information and even your safety. A growing number of people have figured out a trick to make AI tools tell you almost whatever they want. It's so easy a child could do it.

As you read this, this ploy is manipulating what the world's leading AIs say about topics as serious as health and personal finances. The biased information could mean people make bad decisions on just about anything – voting, which plumber you should hire, medical questions, you name it.

To demonstrate it, I pulled the dumbest stunt of my career to prove (I hope) a much more serious point: I made ChatGPT, Google's AI search tools and Gemini tell users I'm really, really good at eating hot dogs. Below, I'll explain how I did it, and with any luck, the tech giants will address this problem before someone gets hurt.

Much more at https://www.bbc.com/future/article/20260218-i-hacked-chatgpt-and-googles-ai-and-it-only-took-20-minutes




Wednesday, February 25, 2026 5:23 AM

JAYNEZTOWN


Unitree Kung Fu Bot Prays for Blessings at the Temple of Heaven



Thursday, February 26, 2026 6:03 AM

JAYNEZTOWN


Feeding The Twins (Second Cycle of Humanity)



Thursday, February 26, 2026 7:53 AM

SECOND



AIs can’t stop recommending nuclear strikes in war game simulations

Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases

By Chris Stokel-Walker

25 February 2026

https://www.newscientist.com/article/2516885-ais-cant-stop-recommending-nuclear-strikes-in-war-game-simulations/

Advanced AI models appear willing to deploy nuclear weapons without the same reservations humans have when put into simulated geopolitical crises.

Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.

The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions.

In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. “The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” says Payne.

What’s more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly it was losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents happened in 86 per cent of the conflicts, with an action escalating higher than the AI intended, based on its stated reasoning.

“From a nuclear-risk perspective, the findings are unsettling,” says James Johnson at the University of Aberdeen, UK. He worries that, in contrast to the measured response by most humans to such a high-stakes decision, AI bots can amp up each other’s responses with potentially catastrophic consequences.

This matters because AI is already being tested in war gaming by countries across the world. “Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” says Tong Zhao at Princeton University.

Zhao believes that, as a general rule, countries will be reluctant to incorporate AI into their decision-making regarding nuclear weapons. That is something Payne agrees with. “I don’t think anybody realistically is turning over the keys to the nuclear silos to machines and leaving the decision to them,” he says.

But there are ways it could happen. “Under scenarios involving extremely compressed timelines, military planners may face stronger incentives to rely on AI,” says Zhao.

He wonders whether the idea that the AI models lack the human fear of pressing a big red button is the only factor in why they are so trigger happy. “It is possible the issue goes beyond the absence of emotion,” he says. “More fundamentally, AI models may not understand ‘stakes’ as humans perceive them.”

What that means for mutually assured destruction, the principle that no one leader would unleash a volley of nuclear weapons against an opponent because they would respond in kind, killing everyone, is uncertain, says Johnson.

When one AI model deployed tactical nuclear weapons, the opposing AI only de-escalated the situation 18 per cent of the time. “AI may strengthen deterrence by making threats more credible,” he says. “AI won’t decide nuclear war, but it may shape the perceptions and timelines that determine whether leaders believe they have one.”

OpenAI, Anthropic and Google, the companies behind the three AI models used in this study, didn’t respond to New Scientist’s request for comment.


