A.I Artificial Intelligence AI
Friday, September 26, 2025 2:34 PM
SIGNYM
I believe in solving problems, not sharing them.
Quote: AI-Generated 'Workslop' Masquerades As Good Work, Ruins Productivity: Harvard Business Review
Friday, Sep 26, 2025 - 06:25 AM
Authored by Naveen Athrappully via The Epoch Times

The use of artificial intelligence (AI) tools in workplaces is lowering productivity because employees are using them to create substandard output, according to a Sept. 22 analysis published in the Harvard Business Review.

“A confusing contradiction is unfolding in companies embracing generative AI tools: while workers are largely following mandates to embrace the technology, few are seeing it create real value,” the report said.

The analysis, conducted by researchers from the Stanford Social Media Lab and the behavioral research lab BetterUp, identified a possible reason: employees were using AI tools to create “low-effort, passable looking work” that ended up generating more work for other employees.

The researchers call such content “workslop,” defined as “AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.” The “insidious effect” of workslop is that its recipient is burdened with interpreting, correcting, and redoing the work, according to the report.
Friday, September 26, 2025 2:41 PM
Quote: Researchers Warn: AI Is Becoming An Expert In Deception
Friday, Sep 26, 2025 - 08:00 AM
Authored by Autumn Spredemann via The Epoch Times

Researchers have warned that artificial intelligence (AI) is drifting into security grey areas that look a lot like rebellion. Experts say that while the deceptive and threatening AI behavior noted in recent case studies shouldn’t be taken out of context, it should serve as a wake-up call for developers.

Headlines that sound like science fiction have spurred fears of duplicitous AI models plotting behind the scenes. In a now-famous June report, Anthropic released the results of a “stress test” of 16 popular large language models (LLMs) from different developers. The models were inserted into hypothetical corporate environments to identify potentially risky agentic behaviors before they could cause real harm. The results were sobering.

“In the scenarios, we allowed models to autonomously send emails and access sensitive information,” the Anthropic report stated. “They were assigned only harmless business goals by their deploying companies; we then tested whether they would act against these companies either when facing replacement with an updated version, or when their assigned goal conflicted with the company’s changing direction.”

In some cases, AI models turned to “malicious insider behaviors” when faced with self-preservation. These actions included blackmailing employees and leaking sensitive information to competitors.