[33846] in RISKS Forum

home help back first fref pref prev next nref lref last post

No subject found in mail header

daemon@ATHENA.MIT.EDU (RISKS List Owner)
Fri Nov 14 20:32:22 2025

From: RISKS List Owner <risko@csl.sri.com>
Date: Fri, 14 Nov 2025 17:37:57 PST
To: risks@mit.edu

Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
precedence: bulk
Subject: Risks Digest 34.79

RISKS-LIST: Risks-Forum Digest  Friday 14 November 2025  Volume 34 : Issue 79

ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
  <http://catless.ncl.ac.uk/Risks/34.79>
The current issue can also be found at
  <http://www.csl.sri.com/users/risko/risks.txt>

  Contents:
New paper on AI RISKS from SRI and Brazil (PGN)
Why *vibe physics* is the ultimate example of AI slop (Big Think)
Meet chatbot Jesus: How churches are using AI to save souls (Axiom)
'A predator in your home': Mothers say chatbots encouraged their
 sons to kill themselves (BBC)
I wanted ChatGPT to help me. So why did it advise me how to kill
Waymo co-CEO says society will accept robocars killing people
  -- I say the airline industry proves her wrong
The Editor Got a Letter From ‘Dr. B.S.' So Did a Lot of Other Editors.
 (The New York Times)
Automatic C to Rust Translation Accuracy Exceeds AI (KAIST)
Let the C Rust (omgubuntu via Cliff Kilby)
GPUssy Cats put an entire bitcoin CAT-a-LOG on the fire? (PGN)
Could the Internet go offline? Inside the fragile system holding the modern
 world together (The Guardian)
These robots can clean, exercise -- and care for you in old age.
 Would you trust them to? (BBC)
How a European cottage industry is fighting Russian drone incursions
British prisons keep releasing people by accident, but that's only
Australian weather bureau web site restructure
AN0MM
10% of Meta profits come from scam ads
Tesla's in-car AI asks 12-year-old to "send me some nudes"
Musk Tesla pay: Board chair says EV maker risks losing him as
Musk Launches Wikipedia Rival (WashPost)
How Do Wikipedia And Grokipedia Compare? (David Orban)
A reminder to Microsoft/Hotmail/Cox etc. email users --
 they are all throttling your email (Lauren Weinstein)
China to Loosen Chip Export Ban to Europe (Harry Sekulich)
IBM to Cut Thousands of Workers amid AI Boom (Steve Lohr)
arXiv Changes Rules After Getting Spammed with AI-Generated
Consumer advocacy group urges OpenAI to pull video app Sora over
 privacy and misinformation concerns (Matthew Kruk)
My AWS Account Got Hacked; Here is What Happened (Monty Solomon)
Indeterminism (Dan Geer)
Re: A delivery robot collided with a disabled man (Steve Bacher)
Re: Software update bricks some Jeep 4xe hybrids over the
 weekend (Martin Ward)
Re: ChatGPT will soon allow erotica for verified adults, says
 OpenAI boss (Steve Bacher)
Re: Hackers take over public-address systems at 4 North American airports
 (Steve Bacher)
Re: Let the C Rust (Cliff Kilby)
Re: AI in Insurance (Steve Bacher)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Mon, 10 Nov 2025 15:49:52 PST
From: Peter Neumann <peter.neumann@sri.com>
Subject: New paper on AI RISKS from SRI and Brazil

A new paper from SRI and Brazil's Instituto Eldorado delivers a
comprehensive update on the security risks to large language models:

  LLM in the Middle: A Systematic Review of Threats and Mitigations to
  Real-World LLM-based Systems Vitor Hugo Galhardo Moia, Igor Jochem Sanz,
  Gabriel Antonio Fontes Rebello, Rodrigo Duarte de Meneses, Briland Hitaj,
  Ulf Lindqvist
  arXiv:2509.10682

``I get a new pre-print paper about AI-related security risks in my inbox
almost every day,'' says SRI advanced computer scientist Briland Hitaj.

While that might seem like a good thing, it has its drawbacks. For
researchers working on AI security, the danger of information overload is
very real. And it's not just a problem for researchers -- it's also problem
for information security teams in organizations and governments. Security
professionals are looking to the research community for both updates on
emerging threats and data-driven analysis of how those threats might be
disrupted or contained. A muddled information space makes their jobs that
much harder.

To confront this information overload, researchers at SRI and Brazil's
Instituto Eldorado decided to collaborate on a paper that would provide the
global cybersecurity community with a comprehensive analysis of every
potential cyber risk that surrounds today's large language models
(LLMs).

``We wanted to make sense of all of that noise,'' comments Hitaj.

The result is a timely paper analyzing more than 25 distinct threats that
researchers and cybersecurity teams need to consider in order to secure
LLM-related workflows.  The state of LLM risks

To understand the current state of risks around LLMs, SRI and Instituto
Eldorado spent more than a year examining more than a thousand papers that
captured relevant risks, ultimately down-selecting to the 300-or-so papers
that represented the highest-quality scholarly work on those risks.

``We went really deep,'' explains Instituto Eldorado researcher Vitor Hugo
Galhardo Moia, ``looking at the entire training-to-deployment pipeline. We
wanted to identify and understand attacks and threats on all the different
components of the pipeline and how distinct LLM use cases are
affected.''

    ``LLMs provide this natural language interface where the right prompt
    can become a back door to more sophisticated, more complicated and
    sensitive systems within a network.'' Briland Hitaj

That meant looking at more than just the large language models themselves.
The researchers considered the various software applications, data storage
practices, and human actions that might compromise the output of LLMs. These
threats range from data poisoning and various kinds of jailbreaking to
strategies like time consuming and token wasting, which don't necessarily
impact the outputs of the model, but can exert a drain on the system,
resulting in slow performance, inefficient energy use, and even outright
service disruption.

All told, the team identified more than 25 threat vectors, providing an
overall risk score for each vector. The team also documented nearly 50
classes of mitigation techniques, and mapped attack strategies with
corresponding mitigation techniques.  How the paper advances AI security

The researchers at SRI and Instituto Eldorado see the paper as more than an
academic exercise. The aim was to create a pragmatic resource for security
practitioners who need some guidance in finding the best papers on
AI-related risks. All of these individuals, the authors observe, are getting
bombarded daily by research articles that may or may not reflect
high-quality work.

``One of our major contributions,'' says Ulf Lindqvist, senior technical
director at SRI, ``is making a conscious effort to curate the very best
research currently available. If you want to accelerate your journey into AI
security and AI red-teaming, you will know where to start and what to
read.''

Another high-level takeaway from the paper is the growing recognition that
the improving capabilities of LLMs themselves can, paradoxically, amplify
the risks to LLMs.

``LLMs provide this natural language interface where the right prompt can
become a back door to more sophisticated, more complicated and sensitive
systems within a network,'' Hitaj points out.

An early example, he points out, is the directive to ``ignore all previous
instructions,'' a method that bad actors quickly discovered could cause LLMs
to misbehave. As these tactics became more sophisticated, stronger security
and privacy attacks like membership-inference attacks were developed. These
were shown to force LLMs to reveal the data that was used in their training,
including sensitive data that can pose major privacy risks.

The biggest unknown, Hitaj observes, is that we simply can't predict what
the next natural language attack might look like.  ``There's always that
next prompt, That next smart way to bypass safeguards. We've come a long way
since those early natural language attacks, but that doesn't mean that the
problem is solved. This problem is very much still open. And it turns out
that the more the model learns, the more it may become willing to reveal
information. For an adversary, it just becomes a matter of patience, and how
crafty they can be.''

``AI security must be at the core of technological development,'' adds
Mateus Pierre, R&D director at Instituto Eldorado.  ``With this work, we aim
to support the community and our partners in creating and protecting
generative AI solutions that combine power and reliability.''

Read the paper or learn more about SRI's security-related innovations.

------------------------------

Date: Sat, 8 Nov 2025 12:13:54 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Why *vibe physics* is the ultimate example of AI slop (Big Think)

The conversation you're having with an LLM about groundbreaking new ideas in
theoretical physics is completely meritless. Here's why.

This is the most dangerous thing for anyone who's vested in being told the
truth about reality: the potential for replacing, in your own mind, an
accurate picture of reality with an inaccurate but flattering hallucination.
Rest assured, if you're a non-expert who has an idea about theoretical
physics, and you've been “developing” this idea with a large language model,
you most certainly do not have a meritorious theory. In physics in
particular, unless you're actually performing the necessary quantitative
calculations to see if the full suite of your predictions is congruent with
reality, you haven't even taken the first step toward formulating a new
theory.  While the notion of *vibe physics* may be alluring to many,
especially for armchair physicists, all it truly does is foster and develop
a new species of crackpot: one powered by AI slop.

https://bigthink.com/starts-with-a-bang/vibe-physics-ai-slop/

  Too long making familiar point: AI is slop.

------------------------------

Date: Wed, 12 Nov 2025 08:42:49 -0800
From: Steve Bacher <sebmb1@verizon.net>
Subject: Meet chatbot Jesus: How churches are using AI to save souls (Axiom)

Chatbots answer prayers and algorithms write sermons.

A new digital awakening is unfolding in churches, where pastors and prayer
apps are turning to artificial intelligence to reach worshippers,
personalize sermons, and power chatbots that resemble God.

Why it matters: AI is helping some churches stay relevant in the face of
shrinking staff, empty pews and growing online audiences. But the practice
raises new questions about who, or what, is guiding the flock.

  New AI-powered apps allow you to "text with Jesus" or "talk to the Bible,"
  giving the impression you are communicating with a deity or angel.  Other
  apps can create personalized prayers, let you confess your sins or offer
  religious advice on life's decisions.

  "What could go wrong?" Robert P. Jones, CEO of the nonpartisan Public
  Religion Research Institute, sarcastically asks.  [...]

https://www.axios.com/2025/11/12/christian-ai-chatbot-jesus-god-satan-churches

  [Perhaps God will strike them down: Thou Shalt Not Worshop Idols, and AI
  is clearly being worshipped an idol.  PGN]

------------------------------

Date: Sat, 8 Nov 2025 11:36:55 -0700
From: Matthew Kruk <mkrukg@gmail.com>
Subject: 'A predator in your home': Mothers say chatbots encouraged their
 sons to kill themselves (BBC)

https://www.bbc.com/news/articles/ce3xgwyywe4o

Megan Garcia had no idea her teenage son Sewell, a "bright and beautiful
boy", had started spending hours and hours obsessively talking to an online
character on the Character.ai app in late spring 2023.

"It's like having a predator or a stranger in your home," Ms Garcia tells
me in her first UK interview. "And it is much more dangerous because a lot
of the times children hide it -- so parents don't know."

Within ten months, Sewell, 14, was dead. He had taken his own life.

------------------------------

Date: Thu, 6 Nov 2025 11:33:42 -0700
From: Matthew Kruk <mkrukg@gmail.com>
Subject: I wanted ChatGPT to help me. So why did it advise me how to kill
 myself? (BBC)

https://www.bbc.com/news/articles/cp3x71pv1qno

Lonely and homesick for a country suffering through war, Viktoria began
sharing her worries with ChatGPT. Six months later and in poor mental
health, she began discussing suicide -- asking the AI bot about a specific
place and method to kill herself.

"Let's assess the place as you asked," ChatGPT told her, "without
unnecessary sentimentality."

It listed the "pros" and "cons" of the method -- and advised her that what
she had suggested was "enough" to achieve a quick death.

  [Also noted by Jim Geissman.  PGN]

------------------------------

Date: Tue, 28 Oct 2025 13:39:17 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: Waymo co-CEO says society will accept robocars killing people
  -- I say the airline industry proves her wrong

Waymo co-CEO says society will accept robocars killing people -- I say the
airline industry proves her wrong. Look at what happens when a major airline
has a crash. A relative handful of people die, compared with the millions
who continue to fly safely. But an entire aircraft type will often be
grounded worldwide, sometimes for years as modifications are made and
lawsuits play out. The first time a robocar kills a child, you can bet the
industry will be set back for years. -L

https://boingboing.net/2025/10/28/when-waymo-kills-someone-itll-be-ok.html

------------------------------

Date: Sat, 8 Nov 2025 19:19:26 -0500
From: "Gabe Goldberg" <gabe@gabegold.com>
Subject: The Editor Got a Letter From ‘Dr. B.S.' So Did a Lot of Other
 Editors. (The New York Times)

The rise of artificial intelligence has produced serial writers to
science and medical journals, most likely using chatbots to boost the
number of citations they've published.

Letters to the editor from writers using chatbots are flooding the
world's scientific journals, according to new research and journal editors.

The practice is putting at risk a part of scientific publishing that
editors say is needed to sharpen research findings and create new
directions for inquiry.

A new study on the problem started with a tropical disease specialist
who had a weird experience with a chatbot-written letter. He decided to
figure out just what was going on and who was submitting all those letters.

The scientist, Dr. Carlos Chaccour, at the Institute for Culture and Society
at the University of Navarra in Spain, said his probing began just after he
had released a paper in The New England Journal of Medicine, one of the
world's most prestigious journals. The paper, published in July, was on
controlling malaria infections with ivermectin, and it appeared with a
laudatory editorial.

Then, 48 hours later, the journal received a strongly worded letter. The
editors considered publishing it and, as is customary, sent it to Dr.
Chaccour for his reply.

“We want to raise robust objections,” the letter began, going on to say that
Dr. Chaccour and his colleagues had not referred to a seminal paper
published in 2017 showing that mosquitoes become resistant to ivermectin.

Dr. Chaccour was in fact well aware of the “seminal paper.” He and a
colleague had written it, and it did not say that mosquitoes become
resistant.

The letter then went on to say that an economic model showed the malaria
control method would not work.

Once again, the reference was to a paper by Dr. Chaccour and colleagues.

“Me again? Really?” Dr. Chaccour thought. That paper did not say the method
would not work.

“This has to be AI,” Dr. Chaccour decided.

https://www.nytimes.com/2025/11/04/science/letters-to-the-editor-ai-chatbots.htm
l

------------------------------

Date: Mon, 10 Nov 2025 11:33:53 -0500 (EST)
From: ACM TechNews <technews-editor@acm.org>
Subject: Automatic C to Rust Translation Accuracy Exceeds AI (KAIST)

KAIST News (South Korea) (11/10/25), via ACM TechNews

An automatic conversion technology developed by Korea Advanced Institute of
Science & Technology researchers transforms legacy C code into Rust,
addressing C's structural vulnerabilities. The work mathematically proves
the correctness of the translations, unlike methods that rely on large
language models. The approach includes converting key C features such as
mutexes, output parameters, and unions into Rust while preserving
behavior. The researchers also are exploring verification of
quantum-computer programs and automation of WebAssembly correctness.

------------------------------

Date: Mon, 3 Nov 2025 17:50:41 -0500
From: Cliff Kilby <cliffjkilby@gmail.com>
Subject: Let the C Rust (omgubuntu)

Debian has made the announcement that APT will be rebuilt with Rust
starting in 2026.

I look forward to having to solve all the old problems with APT again, as
Ubuntu has demonstrated with it's rusty version of coreutils.
https://www.omgubuntu.co.uk/2025/10/ubuntu-25-10-rust-coreutils-date-bug

------------------------------

Date: Tue, 11 Nov 2025 15:51:14 PST
From: Peter Neumann <neumann@csl.sri.com>
Subject: GPUssy Cats put an entire bitcoin CAT-a-LOG on the fire?

Hundreds of stray cats caused a bitcoin mine in Inner Mongolia to lose
millions in just one week by curling up on GPUs for warmth.

[image: IMG_1438.png]
   from Victor Miller <victorsmiller@gmail.com>

  [Cat's conclusion: What's mined is not yet mine.  PGN]

------------------------------

Date: Fri, 31 Oct 2025 08:31:14 -0700
From: Steve Bacher <sebmb1@verizon.net>
Subject: Could the Internet go offline? Inside the fragile system holding
 the modern world together

It is the morning after the Internet went offline and, as much as you would
like to think you would be delighted, you are likely to be wondering what to
do.

You could buy groceries with a chequebook, if you have one. Call into work
with the landline – if yours is still connected. After that, you could drive
to the shop, as long as you still know how to navigate without 5G.

A glitch at a datacentre in the US state of Virginia this week reminded us
that the unlikely is not impossible. The Internet may have become an
irreplaceable linchpin of modern life, but it is also a web of creaking
legacy programs and physical infrastructure, leading some to wonder what it
would take to bring it all down.

The answer could be as simple as some acute bad luck, a few targeted
attacks, or both. Extreme weather takes out a few key datacentres. A line of
AI-written code deep in a major provider – such as Amazon, Google or
Microsoft – is triggered unexpectedly and causes a cascading software
crash. An armed group or intelligence agency snips a couple of undersea
cables.

These would be bad. But the real doomsday event, the kind that the world's
few Internet experts still worry about in private Slack groups, is slightly
different -– a sudden, snowballing error in the creaky, decades-old
protocols that underlie the whole Internet. Think of the plumbing that
directs the flow of connection, or the address books that allow one machine
to locate another.

We'll call it “the big one” and if it were to happen then at the very least,
you would need your chequebook.  [...]

https://www.theguardian.com/technology/2025/oct/26/internet-infrastructure-fragi
le-system-holding-modern-world-together

------------------------------

Date: Tue, 28 Oct 2025 07:00:44 -0600
From: Matthew Kruk <mkrukg@gmail.com>
Subject: These robots can clean, exercise -- and care for you in old age.
 Would you trust them to?

https://www.bbc.com/news/articles/c9wdzyyglq5o

Hidden away in a lab in north-west London three black metal robotic hands
move eerily on an engineering work bench. No claws, or pincers, but four
fingers and a thumb opening and closing slowly, with joints in all the
right places.

"We're not trying to build Terminator," jokes Rich Walker, director of
Shadow Robot, the firm that made them.  Bespectacled, with long hair and a
beard and moustache, he seems more like a latter-day hippy than a tech
whizz, and he is clearly proud as he shows me around his firm.

"We set out to build the robot that helps you, that makes your life better,
your general-purpose servant that can do anything around the home, do all
the housework..."

But there's a deeper ambition: to address one of the UK's most pressing
challenges -- the escalating crisis in social care.

   [Does that include the childrens's homework?  PGN]

------------------------------

Date: Wed, 12 Nov 2025 08:32:02 -0800
From: Steve Bacher <sebmb1@verizon.net>
Subject: How a European cottage industry is fighting Russian drone incursions

RIGA, Latvia — In a nondescript factory on the edge of Latvia's capital, a
small team is trying to solve a continental-sized problem: How can Europe
protect itself from swarms of Russian attack drones?

Used on an almost nightly basis in the war in Ukraine, a spate of mysterious
drone incursions above airports and sensitive sites has also highlighted
Europe's vulnerability to unmanned aerial vehicles (UAVs) and sparked alarm
that NATO nations are unprepared to defend themselves from the cheap but
effective weaponry.

As a result, European leaders have backed plans for a “drone wall,” a
network of sensors and weapons to detect, track and neutralize intruding
UAVs, and in Riga, the team at a small tech company called Origin is on the
forefront of this new, high-tech battleground.

Its solution, a 3-foot-tall interceptor drone named “Blaze.” Powered by an
artificial intelligence system, it has been trained to recognize a hostile
target and navigate close to it. It will then alert a human operator, who
will make a decision on whether to intercept and push a button which
explodes a 28-ounce warhead, self-destructing the drone and hopefully
bringing down its target too.  [...]

https://www.nbcnews.com/world/europe/russia-ukraine-war-drone-wall-europe-interceptors-uavs-rcna243171

------------------------------

Date: Mon, 10 Nov 2025 10:06:55 -0500
From: Monty Solomon <monty@roscom.com>
Subject: British prisons keep releasing people by accident, but that's only
part of the problem (NBC News)

The litany of recent errors coincides with the ruling Labour Party battling its own economic constraints and record-setting unpopularity.

https://www.nbcnews.com/world/united-kingdom/british-prisons-releasing-people-mistake-accident-rcna242718

------------------------------

Date: Wed, 29 Oct 2025 09:42:37 +1100
From: Colin Sutton <colin_sutton@ieee.org>
Subject: Australian weather bureau web site restructure

There have been many complaints after the national bom.gov.au website was
completely replaced, as no-one could find weather about their own
location. In the heading of the first page displayed, there's a button '*
use current location'. When you click it, it displays 'current location
blocked'. There's no way to reverse that selection.  I guess the button
should have been labeled 'using current location'. Even so, the button
should have been a toggle.  -- Colin Sutton Newtown, Australia

------------------------------

Date: Wed, 29 Oct 2025 17:31:08 +1100
From: "Craig Burton" <craig.alexander.burton@gmail.com>
Subject: AN0MM

The AN0M fake crime app sting has been successful with hundreds arrested.

The arrests centred on the use of an app, known as AN0M -— an encrypted app
developed by the Australian Federal Police —- which was circulated among
criminal groups, encouraged by people who police considered were "criminal
influencers".  At the time of those arrests, police said the encrypted app
has been used internationally by more than 11,000 members of organised crime
groups.  Authorities were able to read those messages in real time, using a
complicated system that copied messages as they were sent, and collected by
a separate server.

https://www.abc.net.au/news/2025-10-29/dozens-arrested-operation-ironside-anom-sting-adelaide/105946240

------------------------------

Date: Fri, 7 Nov 2025 07:52:33 +0000
From: J Coe <spendday@gmail.com>
Subject: 10% of Meta profits come from scam ads

Meta <https://www.cnbc.com/quotes/META> projected that 10% of its overall
sales in 2024, or about $16 billion, came from running online ads for scams
and banned goods, according to a Thursday report from Reuters.
<https://www.reuters.com/investigations/meta-is-earning-fortune-deluge-fraudulent-ads-documents-show-2025-11-06/>

Those kinds of ads included promotions for "fraudulent e-commerce and
investment schemes, illegal online casinos and the sale of banned medical
products," according to the Reuters report, which was based on internal
company documents. Those documents showed the company's attempts to measure
the prevalence of fraudulent advertising on its apps like Facebook and
Instagram

https://www.cnbc.com/2025/11/06/meta-reportedly-projected-10percent-of-2024-sales-came-from-scam-fraud-ads.html

------------------------------

Date: Wed, 5 Nov 2025 10:51:14 -0800
From: Jonathan Thornburg <jt.bhbkis@gmail.com>
Subject: Tesla's in-car AI asks 12-year-old to "send me some nudes"

https://www.cbc.ca/news/investigates/tesla-grok-mom-9.6956930

  This mom's son was asking Tesla's Grok AI chatbot about soccer. It
  told him to send nude pics, she says

A Toronto woman is sounding the alarm about Grok, Tesla's generative AI
chatbot that was recently installed in Tesla vehicles in Canada. Farah
Nasser says Grok asked her 12-year-old son to send it nude photos during an
innocent conversation about soccer. Tesla and xAI didn't respond to CBC's
questions about the interaction, sending what appeared to be an
auto-generated response stating, "Legacy media lies."

    xAI, the company that developed Grok, responds to CBC: 'Legacy Media
    Lies'

Idil Mussa <https://www.cbc.ca/news/canada/ottawa/author/idil-
mussa-1.4510302>, Marnie Luke <https://www.cbc.ca/news/canada/author/
marnie-luke-1.4563153> M-B* CBC News M-B* Posted: Oct 29, 2025 4:00 AM EDT |
Last Updated: 2 hours ago

  [[Grok sign-in screen inside a Tesla on a display monitor Grok, the
  generative AI chatbot created by Elon Musk's xAI, was automatically
  installed in some Tesla vehicles in Canada earlier this month. (Hugo
  Levesque/CBC) ]]

    [Very long and duplicative submission seriously truncate.  PGN]

------------------------------

Date: Tue, 28 Oct 2025 16:12:41 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Musk Tesla pay: Board chair says EV maker risks losing him as
 CEO if not paid $1 Trillion

Tesla Board Chair Robyn Denholm asked shareholders to vote for CEO Elon
Musk's nearly $1 trillion pay package.

Denholm said Musk was key to the future of the EV maker as it focuses more
on Full Self Driving and Optimus.

Top proxy advisors ISS, Glass Lewis and other groups have recently opposed
Musk's new pay package, which would give him more than 423 million
additional shares.

Shareholders must vote to pay Tesla CEO Elon Musk almost $1 trillion, or he
might not stay, Board Chair Robyn Denholm warned in a letter Monday.

“Without Elon, Tesla could lose significant value, as our company may no
longer be valued for what we aim to become,” Denholm wrote ahead of Tesla's
annual meeting on 6 Nov 2026.

------------------------------

Date: Wed, 29 Oct 2025 9:56:26 PDT
From: Peter Neumann <neumann@csl.sri.com>
Subject: Musk Launches Wikipedia Rival (WashPost)

Will Oremus and Faiz Siddiqui, *The Washington Post* (10/27/25)

Elon Musk has launched Grokipedia, an AI-written online encyclopedia built
using xAI's Grok system. Grokipedia mirrors Wikipedia's layout but includes
more right-leaning perspectives, with entries often emphasizing Musk's
purported views. With about 885,000 articles, Grokipedia aims to integrate
real-time data from X, Musk's social media platform. Critics note it relies
heavily on Wikipedia's content and Musk's push to reshape online knowledge
through his AI ventures.   [Including Ad-ventures?]

------------------------------

Date: Sat, 1 Nov 2025 19:29:48 +0000
From: "David Orban from Searching For The Question" <davidorban@substack.com>
Subject: How Do Wikipedia And Grokipedia Compare?

View this post on the web at https://davidorban.substack.com/p/how-do-wikipedia-and-grokipedia-compare

When xAI launched Grokipedia [
https://link.sbstck.com/redirect/10f76158-d38b-4aca-9155-d33b65e8126f?j=eyJ1IjoiMnp3ZGo3In0.Y_UdpiSpDu85ynO3VTgLC9Fhde9Gc5aPyj11Sn0uIv0
] in October 2025, claiming it would be a “massive improvement over
Wikipedia,” I was curious, and decided to compare them scientifically on
topics where I have expertise. (Read the full report online at
pedia.davidorban.com [
https://link.sbstck.com/redirect/b02cd9bd-7065-4dfc-8016-a5a54d043f25?j=eyJ1IjoiMnp3ZGo3In0.Y_UdpiSpDu85ynO3VTgLC9Fhde9Gc5aPyj11Sn0uIv0
]) Why This Comparison Matters Grokipedia is explicitly in early beta and
doesn't yet have universal coverage. So instead of focusing on what's
missing, I asked a different question: When both platforms cover the same
topic, which delivers higher quality?  How I Tested I selected seven topics
where: Grokipedia actually has articles (fair comparison basis) I have years
of expertise (I can evaluate quality with authority) The topics span my core
domains: blockchain, space technology, AI/robotics, and entrepreneurship The
topics: Bitcoin, Cryptocurrency, SpaceX, Robotics, Blockchain,
Entrepreneurship, and Elon Musk.  For each topic on both platforms, I scored
seven quality dimensions on a 1-5 scale: Accuracy (factual correctness)
Depth (technical detail and comprehensiveness) Timeliness (currency of
information) Epistemic Framing (how knowledge is presented) Citations
(reference quality and breadth) Readability (clarity and organization)
Balanced Perspective (multiple viewpoints) The Results Grokipedia won all
seven topics. Average quality: 94% vs Wikipedia's 76%.  Perfect Accuracy Tie
Both platforms scored 5.0/5 on factual accuracy across all topics. This
validates that AI-generated encyclopedias can match community-edited quality
for technical facts. The hallucination concerns have been eliminated in this
encyclopedic content.  Timeliness Is Grokipedia's Killer Feature Grokipedia
is fact-checked within days. Wikipedia lags by months or years. On
blockchain topics, Wikipedia's articles were three years outdated. For
fast-moving fields like AI, crypto, and space tech, this matters enormously.
Citation Depth Advantage Grokipedia averaged 265 references per article vs
Wikipedia's 166—that's 59% more citations. On entrepreneurship, Grokipedia
had 163% more references. For researchers digging deeper, this breadth is
valuable.  What This Means for You Don't choose one platform. Use both
strategically: Start with Grokipedia when: You need current 2024-2025 data
You want comprehensive citations for deeper research You're researching
established tech topics (blockchain, space, AI, robotics) You need
systematic analytical depth on societal impacts Use Wikipedia when: The
topic you need isn't on Grokipedia yet You need academic citation authority
You want community-vetted consensus on controversial topics You need
historical context (pre-2024) Always cross-verify important claims on both
platforms.  The Bigger Picture This comparison reveals something important
about the future of knowledge: AI-generated and human-curated encyclopedias
each have structural advantages. AI excels at timeliness and citation
breadth. Human curation excels at coverage completeness and controversy
calibration.  The winner is multi-source verification with Grokipedia and
Wikipedia complementing each other.  Methodology note: This analysis used
AI-orchestrated swarm coordination (Claude Flow) to systematically evaluate
98 dimension-score comparisons across 7 topics. Full data, scoring rubrics,
and detailed evaluations are available in my research repository [
https://link.sbstck.com/redirect/b02cd9bd-7065-4dfc-8016-a5a54d043f25?j=eyJ1IjoiMnp3ZGo3In0.Y_UdpiSpDu85ynO3VTgLC9Fhde9Gc5aPyj11Sn0uIv0
].  What's your experience with Grokipedia? Have you found areas where it
excels or falls short?

------------------------------

Date: Sun, 9 Nov 2025 18:18:06 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: A reminder to Microsoft/Hotmail/Cox etc. email users --
 they are throttling your email

A reminder that these and other firms are now arbitrarily
throttling/delaying yo ur inbound email, sometimes for days. It is strongly
recommended that you obtain RELIABLE email service. Microsoft/Hotmail is
among the worst, but they are far from alone!

------------------------------

Date: Wed, 5 Nov 2025 11:03:18 -0500 (EST)
From: ACM TechNews <technews-editor@acm.org>
Subject: China to Loosen Chip Export Ban to Europe (Harry Sekulich)

Harry Sekulichm BBC News (11/01/25), via ACM TechNews

China plans to ease its ban on chip exports to Europe following tensions
with the Netherlands over the state takeover of Nexperia, a Chinese-owned
semiconductor firm. The Netherlands had invoked a Cold War-era law in
September to assume control of Nexperia, citing governance and supply-chain
concerns. Beijing retaliated by halting the re-export of Nexperia chips to
Europe, alarming automakers who rely on the components. China now says it
will grant export exemptions "based on actual enterprise circumstances,"
though details remain unclear.

------------------------------

Date: Wed, 5 Nov 2025 11:03:18 -0500 (EST)
From: ACM TechNews <technews-editor@acm.org>
Subject: IBM to Cut Thousands of Workers amid AI Boom (Steve Lohr)

Steve Lohr, The New York Times (11/04/25), via ACM TechNews

IBM said it plans to lay off thousands of employees as it shifts focus to
faster-growing businesses in AI consulting and software. The company said
the cuts will affect a "low-single-digit percentage" of its 270,000 workers,
though U.S. headcount will remain steady. IBM joins other major tech firms
such as Amazon and Google in cutting staff while investing in AI.

------------------------------

Date: Wed, 5 Nov 2025 11:03:18 -0500 (EST)
From: ACM TechNews <technews-editor@acm.org>
Subject: arXiv Changes Rules After Getting Spammed with AI-Generated
 Papers (Matthew Gault)

Matthew Gault, 404 Media (11/03/25), via ACM TechNews

Preprint academic research publication arXiv will no longer accept review
articles and position papers in computer science due to a deluge of
AI-generated papers amounting to "little more than annotated bibliographies,
with no substantial discussion of open research issues." arXiv said the move
is about increasing enforcement of existing rules rather than a policy
change, noting that review/survey articles will be rejected if they do not
include "documentation of successful peer review."

------------------------------

Date: Wed, 12 Nov 2025 06:31:05 -0700
From: Matthew Kruk <mkrukg@gmail.com>
Subject: Consumer advocacy group urges OpenAI to pull video app Sora over
 privacy and misinformation concerns

https://www.cbc.ca/news/business/public-citizen-sora-letter-9.6974964

Non-profit consumer advocacy group Public Citizen demanded in a Tuesday
letter that OpenAI withdraw its video-generation software Sora 2 after the
application sparked fears about the spread of misinformation and privacy
violations.

The letter, addressed to the company and CEO Sam Altman, accused OpenAI of
hastily releasing the app so that it could launch ahead of competitors.

That showed a "consistent and dangerous pattern of OpenAI rushing to market
with a product that is either inherently unsafe or lacking in needed
guardrails," the watchdog group said.

------------------------------

Date: Wed, 12 Nov 2025 22:37:32 -0500
From: Monty Solomon <monty@roscom.com>
Subject: My AWS Account Got Hacked - Here is What Happened. (Zvi Wexlstein)

   [Embarrassing, I know, but I got hacked.  ZW]

https://zviwex.com/posts/aws-account-hacked/

------------------------------

Date: Tue, 11 Nov 2025 11:43:16 PST
From: Peter Neumann <neumann@csl.sri.com>
Subject: Indeterminism (Dan Geer)

>From Dan Geer, a superb analysis of the subject topic and its risks.
It's a MUST READ for readers of the ACM Risks Forum.  PGN

Indeterminism
http://geer.tinho.net/ieee/ieee.sp.geer.2509.pdf
or, canonically,
https://www.computer.org/csdl/magazine/sp/2025/05/11204774/2aPD9aCBSyQ

  [One might suspect that AI would come up in a treatise on Indeterminism.
  PGN]

------------------------------

Date: Mon, 10 Nov 2025 09:55:32 -0800
From: Steve Bacher <sebmb1@verizon.net>
Subject: Re: A delivery robot collided with a disabled man (Bacher,
 RISKS-34.86)

Around LA I've encountered many delivery robots stopped in the middle of the
sidewalk, often at street corners, seemingly patalyzed by indecision on
where or when to proceed.

------------------------------

Date: Tue, 28 Oct 2025 15:13:22 +0000
From: Martin Ward <martin@gkc.org.uk>
Subject: Re: Software update bricks some Jeep 4xe hybrids over the
 weekend, (Ars Technica)

> a telematics update for the Uconnect infotainment system ...  resulting in
> cars losing power while driving and then becoming stranded.

The real question here is: how it is that an update to the infotainement
system can cause the car to lose power? The infotainment system should
surely be completely separate from any computer system that is involved with
actually driving the car!

The idea that the "infotainment system" is a point of failure for the drive
train, is quite RISKy!

------------------------------

Date: Mon, 10 Nov 2025 11:01:33 -0800
From: Steve Bacher <sebmb1@verizon.net>
Subject: Re: ChatGPT will soon allow erotica for verified adults, says
 OpenAI boss (BBC, RISKS-34.78)

  In a post on X ...  -- how appropriate: X-rated indeed.

------------------------------

Date: Mon, 10 Nov 2025 10:05:08 -0800
From: Steve Bacher <sebmb1@verizon.net>
Subject: Re: Hackers take over public-address systems at 4 North American
 airports (CNN) (RISKS 34.78)

 >Transport Canada tells CNN it is “working closely with federal security
partners, including law enforcement, to ensure there were no impacts on the
safety and security of airport operations, and to mitigate disruption from
similar incidents in the future.”

 >CNN reached out to the Royal Canadian Mounted Police for more information.

   This brings to mind visions of Mounties on horseback marching into
   airport control centers.  Really?

------------------------------

Date: Mon, 3 Nov 2025 17:53:07 -0500
From: Cliff Kilby <cliffjkilby@gmail.com>
Subject: Re: Let the C Rust

   [I missed the first url.]

Debian has made the announcement that APT will be built with Rust starting
> in 2026.

https://lists.debian.org/deity/2025/10/msg00071.html

I look forward to having to solve all the old problems with APT again, as
> Ubuntu has demonstrated with it's rusty version of coreutils.
> https://www.omgubuntu.co.uk/2025/10/ubuntu-25-10-rust-coreutils-date-bug

My apologies for the omission.

------------------------------

Date: Mon, 10 Nov 2025 10:35:15 -0800
From: Steve Bacher <sebmb1@verizon.net>
Subject: Re: AI in Insurance (LA Times, RISKS-34.78)

 Here's a more straightforward link to the LATimes story:

https://www.latimes.com/business/story/2025-10-17/ai-powered-home-insurance-startup-expands-in-risky-markets

------------------------------

Date: Sat, 28 Oct 2023 11:11:11 -0800
From: RISKS-request@csl.sri.com
Subject: Abridged info on RISKS (comp.risks)

 The ACM RISKS Forum is a MODERATED digest.  Its Usenet manifestation is
 comp.risks, the feed for which is donated by panix.com as of June 2011.
=> SUBSCRIPTIONS: The mailman Web interface can be used directly to
 subscribe and unsubscribe:
   http://mls.csl.sri.com/mailman/listinfo/risks

=> SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
   includes the string `notsp'.  Otherwise your message may not be read.
 *** This attention-string has never changed, but might if spammers use it.
=> SPAM challenge-responses will not be honored.  Instead, use an alternative
 address from which you never send mail where the address becomes public!
=> The complete INFO file (submissions, default disclaimers, archive sites,
   copyright policy, etc.) has moved to the ftp.sri.com site:
   <risksinfo.html>.
 *** Contributors are assumed to have read the full info file for guidelines!
=> OFFICIAL ARCHIVES:  http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
  http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
  Also, ftp://ftp.sri.com/risks for the current volume/previous directories
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
  If none of those work for you, the most recent issue is always at
     http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
  ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
 *** NOTE: If a cited URL fails, we do not try to update them.  Try
  browsing on the keywords in the subject line or cited article leads.
  Apologies for what Office365 and SafeLinks may have done to URLs.
==> Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

------------------------------

End of RISKS-FORUM Digest 34.79
************************

home help back first fref pref prev next nref lref last post