news2.txt Part7
From Sean Rima@618:500/1 to All on Wed Oct 15 10:49:50 2025
control they have over AI in their lives. A Democratic party desperate to regain relevance for and approval from young voters might turn to AI as both a tool and a topic for engaging them.
Voters and politicians alike should recognize that AI is no longer just an outside influence on elections. It's not an uncontrollable natural disaster raining deepfakes down on a sheltering electorate. It's more like a fire: a force that political actors can harness and manipulate for both mechanical and symbolic purposes.
A party willing to intervene in the world of corporate AI and shape the future of the technology should recognize the legitimate fears and opportunities it presents, and offer solutions that both address and leverage AI.
This essay was written with Nathan E. Sanders, and originally appeared in Time.
** *** ***** ******* *********** *************
AI-Enabled Influence Operation Against Iran
[2025.10.07] Citizen Lab has uncovered a coordinated AI-enabled influence operation against the Iranian government, probably conducted by Israel.
Key Findings
A coordinated network of more than 50 inauthentic X profiles is conducting an AI-enabled influence operation. The network, which we refer to as "PRISONBREAK," is spreading narratives inciting Iranian audiences to revolt against the Islamic Republic of Iran. While the network was created in 2023, almost all of its activity was conducted starting in January 2025, and continues to the present day. The profiles' activity appears to have been synchronized, at least in part, with the military campaign that the Israel Defense Forces conducted against Iranian targets in June 2025.
While organic engagement with PRISONBREAK's content appears to be limited, some of the posts achieved tens of thousands of views. The operation seeded such posts to large public communities on X, and possibly also paid for their promotion.
After systematically reviewing alternative explanations, we assess that the hypothesis most consistent with the available evidence is that an unidentified agency of the Israeli government, or a sub-contractor working under its close supervision, is directly conducting the operation. News article.
** *** ***** ******* *********** *************
Flock License Plate Surveillance
[2025.10.08] The company Flock is surveilling us as we drive:
A retired veteran named Lee Schmidt wanted to know how often Norfolk, Virginia's 176 Flock Safety automated license-plate-reader cameras were tracking him. The answer, according to a U.S. District Court lawsuit filed in September, was more than four times a day, or 526 times from mid-February to early July. No, there's no warrant out for Schmidt's arrest, nor is there a warrant for Schmidt's co-plaintiff, Crystal Arrington, whom the system tagged 849 times in roughly the same period.
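As a back-of-the-envelope check, the complaint's totals work out to roughly four detections per day. The exact date range is not stated beyond "mid-February to early July," so the endpoints below are assumptions chosen only to illustrate the arithmetic:

```python
from datetime import date

# Assumed endpoints for "mid-February to early July" -- hypothetical,
# not taken from the actual court filing.
start = date(2025, 2, 15)
end = date(2025, 7, 1)
days = (end - start).days

schmidt_hits = 526    # detections of Lee Schmidt's plate
arrington_hits = 849  # detections of Crystal Arrington's plate

print(f"Window:    {days} days")
print(f"Schmidt:   {schmidt_hits / days:.1f} detections/day")
print(f"Arrington: {arrington_hits / days:.1f} detections/day")
```

Depending on exactly where the window is drawn, Schmidt's rate lands around four per day and Arrington's well above six, which is the scale of tracking the lawsuit objects to.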
You might think this sounds like it violates the Fourth Amendment, which protects American citizens from unreasonable searches and seizures without probable cause. Well, so does the American Civil Liberties Union. Norfolk, Virginia Judge Jamilah LeCruise also agrees, and in 2024 she ruled that plate-reader data obtained without a search warrant couldn't be used against a defendant in a robbery case.
** *** ***** ******* *********** *************
Autonomous AI Hacking and the Future of Cybersecurity
[2025.10.10] AI agents are now hacking computers. They're getting better at all phases of cyberattacks, faster than most of us expected. They can chain together different aspects of a cyber operation, and hack autonomously, at computer speeds and scale. This is going to change everything.
Over the summer, hackers proved the concept, industry institutionalized it, and criminals operationalized it. In June, AI company XBOW took the top spot on HackerOne's US leaderboard after submitting over 1,000 new vulnerabilities in just a few months. In August, the seven teams competing in DARPA's AI Cyber Challenge collectively found 54 new vulnerabilities in a target system, in four hours (of compute). Also in August, Google announced that its Big Sleep AI found dozens of new vulnerabilities in open-source projects.
It gets worse. In July, Ukraine's CERT discovered a piece of Russian malware that used an LLM to automate the cyberattack process, generating both system reconnaissance and data theft commands in real time. In August, Anthropic reported that it disrupted a threat actor that used Claude, Anthropic's AI model, to automate the entire cyberattack process. It was an impressive use of the AI, which performed network reconnaissance, penetrated networks, and harvested victims' credentials. The AI was able to figure out which data to steal, how much money to extort from the victims, and how best to write extortion emails.
Another hacker used Claude to create and market his own ransomware, complete with "advanced evasion capabilities, encryption, and anti-recovery mechanisms." And in September, Check Point reported on hackers using HexStrike-AI to create autonomous agents that can scan, exploit, and persist inside target networks. Also in September, a research team showed how they can quickly and easily reproduce hundreds of vulnerabilities from public information. These tools are increasingly free for anyone to use. Villager, a recently released AI pentesting tool from the Chinese company Cyberspike, uses the DeepSeek model to completely automate attack chains.
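Structurally, the systems described above amount to a plan-act-observe loop: a model proposes the next step, a tool harness executes it, and the output feeds the next decision. A deliberately abstract sketch (every function here is a hypothetical placeholder; no real model, tool, or exploit API is referenced):

```python
# Minimal sketch of an LLM-driven automation loop of the kind described
# above. All names are hypothetical placeholders -- the point is the
# control flow that chains phases together, not any actual payload.

def llm_next_action(goal: str, history: list[str]) -> str:
    """Placeholder: a model proposes the next phase given prior output."""
    phases = ["recon", "exploit", "persist", "exfiltrate", "done"]
    return phases[min(len(history), len(phases) - 1)]

def execute(action: str) -> str:
    """Placeholder: a tool harness runs the proposed phase, returns output."""
    return f"simulated output of {action}"

def agent_loop(goal: str, max_steps: int = 10) -> list[str]:
    """Drive the plan-act-observe cycle until the model signals completion."""
    history: list[str] = []
    for _ in range(max_steps):
        action = llm_next_action(goal, history)
        if action == "done":
            break
        history.append(execute(action))
    return history
```

The same loop structure serves defenders: swap the placeholders for scanning and patch-verification tools and it becomes an automated pentesting harness, which is exactly what products like the ones above package.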
This is all well beyond AI's capabilities in 2016, at DARPA's Cyber Grand Challenge. The annual Chinese AI hacking challenge, Robot Hacking Games, might be at this level, but little is known outside of China.
Tipping point on the horizon
AI agents now rival and sometimes surpass even elite human hackers in sophistication. They automate operations at machine speed and global scale. The scope of their capabilities lets these AI agents fully automate an attack to maximize a criminal's profit, or tailor advanced attacks to a government's precise specifications, such as avoiding detection.
In this future, attack capabilities could accelerate beyond our individual and collective ability to respond. We have long taken it for granted that we have time to patch systems after vulnerabilities become known, or that withholding vulnerability details prevents attackers from exploiting them. This is no longer the case.
The cyberattack/cyberdefense balance has long skewed towards the attackers; these developments threaten to tip the scales completely. We're potentially looking at a singularity event for cyber attackers. Key parts of the attack chain are becoming automated and integrated: persistence, obfuscation, command-and-control, and endpoint evasion. Vulnerability research could potentially be carried out during operations instead of months in advance.
The most skilled will likely retain an edge for now. But AI agents don't have to be better at a human task in order to be useful. They just have to excel in one of four dimensions: speed, scale, scope, or sophistication. But there is every indication that they will eventually excel at all four. By reducing the skill, cost, and time required to find and exploit flaws
--- BBBS/LiR v4.10 Toy-7
* Origin: TCOB1: https/binkd/telnet binkd.rima.ie (618:500/1)