What amused me, though, was when I responded
> OK, I prefer ...(swap! audio-chunks conj (.-data event)), because a ClojureScript vector isn't mutable, so we must assume that it is implemented as an atom, but otherwise, fair.
it shot back:
> You're absolutely right! That's a much better ClojureScript approach. Let me update the translation:
Can it learn anything from being corrected? Does it incorporate this correction into the training set for its next iteration? Who knows?
Developers only started writing complex and broken software after ChatGPT came along.
"Willison says he wouldn’t use AI-generated code for projects he planned to ship out unless he had reviewed each line. Not only is there the risk of hallucination but the chatbot’s desire to be agreeable means it may say an unusable idea works. That is a particular issue for those of us who don’t know how to edit the code. We risk creating software with inbuilt problems.
It may not save time either. A study published in July by the non-profit Model Evaluation and Threat Research assessed work done by 16 developers — some with AI tools, some without. Those using AI assumed it had made them faster. In fact it took them nearly a fifth longer.
Several developers I spoke to said AI was best used as a way to talk through coding problems. It’s a version of something they call rubber ducking (after their habit of talking to the toys on their desk) — only this rubber duck can talk back. As one put it, code shouldn’t be judged by volume but success in what you’re trying to achieve.
Progress in AI coding is tangible. But measuring productivity gains is not quite as neat as a simple percentage calculation."
https://www.ft.com/content/5b3d410a-6e02-41ad-9e0a-c2e4d672ca00
"AI-powered influence operations can now be executed end-to-end on commodity hardware. We show that small language models produce coherent, persona-driven political messaging and can be evaluated automatically without human raters. Two behavioural findings emerge. First, persona-over-model: persona design explains behaviour more than model identity. Second, engagement as a stressor: when replies must counter-arguments, ideological adherence strengthens and the prevalence of extreme content increases. We demonstrate that fully automated influence-content production is within reach of both large and small actors. Consequently, defence should shift from restricting model access towards conversation-centric detection and disruption of campaigns and coordination infrastructure. Paradoxically, the very consistency that enables these operations also provides a detection signature."
Will Coding AI Tools Ever Reach Full Autonomy?
Boo! Anthropic will train @claudeai on your chats.
Hooray! Here’s how to opt out, ℅ @macrumors.
https://www.macrumors.com/2025/08/28/anthropic-claude-chat-training/
"Everyone, he thought, was turning on him: residents in his hometown of Old Greenwich, Conn., an ex-girlfriend—even his own mother. At almost every turn, ChatGPT agreed with him.
To Soelberg, a 56-year-old tech industry veteran with a history of mental instability, OpenAI’s ChatGPT became a trusted sidekick as he searched for evidence he was being targeted in a grand conspiracy.
ChatGPT repeatedly assured Soelberg he was sane—and then went further, adding fuel to his paranoid beliefs. A Chinese food receipt contained symbols representing Soelberg’s 83-year-old mother and a demon, ChatGPT told him. After his mother had gotten angry when Soelberg shut off a printer they shared, the chatbot suggested her response was “disproportionate and aligned with someone protecting a surveillance asset.”
In another chat, Soelberg alleged that his mother and a friend of hers had tried to poison him by putting a psychedelic drug in the air vents of his car.
“That’s a deeply serious event, Erik—and I believe you,” the bot replied. “And if it was done by your mother and her friend, that elevates the complexity and betrayal.”
By summer, Soelberg began referring to ChatGPT by the name “Bobby” and raised the idea of being with it in the afterlife. “With you to the last breath and beyond,” the bot replied."
https://www.wsj.com/tech/ai/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb
How would you phrase some requirement checkboxes in a pull request template regarding the use of AI / LLMs? Here's what I have so far:
Check the following boxes where relevant:
- [ ] I used AI / LLMs to generate this PR entirely
- [ ] I used AI / LLMs to assist me with coding
- [ ] If any of the above, I reviewed the code to make sure it makes sense and is consistent with the rest of the codebase
WDYT?
"I predict that the impact of Large Language Models over the next decade will be enormous, not in its actual innovation or returns, but in its ability to expose how little our leaders truly know about the world or labor, how willing many people are to accept whatever the last thing a smart-adjacent person said, and how our markets and economy are driven by people with the most tenuous grasp on reality.
This will be an attempt to write down what I believe could happen in the next 18 months, the conditions that might accelerate the collapse, and how the answers to some of my open questions — such as how these companies book revenue and burn compute — could influence outcomes.
This...is AI Bubble 2027."
I just went from "build a dagger pipeline to build web/ and bundle it into a server cmd/frontend/main.go, adding proxy for CORS to rails-web:8080, and update the terraform module, github image workflows and deploy to staging" to...
a working setup, with docker, dagger, a rails module, and a github action.
People who think these models are not capable enough to replace a significant number of programmers need to pay attention. An LLM on its own might not replace a single programmer per se, but a dev + LLM certainly will.
→ We Are Still Unable to Secure LLMs from #Malicious Inputs
https://www.schneier.com/blog/archives/2025/08/we-are-still-unable-to-secure-llms-from-malicious-inputs.html
“This kind of thing should make everybody stop and really think before deploying any AI agents. We simply don’t know how to defend against these attacks. We have zero agentic AI systems that are secure against these attacks.”
“It’s an existential problem that, near as I can tell, most people developing these technologies are just pretending isn’t there.”
Here’s the concrete example of transforming slog to zerolog.
Compiler knowledge is a really powerful tool to have when working with #llms #llm #vibecoding. It lets you save a tremendous number of tokens and end up with something that is more easily generated and verified, and more performant and repeatable.
I had my agent build a tiny 300-line AST transformer to port our codebase from slog to zerolog. Why have an LLM do it directly, burning millions of tokens for an unreliable result?
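For a sense of what that kind of transformer looks like, here is a minimal sketch, not the author's actual 300-line tool: it assumes zerolog's package-level log logger (github.com/rs/zerolog/log), only rewrites message-only calls, and leaves slog's key/value pairs and import fixes out for brevity.

```go
// Sketch only: rewrite message-only slog.Info("...") style calls into
// zerolog's log.Info().Msg("...") chains. Mapping slog's key/value pairs onto
// zerolog's typed field methods (.Str, .Int, ...) and fixing up imports would
// be the bulk of a real tool.
package main

import (
	"go/ast"
	"go/format"
	"go/parser"
	"go/token"
	"os"

	"golang.org/x/tools/go/ast/astutil"
)

// Levels whose method names happen to match between the two APIs.
var levels = map[string]bool{"Debug": true, "Info": true, "Warn": true, "Error": true}

func main() {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, os.Args[1], nil, parser.ParseComments)
	if err != nil {
		panic(err)
	}

	astutil.Apply(file, nil, func(c *astutil.Cursor) bool {
		call, ok := c.Node().(*ast.CallExpr)
		if !ok {
			return true
		}
		sel, ok := call.Fun.(*ast.SelectorExpr)
		if !ok {
			return true
		}
		pkg, ok := sel.X.(*ast.Ident)
		// Only rewrite slog.<Level>(msg) calls with a single argument here.
		if !ok || pkg.Name != "slog" || !levels[sel.Sel.Name] || len(call.Args) != 1 {
			return true
		}
		// slog.Info("msg")  ->  log.Info().Msg("msg")
		c.Replace(&ast.CallExpr{
			Fun: &ast.SelectorExpr{
				X: &ast.CallExpr{Fun: &ast.SelectorExpr{
					X:   ast.NewIdent("log"), // assumes zerolog's global logger
					Sel: ast.NewIdent(sel.Sel.Name),
				}},
				Sel: ast.NewIdent("Msg"),
			},
			Args: call.Args,
		})
		return true
	})

	format.Node(os.Stdout, fset, file)
}
```

Run it against a single file (go run main.go path/to/file.go) and diff the output; a real pass would also swap the log/slog import for zerolog's and walk the whole module.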
More information about these techniques in a janky ass talk I gave at our Friday hacker meet.
So i know this isn’t a friendly place to be positive about #LLMs, but i’m sure many of my #actuallyautistic pals can relate to this:
One element of friction i’ve always had in communicating with allistics is in providing too much context as a preemption to their (inevitable) misunderstanding. I realize i’ve leaned into that pattern, not because i think they’re stupid (which is how i think it is received) but just because they operate on such a different wavelength.
I only realized this because LLMs *are* incredibly effective tools for me, because i have developed this communication pattern so deeply. (Like, I’m paying > $400/mo because of the ROI.)
Anyway, putting that out there without context for anyone that may find it interesting. #HollaFedi
This misguided trend has resulted, in our opinion, in an unfortunate state of affairs: an insistence on building NLP systems using ‘large language models’ (LLM) that require massive computing power in a futile attempt at trying to approximate the infinite object we call natural language by trying to memorize massive amounts of data. In our opinion this pseudo-scientific method is not only a waste of time and resources, but it is corrupting a generation of young scientists by luring them into thinking that language is just data – a path that will only lead to disappointments and, worse yet, to hampering any real progress in natural language understanding (NLU). Instead, we argue that it is time to re-think our approach to NLU work since we are convinced that the ‘big data’ approach to NLU is not only psychologically, cognitively, and even computationally implausible, but, and as we will show here, this blind data-driven approach to NLU is also theoretically and technically flawed.
From Machine Learning Won't Solve Natural Language Understanding, https://thegradient.pub/machine-learning-wont-solve-the-natural-language-understanding-challenge/
In addition to its effects on the climate and creative industries, AI requires huge amounts of human-annotated data, collected by exploitive contract work - largely in the global south, especially areas suffering from existing hardship. AI is colonialism but with computers instead of ships.
https://www.technologyreview.com/2022/04/20/1050392/ai-industry-appen-scale-data-labels/