<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en-us" xmlns="http://www.w3.org/2005/Atom"><title>Simon Willison's Weblog: martin-kleppmann</title><link href="http://simonwillison.net/" rel="alternate"/><link href="http://simonwillison.net/tags/martin-kleppmann.atom" rel="self"/><id>http://simonwillison.net/</id><updated>2025-12-09T03:11:19+00:00</updated><author><name>Simon Willison</name></author><entry><title>Prediction: AI will make formal verification go mainstream</title><link href="https://simonwillison.net/2025/Dec/9/formal-verification/#atom-tag" rel="alternate"/><published>2025-12-09T03:11:19+00:00</published><updated>2025-12-09T03:11:19+00:00</updated><id>https://simonwillison.net/2025/Dec/9/formal-verification/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://martin.kleppmann.com/2025/12/08/ai-formal-verification.html"&gt;Prediction: AI will make formal verification go mainstream&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Martin Kleppmann makes the case that formal verification languages (such as &lt;a href="https://dafny.org/"&gt;Dafny&lt;/a&gt;, &lt;a href="https://github.com/marcoeilers/nagini"&gt;Nagini&lt;/a&gt;, and &lt;a href="https://github.com/verus-lang/verus"&gt;Verus&lt;/a&gt;) may finally achieve mainstream adoption. Code generated by LLMs can benefit enormously from more robust verification, and LLMs themselves make these notoriously difficult systems easier to work with.&lt;/p&gt;
&lt;p&gt;The paper &lt;a href="https://arxiv.org/abs/2503.14183"&gt;Can LLMs Enable Verification in Mainstream Programming?&lt;/a&gt;, published by JetBrains Research in March 2025, found that Claude 3.5 Sonnet produced promising results for the three languages listed above.&lt;/p&gt;
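&lt;p&gt;To make the idea concrete, here is a rough sketch (my own illustration, not from the post or the paper) of the contract style these languages are built around, written as plain Python runtime assertions. A verifier such as Dafny or Verus proves equivalent requires/ensures clauses statically, for all possible inputs rather than just the ones a test happens to exercise:&lt;/p&gt;

```python
# Hypothetical illustration: pre/postconditions and a loop invariant
# expressed as runtime assertions. A verification language would check
# these at compile time via "requires", "ensures" and "invariant" clauses.

def integer_sqrt(n: int) -> int:
    # Precondition (Dafny-style: requires n >= 0)
    assert n >= 0, "precondition violated: n must be non-negative"
    r = 0
    while n >= (r + 1) * (r + 1):
        # Loop invariant: r*r never exceeds n
        assert n >= r * r
        r += 1
    # Postcondition (Dafny-style: ensures r is the integer square root of n)
    assert n >= r * r and (r + 1) * (r + 1) > n
    return r
```

&lt;p&gt;Here every assertion is only checked at runtime; the promise of the verification languages is that the same annotations become machine-checked proofs.&lt;/p&gt;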

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://lobste.rs/s/zsgdbg/prediction_ai_will_make_formal"&gt;lobste.rs&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/predictions"&gt;predictions&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/programming-languages"&gt;programming-languages&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/martin-kleppmann"&gt;martin-kleppmann&lt;/a&gt;&lt;/p&gt;



</summary><category term="predictions"/><category term="programming-languages"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="ai-assisted-programming"/><category term="martin-kleppmann"/></entry><entry><title>Quoting Martin Kleppmann</title><link href="https://simonwillison.net/2024/Apr/27/martin-kleppmann/#atom-tag" rel="alternate"/><published>2024-04-27T19:31:08+00:00</published><updated>2024-04-27T19:31:08+00:00</updated><id>https://simonwillison.net/2024/Apr/27/martin-kleppmann/#atom-tag</id><summary type="html">
    &lt;blockquote cite="https://bsky.app/profile/martin.kleppmann.com/post/3kquvol6s5b2a"&gt;&lt;p&gt;I've worked out why I don't get much value out of LLMs. The hardest and most time-consuming parts of my job involve distinguishing between ideas that are correct, and ideas that are plausible-sounding but wrong. Current AI is great at the latter type of ideas, and I don't need more of those.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://bsky.app/profile/martin.kleppmann.com/post/3kquvol6s5b2a"&gt;Martin Kleppmann&lt;/a&gt;&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/martin-kleppmann"&gt;martin-kleppmann&lt;/a&gt;&lt;/p&gt;



</summary><category term="ai"/><category term="llms"/><category term="martin-kleppmann"/></entry><entry><title>I was wrong. CRDTs are the future</title><link href="https://simonwillison.net/2020/Sep/28/i-was-wrong-crdts-are-future/#atom-tag" rel="alternate"/><published>2020-09-28T21:03:57+00:00</published><updated>2020-09-28T21:03:57+00:00</updated><id>https://simonwillison.net/2020/Sep/28/i-was-wrong-crdts-are-future/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://josephg.com/blog/crdts-are-the-future/"&gt;I was wrong. CRDTs are the future&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Joseph Gentle has been working on collaborative editors since being a developer on Google Wave back in 2010, later building ShareJS. He’s used Operational Transforms throughout, due to their performance and memory benefits over CRDTs (Conflict-free Replicated Data Types), but the latest work in that space from Martin Kleppmann and other researchers has finally persuaded him to switch allegiance to these newer algorithms. As a long-time fan of collaborative editing (ever since the Hydra/SubEthaEdit days) I thoroughly enjoyed this as an update on how things have evolved over the past decade.&lt;/p&gt;
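&lt;p&gt;For readers new to CRDTs, here is a minimal sketch (my own hypothetical example, not code from the linked post) of a grow-only counter, one of the simplest CRDTs: each replica increments only its own slot, and merge takes the per-replica maximum, which makes merges commutative, associative, and idempotent, so replicas converge no matter how often or in what order updates are exchanged:&lt;/p&gt;

```python
# Hypothetical minimal G-Counter (grow-only counter) CRDT.
# Each replica writes only to its own slot; merge is element-wise max.

class GCounter:
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts: dict[str, int] = {}

    def increment(self, amount: int = 1) -> None:
        # A replica only ever increments its own entry
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + amount

    def merge(self, other: "GCounter") -> None:
        # Element-wise max: applying the same merge twice changes nothing
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

    def value(self) -> int:
        return sum(self.counts.values())
```

&lt;p&gt;Two replicas that increment independently and then merge in either order arrive at the same total, with no coordination and no conflict resolution step, which is the property that makes CRDTs attractive for collaborative editing.&lt;/p&gt;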

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://news.ycombinator.com/item?id=24617542"&gt;Hacker News&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/algorithms"&gt;algorithms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/collaboration"&gt;collaboration&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/crdt"&gt;crdt&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/martin-kleppmann"&gt;martin-kleppmann&lt;/a&gt;&lt;/p&gt;



</summary><category term="algorithms"/><category term="collaboration"/><category term="crdt"/><category term="martin-kleppmann"/></entry></feed>