<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en-us" xmlns="http://www.w3.org/2005/Atom"><title>Simon Willison's Weblog: releases</title><link href="http://simonwillison.net/" rel="alternate"/><link href="http://simonwillison.net/tags/releases.atom" rel="self"/><id>http://simonwillison.net/</id><updated>2025-05-27T20:35:20+00:00</updated><author><name>Simon Willison</name></author><entry><title>Large Language Models can run tools in your terminal with LLM 0.26</title><link href="https://simonwillison.net/2025/May/27/llm-tools/#atom-tag" rel="alternate"/><published>2025-05-27T20:35:20+00:00</published><updated>2025-05-27T20:35:20+00:00</updated><id>https://simonwillison.net/2025/May/27/llm-tools/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;&lt;a href="https://llm.datasette.io/en/stable/changelog.html#v0-26"&gt;LLM 0.26&lt;/a&gt;&lt;/strong&gt; is out with the biggest new feature since I started the project: &lt;a href="https://llm.datasette.io/en/stable/tools.html"&gt;&lt;strong&gt;support for tools&lt;/strong&gt;&lt;/a&gt;. You can now use the LLM &lt;a href="https://llm.datasette.io/en/stable/usage.html"&gt;CLI tool&lt;/a&gt; - and &lt;a href="https://llm.datasette.io/en/stable/python-api.html"&gt;Python library&lt;/a&gt; - to grant LLMs from OpenAI, Anthropic and Gemini - plus local models from Ollama - access to any tool that you can represent as a Python function.&lt;/p&gt;
&lt;p&gt;LLM also now has &lt;a href="https://llm.datasette.io/en/stable/plugins/directory.html#tools"&gt;tool plugins&lt;/a&gt;, so you can install a plugin that adds new capabilities to whatever model you are currently using.&lt;/p&gt;
&lt;p&gt;There's a lot to cover here, but here are the highlights:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LLM can run tools now&lt;/strong&gt;! You can &lt;strong&gt;install tools from plugins&lt;/strong&gt; and load them by name with &lt;code&gt;--tool/-T name_of_tool&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;You can also &lt;strong&gt;pass in Python function code on the command-line&lt;/strong&gt; with the &lt;code&gt;--functions&lt;/code&gt; option.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Python API supports tools too&lt;/strong&gt;: &lt;code&gt;llm.get_model("gpt-4.1").chain("show me the locals", tools=[locals]).text()&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Tools work in &lt;strong&gt;both async and sync contexts&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here's what's covered in this post:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://simonwillison.net/2025/May/27/llm-tools/#trying-it-out"&gt;Trying it out&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://simonwillison.net/2025/May/27/llm-tools/#more-interesting-tools-from-plugins"&gt;More interesting tools from plugins&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://simonwillison.net/2025/May/27/llm-tools/#ad-hoc-command-line-tools-with-functions"&gt;Ad-hoc command-line tools with --functions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://simonwillison.net/2025/May/27/llm-tools/#tools-in-the-llm-python-api"&gt;Tools in the LLM Python API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://simonwillison.net/2025/May/27/llm-tools/#why-did-this-take-me-so-long-"&gt;Why did this take me so long?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://simonwillison.net/2025/May/27/llm-tools/#is-this-agents-then-"&gt;Is this agents then?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://simonwillison.net/2025/May/27/llm-tools/#what-s-next-for-tools-in-llm-"&gt;What's next for tools in LLM?&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;h4 id="trying-it-out"&gt;Trying it out&lt;/h4&gt;
&lt;p&gt;First, &lt;a href="https://llm.datasette.io/en/stable/setup.html"&gt;install the latest LLM&lt;/a&gt;. It may not be on Homebrew yet so I suggest using &lt;code&gt;pip&lt;/code&gt; or &lt;code&gt;pipx&lt;/code&gt; or &lt;code&gt;uv&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;uv tool install llm&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;If you have it already, &lt;a href="https://llm.datasette.io/en/stable/setup.html#upgrading-to-the-latest-version"&gt;upgrade it&lt;/a&gt;.&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;uv tool upgrade llm&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Tools work with other vendors, but let's stick with OpenAI for the moment. Give LLM an OpenAI API key:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;llm keys &lt;span class="pl-c1"&gt;set&lt;/span&gt; openai
&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Paste key here&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Now let's run our first tool:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;llm --tool llm_version &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;What version?&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; --td&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Here's what I get:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2025/llm-tools.gif" alt="Animated demo. I run that command, LLM shows Tool call: llm_version({}) in yellow, then 0.26a1 in green, then streams out the text The installed version is 0.26a1" style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;llm_version&lt;/code&gt; is a very simple demo tool that ships with LLM. Running &lt;code&gt;--tool llm_version&lt;/code&gt; exposes that tool to the model. You can specify &lt;code&gt;--tool&lt;/code&gt; multiple times to enable multiple tools, and there's a shorter &lt;code&gt;-T&lt;/code&gt; alias to save on typing.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;--td&lt;/code&gt; option stands for &lt;code&gt;--tools-debug&lt;/code&gt; - it causes LLM to output information about tool calls and their responses so you can peek behind the scenes.&lt;/p&gt;
&lt;p&gt;This is using the default LLM model, which is usually &lt;code&gt;gpt-4o-mini&lt;/code&gt;. I switched it to &lt;code&gt;gpt-4.1-mini&lt;/code&gt; (better but fractionally more expensive) by running:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;llm models default gpt-4.1-mini&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;You can try other models using the &lt;code&gt;-m&lt;/code&gt; option. Here's how to run a similar demo of the &lt;code&gt;llm_time&lt;/code&gt; built-in tool using &lt;code&gt;o4-mini&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;llm --tool llm_time &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;What time is it?&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; --td -m o4-mini&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Outputs:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;Tool call: llm_time({})&lt;/code&gt;&lt;/p&gt;
&lt;div class="highlight highlight-source-json"&gt;&lt;pre&gt;  {
    &lt;span class="pl-ent"&gt;"utc_time"&lt;/span&gt;: &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;2025-05-27 19:15:55 UTC&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;,
    &lt;span class="pl-ent"&gt;"utc_time_iso"&lt;/span&gt;: &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;2025-05-27T19:15:55.288632+00:00&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;,
    &lt;span class="pl-ent"&gt;"local_timezone"&lt;/span&gt;: &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;PDT&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;,
    &lt;span class="pl-ent"&gt;"local_time"&lt;/span&gt;: &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;2025-05-27 12:15:55&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;,
    &lt;span class="pl-ent"&gt;"timezone_offset"&lt;/span&gt;: &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;UTC-7:00&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;,
    &lt;span class="pl-ent"&gt;"is_dst"&lt;/span&gt;: &lt;span class="pl-c1"&gt;true&lt;/span&gt;
  }&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The current time is 12:15 PM PDT (UTC−7:00) on May 27, 2025, which corresponds to 7:15 PM UTC.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Models from tool-supporting plugins work too. Here's Anthropic's Claude Sonnet 4:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;llm install llm-anthropic -U
llm keys &lt;span class="pl-c1"&gt;set&lt;/span&gt; anthropic
&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Paste Anthropic key here&lt;/span&gt;
llm --tool llm_version &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;What version?&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; --td -m claude-4-sonnet&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Or Google's Gemini 2.5 Flash:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;llm install llm-gemini -U
llm keys &lt;span class="pl-c1"&gt;set&lt;/span&gt; gemini
&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Paste Gemini key here&lt;/span&gt;
llm --tool llm_version &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;What version?&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; --td -m gemini-2.5-flash-preview-05-20&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;You can even run simple tools with Qwen3:4b, a &lt;em&gt;tiny&lt;/em&gt; (2.6GB) model that I run using &lt;a href="https://ollama.com/"&gt;Ollama&lt;/a&gt;:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;ollama pull qwen3:4b
llm install &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;llm-ollama&amp;gt;=0.11a0&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;
llm --tool llm_version &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;What version?&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; --td -m qwen3:4b&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Qwen 3 calls the tool, thinks about it a bit and then prints out a response:
&lt;img src="https://static.simonwillison.net/static/2025/llm-tools-qwen.jpg" alt="Tool call: llm_version({}) 0.26a1&amp;lt;think&amp;gt; Okay, the user asked, &amp;quot;What version?&amp;quot; I need to respond with the version of the LLM. The tool provided is llm_version, which returns the installed version. I called that function and got the response 0.26a1. Now I should present this information clearly. Let me check if there's any additional context needed, but the user just asked for the version, so a straightforward answer should work. I'll state the version number and maybe mention that it's the installed version. Keep it simple and precise. &amp;lt;/think&amp;gt; The installed version of the LLM is 0.26a1." style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;h4 id="more-interesting-tools-from-plugins"&gt;More interesting tools from plugins&lt;/h4&gt;
&lt;p&gt;This demo has been pretty weak so far. Let's do something a whole lot more interesting.&lt;/p&gt;
&lt;p&gt;LLMs are notoriously bad at mathematics. This is deeply surprising to many people: supposedly the most sophisticated computer systems we've ever built can't multiply two large numbers together?&lt;/p&gt;
&lt;p&gt;We can fix that with tools.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://github.com/simonw/llm-tools-simpleeval"&gt;llm-tools-simpleeval&lt;/a&gt; plugin exposes the &lt;a href="https://github.com/danthedeckie/simpleeval"&gt;simpleeval&lt;/a&gt; "Simple Safe Sandboxed Extensible Expression Evaluator for Python" library by Daniel Fairhead. This provides a robust-enough sandbox for executing simple Python expressions.&lt;/p&gt;
&lt;p&gt;First, install the plugin:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;llm install llm-tools-simpleeval&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Trying that out:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;llm -T simple_eval &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;Calculate 1234 * 4346 / 32414 and square root it&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; --td&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;I got back this - it tried &lt;code&gt;sqrt()&lt;/code&gt; first, then when that didn't work switched to &lt;code&gt;** 0.5&lt;/code&gt; instead:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Tool call: simple_eval({'expression': '1234 * 4346 / 32414'})
  165.45208860368976


Tool call: simple_eval({'expression': 'sqrt(1234 * 4346 / 32414)'})
  Error: Function 'sqrt' not defined, for expression 'sqrt(1234 * 4346 / 32414)'.


Tool call: simple_eval({'expression': '(1234 * 4346 / 32414) ** 0.5'})
  12.862818066181678

The result of (1234 * 4346 / 32414) is approximately
165.45, and the square root of this value is approximately 12.86.
&lt;/code&gt;&lt;/pre&gt;
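&lt;p&gt;The sandboxing idea behind simpleeval can be illustrated in a few lines of standard-library Python: parse the expression into an AST and only evaluate a whitelist of node and operator types. This is a minimal sketch of the concept, not the actual simpleeval implementation:&lt;/p&gt;

```python
# Minimal sketch of a sandboxed expression evaluator: parse the expression
# into an AST and only evaluate whitelisted node and operator types.
# Illustrative only - not the actual simpleeval implementation.
import ast
import operator

OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expression):
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        # Anything else - names, function calls, attribute access - is rejected
        raise ValueError("Disallowed expression")
    return walk(ast.parse(expression, mode="eval"))

print(safe_eval("1234 * 4346 / 32414"))           # 165.45208860368976
print(safe_eval("(1234 * 4346 / 32414) ** 0.5"))  # 12.862818066181678
```

&lt;p&gt;A name like &lt;code&gt;sqrt&lt;/code&gt; falls through to the error case here, loosely mirroring the "Function 'sqrt' not defined" error the model ran into above.&lt;/p&gt;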
&lt;p&gt;I've released four tool plugins so far:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/simonw/llm-tools-simpleeval"&gt;llm-tools-simpleeval&lt;/a&gt;&lt;/strong&gt; - as shown above, simple expression support for things like mathematics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/simonw/llm-tools-quickjs"&gt;llm-tools-quickjs&lt;/a&gt;&lt;/strong&gt; - provides access to a sandboxed QuickJS JavaScript interpreter, allowing LLMs to run JavaScript code. The environment persists between calls so the model can set variables and build functions and reuse them later on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/simonw/llm-tools-sqlite"&gt;llm-tools-sqlite&lt;/a&gt;&lt;/strong&gt; - read-only SQL query access to a local SQLite database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/simonw/llm-tools-datasette"&gt;llm-tools-datasette&lt;/a&gt;&lt;/strong&gt; - run SQL queries against a remote &lt;a href="https://datasette.io/"&gt;Datasette&lt;/a&gt; instance!&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let's try that Datasette one now:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;llm install llm-tools-datasette
llm -T &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;Datasette("https://datasette.io/content")&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; --td &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;What has the most stars?&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The syntax here is slightly different: the Datasette plugin is what I'm calling a "toolbox" - a plugin that has multiple tools inside it and can be configured with a constructor.&lt;/p&gt;
&lt;p&gt;Specifying &lt;code&gt;--tool&lt;/code&gt; as &lt;code&gt;Datasette("https://datasette.io/content")&lt;/code&gt; provides the plugin with the URL to the Datasette instance it should use - in this case the &lt;a href="https://datasette.io/content"&gt;content database&lt;/a&gt; that powers the Datasette website.&lt;/p&gt;
&lt;p&gt;Here's the output, with the schema section truncated for brevity:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2025/datasette-tool.jpg" alt="I run that command. It first does a Tool call to Datasette_query with SELECT name, stars, FROM repos ORDER BY stars DESC LIMIT 1. This returns an error message because there is no such column stars. It calls the Datasette_schema() function which returns a whole load of CREATE TABLE statements. Then it executes Datasette_query again this time with SELECT name, stargazers_count FROM repos ORDER BY stargazers_count DESC LIMIT 1. This returns name=datasette a count of 10020, so the model replies and says The repository with the most stars is &amp;quot;datasette&amp;quot; with 10,020 stars." style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;p&gt;This question triggered three calls. The model started by guessing the query! It tried &lt;code&gt;SELECT name, stars FROM repos ORDER BY stars DESC LIMIT 1&lt;/code&gt;, which failed because the &lt;code&gt;stars&lt;/code&gt; column doesn't exist.&lt;/p&gt;
&lt;p&gt;The tool call returned an error, so the model had another go - this time calling the &lt;code&gt;Datasette_schema()&lt;/code&gt; tool to get the schema of the database.&lt;/p&gt;
&lt;p&gt;Based on that schema it assembled and then executed the correct query, and output its interpretation of the result:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The repository with the most stars is "datasette" with 10,020 stars.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Getting to this point was a real &lt;a href="https://www.penny-arcade.com/comic/2010/09/17/mine-all-mine-part-one"&gt;Penny Arcade Minecraft moment&lt;/a&gt; for me. The possibilities here are &lt;em&gt;limitless&lt;/em&gt;. If you can write a Python function for it, you can trigger it from an LLM.&lt;/p&gt;
&lt;h4 id="ad-hoc-command-line-tools-with-functions"&gt;Ad-hoc command-line tools with &lt;code&gt;--functions&lt;/code&gt;
&lt;/h4&gt;
&lt;p&gt;I'm looking forward to people building more plugins, but there's also a much less structured, more ad-hoc way to use tools with the LLM CLI tool: the &lt;code&gt;--functions&lt;/code&gt; option.&lt;/p&gt;
&lt;p&gt;This was inspired by a similar feature &lt;a href="https://sqlite-utils.datasette.io/en/stable/cli.html#defining-custom-sql-functions"&gt;I added to sqlite-utils&lt;/a&gt; a while ago.&lt;/p&gt;
&lt;p&gt;You can pass a block of literal Python code directly to the CLI tool using the &lt;code&gt;--functions&lt;/code&gt; option, and any functions defined there will be made available to the model as tools.&lt;/p&gt;
&lt;p&gt;Here's an example that adds the ability to search my blog:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;llm --functions &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;
&lt;span class="pl-s"&gt;import httpx&lt;/span&gt;
&lt;span class="pl-s"&gt;&lt;/span&gt;
&lt;span class="pl-s"&gt;def search_blog(q):&lt;/span&gt;
&lt;span class="pl-s"&gt;    "Search Simon Willison blog"&lt;/span&gt;
&lt;span class="pl-s"&gt;    return httpx.get("https://simonwillison.net/search/", params={"q": q}).content&lt;/span&gt;
&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; --td &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;Three features of sqlite-utils&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; -s &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;use Simon search&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This is &lt;em&gt;such a hack&lt;/em&gt; of an implementation! I'm literally just hitting &lt;a href="https://simonwillison.net/search/?q=pelicans"&gt;my search page&lt;/a&gt; and dumping the HTML straight back into the model.&lt;/p&gt;
&lt;p&gt;It totally works though - it helps that the GPT-4.1 series all handle a million tokens now, so crufty HTML is no longer a problem for them.&lt;/p&gt;
&lt;p&gt;(I had to add "use Simon search" as the system prompt because without it the model would try to answer the question itself, rather than using the search tool I provided. System prompts for tools are clearly a &lt;em&gt;big topic&lt;/em&gt;, Anthropic's own web search tool has &lt;a href="https://simonwillison.net/2025/May/25/claude-4-system-prompt/#search-instructions"&gt;6,471 tokens of instructions&lt;/a&gt;!)&lt;/p&gt;
&lt;p&gt;Here's the output I got just now:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Three features of sqlite-utils are:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;It is a combined CLI tool and Python library for manipulating SQLite databases.&lt;/li&gt;
&lt;li&gt;It can automatically add columns to a database table if you attempt to insert data that doesn't quite fit (using the alter=True option).&lt;/li&gt;
&lt;li&gt;It supports plugins, allowing the extension of its functionality through third-party or custom plugins.&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
&lt;p&gt;A better search tool would have more detailed instructions and would return relevant snippets of the results, not just the headline and first paragraph for each result. This is pretty great for just four lines of Python though!&lt;/p&gt;
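&lt;p&gt;For illustration, here's roughly what "return relevant snippets" could look like using only the standard library's &lt;code&gt;HTMLParser&lt;/code&gt;. The tag names here are assumptions made for the sketch, not the real markup of my search page:&lt;/p&gt;

```python
# Hypothetical sketch: reduce a search results page to plain-text headlines
# before handing it to the model, instead of dumping raw HTML.
# The h3 tag choice is an assumption, not the site's real markup.
from html.parser import HTMLParser

class HeadlineExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_heading = False
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        if tag == "h3":
            self.in_heading = True

    def handle_endtag(self, tag):
        if tag == "h3":
            self.in_heading = False

    def handle_data(self, data):
        if self.in_heading and data.strip():
            self.headlines.append(data.strip())

def extract_headlines(html):
    parser = HeadlineExtractor()
    parser.feed(html)
    return parser.headlines

print(extract_headlines(
    "<h3>sqlite-utils 3.36</h3><p>intro...</p><h3>sqlite-utils plugins</h3>"
))
# ['sqlite-utils 3.36', 'sqlite-utils plugins']
```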
&lt;h4 id="tools-in-the-llm-python-api"&gt;Tools in the LLM Python API&lt;/h4&gt;
&lt;p&gt;LLM is both a CLI tool and a Python library at the same time (similar to my other project &lt;a href="https://sqlite-utils.datasette.io/"&gt;sqlite-utils&lt;/a&gt;). The LLM Python library &lt;a href="https://llm.datasette.io/en/stable/python-api.html#tools"&gt;grew tool support&lt;/a&gt; in LLM 0.26 as well.&lt;/p&gt;
&lt;p&gt;Here's a simple example solving one of the previously hardest problems in LLMs: counting the number of Rs in "strawberry":&lt;/p&gt;
&lt;pre&gt;&lt;span class="pl-k"&gt;import&lt;/span&gt; &lt;span class="pl-s1"&gt;llm&lt;/span&gt;

&lt;span class="pl-k"&gt;def&lt;/span&gt; &lt;span class="pl-en"&gt;count_char_in_text&lt;/span&gt;(&lt;span class="pl-s1"&gt;char&lt;/span&gt;: &lt;span class="pl-smi"&gt;str&lt;/span&gt;, &lt;span class="pl-s1"&gt;text&lt;/span&gt;: &lt;span class="pl-smi"&gt;str&lt;/span&gt;) &lt;span class="pl-c1"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="pl-smi"&gt;int&lt;/span&gt;:
    &lt;span class="pl-s"&gt;"How many times does char appear in text?"&lt;/span&gt;
    &lt;span class="pl-k"&gt;return&lt;/span&gt; &lt;span class="pl-s1"&gt;text&lt;/span&gt;.&lt;span class="pl-c1"&gt;count&lt;/span&gt;(&lt;span class="pl-s1"&gt;char&lt;/span&gt;)

&lt;span class="pl-s1"&gt;model&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s1"&gt;llm&lt;/span&gt;.&lt;span class="pl-c1"&gt;get_model&lt;/span&gt;(&lt;span class="pl-s"&gt;"gpt-4.1-mini"&lt;/span&gt;)
&lt;span class="pl-s1"&gt;chain_response&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s1"&gt;model&lt;/span&gt;.&lt;span class="pl-c1"&gt;chain&lt;/span&gt;(
    &lt;span class="pl-s"&gt;"Rs in strawberry?"&lt;/span&gt;,
    &lt;span class="pl-s1"&gt;tools&lt;/span&gt;&lt;span class="pl-c1"&gt;=&lt;/span&gt;[&lt;span class="pl-s1"&gt;count_char_in_text&lt;/span&gt;],
    &lt;span class="pl-s1"&gt;after_call&lt;/span&gt;&lt;span class="pl-c1"&gt;=&lt;/span&gt;&lt;span class="pl-s1"&gt;print&lt;/span&gt;
)
&lt;span class="pl-k"&gt;for&lt;/span&gt; &lt;span class="pl-s1"&gt;chunk&lt;/span&gt; &lt;span class="pl-c1"&gt;in&lt;/span&gt; &lt;span class="pl-s1"&gt;chain_response&lt;/span&gt;:
    &lt;span class="pl-en"&gt;print&lt;/span&gt;(&lt;span class="pl-s1"&gt;chunk&lt;/span&gt;, &lt;span class="pl-s1"&gt;end&lt;/span&gt;&lt;span class="pl-c1"&gt;=&lt;/span&gt;&lt;span class="pl-s"&gt;""&lt;/span&gt;, &lt;span class="pl-s1"&gt;flush&lt;/span&gt;&lt;span class="pl-c1"&gt;=&lt;/span&gt;&lt;span class="pl-c1"&gt;True&lt;/span&gt;)&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;after_call=print&lt;/code&gt; argument is a way to peek at the tool calls, the Python equivalent of the &lt;code&gt;--td&lt;/code&gt; option from earlier.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;model.chain()&lt;/code&gt; method is new: it's similar to &lt;code&gt;model.prompt()&lt;/code&gt; but knows how to spot returned tool call requests, execute them and then prompt the model again with the results. A &lt;code&gt;model.chain()&lt;/code&gt; could potentially execute dozens of responses on the way to giving you a final answer.&lt;/p&gt;
&lt;p&gt;You can iterate over the &lt;code&gt;chain_response&lt;/code&gt; to output those tokens as they are returned by the model, even across multiple responses.&lt;/p&gt;
&lt;p&gt;I got back this:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;Tool(name='count_char_in_text', description='How many times does char appear in text?', input_schema={'properties': {'char': {'type': 'string'}, 'text': {'type': 'string'}}, 'required': ['char', 'text'], 'type': 'object'}, implementation=&amp;lt;function count_char_in_text at 0x109dd4f40&amp;gt;, plugin=None) ToolCall(name='count_char_in_text', arguments={'char': 'r', 'text': 'strawberry'}, tool_call_id='call_DGXcM8b2B26KsbdMyC1uhGUu') ToolResult(name='count_char_in_text', output='3', tool_call_id='call_DGXcM8b2B26KsbdMyC1uhGUu', instance=None, exception=None)&lt;/code&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;There are 3 letter "r"s in the word "strawberry".&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;LLM's Python library also supports &lt;code&gt;asyncio&lt;/code&gt;, and tools can be &lt;code&gt;async def&lt;/code&gt; functions &lt;a href="https://llm.datasette.io/en/latest/python-api.html#tool-functions-can-be-sync-or-async"&gt;as described here&lt;/a&gt;. If a model requests multiple async tools at once the library will run them concurrently with &lt;code&gt;asyncio.gather()&lt;/code&gt;.&lt;/p&gt;
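&lt;p&gt;The concurrent execution is plain &lt;code&gt;asyncio.gather()&lt;/code&gt; under the hood. Here's an illustrative sketch of that pattern with two stand-in async tool functions - not LLM's actual internals:&lt;/p&gt;

```python
# Illustrative sketch of running multiple requested tool calls concurrently
# with asyncio.gather(). The tool functions are stand-ins, not part of LLM.
import asyncio

async def lookup_weather(city):
    await asyncio.sleep(0.1)  # simulate a network call
    return f"{city}: sunny"

async def lookup_time(city):
    await asyncio.sleep(0.1)  # simulate a network call
    return f"{city}: 12:15 PM"

async def run_tool_calls():
    # Both calls sleep for 0.1s; gather runs them concurrently,
    # so the pair completes in ~0.1s rather than ~0.2s.
    return await asyncio.gather(
        lookup_weather("San Francisco"),
        lookup_time("San Francisco"),
    )

results = asyncio.run(run_tool_calls())
print(results)  # ['San Francisco: sunny', 'San Francisco: 12:15 PM']
```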
&lt;p&gt;The Toolbox form of tools is supported too: you can pass &lt;code&gt;tools=[Datasette("https://datasette.io/content")]&lt;/code&gt; to that &lt;code&gt;chain()&lt;/code&gt; method to achieve the same effect as the &lt;code&gt;--tool 'Datasette(...)'&lt;/code&gt; option from earlier.&lt;/p&gt;
&lt;h4 id="why-did-this-take-me-so-long-"&gt;Why did this take me so long?&lt;/h4&gt;
&lt;p&gt;I've been tracking &lt;a href="https://simonwillison.net/tags/llm-tool-use/"&gt;llm-tool-use&lt;/a&gt; for a while. I first saw the trick described in &lt;a href="https://arxiv.org/abs/2210.03629"&gt;the ReAct paper&lt;/a&gt;, published in October 2022 (a month before the initial release of ChatGPT). I built &lt;a href="https://til.simonwillison.net/llms/python-react-pattern"&gt;a simple implementation of that&lt;/a&gt; in a few dozen lines of Python. It was clearly a very neat pattern!&lt;/p&gt;
&lt;p&gt;Over the past few years it has become &lt;em&gt;very&lt;/em&gt; apparent that tool use is the single most effective way to extend the abilities of language models. It's such a simple trick: you tell the model that there are tools it can use, and have it output special syntax (JSON or XML or &lt;code&gt;tool_name(arguments)&lt;/code&gt;, it doesn't matter which) requesting a tool action, then stop.&lt;/p&gt;
&lt;p&gt;Your code parses that output, runs the requested tools and then starts a new prompt to the model with the results.&lt;/p&gt;
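&lt;p&gt;That loop fits in remarkably little code. Here's a toy sketch of its shape, with a scripted stand-in for the model - no vendor API, just the parse-execute-reprompt cycle:&lt;/p&gt;

```python
# Toy sketch of the tool-use loop. The "model" is a scripted stand-in that
# first requests a tool call, then answers once it has seen the result.

def fake_model(messages):
    # A real model decides this itself; this stub always asks for count_r first.
    tool_results = [m for m in messages if m["role"] == "tool"]
    if not tool_results:
        return {"tool_call": {"name": "count_r", "arguments": {"text": "strawberry"}}}
    return {"text": f'There are {tool_results[-1]["content"]} Rs in "strawberry".'}

TOOLS = {"count_r": lambda text: str(text.count("r"))}

def run_loop(prompt):
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = fake_model(messages)
        if "tool_call" in reply:
            call = reply["tool_call"]
            # Run the requested tool, append the result, prompt the model again
            messages.append({"role": "tool", "content": TOOLS[call["name"]](**call["arguments"])})
            continue
        return reply["text"]

print(run_loop("Rs in strawberry?"))  # There are 3 Rs in "strawberry".
```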
&lt;p&gt;This works with almost &lt;strong&gt;every model&lt;/strong&gt; now. Most of them are specifically trained for tool usage, and there are leaderboards like the &lt;a href="https://gorilla.cs.berkeley.edu/leaderboard.html"&gt;Berkeley Function-Calling Leaderboard&lt;/a&gt; dedicated to tracking which models do the best job of it.&lt;/p&gt;
&lt;p&gt;All of the big model vendors - OpenAI, Anthropic, Google, Mistral, Meta - have a version of this baked into their API, either called tool usage or function calling. It's all the same underlying pattern.&lt;/p&gt;
&lt;p&gt;The models you can run locally are getting good at this too. Ollama &lt;a href="https://ollama.com/blog/tool-support"&gt;added tool support&lt;/a&gt; last year, and it's baked into the &lt;a href="https://github.com/ggml-org/llama.cpp/blob/master/docs/function-calling.md"&gt;llama.cpp&lt;/a&gt; server as well.&lt;/p&gt;
&lt;p&gt;It's been clear for a while that LLM absolutely needed to grow support for tools. I released &lt;a href="https://simonwillison.net/2025/Feb/28/llm-schemas/"&gt;LLM schema support&lt;/a&gt; back in February as a stepping stone towards this. I'm glad to finally have it over the line.&lt;/p&gt;
&lt;p&gt;As always with LLM, the challenge was designing an abstraction layer that could work across as many different models as possible. A year ago I didn't feel that model tool support was mature enough to figure this out. Today there's a very definite consensus among vendors about how this should work, which finally gave me the confidence to implement it.&lt;/p&gt;
&lt;p&gt;I also presented a workshop at PyCon US two weeks ago about &lt;a href="https://simonwillison.net/2025/May/15/building-on-llms/"&gt;Building software on top of Large Language Models&lt;/a&gt;, which was exactly the incentive I needed to finally get this working in an alpha! Here's the &lt;a href="https://building-with-llms-pycon-2025.readthedocs.io/en/latest/tools.html"&gt;tools section&lt;/a&gt; from that tutorial.&lt;/p&gt;
&lt;h4 id="is-this-agents-then-"&gt;Is this agents then?&lt;/h4&gt;
&lt;p&gt;&lt;em&gt;Sigh&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;I still &lt;a href="https://simonwillison.net/2024/Dec/31/llms-in-2024/#-agents-still-haven-t-really-happened-yet"&gt;don't like&lt;/a&gt; using the term "agents". I worry that developers will think &lt;a href="https://simonwillison.net/2025/May/22/tools-in-a-loop/"&gt;tools in a loop&lt;/a&gt;, regular people will think virtual AI assistants &lt;a href="https://en.m.wikipedia.org/wiki/Her_(2013_film)"&gt;voiced by Scarlett Johansson&lt;/a&gt; and academics will &lt;a href="https://simonwillison.net/2025/Mar/19/worms-and-dogs-and-countries/"&gt;grumble about thermostats&lt;/a&gt;. But in the LLM world we appear to be converging on "tools in a loop", and that's absolutely what this is.&lt;/p&gt;
&lt;p&gt;So yes, if you want to build "agents" then LLM 0.26 is a great way to do that.&lt;/p&gt;
&lt;h4 id="what-s-next-for-tools-in-llm-"&gt;What's next for tools in LLM?&lt;/h4&gt;
&lt;p&gt;I already have an &lt;a href="https://github.com/simonw/llm/milestone/13"&gt;LLM tools v2 milestone&lt;/a&gt; with 13 issues in it, mainly around improvements to how tool execution logs are displayed but with quite a few minor issues I decided shouldn't block this release. There's a bunch more stuff in the &lt;a href="https://github.com/simonw/llm/issues?q=is%3Aissue%20state%3Aopen%20label%3Atools"&gt;tools label&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I'm most excited about the potential for plugins.&lt;/p&gt;
&lt;p&gt;Writing tool plugins is &lt;em&gt;really fun&lt;/em&gt;. I have an &lt;a href="https://github.com/simonw/llm-plugin-tools"&gt;llm-plugin-tools&lt;/a&gt; cookiecutter template that I've been using for my own, and I plan to put together a tutorial around that soon.&lt;/p&gt;
&lt;p&gt;There's more work to be done adding tool support to more model plugins. I added &lt;a href="https://llm.datasette.io/en/stable/plugins/advanced-model-plugins.html#supporting-tools"&gt;details of this&lt;/a&gt; to the advanced plugins documentation. This commit &lt;a href="https://github.com/simonw/llm-gemini/commit/a7f1096cfbb733018eb41c29028a8cc6160be298"&gt;adding tool support for Gemini&lt;/a&gt; is a useful illustration of what's involved.&lt;/p&gt;

&lt;p&gt;And yes, &lt;strong&gt;Model Context Protocol&lt;/strong&gt; support is clearly on the agenda as well. MCP is emerging, at a frankly bewildering speed, as the standard way for models to access tools. Two weeks ago it wasn't directly supported by the APIs of any of the major vendors. In just the past eight days &lt;a href="https://simonwillison.net/2025/May/27/mistral-agents-api/"&gt;it's been added&lt;/a&gt; by OpenAI, Anthropic &lt;em&gt;and&lt;/em&gt; Mistral! It's feeling like a lot less of a moving target today.&lt;/p&gt;
&lt;p&gt;I want LLM to be able to act as an MCP client, so that any of the MCP servers people are writing can be easily accessed as additional sources of tools for LLM.&lt;/p&gt;
&lt;p&gt;If you're interested in talking more about what comes next for LLM, &lt;a href="https://datasette.io/discord-llm"&gt;come and chat to us in our Discord&lt;/a&gt;.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/openai"&gt;openai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/anthropic"&gt;anthropic&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/gemini"&gt;gemini&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm-tool-use"&gt;llm-tool-use&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-agents"&gt;ai-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ollama"&gt;ollama&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="projects"/><category term="releases"/><category term="ai"/><category term="openai"/><category term="generative-ai"/><category term="llms"/><category term="llm"/><category term="anthropic"/><category term="gemini"/><category term="llm-tool-use"/><category term="ai-agents"/><category term="ollama"/></entry><entry><title>llm-openrouter 0.3</title><link href="https://simonwillison.net/2024/Dec/8/llm-openrouter-03/#atom-tag" rel="alternate"/><published>2024-12-08T23:56:14+00:00</published><updated>2024-12-08T23:56:14+00:00</updated><id>https://simonwillison.net/2024/Dec/8/llm-openrouter-03/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/simonw/llm-openrouter/releases/tag/0.3"&gt;llm-openrouter 0.3&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;New release of my &lt;a href="https://github.com/simonw/llm-openrouter"&gt;llm-openrouter&lt;/a&gt; plugin, which allows &lt;a href="https://llm.datasette.io/"&gt;LLM&lt;/a&gt; to access models hosted by &lt;a href="https://openrouter.ai/"&gt;OpenRouter&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Quoting the release notes:&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Enable image attachments for models that support images. Thanks, &lt;a href="https://github.com/montasaurus"&gt;Adam Montgomery&lt;/a&gt;. &lt;a href="https://github.com/simonw/llm-openrouter/issues/12"&gt;#12&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Provide async model access. &lt;a href="https://github.com/simonw/llm-openrouter/issues/15"&gt;#15&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Fix documentation to list correct &lt;code&gt;LLM_OPENROUTER_KEY&lt;/code&gt; environment variable. &lt;a href="https://github.com/simonw/llm-openrouter/issues/10"&gt;#10&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/plugins"&gt;plugins&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/openrouter"&gt;openrouter&lt;/a&gt;&lt;/p&gt;



</summary><category term="plugins"/><category term="releases"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="llm"/><category term="openrouter"/></entry><entry><title>Prompts.js</title><link href="https://simonwillison.net/2024/Dec/7/prompts-js/#atom-tag" rel="alternate"/><published>2024-12-07T20:35:26+00:00</published><updated>2024-12-07T20:35:26+00:00</updated><id>https://simonwillison.net/2024/Dec/7/prompts-js/#atom-tag</id><summary type="html">
    &lt;p&gt;I've been putting the &lt;a href="https://openai.com/index/openai-o1-system-card/"&gt;new o1 model&lt;/a&gt; from OpenAI through its paces, in particular for code. I'm very impressed - it feels like it's giving me a similar code quality to Claude 3.5 Sonnet, at least for Python and JavaScript and Bash... but it's returning output noticeably faster.&lt;/p&gt;
&lt;p&gt;I decided to try building a library I've had in mind for a while - an &lt;code&gt;await ...&lt;/code&gt; based alternative implementation of the browser's built-in &lt;code&gt;alert()&lt;/code&gt;, &lt;code&gt;confirm()&lt;/code&gt; and &lt;code&gt;prompt()&lt;/code&gt; functions.&lt;/p&gt;
&lt;p&gt;Short version: it lets you do this:&lt;/p&gt;
&lt;div class="highlight highlight-source-js"&gt;&lt;pre&gt;&lt;span class="pl-k"&gt;await&lt;/span&gt; &lt;span class="pl-v"&gt;Prompts&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-en"&gt;alert&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;
    &lt;span class="pl-s"&gt;"This is an alert message!"&lt;/span&gt;
&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;

&lt;span class="pl-k"&gt;const&lt;/span&gt; &lt;span class="pl-s1"&gt;confirmedBoolean&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-k"&gt;await&lt;/span&gt; &lt;span class="pl-v"&gt;Prompts&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-en"&gt;confirm&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;
    &lt;span class="pl-s"&gt;"Are you sure you want to proceed?"&lt;/span&gt;
&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;

&lt;span class="pl-k"&gt;const&lt;/span&gt; &lt;span class="pl-s1"&gt;nameString&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-k"&gt;await&lt;/span&gt; &lt;span class="pl-v"&gt;Prompts&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-en"&gt;prompt&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;
    &lt;span class="pl-s"&gt;"Please enter your name"&lt;/span&gt;
&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Here's &lt;a href="https://github.com/simonw/prompts-js/blob/main/index.js"&gt;the source code&lt;/a&gt; and a &lt;a href="https://tools.simonwillison.net/prompts-js"&gt;a live demo&lt;/a&gt; where you can try it out:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Animated demo of Prompts.js - three buttons, one for show alert, one for show confirm and one for show prompt. The alert one shows an alert message, the confirm one askes if you want to proceed with OK and Cancel buttons that return true or false, the prompt one asks for your name and returns that as a string or null if you cancel it." src="https://static.simonwillison.net/static/2024/prompts-js-cropped.gif" /&gt;&lt;/p&gt;
&lt;p&gt;I think there's something really interesting about using &lt;code&gt;await&lt;/code&gt; in this way.&lt;/p&gt;
&lt;p&gt;In the past every time I've used it in Python or JavaScript I've had an expectation that the thing I'm awaiting is going to return as quickly as possible - that I'm really just using this as a performance hack to unblock the event loop and allow it to do something else while I'm waiting for an operation to complete.&lt;/p&gt;
&lt;p&gt;That's not actually necessary at all! There's no reason not to use &lt;code&gt;await&lt;/code&gt; for operations that could take a long time to complete, such as a user interacting with a modal dialog.&lt;/p&gt;
&lt;p&gt;Having LLMs around to help prototype this kind of library idea is really fun. This is another example of something I probably wouldn't have bothered exploring without a model to do most of the code writing work for me.&lt;/p&gt;
&lt;p&gt;I didn't quite get it with a single prompt, but after a little bit of back-and-forth with o1 I got what I wanted - the main thing missing at first was sensible keyboard support (in particular the Enter and Escape keys).&lt;/p&gt;
&lt;p id="opening-prompt"&gt;My opening prompt was the following:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;Write me a JavaScript library - no extra dependencies - which gives me the following functions:&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;await Prompts.alert("hi there"); -&amp;gt; displays a modal with a message and waits for you to click OK on it&lt;/code&gt;&lt;br /&gt;
&lt;code&gt;await Prompts.confirm("Are you sure") -&amp;gt; an OK and cancel option, returns true or false&amp;lt;br&amp;gt;&lt;/code&gt;&lt;br /&gt;
&lt;code&gt;await Prompts.prompt("What is your name?") -&amp;gt; a form asking the user's name, an OK button and cancel - if cancel returns null otherwise returns a string&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;These are equivalent to the browser builtin alert() and confirm() and prompt() - but I want them to work as async functions and to implement their own thing where they dull out the screen and show as a nicely styled modal&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;All CSS should be set by the Javascript, trying to avoid risk of existing CSS interfering with it&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Here's &lt;a href="https://chatgpt.com/share/67539c28-4df0-8006-b021-4f468e011fd9"&gt;the full shared ChatGPT/o1 transcript&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I then got Google's new &lt;code&gt;gemini-exp-1206&lt;/code&gt; model to write the first draft of the README, this time via my LLM tool:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;cat index.js | llm -m gemini-exp-1206 -s \
  'write a readme for this suitable for display on npm'
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here's &lt;a href="https://gist.github.com/simonw/7b8e33b9a5009e103707e3412c5c3246"&gt;the response&lt;/a&gt;. I ended up editing this quite a bit.&lt;/p&gt;
&lt;p&gt;I published the result &lt;a href="https://www.npmjs.com/package/prompts-js"&gt;to npm as prompts-js&lt;/a&gt;, partly to exercise those muscles again - this is only the second package I've ever published there (the first was &lt;a href="https://til.simonwillison.net/npm/publish-web-component"&gt;a Web Component&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;This means it's available via CDNs such as &lt;a href="https://www.jsdelivr.com/"&gt;jsDelivr&lt;/a&gt; - so you can load it into a page and start using it like this:&lt;/p&gt;
&lt;div class="highlight highlight-text-html-basic"&gt;&lt;pre&gt;&lt;span class="pl-kos"&gt;&amp;lt;&lt;/span&gt;&lt;span class="pl-ent"&gt;script&lt;/span&gt;
  &lt;span class="pl-c1"&gt;src&lt;/span&gt;="&lt;span class="pl-s"&gt;https://cdn.jsdelivr.net/npm/prompts-js&lt;/span&gt;"
&lt;span class="pl-kos"&gt;&amp;gt;&lt;/span&gt;&lt;span class="pl-kos"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="pl-ent"&gt;script&lt;/span&gt;&lt;span class="pl-kos"&gt;&amp;gt;&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I haven't yet figured out how to get it working as an ES module - there's an &lt;a href="https://github.com/simonw/prompts-js/issues/1"&gt;open issue for that here&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id="showModal"&gt;Update: 0.0.3 switches to dialog.showModal()&lt;/h4&gt;
&lt;p&gt;I got some excellent feedback &lt;a href="https://fedi.simonwillison.net/@simon/113613496313842184"&gt;on Mastodon&lt;/a&gt; and &lt;a href="https://twitter.com/SimonLydell/status/1865501003757277643"&gt;on Twitter&lt;/a&gt; suggesting that I improve its accessibility by switching to using the built-in browser &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/HTMLDialogElement/showModal"&gt;dialog.showModal()&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This was a great idea! I ran &lt;a href="https://chatgpt.com/share/6754c0f1-b1a4-8006-9350-7b8a0d812dcc"&gt;a couple of rounds more&lt;/a&gt; with o1 and then switched to Claude 3.5 Sonnet for &lt;a href="https://gist.github.com/simonw/18bf524f56972e11887bc089befbc153"&gt;one last bug fix&lt;/a&gt;. Here's &lt;a href="https://github.com/simonw/prompts-js/pull/7"&gt;a PR&lt;/a&gt; where I reviewed those changes.&lt;/p&gt;
&lt;p&gt;I shipped that as &lt;a href="https://github.com/simonw/prompts-js/releases/tag/0.0.3"&gt;release 0.0.3&lt;/a&gt;, which is now powering &lt;a href="https://tools.simonwillison.net/prompts-js"&gt;the demo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I also hit this message, so I guess I won't be using o1 as often as I had hoped!&lt;/p&gt;

&lt;img src="https://static.simonwillison.net/static/2024/o1-limit.jpg" alt="You have 5 responses from 01 remaining. If you hit the limit, responses will switch to another model until it resets December 10, 2024." /&gt;

&lt;p&gt;Upgrading to unlimited o1 currently costs $200/month with &lt;a href="https://openai.com/index/introducing-chatgpt-pro/"&gt;the new ChatGPT Pro&lt;/a&gt;.&lt;/p&gt;

&lt;h4 id="things-i-learned-from-this-project"&gt;Things I learned from this project&lt;/h4&gt;
&lt;p&gt;Outsourcing code like this to an LLM is a great way to get something done quickly, and for me often means the difference between doing a project versus not bothering at all.&lt;/p&gt;
&lt;p&gt;Paying attention to what the model is writing - and then iterating on it, spotting bugs and generally trying to knock it into shape - is also a great way to learn new tricks.&lt;/p&gt;
&lt;p&gt;Here are some of the things I've learned from working on Prompts.js so far:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;const name = await askUserSomething()&lt;/code&gt; pattern really does work, and it feels &lt;em&gt;great&lt;/em&gt;. I love the idea of being able to &lt;code&gt;await&lt;/code&gt; a potentially lengthy user interaction like this.&lt;/li&gt;
&lt;li&gt;HTML &lt;code&gt;&amp;lt;dialog&amp;gt;&lt;/code&gt; elements are usable across multiple browsers now.&lt;/li&gt;
&lt;li&gt;Using a &lt;code&gt;&amp;lt;dialog&amp;gt;&lt;/code&gt; means you can skip implementing an overlay that dims out the rest of the screen yourself - that will happen automatically.&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;&amp;lt;dialog&amp;gt;&lt;/code&gt; also does the right thing with respect to accessibility and preventing keyboard access to other elements on the page while that dialog is open.&lt;/li&gt;
&lt;li&gt;If you set &lt;code&gt;&amp;lt;form method="dialog"&amp;gt;&lt;/code&gt; in a form inside a dialog, submitting that form will close the dialog automatically.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;dialog.returnValue&lt;/code&gt; will be set to the value of the button used to submit the form.&lt;/li&gt;
&lt;li&gt;I also learned how to create a no-dependency, no build-step single file NPM package &lt;em&gt;and&lt;/em&gt; how to ship that to NPM automatically using GitHub Actions and GitHub Releases. I wrote that up in this TIL: &lt;a href="https://til.simonwillison.net/npm/npm-publish-github-actions"&gt;Publishing a simple client-side JavaScript package to npm with GitHub Actions&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
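&lt;p&gt;The &lt;code&gt;&amp;lt;dialog&amp;gt;&lt;/code&gt; pattern in that list can be sketched as a tiny promise wrapper. This is my own illustrative helper, not the actual Prompts.js source - it assumes a dialog-like object that exposes &lt;code&gt;showModal()&lt;/code&gt; and &lt;code&gt;returnValue&lt;/code&gt; and fires a &lt;code&gt;close&lt;/code&gt; event, as the built-in &lt;code&gt;HTMLDialogElement&lt;/code&gt; does when a &lt;code&gt;&amp;lt;form method="dialog"&amp;gt;&lt;/code&gt; inside it is submitted:&lt;/p&gt;

```javascript
// Illustrative sketch only (not the actual Prompts.js code): wrap a
// dialog-like object in a promise so it can be awaited. When a form with
// method="dialog" is submitted, the dialog closes automatically and
// dialog.returnValue is set to the value of the submitting button.
function confirmWith(dialog) {
  return new Promise((resolve) => {
    dialog.addEventListener(
      "close",
      () => {
        // returnValue holds the value of the button that closed the form
        resolve(dialog.returnValue === "ok");
      },
      { once: true }
    );
    dialog.showModal();
  });
}
```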
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/javascript"&gt;javascript&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/npm"&gt;npm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/openai"&gt;openai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/gemini"&gt;gemini&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/claude-3-5-sonnet"&gt;claude-3-5-sonnet&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/o1"&gt;o1&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="javascript"/><category term="projects"/><category term="releases"/><category term="npm"/><category term="openai"/><category term="llms"/><category term="ai-assisted-programming"/><category term="llm"/><category term="gemini"/><category term="claude-3-5-sonnet"/><category term="o1"/></entry><entry><title>New Gemini model: gemini-exp-1206</title><link href="https://simonwillison.net/2024/Dec/6/gemini-exp-1206/#atom-tag" rel="alternate"/><published>2024-12-06T18:05:30+00:00</published><updated>2024-12-06T18:05:30+00:00</updated><id>https://simonwillison.net/2024/Dec/6/gemini-exp-1206/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://twitter.com/JeffDean/status/1865079431544607089"&gt;New Gemini model: gemini-exp-1206&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Google's Jeff Dean:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Today’s the one year anniversary of our first Gemini model releases!  And it’s never looked better.&lt;/p&gt;
&lt;p&gt;Check out our newest release, Gemini-exp-1206, &lt;a href="https://aistudio.google.com/app/prompts/new_chat?model=gemini-exp-1206"&gt;in Google AI Studio&lt;/a&gt; and the Gemini API!&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I &lt;a href="https://github.com/simonw/llm-gemini/commit/c8b55d9435d95a821173071643f911ec9274eb08"&gt;upgraded my llm-gemini plugin&lt;/a&gt; to support the new model and released it as version 0.6 - you can install or upgrade it like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;llm install -U llm-gemini
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Running my &lt;a href="https://simonwillison.net/2024/Oct/25/pelicans-on-a-bicycle/"&gt;SVG pelican on a bicycle&lt;/a&gt; test prompt:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;llm -m gemini-exp-1206 "Generate an SVG of a pelican riding a bicycle"
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Provided this result, which is the best I've seen &lt;a href="https://github.com/simonw/pelican-bicycle?tab=readme-ov-file#pelicans-on-a-bicycle"&gt;from any model&lt;/a&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2024/gemini-exp-1206.svg" style="width: 100%" alt="Blue sky, green grass, bicycle looks good, bird riding it is almost recognizable as a pelican"&gt;&lt;/p&gt;
&lt;p&gt;Here's &lt;a href="https://gist.github.com/simonw/6141a282e4d3fea856f582b9ed0a6f88"&gt;the full output&lt;/a&gt; - I enjoyed these two pieces of commentary from the model:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;&amp;lt;polygon&amp;gt;&lt;/code&gt;: Shapes the distinctive pelican beak, with an added line for the lower mandible.&lt;br&gt;
[...]&lt;br&gt;
&lt;code&gt;transform="translate(50, 30)"&lt;/code&gt;: This attribute on the pelican's &lt;code&gt;&amp;lt;g&amp;gt;&lt;/code&gt; tag moves the entire pelican group 50 units to the right and 30 units down, positioning it correctly on the bicycle.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The new model is also currently &lt;a href="https://twitter.com/lmarena_ai/status/1865080944455225547"&gt;in top place&lt;/a&gt; on the &lt;a href="https://lmarena.ai/"&gt;Chatbot Arena&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Update: a delightful bonus, here's what I got from the follow-up prompt:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;llm -c "now animate it"
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2024/gemini-exp-1206-animated.svg" style="width: 100%" alt="The pelican is now animated - it is pedaling and its wing moves"&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://gist.github.com/simonw/4728316a9e4854c6e62fa25c40759bb6#response-1"&gt;Transcript here&lt;/a&gt;.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/google"&gt;google&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/svg"&gt;svg&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/gemini"&gt;gemini&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/pelican-riding-a-bicycle"&gt;pelican-riding-a-bicycle&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm-release"&gt;llm-release&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/chatbot-arena"&gt;chatbot-arena&lt;/a&gt;&lt;/p&gt;



</summary><category term="google"/><category term="releases"/><category term="svg"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="llm"/><category term="gemini"/><category term="pelican-riding-a-bicycle"/><category term="llm-release"/><category term="chatbot-arena"/></entry><entry><title>datasette-enrichments-llm</title><link href="https://simonwillison.net/2024/Dec/5/datasette-enrichments-llm/#atom-tag" rel="alternate"/><published>2024-12-05T23:46:48+00:00</published><updated>2024-12-05T23:46:48+00:00</updated><id>https://simonwillison.net/2024/Dec/5/datasette-enrichments-llm/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/datasette/datasette-enrichments-llm"&gt;datasette-enrichments-llm&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Today's new alpha release is &lt;strong&gt;datasette-enrichments-llm&lt;/strong&gt;, a plugin for Datasette 1.0a+ that provides an &lt;a href="https://simonwillison.net/2023/Dec/1/datasette-enrichments/"&gt;enrichment&lt;/a&gt; that lets you run prompts against data from one or more columns and store the result in another column.&lt;/p&gt;
&lt;p&gt;So far it's a light re-implementation of the existing &lt;a href="https://github.com/datasette/datasette-enrichments-gpt"&gt;datasette-enrichments-gpt&lt;/a&gt; plugin, now using the new &lt;a href="https://llm.datasette.io/en/stable/python-api.html#python-api-listing-models"&gt;llm.get_async_models()&lt;/a&gt; method to allow users to select any async-enabled model that has been registered by a plugin - so currently any of the models from OpenAI, Anthropic, Gemini or Mistral via their &lt;a href="https://llm.datasette.io/en/stable/plugins/directory.html#remote-apis"&gt;respective plugins&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Still plenty to do on this one. Next step is to integrate it with &lt;a href="https://simonwillison.net/2024/Dec/2/datasette-llm-usage/"&gt;datasette-llm-usage&lt;/a&gt; and use it to drive a design-complete stable version of that.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/plugins"&gt;plugins&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/enrichments"&gt;enrichments&lt;/a&gt;&lt;/p&gt;



</summary><category term="plugins"/><category term="projects"/><category term="releases"/><category term="ai"/><category term="datasette"/><category term="generative-ai"/><category term="llms"/><category term="llm"/><category term="enrichments"/></entry><entry><title>First impressions of the new Amazon Nova LLMs (via a new llm-bedrock plugin)</title><link href="https://simonwillison.net/2024/Dec/4/amazon-nova/#atom-tag" rel="alternate"/><published>2024-12-04T15:50:50+00:00</published><updated>2024-12-04T15:50:50+00:00</updated><id>https://simonwillison.net/2024/Dec/4/amazon-nova/#atom-tag</id><summary type="html">
    &lt;p&gt;Amazon released &lt;a href="https://aws.amazon.com/blogs/aws/introducing-amazon-nova-frontier-intelligence-and-industry-leading-price-performance/"&gt;three new Large Language Models&lt;/a&gt; yesterday at their AWS re:Invent conference. The new model family is called Amazon Nova and comes in three sizes: Micro, Lite and Pro.&lt;/p&gt;
&lt;p&gt;I built a new &lt;a href="https://llm.datasette.io/"&gt;LLM&lt;/a&gt; plugin called &lt;a href="https://github.com/simonw/llm-bedrock"&gt;llm-bedrock&lt;/a&gt; for accessing the models in the terminal via &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/index.html"&gt;boto3&lt;/a&gt; and the &lt;a href="https://aws.amazon.com/bedrock/"&gt;Amazon Bedrock API&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;My initial impressions from trying out the models are that they're mainly competitive with the Google Gemini family. They are &lt;em&gt;extremely&lt;/em&gt; inexpensive - Nova Micro slightly undercuts even the previously cheapest model, &lt;a href="https://developers.googleblog.com/en/gemini-15-flash-8b-is-now-generally-available-for-use/"&gt;Gemini 1.5 Flash-8B&lt;/a&gt; - they can handle quite large contexts, and the two larger models can handle images, video and PDFs.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2024/Dec/4/amazon-nova/#pricing-and-capabilities"&gt;Pricing and capabilities&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2024/Dec/4/amazon-nova/#using-nova-from-the-terminal-with-llm"&gt;Using Nova from the terminal with LLM&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2024/Dec/4/amazon-nova/#multi-modal-prompting-with-images-and-documents"&gt;Multi-modal prompting with images and documents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2024/Dec/4/amazon-nova/#gamoa"&gt;Closing thoughts: GAMOA&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="pricing-and-capabilities"&gt;Pricing and capabilities&lt;/h4&gt;
&lt;p&gt;Amazon list &lt;a href="https://aws.amazon.com/bedrock/pricing/"&gt;their pricing&lt;/a&gt; in price per 1,000 input tokens. Almost everyone else uses price per million, so I've done the conversion.&lt;/p&gt;
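&lt;p&gt;That conversion is simple arithmetic - here's a quick sketch of it (my own helper, not part of LLM or the Bedrock API):&lt;/p&gt;

```javascript
// Sketch of the unit conversion: Bedrock lists dollars per 1,000 tokens.
// Multiply by 1,000 for dollars per million tokens, then by 100 for cents.
function centsPerMillion(dollarsPerThousandTokens) {
  return dollarsPerThousandTokens * 1000 * 100;
}
// For example, a listed price of $0.000035 per 1K tokens works out to
// 3.5 cents per million tokens.
```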
&lt;p&gt;Here's a table comparing the least expensive models from the largest providers:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Provider&lt;/th&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Cents per million input&lt;/th&gt;
&lt;th&gt;Cents per million output&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;OpenAI&lt;/td&gt;
&lt;td&gt;GPT-4o Mini&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;td&gt;60&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;td&gt;Claude 3 Haiku&lt;/td&gt;
&lt;td&gt;25&lt;/td&gt;
&lt;td&gt;125&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;td&gt;Claude 3.5 Haiku&lt;/td&gt;
&lt;td&gt;80&lt;/td&gt;
&lt;td&gt;400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google&lt;/td&gt;
&lt;td&gt;Gemini 1.5 Flash-8B&lt;/td&gt;
&lt;td&gt;3.75&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google&lt;/td&gt;
&lt;td&gt;Gemini 1.5 Flash&lt;/td&gt;
&lt;td&gt;7.5&lt;/td&gt;
&lt;td&gt;30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amazon&lt;/td&gt;
&lt;td&gt;Nova Micro&lt;/td&gt;
&lt;td&gt;3.5&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amazon&lt;/td&gt;
&lt;td&gt;Nova Lite&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;As you can see, Nova Micro is .25 of a cent cheaper on input and 1 cent cheaper on output than previous champion Gemini 1.5 Flash-8B.&lt;/p&gt;
&lt;p&gt;And the more expensive models:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Provider&lt;/th&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Cents per million input&lt;/th&gt;
&lt;th&gt;Cents per million output&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;OpenAI&lt;/td&gt;
&lt;td&gt;GPT-4o&lt;/td&gt;
&lt;td&gt;250&lt;/td&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OpenAI&lt;/td&gt;
&lt;td&gt;o1-mini&lt;/td&gt;
&lt;td&gt;300&lt;/td&gt;
&lt;td&gt;1200&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OpenAI&lt;/td&gt;
&lt;td&gt;o1-preview&lt;/td&gt;
&lt;td&gt;1500&lt;/td&gt;
&lt;td&gt;6000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;td&gt;Claude 3.5 Sonnet&lt;/td&gt;
&lt;td&gt;300&lt;/td&gt;
&lt;td&gt;1500&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;td&gt;Claude 3 Opus&lt;/td&gt;
&lt;td&gt;1500&lt;/td&gt;
&lt;td&gt;7500&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google&lt;/td&gt;
&lt;td&gt;Gemini 1.5 Pro&lt;/td&gt;
&lt;td&gt;125&lt;/td&gt;
&lt;td&gt;500&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amazon&lt;/td&gt;
&lt;td&gt;Nova Pro&lt;/td&gt;
&lt;td&gt;80&lt;/td&gt;
&lt;td&gt;320&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;So Nova Pro isn't priced like those top-tier models - in fact it costs about the same as Claude 3.5 Haiku (after their &lt;a href="https://simonwillison.net/2024/Dec/5/claude-35-haiku-price-drops-by-20/"&gt;new 20% price drop&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;The best model in the Nova family is still to come - &lt;a href="https://aws.amazon.com/blogs/aws/introducing-amazon-nova-frontier-intelligence-and-industry-leading-price-performance/"&gt;from their post&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Amazon Nova Premier&lt;/strong&gt; – Our most capable multimodal model for complex reasoning tasks and for use as the best teacher for distilling custom models. Amazon Nova Premier is still in training. We’re targeting availability in early 2025.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And from &lt;a href="https://press.aboutamazon.com/2024/12/introducing-amazon-nova-a-new-generation-of-foundation-models"&gt;the press release&lt;/a&gt;, which presumably is talking about Nova Premier as well:&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;In early 2025, Amazon will support context length of over 2M input tokens.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Comparing models based on token pricing doesn't capture the entire picture because models use different tokenizers and hence may use up different numbers of tokens for the same input, especially with images and video added to the mix. I'd like to implement a good way to compare those counts in the future.&lt;/p&gt;
&lt;p&gt;In terms of capabilities: &lt;em&gt;all&lt;/em&gt; of the above models can handle image inputs now, with the exception of Amazon Nova Micro, o1-mini and o1-preview, which are text-only. Video support is currently unique to the Gemini and Nova models - Anthropic and OpenAI don't have any that handle video yet. See &lt;a href="https://docs.aws.amazon.com/nova/latest/userguide/prompting-vision-limitations.html"&gt;Amazon Nova video understanding limitations&lt;/a&gt; for more details.&lt;/p&gt;
&lt;p&gt;Nova models can't handle audio, and in fact ignore audio in video that they analyze (treating it as a sequence of images). Gemini models have audio input, as does OpenAI's &lt;code&gt;gpt-4o-audio-preview&lt;/code&gt; model ($100/$200 for input/output of those audio tokens).&lt;/p&gt;
&lt;p&gt;Nova Micro has a 128,000 input token limit, while Lite and Pro are both 300,000 tokens. This isn't quite in the same league as Gemini (2 million for Pro and Flash, 1 million for Flash-8B) but beats the standard models from both OpenAI and Anthropic. Anthropic have an "Enterprise" tier with 500,000 but I've not had a chance to try that myself.&lt;/p&gt;
&lt;h4 id="using-nova-from-the-terminal-with-llm"&gt;Using Nova from the terminal with LLM&lt;/h4&gt;
&lt;p&gt;&lt;a href="https://llm.datasette.io/en/stable/setup.html"&gt;Install LLM&lt;/a&gt;, then install the plugin like this:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;llm install llm-bedrock&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Next comes the hard part: you'll need AWS credentials that have permission to access Bedrock models on your behalf.&lt;/p&gt;
&lt;p&gt;If you've previously configured the AWS CLI you may be able to use a shortcut: &lt;a href="https://us-west-2.console.aws.amazon.com/bedrock/home?region=us-west-2#/"&gt;request access to the Bedrock models&lt;/a&gt; (Nova models are approved instantly) and the tool should be able to pick up your existing credentials.&lt;/p&gt;
&lt;p&gt;I got stuck on this for a while, then Nils Durner &lt;a href="https://twitter.com/ndurner/status/1864068962247393601"&gt;came to the rescue&lt;/a&gt; with &lt;a href="https://ndurner.github.io/amazon-nova"&gt;step-by-step instructions&lt;/a&gt; (17 steps in total, screenshots included) for requesting access to the models, creating an IAM user, creating a JSON IAM policy that allows access to the &lt;code&gt;bedrock:&lt;/code&gt; actions, assigning that policy to the new user and then creating and storing an access key and access secret for calling the API.&lt;/p&gt;
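&lt;p&gt;For reference, a minimal policy along those lines might look something like this - a sketch based on the standard Bedrock invoke actions, not Nils' exact policy, so check his step-by-step instructions for the authoritative version:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}
```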
&lt;p&gt;Now that you have those credentials, you can provide them to LLM by running:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;llm keys &lt;span class="pl-c1"&gt;set&lt;/span&gt; bedrock&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;And then pasting in the &lt;code&gt;access_key:secret_access_key&lt;/code&gt; - the two keys with a colon in the middle.&lt;/p&gt;
&lt;p&gt;Run &lt;code&gt;llm models&lt;/code&gt; to confirm that the models are installed correctly - you should see output that ends like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Bedrock: us.amazon.nova-micro-v1:0 (aliases: nova-micro)
Bedrock: us.amazon.nova-lite-v1:0 (aliases: nova-lite)
Bedrock: us.amazon.nova-pro-v1:0 (aliases: nova-pro)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And now you can run prompts! A simple text prompt looks like this - I'm setting an optional system prompt too:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;llm -m nova-micro &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;Three ways to impress a pelican&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; -s &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;short sentences&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt;Offer a fish.&lt;/li&gt;
&lt;li&gt;Dance like a clown.&lt;/li&gt;
&lt;li&gt;Wear bright colors.&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
&lt;p&gt;That's using &lt;code&gt;nova-micro&lt;/code&gt;, the cheapest and fastest model. Trying that again with &lt;code&gt;-m nova-pro&lt;/code&gt; (the best model) gave me:&lt;/p&gt;
&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt;Offer fresh fish.&lt;/li&gt;
&lt;li&gt;Perform mimicry of pelican calls.&lt;/li&gt;
&lt;li&gt;Create a calm, respectful environment.&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
&lt;h4 id="multi-modal-prompting-with-images-and-documents"&gt;Multi-modal prompting with images and documents&lt;/h4&gt;
&lt;p&gt;My plugin &lt;a href="https://simonwillison.net/2024/Oct/29/llm-multi-modal/"&gt;supports LLM attachments&lt;/a&gt;, so you can attach images, video and PDF files to a prompt using the &lt;code&gt;-a&lt;/code&gt; option which accepts either a URL or a file path.&lt;/p&gt;
&lt;p&gt;Trying against my &lt;a href="https://static.simonwillison.net/static/2024/pelicans.jpg"&gt;standard low-resolution pelican group photo&lt;/a&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2024/pelicans.jpg" alt="A confusing photo of a group of pelicans on some rocks" style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;llm -m nova-lite &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;describe image&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; \
  -a https://static.simonwillison.net/static/2024/pelicans.jpg -u&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The &lt;code&gt;-u&lt;/code&gt; option there outputs token usage. I got back:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The image depicts a large gathering of birds on a rocky cliff adjacent to a body of water, likely a sea or a large lake. The birds, primarily pelicans, appear to be resting and socializing in a tightly packed group. Some of the birds are standing while others are perched on the rocks, and a few are near the edge of the cliff, possibly preparing to dive into the water below. The cliff is rocky and has a rugged appearance, with some areas showing signs of erosion or weathering. The water beneath the birds is calm and appears to be a deep blue color, reflecting the clear sky above. The birds' feathers are visible in various shades of brown, gray, and white, adding to the natural and picturesque scene.&lt;/p&gt;
&lt;p&gt;Token usage: 1,668 input, 148 output&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Using my &lt;a href="https://tools.simonwillison.net/llm-prices"&gt;LLM pricing calculator&lt;/a&gt; (now upgraded to support Nova) I get 0.0136 cents - that's just over 1/100th of a cent!&lt;/p&gt;
&lt;p&gt;For the same prompt using &lt;code&gt;-m nova-pro&lt;/code&gt; for the more expensive Nova Pro model:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The image depicts a large gathering of birds on a rocky shore or beach, adjacent to a body of water. The scene is bustling with a variety of birds, primarily pelicans, which are identifiable by their large size, long necks, and distinctive beaks. The pelicans are positioned both standing and sitting on the rocks, with some standing tall and others hunched over.&lt;/p&gt;
&lt;p&gt;In addition to pelicans, there are numerous smaller birds, likely seagulls, characterized by their gray and white plumage. These birds are densely packed together, creating a vivid scene of avian life. The seagulls are scattered throughout the rocky area, some standing, others sitting, and a few engaged in various activities such as preening their feathers or searching for food.&lt;/p&gt;
&lt;p&gt;The rocky shore is uneven and covered with a mix of pebbles and larger stones, providing a natural habitat for the birds. The body of water in the background appears calm, with a reflection of the sky and the rocky shore. The water's surface is smooth, indicating little to no wind.&lt;/p&gt;
&lt;p&gt;The sky above is clear, suggesting fair weather conditions. The lighting in the image suggests it is daytime, likely in the early morning or late afternoon, given the soft light and the absence of harsh shadows.&lt;/p&gt;
&lt;p&gt;In summary, the image captures a lively scene of wildlife, with a focus on pelicans and seagulls congregating on a rocky shore next to a calm body of water under a clear sky. The diverse bird population and their interactions with the environment create a dynamic and engaging visual experience.&lt;/p&gt;
&lt;p&gt;Token usage: 1,668 input, 305 output&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That came in at a whopping 0.231 cents - nearly a quarter of a cent!&lt;/p&gt;
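&lt;p&gt;Both of those figures are easy to sanity-check. Here's a sketch of the arithmetic, assuming the launch prices of $0.06/$0.24 per million input/output tokens for Nova Lite and $0.80/$3.20 for Nova Pro - check the current AWS pricing page before relying on these numbers:&lt;/p&gt;

```python
def cost_cents(input_tokens, output_tokens, dollars_per_m_input, dollars_per_m_output):
    """Cost of one prompt in cents, given per-million-token prices in dollars."""
    dollars = (
        input_tokens * dollars_per_m_input + output_tokens * dollars_per_m_output
    ) / 1_000_000
    return dollars * 100

# Nova Lite image description: 1,668 input tokens, 148 output tokens
print(round(cost_cents(1668, 148, 0.06, 0.24), 4))  # 0.0136

# Nova Pro on the same image: 1,668 input tokens, 305 output tokens
print(round(cost_cents(1668, 305, 0.80, 3.20), 3))  # 0.231
```

&lt;p&gt;The same function reproduces the 15.4814 cent figure for the PDF experiment later in this post: 176,386 input and 4,283 output tokens against the Nova Pro prices.&lt;/p&gt;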
&lt;p&gt;Let's try something absurdly ambitious. I ran some random Google searches for government military PDFs and stumbled across &lt;a href="https://www.darpa.mil/attachments/DARPA_FY18_Presidents_Budget_Request.pdf"&gt;this one&lt;/a&gt;. Let's convert that to Markdown:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;llm -u -m nova-pro &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;convert this document to markdown&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; \
  -a https://www.darpa.mil/attachments/DARPA_FY18_Presidents_Budget_Request.pdf&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;You can see &lt;a href="https://gist.github.com/simonw/300cc0f00f0da3cf6e1d00fc19e36b81"&gt;the result here&lt;/a&gt;. It stopped around page 9 and missed some of the tables, but this was an absurdly challenging document so I'm not surprised it didn't manage a perfect conversion. It cost me 176,386 input tokens and 4,283 output tokens - a full 15.4814 cents.&lt;/p&gt;
&lt;p&gt;Finally, let's see if it can code:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;llm -m nova-pro &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;python function to domnload a CSV file from a URL, create a corresponding table in SQLite and import the data&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;24 input, 616 output = 0.0087 cents.&lt;/p&gt;
&lt;p&gt;Full &lt;a href="https://gist.github.com/simonw/05c7e59d381e96acab08baa6006845e5"&gt;output here&lt;/a&gt;, but this is the code it wrote:&lt;/p&gt;
&lt;pre&gt;&lt;span class="pl-k"&gt;import&lt;/span&gt; &lt;span class="pl-s1"&gt;requests&lt;/span&gt;
&lt;span class="pl-k"&gt;import&lt;/span&gt; &lt;span class="pl-s1"&gt;csv&lt;/span&gt;
&lt;span class="pl-k"&gt;import&lt;/span&gt; &lt;span class="pl-s1"&gt;sqlite3&lt;/span&gt;
&lt;span class="pl-k"&gt;import&lt;/span&gt; &lt;span class="pl-s1"&gt;os&lt;/span&gt;
&lt;span class="pl-k"&gt;from&lt;/span&gt; &lt;span class="pl-s1"&gt;io&lt;/span&gt; &lt;span class="pl-k"&gt;import&lt;/span&gt; &lt;span class="pl-v"&gt;StringIO&lt;/span&gt;

&lt;span class="pl-k"&gt;def&lt;/span&gt; &lt;span class="pl-en"&gt;download_csv_and_import_to_sqlite&lt;/span&gt;(&lt;span class="pl-s1"&gt;url&lt;/span&gt;, &lt;span class="pl-s1"&gt;db_name&lt;/span&gt;, &lt;span class="pl-s1"&gt;table_name&lt;/span&gt;):
    &lt;span class="pl-c"&gt;# Download the CSV file from the URL&lt;/span&gt;
    &lt;span class="pl-s1"&gt;response&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s1"&gt;requests&lt;/span&gt;.&lt;span class="pl-en"&gt;get&lt;/span&gt;(&lt;span class="pl-s1"&gt;url&lt;/span&gt;)
    &lt;span class="pl-s1"&gt;response&lt;/span&gt;.&lt;span class="pl-en"&gt;raise_for_status&lt;/span&gt;()  &lt;span class="pl-c"&gt;# Ensure we notice bad responses...&lt;/span&gt;
    
    &lt;span class="pl-c"&gt;# Read the CSV data&lt;/span&gt;
    &lt;span class="pl-s1"&gt;csv_data&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s1"&gt;response&lt;/span&gt;.&lt;span class="pl-s1"&gt;content&lt;/span&gt;.&lt;span class="pl-en"&gt;decode&lt;/span&gt;(&lt;span class="pl-s"&gt;'utf-8'&lt;/span&gt;)
    &lt;span class="pl-s1"&gt;csv_reader&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s1"&gt;csv&lt;/span&gt;.&lt;span class="pl-en"&gt;reader&lt;/span&gt;(&lt;span class="pl-v"&gt;StringIO&lt;/span&gt;(&lt;span class="pl-s1"&gt;csv_data&lt;/span&gt;))
    
    &lt;span class="pl-c"&gt;# Get the header (column names) from the CSV&lt;/span&gt;
    &lt;span class="pl-s1"&gt;header&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-en"&gt;next&lt;/span&gt;(&lt;span class="pl-s1"&gt;csv_reader&lt;/span&gt;)
    
    &lt;span class="pl-c"&gt;# Create the SQLite database and connect to it&lt;/span&gt;
    &lt;span class="pl-s1"&gt;conn&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s1"&gt;sqlite3&lt;/span&gt;.&lt;span class="pl-en"&gt;connect&lt;/span&gt;(&lt;span class="pl-s1"&gt;db_name&lt;/span&gt;)
    &lt;span class="pl-s1"&gt;cursor&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s1"&gt;conn&lt;/span&gt;.&lt;span class="pl-en"&gt;cursor&lt;/span&gt;()
    
    &lt;span class="pl-c"&gt;# Create the table&lt;/span&gt;
    &lt;span class="pl-s1"&gt;create_table_query&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s"&gt;f"CREATE TABLE IF NOT EXISTS &lt;span class="pl-s1"&gt;&lt;span class="pl-kos"&gt;{&lt;/span&gt;&lt;span class="pl-s1"&gt;table_name&lt;/span&gt;&lt;span class="pl-kos"&gt;}&lt;/span&gt;&lt;/span&gt; ("&lt;/span&gt;
    &lt;span class="pl-s1"&gt;create_table_query&lt;/span&gt; &lt;span class="pl-c1"&gt;+=&lt;/span&gt; &lt;span class="pl-s"&gt;","&lt;/span&gt;.&lt;span class="pl-en"&gt;join&lt;/span&gt;([&lt;span class="pl-s"&gt;f"&lt;span class="pl-s1"&gt;&lt;span class="pl-kos"&gt;{&lt;/span&gt;&lt;span class="pl-s1"&gt;col&lt;/span&gt;&lt;span class="pl-kos"&gt;}&lt;/span&gt;&lt;/span&gt; TEXT"&lt;/span&gt; &lt;span class="pl-k"&gt;for&lt;/span&gt; &lt;span class="pl-s1"&gt;col&lt;/span&gt; &lt;span class="pl-c1"&gt;in&lt;/span&gt; &lt;span class="pl-s1"&gt;header&lt;/span&gt;])
    &lt;span class="pl-s1"&gt;create_table_query&lt;/span&gt; &lt;span class="pl-c1"&gt;+=&lt;/span&gt; &lt;span class="pl-s"&gt;")"&lt;/span&gt;
    &lt;span class="pl-s1"&gt;cursor&lt;/span&gt;.&lt;span class="pl-en"&gt;execute&lt;/span&gt;(&lt;span class="pl-s1"&gt;create_table_query&lt;/span&gt;)
    
    &lt;span class="pl-c"&gt;# Insert the data into the table&lt;/span&gt;
    &lt;span class="pl-s1"&gt;insert_query&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s"&gt;f"INSERT INTO &lt;span class="pl-s1"&gt;&lt;span class="pl-kos"&gt;{&lt;/span&gt;&lt;span class="pl-s1"&gt;table_name&lt;/span&gt;&lt;span class="pl-kos"&gt;}&lt;/span&gt;&lt;/span&gt; VALUES (&lt;span class="pl-s1"&gt;&lt;span class="pl-kos"&gt;{&lt;/span&gt;&lt;span class="pl-s"&gt;','&lt;/span&gt;.&lt;span class="pl-en"&gt;join&lt;/span&gt;([&lt;span class="pl-s"&gt;'?'&lt;/span&gt; &lt;span class="pl-k"&gt;for&lt;/span&gt; &lt;span class="pl-s1"&gt;_&lt;/span&gt; &lt;span class="pl-c1"&gt;in&lt;/span&gt; &lt;span class="pl-s1"&gt;header&lt;/span&gt;])&lt;span class="pl-kos"&gt;}&lt;/span&gt;&lt;/span&gt;)"&lt;/span&gt;
    &lt;span class="pl-k"&gt;for&lt;/span&gt; &lt;span class="pl-s1"&gt;row&lt;/span&gt; &lt;span class="pl-c1"&gt;in&lt;/span&gt; &lt;span class="pl-s1"&gt;csv_reader&lt;/span&gt;:
        &lt;span class="pl-s1"&gt;cursor&lt;/span&gt;.&lt;span class="pl-en"&gt;execute&lt;/span&gt;(&lt;span class="pl-s1"&gt;insert_query&lt;/span&gt;, &lt;span class="pl-s1"&gt;row&lt;/span&gt;)
    
    &lt;span class="pl-c"&gt;# Commit the changes and close the connection&lt;/span&gt;
    &lt;span class="pl-s1"&gt;conn&lt;/span&gt;.&lt;span class="pl-en"&gt;commit&lt;/span&gt;()
    &lt;span class="pl-s1"&gt;conn&lt;/span&gt;.&lt;span class="pl-en"&gt;close&lt;/span&gt;()
    
    &lt;span class="pl-en"&gt;print&lt;/span&gt;(&lt;span class="pl-s"&gt;f"Data successfully imported into &lt;span class="pl-s1"&gt;&lt;span class="pl-kos"&gt;{&lt;/span&gt;&lt;span class="pl-s1"&gt;db_name&lt;/span&gt;&lt;span class="pl-kos"&gt;}&lt;/span&gt;&lt;/span&gt; in table &lt;span class="pl-s1"&gt;&lt;span class="pl-kos"&gt;{&lt;/span&gt;&lt;span class="pl-s1"&gt;table_name&lt;/span&gt;&lt;span class="pl-kos"&gt;}&lt;/span&gt;&lt;/span&gt;."&lt;/span&gt;)

&lt;span class="pl-c"&gt;# Example usage&lt;/span&gt;
&lt;span class="pl-s1"&gt;url&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s"&gt;"https://datasette.io/content/stats.csv"&lt;/span&gt;
&lt;span class="pl-s1"&gt;db_name&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s"&gt;"example.db"&lt;/span&gt;
&lt;span class="pl-s1"&gt;table_name&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s"&gt;"data_table"&lt;/span&gt;

&lt;span class="pl-en"&gt;download_csv_and_import_to_sqlite&lt;/span&gt;(&lt;span class="pl-s1"&gt;url&lt;/span&gt;, &lt;span class="pl-s1"&gt;db_name&lt;/span&gt;, &lt;span class="pl-s1"&gt;table_name&lt;/span&gt;)&lt;/pre&gt;
&lt;p&gt;That's spot on. I may have to retire that benchmark question - it's not enough of a challenge for modern models.&lt;/p&gt;
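&lt;p&gt;One nitpick: the generated code interpolates the table and column names directly into the SQL, which breaks on CSV headers containing spaces or quotes. A hardened variant would quote identifiers - this tweak is mine, not Nova's:&lt;/p&gt;

```python
import csv
import sqlite3
from io import StringIO


def quote_identifier(name: str) -> str:
    """Escape a SQLite identifier: double any embedded quotes, wrap in double quotes."""
    return '"' + name.replace('"', '""') + '"'


def import_csv_text(csv_text: str, conn: sqlite3.Connection, table_name: str) -> int:
    """Create a table from CSV text and insert its rows; returns the row count."""
    reader = csv.reader(StringIO(csv_text))
    header = next(reader)
    columns = ", ".join(f"{quote_identifier(col)} TEXT" for col in header)
    conn.execute(f"CREATE TABLE IF NOT EXISTS {quote_identifier(table_name)} ({columns})")
    placeholders = ", ".join("?" for _ in header)
    rows = list(reader)
    conn.executemany(
        f"INSERT INTO {quote_identifier(table_name)} VALUES ({placeholders})", rows
    )
    conn.commit()
    return len(rows)
```

&lt;p&gt;Combined with &lt;code&gt;requests.get(url).text&lt;/code&gt; this reproduces the original behavior while surviving awkward column names.&lt;/p&gt;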
&lt;p&gt;Can they produce &lt;a href="https://simonwillison.net/2024/Oct/25/pelicans-on-a-bicycle/"&gt;an SVG of a pelican on a bicycle&lt;/a&gt;? &lt;a href="https://github.com/simonw/pelican-bicycle#aws-models"&gt;Not very well&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id="gamoa"&gt;Closing thoughts: GAMOA&lt;/h4&gt;
&lt;p&gt;My experiments here aren't particularly comprehensive - I look forward to hearing from people who have better challenges and a more disciplined way of applying them. &lt;a href="https://livebench.ai/"&gt;LiveBench&lt;/a&gt; have some useful results here already.&lt;/p&gt;

&lt;p&gt;As a general initial vibe check, though, these Nova models pass with flying colors.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Google Gemini now has competition in terms of pricing. This is a relief to me: Gemini is so cheap that I've been nervous they'll jack the prices up through lack of competition!&lt;/li&gt;
&lt;li&gt;These appear to be good multi-modal models - their image handling feels solid and it's neat to have a new video-input model (even if it's quite limited compared to Gemini)&lt;/li&gt;
&lt;li&gt;Anthropic's decision to &lt;a href="https://simonwillison.net/2024/Nov/4/haiku/"&gt;raise the price for Claude 3.5 Haiku&lt;/a&gt; isn't looking great, given the continued competition at the bottom end of the market. Just a few months ago Claude 3 Haiku was the cheapest model available, now it's hardly worth considering, undercut by even GPT-4o mini.&lt;/li&gt;
&lt;li&gt;Gaining API access to anything on AWS remains horrifyingly difficult - seriously, &lt;a href="https://ndurner.github.io/amazon-nova"&gt;17 steps&lt;/a&gt;! Some day a PM at AWS is going to figure out that fixing this will have a material impact on Amazon's bottom line.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;From &lt;a href="https://press.aboutamazon.com/2024/12/introducing-amazon-nova-a-new-generation-of-foundation-models"&gt;the Nova press release&lt;/a&gt; in the what's next section:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Amazon will introduce an Amazon Nova &lt;strong&gt;speech-to-speech model&lt;/strong&gt; in the first quarter of 2025. [...] Amazon is also developing a novel model that can take text, images, audio, and video as input, and generate outputs in any of these modalities. This Amazon Nova model with native &lt;strong&gt;multimodal-to-multimodal&lt;/strong&gt; – or “any-to-any” modality capabilities – will be introduced mid-year 2025.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This reminded me that OpenAI's GPT-4o was meant to be able to provide image output, which they demonstrated &lt;a href="https://openai.com/index/hello-gpt-4o/"&gt;at their launch&lt;/a&gt; and as far as I'm aware have not mentioned since. Maybe this will encourage them to finally ship that feature!&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;The cost of multi-modal models - first Gemini, now Nova - continues to leave me almost speechless. At 0.0136 cents per image I could generate a description of every photo in my 67,771 personal photo library using Nova Lite for $9.21.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With this release I think Amazon may have earned a spot among the top tier of model providers. Maybe we need a new FAANG acronym that covers OpenAI, Anthropic, Google, Meta and Amazon. I like &lt;strong&gt;GAMOA&lt;/strong&gt;.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/amazon"&gt;amazon&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/openai"&gt;openai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/anthropic"&gt;anthropic&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/gemini"&gt;gemini&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/vision-llms"&gt;vision-llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm-pricing"&gt;llm-pricing&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/multi-modal-output"&gt;multi-modal-output&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm-release"&gt;llm-release&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="amazon"/><category term="projects"/><category term="releases"/><category term="ai"/><category term="openai"/><category term="generative-ai"/><category term="llms"/><category term="llm"/><category term="anthropic"/><category term="gemini"/><category term="vision-llms"/><category term="llm-pricing"/><category term="multi-modal-output"/><category term="llm-release"/></entry><entry><title>datasette-queries</title><link href="https://simonwillison.net/2024/Dec/3/datasette-queries/#atom-tag" rel="alternate"/><published>2024-12-03T23:59:26+00:00</published><updated>2024-12-03T23:59:26+00:00</updated><id>https://simonwillison.net/2024/Dec/3/datasette-queries/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/datasette/datasette-queries"&gt;datasette-queries&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I released the first alpha of a new plugin to replace the crusty old &lt;a href="https://github.com/simonw/datasette-saved-queries"&gt;datasette-saved-queries&lt;/a&gt;. This one adds a new UI element to the top of the query results page with an expandable form for saving the query as a new &lt;a href="https://docs.datasette.io/en/stable/sql_queries.html#canned-queries"&gt;canned query&lt;/a&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Animated demo. I start on the table page, run a search, click View and edit SQL, then on the SQL query page open a Save query dialog, click a Suggest title and description button, wait for that to suggest something and click save." src="https://static.simonwillison.net/static/2024/datasette-queries.gif" /&gt;&lt;/p&gt;
&lt;p&gt;It's my first plugin to depend on LLM and &lt;a href="https://simonwillison.net/2024/Dec/2/datasette-llm-usage/"&gt;datasette-llm-usage&lt;/a&gt; - it uses GPT-4o mini to power an optional "Suggest title and description" button, labeled with the becoming-standard ✨ sparkles emoji to indicate an LLM-powered feature.&lt;/p&gt;
&lt;p&gt;I intend to expand this to work across multiple models as I continue to iterate on &lt;code&gt;llm-datasette-usage&lt;/code&gt; to better support those kinds of patterns.&lt;/p&gt;
&lt;p&gt;For the moment, though, each suggested title and description call costs about 250 input tokens and 50 output tokens, which against GPT-4o mini adds up to 0.0067 cents.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/plugins"&gt;plugins&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/openai"&gt;openai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;&lt;/p&gt;



</summary><category term="plugins"/><category term="projects"/><category term="releases"/><category term="ai"/><category term="datasette"/><category term="openai"/><category term="generative-ai"/><category term="llms"/><category term="llm"/></entry><entry><title>datasette-llm-usage</title><link href="https://simonwillison.net/2024/Dec/2/datasette-llm-usage/#atom-tag" rel="alternate"/><published>2024-12-02T21:33:05+00:00</published><updated>2024-12-02T21:33:05+00:00</updated><id>https://simonwillison.net/2024/Dec/2/datasette-llm-usage/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/datasette/datasette-llm-usage"&gt;datasette-llm-usage&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I released the first alpha of a Datasette plugin to help track LLM usage by other plugins, with the goal of supporting token allowances - both for things like free public apps that stop working after a daily allowance, and for free previews of AI features in paid-account-based projects such as Datasette Cloud.&lt;/p&gt;
&lt;p&gt;It's using the usage features I added in &lt;a href="https://simonwillison.net/2024/Dec/1/llm-019/"&gt;LLM 0.19&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The alpha doesn't do much yet - it will start getting interesting once I upgrade other plugins to depend on it.&lt;/p&gt;
&lt;p&gt;Design notes so far in &lt;a href="https://github.com/datasette/datasette-llm-usage/issues/1"&gt;issue #1&lt;/a&gt;.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/plugins"&gt;plugins&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette-cloud"&gt;datasette-cloud&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;&lt;/p&gt;



</summary><category term="plugins"/><category term="projects"/><category term="releases"/><category term="ai"/><category term="datasette"/><category term="datasette-cloud"/><category term="generative-ai"/><category term="llms"/><category term="llm"/></entry><entry><title>LLM 0.19</title><link href="https://simonwillison.net/2024/Dec/1/llm-019/#atom-tag" rel="alternate"/><published>2024-12-01T23:59:45+00:00</published><updated>2024-12-01T23:59:45+00:00</updated><id>https://simonwillison.net/2024/Dec/1/llm-019/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://llm.datasette.io/en/stable/changelog.html#v0-19"&gt;LLM 0.19&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I just released version 0.19 of &lt;a href="https://llm.datasette.io/"&gt;LLM&lt;/a&gt;, my Python library and CLI utility for working with Large Language Models.&lt;/p&gt;
&lt;p&gt;I released 0.18 &lt;a href="https://simonwillison.net/2024/Nov/17/llm-018/"&gt;a couple of weeks ago&lt;/a&gt; adding support for calling models from Python &lt;code&gt;asyncio&lt;/code&gt; code. 0.19 improves on that, and also adds a new mechanism for models to report their token usage.&lt;/p&gt;
&lt;p&gt;LLM can log those usage numbers to a SQLite database, or make them available to custom Python code.&lt;/p&gt;
&lt;p&gt;My eventual goal with these features is to implement token accounting as a Datasette plugin so I can offer AI features in my SaaS platform without worrying about customers spending unlimited LLM tokens.&lt;/p&gt;
&lt;p&gt;Those 0.19 release notes in full:&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Tokens used by a response are now logged to new &lt;code&gt;input_tokens&lt;/code&gt; and &lt;code&gt;output_tokens&lt;/code&gt; integer columns and a &lt;code&gt;token_details&lt;/code&gt; JSON string column, for the default OpenAI models and models from other plugins that &lt;a href="https://llm.datasette.io/en/stable/plugins/advanced-model-plugins.html#advanced-model-plugins-usage"&gt;implement this feature&lt;/a&gt;. &lt;a href="https://github.com/simonw/llm/issues/610"&gt;#610&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;llm prompt&lt;/code&gt; now takes a &lt;code&gt;-u/--usage&lt;/code&gt; flag to display token usage at the end of the response.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;llm logs -u/--usage&lt;/code&gt; shows token usage information for logged responses.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;llm prompt ... --async&lt;/code&gt; responses are now logged to the database. &lt;a href="https://github.com/simonw/llm/issues/641"&gt;#641&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;llm.get_models()&lt;/code&gt; and &lt;code&gt;llm.get_async_models()&lt;/code&gt; functions, &lt;a href="https://llm.datasette.io/en/stable/python-api.html#python-api-listing-models"&gt;documented here&lt;/a&gt;. &lt;a href="https://github.com/simonw/llm/issues/640"&gt;#640&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;response.usage()&lt;/code&gt; and async response &lt;code&gt;await response.usage()&lt;/code&gt; methods, returning a &lt;code&gt;Usage(input=2, output=1, details=None)&lt;/code&gt; dataclass. &lt;a href="https://github.com/simonw/llm/issues/644"&gt;#644&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;response.on_done(callback)&lt;/code&gt; and &lt;code&gt;await response.on_done(callback)&lt;/code&gt; methods for specifying a callback to be executed when a response has completed, &lt;a href="https://llm.datasette.io/en/stable/python-api.html#python-api-response-on-done"&gt;documented here&lt;/a&gt;. &lt;a href="https://github.com/simonw/llm/issues/653"&gt;#653&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Fix for bug running &lt;code&gt;llm chat&lt;/code&gt; on Windows 11. Thanks, &lt;a href="https://github.com/sukhbinder"&gt;Sukhbinder Singh&lt;/a&gt;. &lt;a href="https://github.com/simonw/llm/issues/495"&gt;#495&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
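&lt;p&gt;That &lt;code&gt;Usage&lt;/code&gt; dataclass is simple enough to sketch from the release notes alone - this is a stand-in matching the shape shown above, not LLM's actual source:&lt;/p&gt;

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Usage:
    """Token counts reported for a single response, per the LLM 0.19 notes."""
    input: Optional[int] = None
    output: Optional[int] = None
    details: Optional[dict] = None  # provider-specific extras, when available


u = Usage(input=2, output=1)
print(u)  # Usage(input=2, output=1, details=None)
```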
&lt;p&gt;I also released three new plugin versions that add support for the new usage tracking feature: &lt;a href="https://github.com/simonw/llm-gemini/releases/tag/0.5"&gt;llm-gemini 0.5&lt;/a&gt;, &lt;a href="https://github.com/simonw/llm-claude-3/releases/tag/0.10"&gt;llm-claude-3 0.10&lt;/a&gt; and &lt;a href="https://github.com/simonw/llm-mistral/releases/tag/0.9"&gt;llm-mistral 0.9&lt;/a&gt;.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/cli"&gt;cli&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releasenotes"&gt;releasenotes&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;&lt;/p&gt;



</summary><category term="cli"/><category term="projects"/><category term="releasenotes"/><category term="releases"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="llm"/></entry><entry><title>Datasette 1.0a15</title><link href="https://simonwillison.net/2024/Aug/16/datasette-10a15/#atom-tag" rel="alternate"/><published>2024-08-16T05:06:51+00:00</published><updated>2024-08-16T05:06:51+00:00</updated><id>https://simonwillison.net/2024/Aug/16/datasette-10a15/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://docs.datasette.io/en/latest/changelog.html#a15-2024-08-15"&gt;Datasette 1.0a15&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Mainly bug fixes, but a couple of minor new features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Datasette now defaults to hiding SQLite "shadow" tables, as seen in extensions such as SQLite FTS and &lt;a href="https://github.com/asg017/sqlite-vec"&gt;sqlite-vec&lt;/a&gt;. Virtual tables that it makes sense to display, such as FTS core tables, are no longer hidden. Thanks, &lt;a href="https://github.com/asg017"&gt;Alex Garcia&lt;/a&gt;. (&lt;a href="https://github.com/simonw/datasette/issues/2296"&gt;#2296&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;The Datasette homepage is now duplicated at &lt;code&gt;/-/&lt;/code&gt;, using the default &lt;code&gt;index.html&lt;/code&gt; template. This ensures that the information on that page is still accessible even if the Datasette homepage has been customized using a custom &lt;code&gt;index.html&lt;/code&gt; template, for example on sites like &lt;a href="https://datasette.io/"&gt;datasette.io&lt;/a&gt;. (&lt;a href="https://github.com/simonw/datasette/issues/2393"&gt;#2393&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Datasette also now &lt;a href="https://github.com/simonw/datasette/issues/2390"&gt;serves more user-friendly CSRF pages&lt;/a&gt;, an improvement which required me to ship &lt;a href="https://github.com/simonw/asgi-csrf/releases/tag/0.10"&gt;asgi-csrf 0.10&lt;/a&gt;.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/csrf"&gt;csrf&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;



</summary><category term="csrf"/><category term="projects"/><category term="releases"/><category term="datasette"/></entry><entry><title>Datasette 1.0a14: The annotated release notes</title><link href="https://simonwillison.net/2024/Aug/5/datasette-1a14/#atom-tag" rel="alternate"/><published>2024-08-05T23:20:01+00:00</published><updated>2024-08-05T23:20:01+00:00</updated><id>https://simonwillison.net/2024/Aug/5/datasette-1a14/#atom-tag</id><summary type="html">
    &lt;p&gt;Released today: &lt;a href="https://docs.datasette.io/en/1.0a14/changelog.html#a14-2024-08-05"&gt;Datasette 1.0a14&lt;/a&gt;. This alpha includes significant contributions from &lt;a href="https://alexgarcia.xyz/"&gt;Alex Garcia&lt;/a&gt;, including some backwards-incompatible changes in the run-up to the 1.0 release.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="#metadata-now-lives-in-a-database"&gt;Metadata now lives in a database&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="#datasette-remote-metadata-0-2a0"&gt;datasette-remote-metadata 0.2a0&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="#sqlite-isolation-level-immediate-"&gt;SQLite isolation_level="IMMEDIATE"&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="#updating-the-urls"&gt;Updating the URLs&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="#everything-else"&gt;Everything else&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="#tricks-to-help-construct-the-release-notes"&gt;Tricks to help construct the release notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="metadata-now-lives-in-a-database"&gt;Metadata now lives in a database&lt;/h4&gt;
&lt;p&gt;The biggest change in the alpha concerns how Datasette's &lt;a href="https://docs.datasette.io/en/latest/metadata.html#metadata"&gt;metadata system&lt;/a&gt; works.&lt;/p&gt;
&lt;p&gt;Datasette can record and serve metadata about the databases, tables and columns that it is serving. This includes things like the source of the data, the license it is made available under and descriptions of the tables and columns.&lt;/p&gt;
&lt;p&gt;Historically this has been powered by a &lt;code&gt;metadata.json&lt;/code&gt; file. Over time, this file grew to include all sorts of things that weren't strictly metadata - things like plugin configuration. Cleaning this up is a major breaking change for Datasette 1.0, and Alex has been working on this across several alphas.&lt;/p&gt;
&lt;p&gt;The latest alpha adds a new &lt;a href="https://docs.datasette.io/en/1.0a14/upgrade_guide.html"&gt;upgrade guide&lt;/a&gt; describing changes plugin authors will need to make to support the new metadata system.&lt;/p&gt;
&lt;p&gt;The big change in 1.0a14 is that metadata now lives in Datasette's hidden &lt;code&gt;_internal&lt;/code&gt; SQLite database, in four new tables called &lt;code&gt;metadata_instance&lt;/code&gt;, &lt;code&gt;metadata_databases&lt;/code&gt;, &lt;code&gt;metadata_resources&lt;/code&gt; and &lt;code&gt;metadata_columns&lt;/code&gt;. The schema for these &lt;a href="https://docs.datasette.io/en/1.0a14/internals.html#datasette-s-internal-database"&gt;is now included in the documentation&lt;/a&gt; (updated &lt;a href="https://github.com/simonw/datasette/blob/f6bd2bf8b025dcee49248ae7224e242b448f558c/docs/internals.rst?plain=1#L1363-L1366"&gt;using this Cog code&lt;/a&gt;), but rather than accessing those tables directly plugins are encouraged to use the new &lt;a href="https://docs.datasette.io/en/1.0a14/internals.html#getting-and-setting-metadata"&gt;set_*_metadata() and get_*_metadata() methods&lt;/a&gt; on the &lt;code&gt;Datasette&lt;/code&gt; class.&lt;/p&gt;
&lt;p&gt;I plan to use these new tables to build a new performant, paginated homepage that shows all of the databases and tables that Datasette is serving, complete with their metadata - without needing to make potentially hundreds of calls to the now-removed &lt;code&gt;get_metadata()&lt;/code&gt; plugin hook.&lt;/p&gt;
&lt;h4 id="datasette-remote-metadata-0-2a0"&gt;datasette-remote-metadata 0.2a0&lt;/h4&gt;
&lt;p&gt;When introducing new plugin internals like this it's always good to accompany them with a plugin that exercises them. &lt;strong&gt;&lt;a href="https://github.com/simonw/datasette-remote-metadata"&gt;datasette-remote-metadata&lt;/a&gt;&lt;/strong&gt; is a few years old now, and provides a mechanism for hosting the metadata for a Datasette instance at a separate URL. This means you can deploy a stateless Datasette instance with a large database and later update the attached metadata without having to re-deploy the whole thing.&lt;/p&gt;
&lt;p&gt;I released &lt;a href="https://github.com/simonw/datasette-remote-metadata/releases/tag/0.2a0"&gt;a new alpha&lt;/a&gt; of that plugin which &lt;a href="https://github.com/simonw/datasette-remote-metadata/issues/4"&gt;switches over to the new metadata mechanism&lt;/a&gt;. The core code ended up looking like this, imitating &lt;a href="https://github.com/simonw/datasette/blob/f6bd2bf8/datasette/app.py#L446-L472"&gt;code Alex wrote&lt;/a&gt; for Datasette Core:&lt;/p&gt;
&lt;pre&gt;&lt;span class="pl-k"&gt;async&lt;/span&gt; &lt;span class="pl-k"&gt;def&lt;/span&gt; &lt;span class="pl-en"&gt;apply_metadata&lt;/span&gt;(&lt;span class="pl-s1"&gt;datasette&lt;/span&gt;, &lt;span class="pl-s1"&gt;metadata_dict&lt;/span&gt;):
    &lt;span class="pl-k"&gt;for&lt;/span&gt; &lt;span class="pl-s1"&gt;key&lt;/span&gt; &lt;span class="pl-c1"&gt;in&lt;/span&gt; &lt;span class="pl-s1"&gt;metadata_dict&lt;/span&gt; &lt;span class="pl-c1"&gt;or&lt;/span&gt; {}:
        &lt;span class="pl-k"&gt;if&lt;/span&gt; &lt;span class="pl-s1"&gt;key&lt;/span&gt; &lt;span class="pl-c1"&gt;==&lt;/span&gt; &lt;span class="pl-s"&gt;"databases"&lt;/span&gt;:
            &lt;span class="pl-k"&gt;continue&lt;/span&gt;
        &lt;span class="pl-k"&gt;await&lt;/span&gt; &lt;span class="pl-s1"&gt;datasette&lt;/span&gt;.&lt;span class="pl-en"&gt;set_instance_metadata&lt;/span&gt;(&lt;span class="pl-s1"&gt;key&lt;/span&gt;, &lt;span class="pl-s1"&gt;metadata_dict&lt;/span&gt;[&lt;span class="pl-s1"&gt;key&lt;/span&gt;])
    &lt;span class="pl-c"&gt;# database-level&lt;/span&gt;
    &lt;span class="pl-k"&gt;for&lt;/span&gt; &lt;span class="pl-s1"&gt;dbname&lt;/span&gt;, &lt;span class="pl-s1"&gt;db&lt;/span&gt; &lt;span class="pl-c1"&gt;in&lt;/span&gt; &lt;span class="pl-s1"&gt;metadata_dict&lt;/span&gt;.&lt;span class="pl-en"&gt;get&lt;/span&gt;(&lt;span class="pl-s"&gt;"databases"&lt;/span&gt;, {}).&lt;span class="pl-en"&gt;items&lt;/span&gt;():
        &lt;span class="pl-k"&gt;for&lt;/span&gt; &lt;span class="pl-s1"&gt;key&lt;/span&gt;, &lt;span class="pl-s1"&gt;value&lt;/span&gt; &lt;span class="pl-c1"&gt;in&lt;/span&gt; &lt;span class="pl-s1"&gt;db&lt;/span&gt;.&lt;span class="pl-en"&gt;items&lt;/span&gt;():
            &lt;span class="pl-k"&gt;if&lt;/span&gt; &lt;span class="pl-s1"&gt;key&lt;/span&gt; &lt;span class="pl-c1"&gt;==&lt;/span&gt; &lt;span class="pl-s"&gt;"tables"&lt;/span&gt;:
                &lt;span class="pl-k"&gt;continue&lt;/span&gt;
            &lt;span class="pl-k"&gt;await&lt;/span&gt; &lt;span class="pl-s1"&gt;datasette&lt;/span&gt;.&lt;span class="pl-en"&gt;set_database_metadata&lt;/span&gt;(&lt;span class="pl-s1"&gt;dbname&lt;/span&gt;, &lt;span class="pl-s1"&gt;key&lt;/span&gt;, &lt;span class="pl-s1"&gt;value&lt;/span&gt;)
        &lt;span class="pl-c"&gt;# table-level&lt;/span&gt;
        &lt;span class="pl-k"&gt;for&lt;/span&gt; &lt;span class="pl-s1"&gt;tablename&lt;/span&gt;, &lt;span class="pl-s1"&gt;table&lt;/span&gt; &lt;span class="pl-c1"&gt;in&lt;/span&gt; &lt;span class="pl-s1"&gt;db&lt;/span&gt;.&lt;span class="pl-en"&gt;get&lt;/span&gt;(&lt;span class="pl-s"&gt;"tables"&lt;/span&gt;, {}).&lt;span class="pl-en"&gt;items&lt;/span&gt;():
            &lt;span class="pl-k"&gt;for&lt;/span&gt; &lt;span class="pl-s1"&gt;key&lt;/span&gt;, &lt;span class="pl-s1"&gt;value&lt;/span&gt; &lt;span class="pl-c1"&gt;in&lt;/span&gt; &lt;span class="pl-s1"&gt;table&lt;/span&gt;.&lt;span class="pl-en"&gt;items&lt;/span&gt;():
                &lt;span class="pl-k"&gt;if&lt;/span&gt; &lt;span class="pl-s1"&gt;key&lt;/span&gt; &lt;span class="pl-c1"&gt;==&lt;/span&gt; &lt;span class="pl-s"&gt;"columns"&lt;/span&gt;:
                    &lt;span class="pl-k"&gt;continue&lt;/span&gt;
                &lt;span class="pl-k"&gt;await&lt;/span&gt; &lt;span class="pl-s1"&gt;datasette&lt;/span&gt;.&lt;span class="pl-en"&gt;set_resource_metadata&lt;/span&gt;(&lt;span class="pl-s1"&gt;dbname&lt;/span&gt;, &lt;span class="pl-s1"&gt;tablename&lt;/span&gt;, &lt;span class="pl-s1"&gt;key&lt;/span&gt;, &lt;span class="pl-s1"&gt;value&lt;/span&gt;)
            &lt;span class="pl-c"&gt;# column-level&lt;/span&gt;
            &lt;span class="pl-k"&gt;for&lt;/span&gt; &lt;span class="pl-s1"&gt;columnname&lt;/span&gt;, &lt;span class="pl-s1"&gt;column_description&lt;/span&gt; &lt;span class="pl-c1"&gt;in&lt;/span&gt; &lt;span class="pl-s1"&gt;table&lt;/span&gt;.&lt;span class="pl-en"&gt;get&lt;/span&gt;(&lt;span class="pl-s"&gt;"columns"&lt;/span&gt;, {}).&lt;span class="pl-en"&gt;items&lt;/span&gt;():
                &lt;span class="pl-k"&gt;await&lt;/span&gt; &lt;span class="pl-s1"&gt;datasette&lt;/span&gt;.&lt;span class="pl-en"&gt;set_column_metadata&lt;/span&gt;(
                    &lt;span class="pl-s1"&gt;dbname&lt;/span&gt;, &lt;span class="pl-s1"&gt;tablename&lt;/span&gt;, &lt;span class="pl-s1"&gt;columnname&lt;/span&gt;, &lt;span class="pl-s"&gt;"description"&lt;/span&gt;, &lt;span class="pl-s1"&gt;column_description&lt;/span&gt;
                )&lt;/pre&gt;
&lt;h4 id="sqlite-isolation-level-immediate-"&gt;SQLite isolation_level="IMMEDIATE"&lt;/h4&gt;
&lt;p&gt;Sylvain Kerkour wrote about the &lt;a href="https://kerkour.com/sqlite-for-servers#use-immediate-transactions"&gt;benefits of IMMEDIATE transactions&lt;/a&gt; back in February. The key issue here is that SQLite defaults to starting transactions in &lt;code&gt;DEFERRED&lt;/code&gt; mode, which can lead to &lt;code&gt;SQLITE_BUSY&lt;/code&gt; errors if a transaction is upgraded to a write transaction mid-flight. Starting in &lt;code&gt;IMMEDIATE&lt;/code&gt; mode for Datasette's dedicated write connection should help avoid this.&lt;/p&gt;
&lt;p&gt;Frustratingly I &lt;a href="https://github.com/simonw/datasette/issues/2358"&gt;failed to replicate&lt;/a&gt; the underlying problem in my own tests, despite having anecdotally seen it happen in the past.&lt;/p&gt;
&lt;p&gt;After spending more time than I had budgeted for on this, I decided to ship it as an alpha to get it properly exercised before the 1.0 stable release.&lt;/p&gt;
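&lt;p&gt;As a minimal sketch (not Datasette's actual implementation), here is how a dedicated write connection in IMMEDIATE mode looks with Python's standard &lt;code&gt;sqlite3&lt;/code&gt; module:&lt;/p&gt;

```python
import sqlite3

# A minimal sketch, not Datasette's actual code: isolation_level="IMMEDIATE"
# makes sqlite3 issue "BEGIN IMMEDIATE" before the first write statement,
# taking the write lock up front instead of upgrading mid-transaction -
# the upgrade path is what can raise SQLITE_BUSY in DEFERRED mode.
conn = sqlite3.connect(":memory:", isolation_level="IMMEDIATE")
conn.execute("CREATE TABLE logs (id INTEGER PRIMARY KEY, message TEXT)")

with conn:  # transaction begins with BEGIN IMMEDIATE, commits on success
    conn.execute("INSERT INTO logs (message) VALUES (?)", ("hello",))
```

The table name and in-memory database here are illustrative; Datasette applies the same mode to its single dedicated write connection per database file.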
&lt;h4 id="updating-the-urls"&gt;Updating the URLs&lt;/h4&gt;
&lt;p&gt;Here's another change that was important to get out before 1.0.&lt;/p&gt;
&lt;p&gt;Datasette's URL design had a subtle blemish. The following page had two potential meanings:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/databasename&lt;/code&gt; - list all of the tables in the specified database&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/databasename?sql=&lt;/code&gt; - execute an arbitrary SQL query against that database&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This also meant that the JSON structure returned by &lt;code&gt;/database.json&lt;/code&gt; vs. &lt;code&gt;/database.json?sql=&lt;/code&gt; was different.&lt;/p&gt;
&lt;p&gt;Alex and I decided to fix that. Alex laid out the new design in &lt;a href="https://github.com/simonw/datasette/issues/2360"&gt;issue #2360&lt;/a&gt; - there are quite a few other changes, but the big one is that we are splitting out the SQL query interface to a new URL: &lt;code&gt;/databasename/-/query?sql=&lt;/code&gt; - or &lt;code&gt;/databasename/-/query.json?sql=&lt;/code&gt; for the JSON API.&lt;/p&gt;
&lt;p&gt;We've added redirects from the old URLs to the new ones, so existing links should continue to work.&lt;/p&gt;
&lt;h4 id="everything-else"&gt;Everything else&lt;/h4&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Fix for a bug where canned queries with named parameters could fail against SQLite 3.46. (&lt;a href="https://github.com/simonw/datasette/issues/2353"&gt;#2353&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;This reflects a bug fix that went out in &lt;a href="https://docs.datasette.io/en/stable/changelog.html#v0-64-7"&gt;Datasette 0.64.7&lt;/a&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Datasette now serves &lt;code&gt;ETag&lt;/code&gt; headers for static files. Thanks, &lt;a href="https://github.com/redraw"&gt;Agustin Bacigalup&lt;/a&gt;. (&lt;a href="https://github.com/simonw/datasette/pull/2306"&gt;#2306&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;There's still more to be done making Datasette play well with caches, but this is a great, low-risk start.&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Dropdown menus now use a &lt;code&gt;z-index&lt;/code&gt; that should avoid them being hidden by plugins. (&lt;a href="https://github.com/simonw/datasette/issues/2311"&gt;#2311&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;A cosmetic bug that showed up on Datasette Cloud when using the &lt;a href="https://datasette.io/plugins/datasette-cluster-map"&gt;datasette-cluster-map&lt;/a&gt; plugin.&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Incorrect table and row names are no longer reflected back on the resulting 404 page. (&lt;a href="https://github.com/simonw/datasette/issues/2359"&gt;#2359&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;This was reported as a potential security issue. The table names were correctly escaped, so this wasn't an XSS, but there was still potential for confusion if an attacker constructed a URL along the lines of &lt;code&gt;/database-does-not-exist-visit-www.attacker.com-for-more-info&lt;/code&gt;. A similar fix went out in &lt;a href="https://docs.datasette.io/en/stable/changelog.html#v0-64-8"&gt;Datasette 0.64.8&lt;/a&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Improved documentation for async usage of the &lt;a href="https://docs.datasette.io/en/latest/plugin_hooks.html#plugin-hook-track-event"&gt;track_event(datasette, event)&lt;/a&gt; hook. (&lt;a href="https://github.com/simonw/datasette/issues/2319"&gt;#2319&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Fixed some HTTPX deprecation warnings. (&lt;a href="https://github.com/simonw/datasette/issues/2307"&gt;#2307&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Datasette now serves a &lt;code&gt;&amp;lt;html lang="en"&amp;gt;&lt;/code&gt; attribute. Thanks, &lt;a href="https://github.com/CharlesNepote"&gt;Charles Nepote&lt;/a&gt;. (&lt;a href="https://github.com/simonw/datasette/issues/2348"&gt;#2348&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Datasette's automated tests now run against the maximum and minimum supported versions of SQLite: 3.25 (from September 2018) and 3.46 (from May 2024). Thanks, Alex Garcia. (&lt;a href="https://github.com/simonw/datasette/pull/2352"&gt;#2352&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Fixed an issue where clicking twice on the URL output by &lt;code&gt;datasette --root&lt;/code&gt; produced a confusing error. (&lt;a href="https://github.com/simonw/datasette/issues/2375"&gt;#2375&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;h4 id="tricks-to-help-construct-the-release-notes"&gt;Tricks to help construct the release notes&lt;/h4&gt;
&lt;p&gt;I still write the Datasette release notes entirely by hand (aside from a few words auto-completed by GitHub Copilot) - I find the process of writing them to be really useful as a way to construct a final review of everything before it goes out.&lt;/p&gt;
&lt;p&gt;I used a couple of tricks to help this time. I always start my longer release notes &lt;a href="https://github.com/simonw/datasette/issues/2381"&gt;with an issue&lt;/a&gt;. The GitHub &lt;a href="https://github.com/simonw/datasette/compare/1.0a13...2ad51baa31bfba7940c739e99d4270f563a77290"&gt;diff view&lt;/a&gt; is useful for seeing what's changed since the last release, but I took it a step further this time with the following shell command:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;git log --pretty=format:&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;- %ad: %s %h&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; --date=short --reverse 1.0a13...81b68a14&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This outputs a summary of each commit in the range, looking like this (truncated):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;- 2024-03-12: Added two things I left out of the 1.0a13 release notes 8b6f155b
- 2024-03-15: Fix httpx warning about app=self.app, refs #2307 5af68377
- 2024-03-15: Fixed cookies= httpx warning, refs #2307 54f5604c
...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Crucially, the syntax of this output is in GitHub Flavored Markdown - and pasting it into an issue comment causes both the issue references and the commit hashes to be expanded into links that &lt;a href="https://github.com/simonw/datasette/issues/2381#issuecomment-2269759462"&gt;look like this&lt;/a&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2024/release-notes-issues-markdown.jpg" alt="2024-03-12: Added two things I left out of the 1.0a13 release notes 8b6f155 2024-03-15: Fix httpx warning about app=self.app, refs Fix httpx deprecation warnings #2307 5af6837 2024-03-15: Fixed cookies= httpx warning, refs Fix httpx deprecation warnings #2307 54f5604" style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;p&gt;It's a neat way to get a quick review of what's changed, and also means that those issues will automatically link back to the new issue where I'm constructing the release notes.&lt;/p&gt;
&lt;p&gt;I wrote this up &lt;a href="https://til.simonwillison.net/github/release-note-assistance"&gt;in a TIL here&lt;/a&gt;, along with another trick I used where I used &lt;a href="https://llm.datasette.io/"&gt;LLM&lt;/a&gt; to get Claude 3.5 Sonnet to summarize my changes for me:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;curl &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;https://github.com/simonw/datasette/compare/1.0a13...2ad51baa3.diff&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; \
  &lt;span class="pl-k"&gt;|&lt;/span&gt; llm -m claude-3.5-sonnet --system \
  &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;generate a short summary of these changes, then a bullet point list of detailed release notes&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sqlite"&gt;sqlite&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/annotated-release-notes"&gt;annotated-release-notes&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sqlite-busy"&gt;sqlite-busy&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="projects"/><category term="releases"/><category term="sqlite"/><category term="datasette"/><category term="annotated-release-notes"/><category term="llm"/><category term="sqlite-busy"/></entry><entry><title>Datasette 0.64.8</title><link href="https://simonwillison.net/2024/Jun/21/datasette-0648/#atom-tag" rel="alternate"/><published>2024-06-21T23:48:43+00:00</published><updated>2024-06-21T23:48:43+00:00</updated><id>https://simonwillison.net/2024/Jun/21/datasette-0648/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://docs.datasette.io/en/stable/changelog.html#v0-64-8"&gt;Datasette 0.64.8&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
A very small Datasette release, fixing &lt;a href="https://github.com/simonw/datasette/issues/2359"&gt;a minor potential security issue&lt;/a&gt; where the name of missing databases or tables was reflected on the 404 page in a way that could allow an attacker to present arbitrary text to a user who followed a link. Not an XSS attack (no code could be executed) but still a potential vector for confusing messages.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;



</summary><category term="projects"/><category term="releases"/><category term="security"/><category term="datasette"/></entry><entry><title>Datasette 1.0 alpha series leaks names of databases and tables to unauthenticated users</title><link href="https://simonwillison.net/2023/Aug/22/datasette-alpha/#atom-tag" rel="alternate"/><published>2023-08-22T17:44:31+00:00</published><updated>2023-08-22T17:44:31+00:00</updated><id>https://simonwillison.net/2023/Aug/22/datasette-alpha/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/simonw/datasette/security/advisories/GHSA-7ch3-7pp7-7cpq"&gt;Datasette 1.0 alpha series leaks names of databases and tables to unauthenticated users&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I found and fixed a security vulnerability in the Datasette 1.0 alpha series, described in this GitHub security advisory.&lt;/p&gt;

&lt;p&gt;The vulnerability allowed unauthenticated users to see the names of the databases and tables in an otherwise private Datasette instance—though not the actual table contents.&lt;/p&gt;

&lt;p&gt;The fix is now shipped in Datasette 1.0a4.&lt;/p&gt;

&lt;p&gt;The vulnerability affected Datasette Cloud as well, but thankfully I was able to analyze the access logs and confirm that no unauthenticated requests had been made against any of the affected endpoints.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;



</summary><category term="releases"/><category term="security"/><category term="datasette"/></entry><entry><title>LLM 0.4</title><link href="https://simonwillison.net/2023/Jun/17/llm/#atom-tag" rel="alternate"/><published>2023-06-17T22:58:34+00:00</published><updated>2023-06-17T22:58:34+00:00</updated><id>https://simonwillison.net/2023/Jun/17/llm/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://llm.datasette.io/en/stable/changelog.html#v0-4"&gt;LLM 0.4&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I released a major update to my LLM CLI tool today—version 0.4, which adds conversation mode and prompt templates so you can store and re-use interesting prompts, plus a whole bunch of other large and small improvements.&lt;/p&gt;

&lt;p&gt;I also released 0.4.1 with some minor fixes and the ability to install the tool using Homebrew: &lt;code&gt;brew install simonw/llm/llm&lt;/code&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/cli"&gt;cli&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/openai"&gt;openai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/chatgpt"&gt;chatgpt&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;&lt;/p&gt;



</summary><category term="cli"/><category term="projects"/><category term="releases"/><category term="ai"/><category term="openai"/><category term="generative-ai"/><category term="chatgpt"/><category term="llms"/></entry><entry><title>Django 1.2 release notes</title><link href="https://simonwillison.net/2010/May/17/django/#atom-tag" rel="alternate"/><published>2010-05-17T21:11:00+00:00</published><updated>2010-05-17T21:11:00+00:00</updated><id>https://simonwillison.net/2010/May/17/django/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://docs.djangoproject.com/en/dev/releases/1.2/"&gt;Django 1.2 release notes&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Released today, this is a terrific upgrade. Multiple database connections, model validation, improved CSRF protection, a messages framework, the new smart if template tag and lots, lots more. I’ve been using the 1.2 betas for a major new project over the past few months and it’s been smooth sailing all the way.

    &lt;p&gt;&lt;small&gt;Via &lt;a href="http://www.djangoproject.com/weblog/2010/may/17/12/"&gt;Django 1.2 released&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/csrf"&gt;csrf&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/django"&gt;django&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/multidb"&gt;multidb&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/open-source"&gt;open-source&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/recovered"&gt;recovered&lt;/a&gt;&lt;/p&gt;



</summary><category term="csrf"/><category term="django"/><category term="multidb"/><category term="open-source"/><category term="python"/><category term="releases"/><category term="recovered"/></entry><entry><title>What's new in Django 1.2 alpha 1</title><link href="https://simonwillison.net/2010/Jan/7/django/#atom-tag" rel="alternate"/><published>2010-01-07T19:31:50+00:00</published><updated>2010-01-07T19:31:50+00:00</updated><id>https://simonwillison.net/2010/Jan/7/django/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://docs.djangoproject.com/en/dev/releases/1.2-alpha-1/#what-s-new-in-django-1-2-alpha-1"&gt;What&amp;#x27;s new in Django 1.2 alpha 1&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Multiple database support, improved CSRF prevention, a messages framework (similar to the Rails “flash” feature), model validation, custom e-mail backends, template caching for much faster handling of the include and extends tags, read only fields in the admin, a better if tag and more. Very exciting release.

    &lt;p&gt;&lt;small&gt;Via &lt;a href="http://www.djangoproject.com/weblog/2010/jan/06/12-alpha-1/"&gt;Django Weblog&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/alpha"&gt;alpha&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/csrf"&gt;csrf&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/django"&gt;django&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/django-admin"&gt;django-admin&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;&lt;/p&gt;



</summary><category term="alpha"/><category term="csrf"/><category term="django"/><category term="django-admin"/><category term="python"/><category term="releases"/></entry><entry><title>Django 1.1 release notes</title><link href="https://simonwillison.net/2009/Jul/29/django/#atom-tag" rel="alternate"/><published>2009-07-29T09:34:04+00:00</published><updated>2009-07-29T09:34:04+00:00</updated><id>https://simonwillison.net/2009/Jul/29/django/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://docs.djangoproject.com/en/dev/releases/1.1/"&gt;Django 1.1 release notes&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Django 1.1 is out! Congratulations everyone who worked on this, it’s a fantastic release. New features include aggregate support in the ORM, proxy models, deferred fields and some really nice admin improvements. Oh, and the testing framework is now up to 10 times faster thanks to smart use of transactions.

    &lt;p&gt;&lt;small&gt;Via &lt;a href="http://www.djangoproject.com/weblog/2009/jul/29/1-point-1/"&gt;Django | Weblog | Django 1.1 released&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/aggregates"&gt;aggregates&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/django"&gt;django&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/django-admin"&gt;django-admin&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/open-source"&gt;open-source&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/orm"&gt;orm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;&lt;/p&gt;



</summary><category term="aggregates"/><category term="django"/><category term="django-admin"/><category term="open-source"/><category term="orm"/><category term="python"/><category term="releases"/></entry><entry><title>Nmap 5.00 Release Notes</title><link href="https://simonwillison.net/2009/Jul/16/nmap5/#atom-tag" rel="alternate"/><published>2009-07-16T19:40:10+00:00</published><updated>2009-07-16T19:40:10+00:00</updated><id>https://simonwillison.net/2009/Jul/16/nmap5/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://nmap.org/5/"&gt;Nmap 5.00 Release Notes&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Released today, “the most important Nmap release since 1997”. New features include Ncat, a powerful netcat alternative, Ndiff, a utility for comparing scan results so you can spot changes to your network, and a new Nmap Scripting Engine using Lua.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/lua"&gt;lua&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ncat"&gt;ncat&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ndiff"&gt;ndiff&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/netcat"&gt;netcat&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/nmap"&gt;nmap&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/tools"&gt;tools&lt;/a&gt;&lt;/p&gt;



</summary><category term="lua"/><category term="ncat"/><category term="ndiff"/><category term="netcat"/><category term="nmap"/><category term="releases"/><category term="security"/><category term="tools"/></entry><entry><title>What's New In Python 3.1</title><link href="https://simonwillison.net/2009/Jun/28/python31/#atom-tag" rel="alternate"/><published>2009-06-28T15:02:09+00:00</published><updated>2009-06-28T15:02:09+00:00</updated><id>https://simonwillison.net/2009/Jun/28/python31/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://docs.python.org/3.1/whatsnew/3.1.html"&gt;What&amp;#x27;s New In Python 3.1&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Lots of stuff, but the best bits are an ordered dictionary type (congrats, Armin), a Counter class for counting unique items in an iterable (I do this on an almost daily basis) and a bunch of performance improvements including a rewrite of the Python 3.0 IO system in C.
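&lt;p&gt;That daily counting task becomes a one-liner with the new class (example data is mine, not from the release notes):&lt;/p&gt;

```python
from collections import Counter

# Count occurrences of each unique item in an iterable
tags = ["python", "django", "python", "releases", "python"]
counts = Counter(tags)
top = counts.most_common(1)  # most frequent item with its count
```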


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/armin-ronacher"&gt;armin-ronacher&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/performance"&gt;performance&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python3"&gt;python3&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python31"&gt;python31&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;&lt;/p&gt;



</summary><category term="armin-ronacher"/><category term="performance"/><category term="python"/><category term="python3"/><category term="python31"/><category term="releases"/></entry><entry><title>Dojo 1.3 now available</title><link href="https://simonwillison.net/2009/Apr/1/dojo/#atom-tag" rel="alternate"/><published>2009-04-01T00:19:25+00:00</published><updated>2009-04-01T00:19:25+00:00</updated><id>https://simonwillison.net/2009/Apr/1/dojo/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://www.dojotoolkit.org/2009/03/31/dojo-1-3-now-available"&gt;Dojo 1.3 now available&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Looks like an excellent release. dojo.create is particularly nice—I’d be interested to know why something similar has never shipped with jQuery (presumably there’s a reason) as it feels a lot more elegant than gluing together an HTML-style string. Also interesting: you can swap between Dojo’s Acme selector engine and John Resig’s Sizzle.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/acme"&gt;acme&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/dojo"&gt;dojo&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/dojocreate"&gt;dojocreate&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/javascript"&gt;javascript&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/jquery"&gt;jquery&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/selectors"&gt;selectors&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sizzle"&gt;sizzle&lt;/a&gt;&lt;/p&gt;



</summary><category term="acme"/><category term="dojo"/><category term="dojocreate"/><category term="javascript"/><category term="jquery"/><category term="releases"/><category term="selectors"/><category term="sizzle"/></entry><entry><title>Python 3.0</title><link href="https://simonwillison.net/2008/Dec/4/python/#atom-tag" rel="alternate"/><published>2008-12-04T12:38:41+00:00</published><updated>2008-12-04T12:38:41+00:00</updated><id>https://simonwillison.net/2008/Dec/4/python/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://www.python.org/download/releases/3.0/"&gt;Python 3.0&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
“We are pleased to announce the release of Python 3.0 (final), a new production-ready release, on December 3rd, 2008.”


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python3"&gt;python3&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;&lt;/p&gt;



</summary><category term="python"/><category term="python3"/><category term="releases"/></entry><entry><title>Django 1.0.2 released</title><link href="https://simonwillison.net/2008/Nov/19/django/#atom-tag" rel="alternate"/><published>2008-11-19T08:46:27+00:00</published><updated>2008-11-19T08:46:27+00:00</updated><id>https://simonwillison.net/2008/Nov/19/django/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://www.djangoproject.com/weblog/2008/nov/18/102/"&gt;Django 1.0.2 released&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
An update to last week’s 1.0.1 release, which I failed to link to. 1.0.2 mainly fixes some packaging issues, while 1.0.1 contains “over two hundred fixes to the original Django 1.0 codebase”. The team is keeping its promise to move to a regular release cycle after 1.0.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/django"&gt;django&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;&lt;/p&gt;



</summary><category term="django"/><category term="python"/><category term="releases"/></entry><entry><title>What's New in Python 2.6</title><link href="https://simonwillison.net/2008/Oct/2/whatus/#atom-tag" rel="alternate"/><published>2008-10-02T11:47:21+00:00</published><updated>2008-10-02T11:47:21+00:00</updated><id>https://simonwillison.net/2008/Oct/2/whatus/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://docs.python.org/whatsnew/2.6.html"&gt;What&amp;#x27;s New in Python 2.6&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Python 2.6 final has been released (the last 2.x version before 3.0). multiprocessing and simplejson (as json) are now in the standard library, backwards-compatible 3.0 features have been backported, and the official docs are now powered by Sphinx (used by Django 1.0 as well). There’s plenty more.

    &lt;p&gt;&lt;small&gt;Via &lt;a href="http://www.python.org/download/releases/2.6/"&gt;Python 2.6 Release&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/django"&gt;django&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/json"&gt;json&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/multiprocessing"&gt;multiprocessing&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/simplejson"&gt;simplejson&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sphinx-docs"&gt;sphinx-docs&lt;/a&gt;&lt;/p&gt;



</summary><category term="django"/><category term="json"/><category term="multiprocessing"/><category term="python"/><category term="releases"/><category term="simplejson"/><category term="sphinx-docs"/></entry><entry><title>Django's release process</title><link href="https://simonwillison.net/2008/Sep/20/release/#atom-tag" rel="alternate"/><published>2008-09-20T19:16:17+00:00</published><updated>2008-09-20T19:16:17+00:00</updated><id>https://simonwillison.net/2008/Sep/20/release/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://docs.djangoproject.com/en/dev/internals/release-process/"&gt;Django&amp;#x27;s release process&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Django is moving to time-based releases, with minor releases (new features but no backwards incompatible changes) approximately every six months.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/django"&gt;django&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;&lt;/p&gt;



</summary><category term="django"/><category term="python"/><category term="releases"/></entry><entry><title>[Release] CouchDB 0.7.0</title><link href="https://simonwillison.net/2007/Nov/17/couchdb/#atom-tag" rel="alternate"/><published>2007-11-17T00:25:10+00:00</published><updated>2007-11-17T00:25:10+00:00</updated><id>https://simonwillison.net/2007/Nov/17/couchdb/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://jan.prima.de/~jan/plok/archives/106-Release-CouchDB-0.7.0.html"&gt;[Release] CouchDB 0.7.0&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
This is a huge milestone for the project—it’s the first official release to include the JSON REST API instead of XML, and it’s also the first release that is “intended for widespread use”.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/couchdb"&gt;couchdb&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/jan-lehnardt"&gt;jan-lehnardt&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/json"&gt;json&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/rest"&gt;rest&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/xml"&gt;xml&lt;/a&gt;&lt;/p&gt;



</summary><category term="couchdb"/><category term="jan-lehnardt"/><category term="json"/><category term="releases"/><category term="rest"/><category term="xml"/></entry><entry><title>Opera 9.5 (Kestrel)</title><link href="https://simonwillison.net/2007/Sep/4/opera/#atom-tag" rel="alternate"/><published>2007-09-04T10:49:00+00:00</published><updated>2007-09-04T10:49:00+00:00</updated><id>https://simonwillison.net/2007/Sep/4/opera/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://annevankesteren.nl/2007/09/kestrel-alpha"&gt;Opera 9.5 (Kestrel)&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
The latest Opera alpha includes a bunch of CSS3 features (including an almost full implementation of CSS3 Selectors) as well as the ability to use SVG for scalable background images.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/anne-van-kesteren"&gt;anne-van-kesteren&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/browsers"&gt;browsers&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/css3"&gt;css3&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/opera"&gt;opera&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/opera95"&gt;opera95&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/selectors"&gt;selectors&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/svg"&gt;svg&lt;/a&gt;&lt;/p&gt;



</summary><category term="anne-van-kesteren"/><category term="browsers"/><category term="css3"/><category term="opera"/><category term="opera95"/><category term="releases"/><category term="selectors"/><category term="svg"/></entry><entry><title>Satchmo 0.5 Release</title><link href="https://simonwillison.net/2007/Aug/22/satchmo/#atom-tag" rel="alternate"/><published>2007-08-22T22:36:16+00:00</published><updated>2007-08-22T22:36:16+00:00</updated><id>https://simonwillison.net/2007/Aug/22/satchmo/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://satchmoproject.com/blog/2007/aug/22/satchmo-05-release/"&gt;Satchmo 0.5 Release&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Django powered e-commerce application, “the webshop for perfectionists with deadlines”.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/django"&gt;django&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/releases"&gt;releases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/satchmo"&gt;satchmo&lt;/a&gt;&lt;/p&gt;



</summary><category term="django"/><category term="python"/><category term="releases"/><category term="satchmo"/></entry></feed>