<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en-us" xmlns="http://www.w3.org/2005/Atom"><title>Simon Willison's Weblog: mandelbrot</title><link href="http://simonwillison.net/" rel="alternate"/><link href="http://simonwillison.net/tags/mandelbrot.atom" rel="self"/><id>http://simonwillison.net/</id><updated>2025-07-02T04:35:35+00:00</updated><author><name>Simon Willison</name></author><entry><title>Mandelbrot in x86 assembly by Claude</title><link href="https://simonwillison.net/2025/Jul/2/mandelbrot-in-x86-assembly-by-claude/#atom-tag" rel="alternate"/><published>2025-07-02T04:35:35+00:00</published><updated>2025-07-02T04:35:35+00:00</updated><id>https://simonwillison.net/2025/Jul/2/mandelbrot-in-x86-assembly-by-claude/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://gist.github.com/simonw/ba1e9fa26fc8af08934d7bc0805b9b80"&gt;Mandelbrot in x86 assembly by Claude&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Inspired by &lt;a href="https://twitter.com/russnelson/status/1940144705192542369"&gt;a tweet&lt;/a&gt; asking if Claude knew x86 assembly, I decided to run a bit of an experiment.&lt;/p&gt;
&lt;p&gt;I prompted Claude Sonnet 4:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;Write me an ascii art mandelbrot fractal generator in x86 assembly&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And &lt;a href="https://claude.ai/share/abda7710-16f8-4d6d-9012-6b342fff175c"&gt;got back code&lt;/a&gt; that looked... like assembly code I guess?&lt;/p&gt;
&lt;p&gt;So I copied some jargon out of that response and asked:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;I have some code written for x86-64 assembly using NASM syntax, targeting Linux (using system calls for output).&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;How can I run that on my Mac?&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That &lt;a href="https://claude.ai/share/27107b9a-f40e-4f81-9348-5944a3c4e82f"&gt;gave me a Dockerfile&lt;/a&gt;.&lt;/p&gt;
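&lt;p&gt;For anyone curious, here's a sketch of the general shape such a Dockerfile takes - this is my own minimal reconstruction, not Claude's exact output, and the &lt;code&gt;program.asm&lt;/code&gt; filename is illustrative:&lt;/p&gt;

```dockerfile
# Build: docker build --platform linux/amd64 -t myasm .
# Run:   docker run --rm --platform linux/amd64 myasm
FROM --platform=linux/amd64 ubuntu:22.04
RUN apt-get update
RUN apt-get install -y nasm binutils
WORKDIR /app
COPY program.asm .
# Assemble to 64-bit ELF and link without the C runtime,
# since the program makes raw Linux system calls for output
RUN nasm -f elf64 program.asm -o program.o
RUN ld program.o -o program
CMD ["./program"]
```

&lt;p&gt;On Apple Silicon the &lt;code&gt;--platform linux/amd64&lt;/code&gt; flag is what makes Docker run the x86-64 binary under emulation.&lt;/p&gt;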
&lt;p&gt;I tried running it on my Mac and... it failed to compile.&lt;/p&gt;
&lt;p&gt;So I fired up Claude Code (with the &lt;code&gt;--dangerously-skip-permissions&lt;/code&gt; option) in that directory and told it what to run:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;Run this: docker build -t myasm .&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It started crunching. It read the errors, inspected the assembly code, made changes, tried running it again in a loop, added more comments...&lt;/p&gt;
&lt;p&gt;&lt;img alt="Terminal window showing code diff for Mandelbrot fractal program. • Now I need to fix the Mandelbrot iteration logic as well. The current fixed-point arithmetic has issues: • Read(program.asm) └ Read 30 lines (Ctrl+r to expand) • The Mandelbrot iteration logic also has issues with fixed-point arithmetic. Let me fix this too: • Update(program.asm) └ Updated program.asm with 18 additions and 12 removals showing assembly code diff with line numbers 63-91, including removed lines marked in red for fixed point multiplication calculations and added lines in green with 64-bit arithmetic conversions, comments describing z_real and z_imag calculations, and mandel_done logic checks." src="https://static.simonwillison.net/static/2025/claude-code-x86.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;...and eventually produced an ASCII art image of a fractal! Claude Code proudly declared:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Success! The Docker container ran and generated a Mandelbrot set visualization using ASCII characters. The output shows all @ symbols, which indicates the algorithm is working but all points are inside the set (high iteration count). The program compiled and executed successfully on x86_64 architecture via emulation.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Yeah... I was not satisfied with an image that was entirely @ symbols! I said:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;That is a bad result - it is just @ signs, I cannot see the fractal&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Claude made a few more changes and gave me the result I wanted:&lt;/p&gt;
&lt;p&gt;&lt;img alt="A terminal window showing a pleasing ASCII art Mandelbrot set, zoomed to the expected level" src="https://static.simonwillison.net/static/2025/mandelbrot-x86.jpg" /&gt;&lt;/p&gt;
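&lt;p&gt;The all-@ symptom has a simple cause: if the escape test never fires (for example because the fixed-point multiplication overflows), every point hits the iteration cap and maps to the densest character. A minimal Python sketch of the iteration-to-character mapping the assembly implements - the palette string is my own illustration, not Claude's:&lt;/p&gt;

```python
MAX_ITER = 100
PALETTE = " .:-=+*#%@"  # sparse to dense; indexed by how quickly the point escapes

def mandel_char(cr: float, ci: float) -> str:
    """Iterate z = z^2 + c; return an ASCII char based on escape speed."""
    zr = zi = 0.0
    for i in range(MAX_ITER):
        if zr * zr + zi * zi > 4.0:  # |z| > 2: the point has escaped
            return PALETTE[i * (len(PALETTE) - 1) // MAX_ITER]
        zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
    return PALETTE[-1]  # never escaped: inside the set, densest char
```

&lt;p&gt;If the escape check is broken, every call falls through to that final return - which is exactly an image of nothing but @ signs.&lt;/p&gt;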
&lt;p&gt;Here's the finished &lt;a href="https://gist.github.com/simonw/ba1e9fa26fc8af08934d7bc0805b9b80#file-program-asm"&gt;assembly code&lt;/a&gt;, the &lt;a href="https://gist.github.com/simonw/ba1e9fa26fc8af08934d7bc0805b9b80#file-dockerfile"&gt;Dockerfile&lt;/a&gt; to run it on a Mac and the &lt;a href="https://gist.github.com/simonw/ba1e9fa26fc8af08934d7bc0805b9b80#file-claude-code-txt"&gt;full transcript&lt;/a&gt; of the Claude Code session that got it there.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/mandelbrot"&gt;mandelbrot&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/anthropic"&gt;anthropic&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/claude"&gt;claude&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/vibe-coding"&gt;vibe-coding&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/claude-code"&gt;claude-code&lt;/a&gt;&lt;/p&gt;



</summary><category term="mandelbrot"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="ai-assisted-programming"/><category term="anthropic"/><category term="claude"/><category term="vibe-coding"/><category term="claude-code"/></entry><entry><title>Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac</title><link href="https://simonwillison.net/2024/Nov/12/qwen25-coder/#atom-tag" rel="alternate"/><published>2024-11-12T23:37:36+00:00</published><updated>2024-11-12T23:37:36+00:00</updated><id>https://simonwillison.net/2024/Nov/12/qwen25-coder/#atom-tag</id><summary type="html">
    &lt;p&gt;There's a whole lot of buzz around the new &lt;a href="https://qwenlm.github.io/blog/qwen2.5-coder-family/"&gt;Qwen2.5-Coder Series&lt;/a&gt; of open source (Apache 2.0 licensed) LLM releases from Alibaba's Qwen research team. On first impression it looks like the buzz is well deserved.&lt;/p&gt;
&lt;p&gt;Qwen claim:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Qwen2.5-Coder-32B-Instruct has become the current SOTA open-source code model, matching the coding capabilities of GPT-4o.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That's a &lt;em&gt;big&lt;/em&gt; claim for a 32B model that's small enough to run on my 64GB MacBook Pro M2. The scores Qwen published look impressive, comparing favorably with GPT-4o and the Claude 3.5 Sonnet (October 2024) edition across various code-related benchmarks:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2024/qwen-scores.jpg" alt="In benchmark comparisons, Qwen 2.5 Coder (32B Instruct) outperforms both GPT-4o and Claude 3.5 Sonnet on LiveCodeBench, Spider, and BIRD-SQL metrics, falls behind on MBPP, Aider, and CodeArena, shows mixed results on MultiPL-E, and performs similarly on HumanEval and McEval benchmarks." style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;p&gt;How about benchmarks from other researchers? Paul Gauthier's &lt;a href="https://aider.chat/docs/leaderboards/"&gt;Aider benchmarks&lt;/a&gt; have a great reputation and &lt;a href="https://twitter.com/paulgauthier/status/1856018124031832236"&gt;Paul reports&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The new Qwen 2.5 Coder models did very well on aider's code editing benchmark. The 32B Instruct model scored in between GPT-4o and 3.5 Haiku.&lt;/p&gt;
&lt;p&gt;84% 3.5 Sonnet,
75% 3.5 Haiku,
74% Qwen2.5 Coder 32B,
71% GPT-4o,
69% Qwen2.5 Coder 14B,
58% Qwen2.5 Coder 7B&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2024/qwen-paul.jpg" alt="Those numbers as a chart" style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That was for the Aider "whole edit" benchmark. The "diff" benchmark &lt;a href="https://twitter.com/paulgauthier/status/1856042640279777420"&gt;scores well&lt;/a&gt; too, with Qwen2.5 Coder 32B tying with GPT-4o (but a little behind Claude 3.5 Haiku).&lt;/p&gt;
&lt;p&gt;Given these scores (and the &lt;a href="https://www.reddit.com/r/LocalLLaMA/comments/1gp84in/qwen25coder_32b_the_ai_thats_revolutionizing/"&gt;positive buzz on Reddit&lt;/a&gt;) I had to try it for myself.&lt;/p&gt;
&lt;p&gt;My attempts to run the &lt;a href="https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct-GGUF"&gt;Qwen/Qwen2.5-Coder-32B-Instruct-GGUF&lt;/a&gt; Q8 using &lt;a href="https://github.com/simonw/llm-gguf"&gt;llm-gguf&lt;/a&gt; were a bit too slow, because I don't have that compiled to use my Mac's GPU at the moment.&lt;/p&gt;
&lt;p&gt;But both the &lt;a href="https://ollama.com/"&gt;Ollama&lt;/a&gt; version &lt;em&gt;and&lt;/em&gt; the &lt;a href="https://github.com/ml-explore/mlx"&gt;MLX&lt;/a&gt; version worked great!&lt;/p&gt;
&lt;p&gt;I installed the Ollama version using:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ollama pull qwen2.5-coder:32b
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That fetched a 20GB quantized file. I ran a prompt through that using my &lt;a href="https://llm.datasette.io/"&gt;LLM&lt;/a&gt; tool and Sergey Alexandrov's &lt;a href="https://github.com/taketwo/llm-ollama"&gt;llm-ollama&lt;/a&gt; plugin like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;llm install llm-ollama
llm models # Confirming the new model is present
llm -m qwen2.5-coder:32b 'python function that takes URL to a CSV file and path to a SQLite database, fetches the CSV with the standard library, creates a table with the right columns and inserts the data'
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here's &lt;a href="https://gist.github.com/simonw/0a47f9e35a50d4e25a47826f4ab75dda"&gt;the result&lt;/a&gt;. The code worked, but I had to work around a frustrating &lt;code&gt;ssl&lt;/code&gt; bug first (which wouldn't have been an issue if I'd allowed the model to use &lt;code&gt;requests&lt;/code&gt; or &lt;code&gt;httpx&lt;/code&gt; instead of the standard library).&lt;/p&gt;
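&lt;p&gt;For reference, here's roughly the shape of function that prompt asks for, written by hand using only the standard library - the function name, column handling and everything-as-TEXT choice are mine, not the model's output:&lt;/p&gt;

```python
import csv
import io
import sqlite3
import urllib.request

def csv_url_to_sqlite(url: str, db_path: str, table: str = "data") -> int:
    """Fetch a CSV from url, create a matching table and load it; return row count."""
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8")
    rows = list(csv.reader(io.StringIO(text)))
    headers, body = rows[0], rows[1:]
    conn = sqlite3.connect(db_path)
    # Create one TEXT column per CSV header
    cols = ", ".join('"{}" TEXT'.format(h) for h in headers)
    conn.execute('CREATE TABLE "{}" ({})'.format(table, cols))
    placeholders = ", ".join("?" for _ in headers)
    conn.executemany('INSERT INTO "{}" VALUES ({})'.format(table, placeholders), body)
    conn.commit()
    conn.close()
    return len(body)
```

&lt;p&gt;Note that &lt;code&gt;urllib&lt;/code&gt; is exactly where the model's version tripped over that &lt;code&gt;ssl&lt;/code&gt; issue.&lt;/p&gt;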
&lt;p&gt;I also tried running it on MLX, the Apple Silicon fast array framework, using the &lt;a href="https://github.com/ml-explore/mlx-lm"&gt;mlx-lm&lt;/a&gt; library directly, run via &lt;a href="https://github.com/astral-sh/uv"&gt;uv&lt;/a&gt; like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;uv run --with mlx-lm \
  mlx_lm.generate \
  --model mlx-community/Qwen2.5-Coder-32B-Instruct-8bit \
  --max-tokens 4000 \
  --prompt 'write me a python function that renders a mandelbrot fractal as wide as the current terminal'
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That gave me a &lt;em&gt;very&lt;/em&gt; &lt;a href="https://gist.github.com/simonw/1cc1e0418a04dbd19cd281cf9b43666f"&gt;satisfying result&lt;/a&gt; - when I ran the code it generated in a terminal I got this:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2024/mlx-fractal.jpg" alt="macOS terminal window displaying a pleasing mandelbrot fractal as ASCII art" style="max-width: 100%;" /&gt;&lt;/p&gt;
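&lt;p&gt;The interesting wrinkle in that prompt is "as wide as the current terminal". A hand-written sketch of how that fits together - query the terminal size, derive a height from the character aspect ratio, then map each cell to a point in the complex plane (all names and constants here are my own, not the model's):&lt;/p&gt;

```python
import shutil

CHARS = " .:-=+*#%@"  # sparse to dense

def escape_count(cr, ci, max_iter=50):
    """Number of iterations before z = z^2 + c escapes |z| > 2."""
    zr = zi = 0.0
    for i in range(max_iter):
        if zr * zr + zi * zi > 4.0:
            return i
        zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
    return max_iter

def render_mandelbrot(width=None, max_iter=50):
    """Render the set as ASCII art sized to the terminal width."""
    if width is None:
        width = shutil.get_terminal_size().columns
    height = max(1, width // 3)  # a terminal cell is roughly 2-3x taller than wide
    rows = []
    for row in range(height):
        ci = -1.2 + 2.4 * row / height
        line = []
        for col in range(width):
            cr = -2.0 + 3.0 * col / width
            i = escape_count(cr, ci, max_iter)
            line.append(CHARS[i * (len(CHARS) - 1) // max_iter])
        rows.append("".join(line))
    return "\n".join(rows)
```

&lt;p&gt;Calling &lt;code&gt;print(render_mandelbrot())&lt;/code&gt; in a terminal produces the same kind of output as the screenshot above.&lt;/p&gt;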

&lt;p&gt;MLX reported the following performance metrics:&lt;/p&gt;
&lt;pre&gt;Prompt: 49 tokens, 95.691 tokens-per-sec
Generation: 723 tokens, 10.016 tokens-per-sec
Peak memory: 32.685 GB&lt;/pre&gt;

&lt;p&gt;Let's see how it does on the &lt;a href="https://simonwillison.net/2024/Oct/25/pelicans-on-a-bicycle/"&gt;Pelican on a bicycle benchmark&lt;/a&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;llm -m qwen2.5-coder:32b 'Generate an SVG of a pelican riding a bicycle'&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Here's &lt;a href="https://gist.github.com/simonw/56217af454695a90be2c8e09c703198a"&gt;what I got&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2024/qwen-pelican.svg" alt="A jumble of shapes. The pelican has a yellow body, a black head and a weird proboscis kind of thing. The bicycle is several brown overlapping shapes that looks a bit like a tractor." /&gt;&lt;/p&gt;

&lt;p&gt;Questionable pelican SVG drawings aside, this is a really promising development. A roughly 32GB memory footprint is just small enough that I can run the model on my Mac without having to quit every other application I'm running, and both the speed and the quality of the results feel genuinely competitive with the current best of the hosted models.&lt;/p&gt;

&lt;p&gt;Given that code assistance is probably around 80% of my LLM usage at the moment this is a meaningfully useful release for how I engage with this class of technology.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/mandelbrot"&gt;mandelbrot&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/open-source"&gt;open-source&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-in-china"&gt;ai-in-china&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ollama"&gt;ollama&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/qwen"&gt;qwen&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/paul-gauthier"&gt;paul-gauthier&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/mlx"&gt;mlx&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm-release"&gt;llm-release&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/local-llms"&gt;local-llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/uv"&gt;uv&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/pelican-riding-a-bicycle"&gt;pelican-riding-a-bicycle&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="mandelbrot"/><category term="open-source"/><category term="ai-in-china"/><category term="llm"/><category term="llms"/><category term="ollama"/><category term="qwen"/><category term="paul-gauthier"/><category term="mlx"/><category term="llm-release"/><category term="ai"/><category term="local-llms"/><category term="uv"/><category term="generative-ai"/><category term="ai-assisted-programming"/><category term="pelican-riding-a-bicycle"/></entry><entry><title>Claude 3.5 Sonnet</title><link href="https://simonwillison.net/2024/Jun/20/claude-35-sonnet/#atom-tag" rel="alternate"/><published>2024-06-20T18:01:26+00:00</published><updated>2024-06-20T18:01:26+00:00</updated><id>https://simonwillison.net/2024/Jun/20/claude-35-sonnet/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.anthropic.com/news/claude-3-5-sonnet"&gt;Claude 3.5 Sonnet&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Anthropic released a new model this morning, and I think it's likely now the single best available LLM. Claude 3 Opus was already mostly on-par with GPT-4o, and the new 3.5 Sonnet scores higher than Opus on almost all of Anthropic's internal evals.&lt;/p&gt;
&lt;p&gt;It's also twice the speed and one &lt;em&gt;fifth&lt;/em&gt; of the price of Opus (it's the same price as the previous Claude 3 Sonnet). To compare:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;gpt-4o: $5/million input tokens and $15/million output&lt;/li&gt;
&lt;li&gt;Claude 3.5 Sonnet: $3/million input, $15/million output&lt;/li&gt;
&lt;li&gt;Claude 3 Opus: $15/million input, $75/million output&lt;/li&gt;
&lt;/ul&gt;
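&lt;p&gt;Those list prices make the "one fifth" claim easy to check - a quick cost comparison, with the token counts below invented purely for illustration:&lt;/p&gt;

```python
# Dollars per million tokens: (input, output)
PRICES = {
    "gpt-4o": (5, 15),
    "claude-3.5-sonnet": (3, 15),
    "claude-3-opus": (15, 75),
}

def cost(model, input_tokens, output_tokens):
    """Dollar cost of one call at the listed per-million-token rates."""
    inp, out = PRICES[model]
    return (input_tokens * inp + output_tokens * out) / 1_000_000

# Example: a 10,000-token prompt producing a 2,000-token response.
# Opus charges exactly 5x Sonnet on both input and output rates,
# so the total is always 5x regardless of the input/output mix.
```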
&lt;p&gt;Similar to Claude 3 Haiku then, which &lt;a href="https://simonwillison.net/2024/Mar/13/llm-claude-3-03/"&gt;both undercuts and outperforms&lt;/a&gt; OpenAI's GPT-3.5 model.&lt;/p&gt;
&lt;p&gt;In addition to the new model, Anthropic also added an "artifacts" feature to their Claude web interface. The most exciting part of this is that any of the Claude models can now build &lt;em&gt;and then render&lt;/em&gt; web pages and SPAs, directly in the Claude interface.&lt;/p&gt;
&lt;p&gt;This means you can prompt them to e.g. "Build me a web app that teaches me about mandelbrot fractals, with interactive widgets" and they'll do exactly that - I tried that prompt on Claude 3.5 Sonnet earlier and &lt;a href="https://fedi.simonwillison.net/@simon/112650324117263516"&gt;the results were spectacular&lt;/a&gt; (video demo).&lt;/p&gt;
&lt;p&gt;An unsurprising note at the end of the post:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;To complete the Claude 3.5 model family, we’ll be releasing Claude 3.5 Haiku and Claude 3.5 Opus later this year.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If the pricing stays consistent with Claude 3, Claude 3.5 Haiku is going to be a &lt;em&gt;very&lt;/em&gt; exciting model indeed.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/mandelbrot"&gt;mandelbrot&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/anthropic"&gt;anthropic&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/claude"&gt;claude&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/vision-llms"&gt;vision-llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/claude-artifacts"&gt;claude-artifacts&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/claude-3-5-sonnet"&gt;claude-3-5-sonnet&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm-release"&gt;llm-release&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/prompt-to-app"&gt;prompt-to-app&lt;/a&gt;&lt;/p&gt;



</summary><category term="mandelbrot"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="anthropic"/><category term="claude"/><category term="vision-llms"/><category term="claude-artifacts"/><category term="claude-3-5-sonnet"/><category term="llm-release"/><category term="prompt-to-app"/></entry><entry><title>textual-mandelbrot</title><link href="https://simonwillison.net/2023/Apr/1/textual-mandelbrot/#atom-tag" rel="alternate"/><published>2023-04-01T19:23:26+00:00</published><updated>2023-04-01T19:23:26+00:00</updated><id>https://simonwillison.net/2023/Apr/1/textual-mandelbrot/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/davep/textual-mandelbrot"&gt;textual-mandelbrot&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I love this: run &lt;code&gt;pipx install textual-mandelbrot&lt;/code&gt; and then &lt;code&gt;mandelexp&lt;/code&gt; to get an interactive Mandelbrot fractal exploration interface right there in your terminal, built on top of Textual. The code for this is only 250 lines of Python and delightfully easy to follow.&lt;/p&gt;

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://twitter.com/davepdotorg/status/1642083302038200322"&gt;@davepdotorg&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/mandelbrot"&gt;mandelbrot&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/textual"&gt;textual&lt;/a&gt;&lt;/p&gt;



</summary><category term="mandelbrot"/><category term="python"/><category term="textual"/></entry><entry><title>SQLite Query Language: WITH clause</title><link href="https://simonwillison.net/2017/Nov/26/sqlite-query-language-with-clause/#atom-tag" rel="alternate"/><published>2017-11-26T07:23:12+00:00</published><updated>2017-11-26T07:23:12+00:00</updated><id>https://simonwillison.net/2017/Nov/26/sqlite-query-language-with-clause/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://sqlite.org/lang_with.html"&gt;SQLite Query Language: WITH clause&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;SQLite’s documentation on recursive CTEs starts out with some nice clear examples of tree traversal using a WITH statement, then gets into graphs, then goes way off the deep end with a Mandelbrot Set query and a query that can solve Sudoku puzzles (“in less than 300 milliseconds on a modern workstation”).&lt;/p&gt;
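&lt;p&gt;The tree-traversal examples in those docs all follow the same pattern: an initial SELECT seeds the recursion and a second SELECT extends it until it stops producing rows. A minimal sketch run through Python's &lt;code&gt;sqlite3&lt;/code&gt; module - table and column names here are mine, not the documentation's:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE org(name TEXT PRIMARY KEY, boss TEXT REFERENCES org);
    INSERT INTO org VALUES ('Alice', NULL), ('Bob', 'Alice'), ('Carol', 'Bob');
""")
# Walk the reporting chain under Alice: the first SELECT seeds the
# recursion, the second joins each found person to their direct reports.
rows = conn.execute("""
    WITH RECURSIVE under_alice(name, level) AS (
        SELECT 'Alice', 0
        UNION ALL
        SELECT org.name, under_alice.level + 1
        FROM org JOIN under_alice ON org.boss = under_alice.name
    )
    SELECT name, level FROM under_alice ORDER BY level
""").fetchall()
```

&lt;p&gt;The Mandelbrot query in the SQLite docs uses exactly this mechanism, just with the recursive step iterating complex-number arithmetic instead of joining a table.&lt;/p&gt;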


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/mandelbrot"&gt;mandelbrot&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sql"&gt;sql&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sqlite"&gt;sqlite&lt;/a&gt;&lt;/p&gt;



</summary><category term="mandelbrot"/><category term="sql"/><category term="sqlite"/></entry><entry><title>Mandelbrot set in PostgreSQL</title><link href="https://simonwillison.net/2009/Aug/13/mandelbrot/#atom-tag" rel="alternate"/><published>2009-08-13T14:23:19+00:00</published><updated>2009-08-13T14:23:19+00:00</updated><id>https://simonwillison.net/2009/Aug/13/mandelbrot/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://wiki.postgresql.org/wiki/Mandelbrot_set"&gt;Mandelbrot set in PostgreSQL&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Surprisingly short SQL statement that produces an ASCII art Mandelbrot set.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ascii-art"&gt;ascii-art&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/fractals"&gt;fractals&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/mandelbrot"&gt;mandelbrot&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/postgresql"&gt;postgresql&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sql"&gt;sql&lt;/a&gt;&lt;/p&gt;



</summary><category term="ascii-art"/><category term="fractals"/><category term="mandelbrot"/><category term="postgresql"/><category term="sql"/></entry></feed>