<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en-us" xmlns="http://www.w3.org/2005/Atom"><title>Simon Willison's Weblog: natalie-downe</title><link href="http://simonwillison.net/" rel="alternate"/><link href="http://simonwillison.net/tags/natalie-downe.atom" rel="self"/><id>http://simonwillison.net/</id><updated>2025-03-14T23:13:55+00:00</updated><author><name>Simon Willison</name></author><entry><title>TIL: Styling an HTML dialog modal to take the full height of the viewport</title><link href="https://simonwillison.net/2025/Mar/14/styling-an-html-dialog/#atom-tag" rel="alternate"/><published>2025-03-14T23:13:55+00:00</published><updated>2025-03-14T23:13:55+00:00</updated><id>https://simonwillison.net/2025/Mar/14/styling-an-html-dialog/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://til.simonwillison.net/css/dialog-full-height"&gt;TIL: Styling an HTML dialog modal to take the full height of the viewport&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I spent some time today trying to figure out how to have a modal &lt;code&gt;&amp;lt;dialog&amp;gt;&lt;/code&gt; element present as a full-height side panel that animates in from the side. The full-height part was hard, until Natalie helped me figure out that browsers apply a default &lt;code&gt;max-height: calc(100% - 6px - 2em);&lt;/code&gt; rule which needs to be overridden.&lt;/p&gt;
&lt;p&gt;Also included: some &lt;a href="https://til.simonwillison.net/css/dialog-full-height#user-content-spelunking-through-the-html-specification"&gt;spelunking through the HTML spec&lt;/a&gt; to figure out where that &lt;code&gt;calc()&lt;/code&gt; expression was first introduced. The answer was &lt;a href="https://github.com/whatwg/html/commit/979af1532"&gt;November 2020&lt;/a&gt;.&lt;/p&gt;
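&lt;p&gt;As a rough sketch (the selector name here is illustrative, not from the TIL), the override looks something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/* Let a modal dialog span the full viewport height */
dialog.side-panel {
  max-height: 100%; /* overrides the default calc(100% - 6px - 2em) */
  height: 100%;
  margin: 0 0 0 auto; /* pin the panel to the right-hand edge */
}
&lt;/code&gt;&lt;/pre&gt;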


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/css"&gt;css&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/html"&gt;html&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/web-standards"&gt;web-standards&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/til"&gt;til&lt;/a&gt;&lt;/p&gt;



</summary><category term="css"/><category term="html"/><category term="natalie-downe"/><category term="web-standards"/><category term="til"/></entry><entry><title>Measuring traffic during the Half Moon Bay Pumpkin Festival</title><link href="https://simonwillison.net/2022/Oct/19/measuring-traffic/#atom-tag" rel="alternate"/><published>2022-10-19T15:41:09+00:00</published><updated>2022-10-19T15:41:09+00:00</updated><id>https://simonwillison.net/2022/Oct/19/measuring-traffic/#atom-tag</id><summary type="html">
    &lt;p&gt;This weekend was the &lt;a href="https://pumpkinfest.miramarevents.com/" rel="nofollow"&gt;50th annual Half Moon Bay Pumpkin Festival&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We live in El Granada, a tiny town an 8-minute drive from Half Moon Bay. There is a single road (coastal Highway 1) between the two towns, and the festival is locally notorious for its impact on traffic.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://twitter.com/natbat" rel="nofollow"&gt;Natalie&lt;/a&gt; suggested that we measure the traffic and try to see the impact for ourselves!&lt;/p&gt;
&lt;p&gt;Here's the end result for Saturday. Read on for details on how we created it.&lt;/p&gt;
&lt;p&gt;&lt;img alt="A chart showing the two lines over time" src="https://static.simonwillison.net/static/2022/pumpkin-saturday-smooth.png" style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;h4&gt;&lt;a id="user-content-collecting-the-data" class="anchor" aria-hidden="true" href="#collecting-the-data"&gt;&lt;span aria-hidden="true" class="octicon octicon-link"&gt;&lt;/span&gt;&lt;/a&gt;Collecting the data&lt;/h4&gt;
&lt;p&gt;I built a &lt;a href="https://simonwillison.net/2020/Oct/9/git-scraping/" rel="nofollow"&gt;git scraper&lt;/a&gt; to gather data from the Google Maps &lt;a href="https://developers.google.com/maps/documentation/directions/overview" rel="nofollow"&gt;Directions API&lt;/a&gt;. It turns out if you pass &lt;code&gt;departure_time=now&lt;/code&gt; to that API it returns the current estimated time in traffic as part of the response.&lt;/p&gt;
&lt;p&gt;I picked a location in Half Moon Bay and a location in El Granada and constructed the following URL (pretty-printed):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;https://maps.googleapis.com/maps/api/directions/json?
  origin=GG49%2BCH,%20Half%20Moon%20Bay%20CA
  &amp;amp;destination=FH78%2BQJ,%20Half%20Moon%20Bay,%20CA
  &amp;amp;departure_time=now
  &amp;amp;key=$GOOGLE_MAPS_KEY
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The two locations here are defined using Google Plus codes. Here they are on Google Maps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.google.com/maps/search/FH78%2BQJ+Half+Moon+Bay,+CA,+USA" rel="nofollow"&gt;FH78+QJ Half Moon Bay, CA, USA&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.google.com/maps/search/GG49%2BCH+El+Granada+CA,+USA" rel="nofollow"&gt;GG49+CH El Granada CA, USA&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
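&lt;p&gt;Here's a short Python sketch of that request (the helper names are mine; the nested &lt;code&gt;routes[0].legs[0].duration_in_traffic.value&lt;/code&gt; path is where the API puts the estimate):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import json
import urllib.parse
import urllib.request

API = "https://maps.googleapis.com/maps/api/directions/json"

def directions_url(origin, destination, key):
    # departure_time=now makes the API include duration_in_traffic
    return API + "?" + urllib.parse.urlencode({
        "origin": origin, "destination": destination,
        "departure_time": "now", "key": key,
    })

def duration_in_traffic(response):
    # Estimated seconds in traffic, or None if the value is missing
    try:
        return response["routes"][0]["legs"][0]["duration_in_traffic"]["value"]
    except (KeyError, IndexError):
        return None

# url = directions_url("GG49+CH, Half Moon Bay CA", "FH78+QJ, Half Moon Bay, CA", "YOUR_KEY")
# data = json.load(urllib.request.urlopen(url))
# duration_in_traffic(data) -&amp;gt; seconds, e.g. 1110
&lt;/code&gt;&lt;/pre&gt;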
&lt;p&gt;I constructed the reverse of the URL too, to track traffic in the other direction. Then I rigged up a scheduled GitHub Actions workflow in &lt;a href="https://github.com/simonw/scrape-hmb-traffic"&gt;this repository&lt;/a&gt; to fetch this API data, pretty-print it with &lt;code&gt;jq&lt;/code&gt; and write it to the repository:&lt;/p&gt;
&lt;div class="highlight highlight-source-yaml"&gt;&lt;pre&gt;&lt;span class="pl-ent"&gt;name&lt;/span&gt;: &lt;span class="pl-s"&gt;Scrape traffic&lt;/span&gt;

&lt;span class="pl-ent"&gt;on&lt;/span&gt;:
  &lt;span class="pl-ent"&gt;push&lt;/span&gt;:
  &lt;span class="pl-ent"&gt;workflow_dispatch&lt;/span&gt;:
  &lt;span class="pl-ent"&gt;schedule&lt;/span&gt;:
  - &lt;span class="pl-ent"&gt;cron&lt;/span&gt;:  &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;*/5 * * * *&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;

&lt;span class="pl-ent"&gt;jobs&lt;/span&gt;:
  &lt;span class="pl-ent"&gt;shot-scraper&lt;/span&gt;:
    &lt;span class="pl-ent"&gt;runs-on&lt;/span&gt;: &lt;span class="pl-s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="pl-ent"&gt;steps&lt;/span&gt;:
    - &lt;span class="pl-ent"&gt;uses&lt;/span&gt;: &lt;span class="pl-s"&gt;actions/checkout@v2&lt;/span&gt;
    - &lt;span class="pl-ent"&gt;name&lt;/span&gt;: &lt;span class="pl-s"&gt;Scrape&lt;/span&gt;
      &lt;span class="pl-ent"&gt;env&lt;/span&gt;:
        &lt;span class="pl-ent"&gt;GOOGLE_MAPS_KEY&lt;/span&gt;: &lt;span class="pl-s"&gt;${{ secrets.GOOGLE_MAPS_KEY }}&lt;/span&gt;
      &lt;span class="pl-ent"&gt;run&lt;/span&gt;: &lt;span class="pl-s"&gt;|        &lt;/span&gt;
&lt;span class="pl-s"&gt;        curl "https://maps.googleapis.com/maps/api/directions/json?origin=GG49%2BCH,%20Half%20Moon%20Bay%20CA&amp;amp;destination=FH78%2BQJ,%20Half%20Moon%20Bay,%20California&amp;amp;departure_time=now&amp;amp;key=$GOOGLE_MAPS_KEY" | jq &amp;gt; one.json&lt;/span&gt;
&lt;span class="pl-s"&gt;        sleep 3&lt;/span&gt;
&lt;span class="pl-s"&gt;        curl "https://maps.googleapis.com/maps/api/directions/json?origin=FH78%2BQJ,%20Half%20Moon%20Bay%20CA&amp;amp;destination=GG49%2BCH,%20Half%20Moon%20Bay,%20California&amp;amp;departure_time=now&amp;amp;key=$GOOGLE_MAPS_KEY" | jq &amp;gt; two.json&lt;/span&gt;
&lt;span class="pl-s"&gt;&lt;/span&gt;    - &lt;span class="pl-ent"&gt;name&lt;/span&gt;: &lt;span class="pl-s"&gt;Commit and push&lt;/span&gt;
      &lt;span class="pl-ent"&gt;run&lt;/span&gt;: &lt;span class="pl-s"&gt;|-&lt;/span&gt;
&lt;span class="pl-s"&gt;        git config user.name "Automated"&lt;/span&gt;
&lt;span class="pl-s"&gt;        git config user.email "actions@users.noreply.github.com"&lt;/span&gt;
&lt;span class="pl-s"&gt;        git add -A&lt;/span&gt;
&lt;span class="pl-s"&gt;        timestamp=$(date -u)&lt;/span&gt;
&lt;span class="pl-s"&gt;        git commit -m "${timestamp}" || exit 0&lt;/span&gt;
&lt;span class="pl-s"&gt;        git pull --rebase&lt;/span&gt;
&lt;span class="pl-s"&gt;        git push&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;I'm using a GitHub Actions secret called &lt;code&gt;GOOGLE_MAPS_KEY&lt;/code&gt; to store the Google Maps API key.&lt;/p&gt;
&lt;p&gt;This workflow runs every 5 minutes (more or less - GitHub Actions doesn't necessarily stick to the schedule). It fetches the two JSON results and writes them to files called &lt;code&gt;one.json&lt;/code&gt; and &lt;code&gt;two.json&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;... and that was the initial setup for the project. This took me about fifteen minutes to put in place, because I've built systems like this so many times before. I launched it at about 10am on Saturday and left it to collect data.&lt;/p&gt;
&lt;h4&gt;&lt;a id="user-content-analyzing-the-data-and-drawing-some-charts" class="anchor" aria-hidden="true" href="#analyzing-the-data-and-drawing-some-charts"&gt;&lt;span aria-hidden="true" class="octicon octicon-link"&gt;&lt;/span&gt;&lt;/a&gt;Analyzing the data and drawing some charts&lt;/h4&gt;
&lt;p&gt;The trick with git scraping is that the data you care about ends up captured in &lt;a href="https://github.com/simonw/scrape-hmb-traffic/commits/main"&gt;the git commit log&lt;/a&gt;. The challenge is how to extract that back out again and turn it into something useful.&lt;/p&gt;
&lt;p&gt;My &lt;a href="https://simonwillison.net/2021/Dec/7/git-history/" rel="nofollow"&gt;git-history tool&lt;/a&gt; is designed to solve this. It's a command-line utility which can iterate through every version of a file stored in a git repository, extracting information from that file out into a SQLite database table and creating a new row for every commit.&lt;/p&gt;
&lt;p&gt;Normally I run it against CSV or JSON files containing an array of rows - effectively tabular data already, where I just want to record what has changed in between commits.&lt;/p&gt;
&lt;p&gt;For this project, I was storing the raw JSON output by the Google Maps API. I didn't care about most of the information in there: I really just wanted the &lt;code&gt;duration_in_traffic&lt;/code&gt; value.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;git-history&lt;/code&gt; can accept a snippet of Python code that will be run against each stored copy of a file. The snippet should return a list of JSON objects (as Python dictionaries) which the rest of the tool can then use to figure out what has changed.&lt;/p&gt;
&lt;p&gt;To cut a long story short, here's the incantation that worked:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;git-history file hmb.db one.json \
--convert '
try:
    duration_in_traffic = json.loads(content)["routes"][0]["legs"][0]["duration_in_traffic"]["value"]
    return [{"id": "one", "duration_in_traffic": duration_in_traffic}]
except Exception as ex:
    return []
' \
  --full-versions \
  --id id
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;git-history file&lt;/code&gt; command is used to load the history for a specific file - in this case it's the file &lt;code&gt;one.json&lt;/code&gt;, which will be loaded into a new SQLite database file called &lt;code&gt;hmb.db&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;--convert&lt;/code&gt; code uses &lt;code&gt;json.loads(content)&lt;/code&gt; to load the JSON for the current file version, then pulls out the &lt;code&gt;["routes"][0]["legs"][0]["duration_in_traffic"]["value"]&lt;/code&gt; nested value from it.&lt;/p&gt;
&lt;p&gt;If that's missing (e.g. in an earlier commit, when I hadn't yet added the &lt;code&gt;departure_time=now&lt;/code&gt; parameter to the URL) an exception will be caught and the function will return an empty list.&lt;/p&gt;
&lt;p&gt;If the &lt;code&gt;duration_in_traffic&lt;/code&gt; value is present, the function returns the following:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[{"id": "one", "duration_in_traffic": duration_in_traffic}]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;git-history&lt;/code&gt; likes lists of dictionaries. It's usually being run against files that contain many different rows, where the &lt;code&gt;id&lt;/code&gt; column can be used to de-dupe rows across commits and spot what has changed.&lt;/p&gt;
&lt;p&gt;In this case, each file only has a single interesting value.&lt;/p&gt;
&lt;p&gt;Two more options are used here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--full-versions&lt;/code&gt; - tells &lt;code&gt;git-history&lt;/code&gt; to store all of the columns, not just columns that have changed since the last run. The default behaviour here is to store a &lt;code&gt;null&lt;/code&gt; if a value has not changed in order to save space, but our data is tiny here so we don't need any clever optimizations.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--id id&lt;/code&gt; specifies the ID column that should be used to de-dupe changes. Again, not really important for this tiny project.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After running the above command, the resulting schema includes these tables:&lt;/p&gt;
&lt;div class="highlight highlight-source-sql"&gt;&lt;pre&gt;CREATE TABLE [commits] (
   [id] &lt;span class="pl-k"&gt;INTEGER&lt;/span&gt; &lt;span class="pl-k"&gt;PRIMARY KEY&lt;/span&gt;,
   [namespace] &lt;span class="pl-k"&gt;INTEGER&lt;/span&gt; &lt;span class="pl-k"&gt;REFERENCES&lt;/span&gt; [namespaces]([id]),
   [hash] &lt;span class="pl-k"&gt;TEXT&lt;/span&gt;,
   [commit_at] &lt;span class="pl-k"&gt;TEXT&lt;/span&gt;
);
CREATE TABLE [item_version] (
   [_id] &lt;span class="pl-k"&gt;INTEGER&lt;/span&gt; &lt;span class="pl-k"&gt;PRIMARY KEY&lt;/span&gt;,
   [_item] &lt;span class="pl-k"&gt;INTEGER&lt;/span&gt; &lt;span class="pl-k"&gt;REFERENCES&lt;/span&gt; [item]([_id]),
   [_version] &lt;span class="pl-k"&gt;INTEGER&lt;/span&gt;,
   [_commit] &lt;span class="pl-k"&gt;INTEGER&lt;/span&gt; &lt;span class="pl-k"&gt;REFERENCES&lt;/span&gt; [commits]([id]),
   [id] &lt;span class="pl-k"&gt;TEXT&lt;/span&gt;,
   [duration_in_traffic] &lt;span class="pl-k"&gt;INTEGER&lt;/span&gt;
);&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The &lt;code&gt;commits&lt;/code&gt; table includes the date of the commit - &lt;code&gt;commit_at&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;item_version&lt;/code&gt; table has that &lt;code&gt;duration_in_traffic&lt;/code&gt; value.&lt;/p&gt;
&lt;p&gt;So... to get back the duration in traffic at different times of day I can run this SQL query to join those two tables together:&lt;/p&gt;
&lt;div class="highlight highlight-source-sql"&gt;&lt;pre&gt;&lt;span class="pl-k"&gt;select&lt;/span&gt;
    &lt;span class="pl-c1"&gt;commits&lt;/span&gt;.&lt;span class="pl-c1"&gt;commit_at&lt;/span&gt;,
    duration_in_traffic
&lt;span class="pl-k"&gt;from&lt;/span&gt;
    item_version
&lt;span class="pl-k"&gt;join&lt;/span&gt;
    commits &lt;span class="pl-k"&gt;on&lt;/span&gt; &lt;span class="pl-c1"&gt;item_version&lt;/span&gt;.&lt;span class="pl-c1"&gt;_commit&lt;/span&gt; &lt;span class="pl-k"&gt;=&lt;/span&gt; &lt;span class="pl-c1"&gt;commits&lt;/span&gt;.&lt;span class="pl-c1"&gt;id&lt;/span&gt;
&lt;span class="pl-k"&gt;order by&lt;/span&gt;
    &lt;span class="pl-c1"&gt;commits&lt;/span&gt;.&lt;span class="pl-c1"&gt;commit_at&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;That query returns data that looks like this:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;commit_at&lt;/th&gt;
&lt;th&gt;duration_in_traffic&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;2022-10-15T17:09:06+00:00&lt;/td&gt;
&lt;td&gt;1110&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2022-10-15T17:17:38+00:00&lt;/td&gt;
&lt;td&gt;1016&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2022-10-15T17:30:06+00:00&lt;/td&gt;
&lt;td&gt;1391&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;A couple of problems here. First, the &lt;code&gt;commit_at&lt;/code&gt; column is in UTC, not local time. And &lt;code&gt;duration_in_traffic&lt;/code&gt; is in seconds, which aren't particularly easy to read.&lt;/p&gt;
&lt;p&gt;Here's a SQLite fix for these two issues:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;select
    time(datetime(commits.commit_at, '-7 hours')) as t,
    duration_in_traffic / 60 as mins_in_traffic
from
    item_version
join
    commits on item_version._commit = commits.id
order by
    commits.commit_at
&lt;/code&gt;&lt;/pre&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;t&lt;/th&gt;
&lt;th&gt;mins_in_traffic&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;10:09:06&lt;/td&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10:17:38&lt;/td&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10:30:06&lt;/td&gt;
&lt;td&gt;23&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;code&gt;datetime(commits.commit_at, '-7 hours')&lt;/code&gt; parses the UTC string as a datetime, then subtracts 7 hours from it to convert UTC to California local time.&lt;/p&gt;
&lt;p&gt;I wrap that in &lt;code&gt;time()&lt;/code&gt; here because for the chart I want to render I know everything will be on the same day.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;mins_in_traffic&lt;/code&gt; now shows minutes, not seconds.&lt;/p&gt;
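&lt;p&gt;The equivalent conversion in Python - a sketch using &lt;code&gt;zoneinfo&lt;/code&gt; instead of a hard-coded &lt;code&gt;-7 hours&lt;/code&gt;, since that offset is only correct while daylight saving time is in effect:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from datetime import datetime
from zoneinfo import ZoneInfo

def local_time_and_minutes(commit_at, seconds_in_traffic):
    # Parse the UTC timestamp and convert to California local time
    dt = datetime.fromisoformat(commit_at).astimezone(ZoneInfo("America/Los_Angeles"))
    return dt.strftime("%H:%M:%S"), seconds_in_traffic // 60

local_time_and_minutes("2022-10-15T17:09:06+00:00", 1110)
# ('10:09:06', 18)
&lt;/code&gt;&lt;/pre&gt;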
&lt;p&gt;We now have enough data to render a chart!&lt;/p&gt;
&lt;p&gt;But... we only have one of the two directions of traffic here. To process the numbers from &lt;code&gt;two.json&lt;/code&gt; as well I ran this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;git-history file hmb.db two.json \
--convert '
try:
    duration_in_traffic = json.loads(content)["routes"][0]["legs"][0]["duration_in_traffic"]["value"]
    return [{"id": "two", "duration_in_traffic": duration_in_traffic}]
except Exception as ex:
    return []
' \
  --full-versions \
  --id id --namespace item2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is &lt;em&gt;almost&lt;/em&gt; the same as the previous command. It's running against &lt;code&gt;two.json&lt;/code&gt; instead of &lt;code&gt;one.json&lt;/code&gt;, and it's using the &lt;code&gt;--namespace item2&lt;/code&gt; option.&lt;/p&gt;
&lt;p&gt;This causes it to populate a new table called &lt;code&gt;item2_version&lt;/code&gt; instead of &lt;code&gt;item_version&lt;/code&gt;, which is a cheap trick to avoid having to figure out how to load both files into the same table.&lt;/p&gt;
&lt;h2&gt;&lt;a id="user-content-two-lines-on-one-chart" class="anchor" aria-hidden="true" href="#two-lines-on-one-chart"&gt;&lt;span aria-hidden="true" class="octicon octicon-link"&gt;&lt;/span&gt;&lt;/a&gt;Two lines on one chart&lt;/h2&gt;
&lt;p&gt;I rendered an initial single line chart using &lt;a href="https://datasette.io/plugins/datasette-vega" rel="nofollow"&gt;datasette-vega&lt;/a&gt;, but Natalie suggested that putting lines on the same chart for the two directions of traffic would be more interesting.&lt;/p&gt;
&lt;p&gt;Since I now had one table for each direction of traffic (&lt;code&gt;item_version&lt;/code&gt; and &lt;code&gt;item2_version&lt;/code&gt;) I decided to combine those into a single table, suitable for pasting into Google Sheets.&lt;/p&gt;
&lt;p&gt;Here's the SQL I came up with to do that:&lt;/p&gt;
&lt;div class="highlight highlight-source-sql"&gt;&lt;pre&gt;with item1 &lt;span class="pl-k"&gt;as&lt;/span&gt; (
  &lt;span class="pl-k"&gt;select&lt;/span&gt;
    &lt;span class="pl-k"&gt;time&lt;/span&gt;(datetime(&lt;span class="pl-c1"&gt;commits&lt;/span&gt;.&lt;span class="pl-c1"&gt;commit_at&lt;/span&gt;, &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;-7 hours&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;)) &lt;span class="pl-k"&gt;as&lt;/span&gt; t,
    duration_in_traffic &lt;span class="pl-k"&gt;/&lt;/span&gt; &lt;span class="pl-c1"&gt;60&lt;/span&gt; &lt;span class="pl-k"&gt;as&lt;/span&gt; mins_in_traffic
  &lt;span class="pl-k"&gt;from&lt;/span&gt;
    item_version
    &lt;span class="pl-k"&gt;join&lt;/span&gt; commits &lt;span class="pl-k"&gt;on&lt;/span&gt; &lt;span class="pl-c1"&gt;item_version&lt;/span&gt;.&lt;span class="pl-c1"&gt;_commit&lt;/span&gt; &lt;span class="pl-k"&gt;=&lt;/span&gt; &lt;span class="pl-c1"&gt;commits&lt;/span&gt;.&lt;span class="pl-c1"&gt;id&lt;/span&gt;
  &lt;span class="pl-k"&gt;order by&lt;/span&gt;
    &lt;span class="pl-c1"&gt;commits&lt;/span&gt;.&lt;span class="pl-c1"&gt;commit_at&lt;/span&gt;
),
item2 &lt;span class="pl-k"&gt;as&lt;/span&gt; (
  &lt;span class="pl-k"&gt;select&lt;/span&gt;
    &lt;span class="pl-k"&gt;time&lt;/span&gt;(datetime(&lt;span class="pl-c1"&gt;commits&lt;/span&gt;.&lt;span class="pl-c1"&gt;commit_at&lt;/span&gt;, &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;-7 hours&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;)) &lt;span class="pl-k"&gt;as&lt;/span&gt; t,
    duration_in_traffic &lt;span class="pl-k"&gt;/&lt;/span&gt; &lt;span class="pl-c1"&gt;60&lt;/span&gt; &lt;span class="pl-k"&gt;as&lt;/span&gt; mins_in_traffic
  &lt;span class="pl-k"&gt;from&lt;/span&gt;
    item2_version
    &lt;span class="pl-k"&gt;join&lt;/span&gt; commits &lt;span class="pl-k"&gt;on&lt;/span&gt; &lt;span class="pl-c1"&gt;item2_version&lt;/span&gt;.&lt;span class="pl-c1"&gt;_commit&lt;/span&gt; &lt;span class="pl-k"&gt;=&lt;/span&gt; &lt;span class="pl-c1"&gt;commits&lt;/span&gt;.&lt;span class="pl-c1"&gt;id&lt;/span&gt;
  &lt;span class="pl-k"&gt;order by&lt;/span&gt;
    &lt;span class="pl-c1"&gt;commits&lt;/span&gt;.&lt;span class="pl-c1"&gt;commit_at&lt;/span&gt;
)
&lt;span class="pl-k"&gt;select&lt;/span&gt;
  item1.&lt;span class="pl-k"&gt;*&lt;/span&gt;,
  &lt;span class="pl-c1"&gt;item2&lt;/span&gt;.&lt;span class="pl-c1"&gt;mins_in_traffic&lt;/span&gt; &lt;span class="pl-k"&gt;as&lt;/span&gt; mins_in_traffic_other_way
&lt;span class="pl-k"&gt;from&lt;/span&gt;
  item1
  &lt;span class="pl-k"&gt;join&lt;/span&gt; item2 &lt;span class="pl-k"&gt;on&lt;/span&gt; &lt;span class="pl-c1"&gt;item1&lt;/span&gt;.&lt;span class="pl-c1"&gt;t&lt;/span&gt; &lt;span class="pl-k"&gt;=&lt;/span&gt; &lt;span class="pl-c1"&gt;item2&lt;/span&gt;.&lt;span class="pl-c1"&gt;t&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This defines two CTEs (Common Table Expressions - the &lt;code&gt;with X as&lt;/code&gt; pieces) following the pattern I explained earlier, now called &lt;code&gt;item1&lt;/code&gt; and &lt;code&gt;item2&lt;/code&gt;. Having defined these two CTEs, I can join them together on the &lt;code&gt;t&lt;/code&gt; column, which is the time of day.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://lite.datasette.io/?url=https://github.com/simonw/scrape-hmb-traffic/blob/main/hmb.db?&amp;amp;install=datasette-copyable#/hmb?sql=with+item1+as+(%0A++select%0A++++time(datetime(commits.commit_at%2C+'-7+hours'))+as+t%2C%0A++++duration_in_traffic+%2F+60+as+mins_in_traffic%0A++from%0A++++item_version%0A++++join+commits+on+item_version._commit+%3D+commits.id%0A++order+by%0A++++commits.commit_at%0A)%2C%0Aitem2+as+(%0A++select%0A++++time(datetime(commits.commit_at%2C+'-7+hours'))+as+t%2C%0A++++duration_in_traffic+%2F+60+as+mins_in_traffic%0A++from%0A++++item2_version%0A++++join+commits+on+item2_version._commit+%3D+commits.id%0A++order+by%0A++++commits.commit_at%0A)%0Aselect%0A++item1.*%2C%0A++item2.mins_in_traffic+as+mins_in_traffic_other_way%0Afrom%0A++item1%0A++join+item2+on+item1.t+%3D+item2.t" rel="nofollow"&gt;Try running this query&lt;/a&gt; in Datasette Lite.&lt;/p&gt;
&lt;p&gt;Here's the output of that query for Saturday (10am to 8pm):&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;t&lt;/th&gt;
&lt;th&gt;mins_in_traffic&lt;/th&gt;
&lt;th&gt;mins_in_traffic_other_way&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;10:09:06&lt;/td&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10:17:38&lt;/td&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10:30:06&lt;/td&gt;
&lt;td&gt;23&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10:47:38&lt;/td&gt;
&lt;td&gt;23&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10:57:37&lt;/td&gt;
&lt;td&gt;23&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11:08:20&lt;/td&gt;
&lt;td&gt;26&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11:22:27&lt;/td&gt;
&lt;td&gt;26&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11:38:42&lt;/td&gt;
&lt;td&gt;26&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11:52:35&lt;/td&gt;
&lt;td&gt;25&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12:03:23&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12:15:16&lt;/td&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12:27:51&lt;/td&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12:37:48&lt;/td&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12:46:41&lt;/td&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12:55:03&lt;/td&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13:05:10&lt;/td&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13:17:57&lt;/td&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13:32:55&lt;/td&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13:44:53&lt;/td&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13:55:22&lt;/td&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14:05:21&lt;/td&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14:17:48&lt;/td&gt;
&lt;td&gt;23&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14:31:04&lt;/td&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14:41:59&lt;/td&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14:51:48&lt;/td&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15:00:09&lt;/td&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15:11:17&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15:25:48&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15:39:41&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15:51:11&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15:59:34&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16:10:50&lt;/td&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16:25:43&lt;/td&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16:53:06&lt;/td&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;17:11:34&lt;/td&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;17:40:29&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;18:12:07&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;18:58:17&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;20:05:13&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;I copied and pasted this table into Google Sheets and messed around with the charting tools there until I had the following chart:&lt;/p&gt;
&lt;p&gt;&lt;img alt="A chart showing the two lines over time" src="https://static.simonwillison.net/static/2022/pumpkin-saturday-smooth.png" style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;p&gt;Here's the same chart for Sunday:&lt;/p&gt;
&lt;p&gt;&lt;img alt="This chart shows the same thing but for Sunday" src="https://static.simonwillison.net/static/2022/pumpkin-sunday-smooth.png" style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;p&gt;Our &lt;a href="https://docs.google.com/spreadsheets/d/1JOimtkugZBF_YQxqn0Gn6NiIhNz-OMH2rpOZtmECAY4/edit#gid=0" rel="nofollow"&gt;Google Sheet is here&lt;/a&gt; - the two days have two separate tabs within the sheet.&lt;/p&gt;
&lt;h4&gt;&lt;a id="user-content-building-the-sqlite-database-in-github-actions" class="anchor" aria-hidden="true" href="#building-the-sqlite-database-in-github-actions"&gt;&lt;span aria-hidden="true" class="octicon octicon-link"&gt;&lt;/span&gt;&lt;/a&gt;Building the SQLite database in GitHub Actions&lt;/h4&gt;
&lt;p&gt;I did most of the development work for this project on my laptop, running &lt;code&gt;git-history&lt;/code&gt; and &lt;code&gt;datasette&lt;/code&gt; locally for speed of iteration.&lt;/p&gt;
&lt;p&gt;Once I had everything working, I decided to automate the process of building the SQLite database as well.&lt;/p&gt;
&lt;p&gt;I made the following changes to my GitHub Actions workflow:&lt;/p&gt;
&lt;div class="highlight highlight-source-yaml"&gt;&lt;pre&gt;&lt;span class="pl-ent"&gt;jobs&lt;/span&gt;:
  &lt;span class="pl-ent"&gt;shot-scraper&lt;/span&gt;:
    &lt;span class="pl-ent"&gt;runs-on&lt;/span&gt;: &lt;span class="pl-s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="pl-ent"&gt;steps&lt;/span&gt;:
    - &lt;span class="pl-ent"&gt;uses&lt;/span&gt;: &lt;span class="pl-s"&gt;actions/checkout@v3&lt;/span&gt;
      &lt;span class="pl-ent"&gt;with&lt;/span&gt;:
        &lt;span class="pl-ent"&gt;fetch-depth&lt;/span&gt;: &lt;span class="pl-c1"&gt;0&lt;/span&gt; &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Needed by git-history&lt;/span&gt;
    - &lt;span class="pl-ent"&gt;name&lt;/span&gt;: &lt;span class="pl-s"&gt;Set up Python 3.10&lt;/span&gt;
      &lt;span class="pl-ent"&gt;uses&lt;/span&gt;: &lt;span class="pl-s"&gt;actions/setup-python@v4&lt;/span&gt;
      &lt;span class="pl-ent"&gt;with&lt;/span&gt;:
        &lt;span class="pl-ent"&gt;python-version&lt;/span&gt;: &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;3.10&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;
        &lt;span class="pl-ent"&gt;cache&lt;/span&gt;: &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;pip&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;
    - &lt;span class="pl-ent"&gt;run&lt;/span&gt;: &lt;span class="pl-s"&gt;pip install -r requirements.txt&lt;/span&gt;
    - &lt;span class="pl-ent"&gt;name&lt;/span&gt;: &lt;span class="pl-s"&gt;Scrape&lt;/span&gt;
      &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Same as before...&lt;/span&gt;
      &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; env:&lt;/span&gt;
      &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; run&lt;/span&gt;
    - &lt;span class="pl-ent"&gt;name&lt;/span&gt;: &lt;span class="pl-s"&gt;Build SQLite database&lt;/span&gt;
      &lt;span class="pl-ent"&gt;run&lt;/span&gt;: &lt;span class="pl-s"&gt;|&lt;/span&gt;
&lt;span class="pl-s"&gt;        rm -f hmb.db # Recreate from scratch each time&lt;/span&gt;
&lt;span class="pl-s"&gt;        git-history file hmb.db one.json \&lt;/span&gt;
&lt;span class="pl-s"&gt;        --convert '&lt;/span&gt;
&lt;span class="pl-s"&gt;        try:&lt;/span&gt;
&lt;span class="pl-s"&gt;            duration_in_traffic = json.loads(content)["routes"][0]["legs"][0]["duration_in_traffic"]["value"]&lt;/span&gt;
&lt;span class="pl-s"&gt;            return [{"id": "one", "duration_in_traffic": duration_in_traffic}]&lt;/span&gt;
&lt;span class="pl-s"&gt;        except Exception as ex:&lt;/span&gt;
&lt;span class="pl-s"&gt;            return []&lt;/span&gt;
&lt;span class="pl-s"&gt;        ' \&lt;/span&gt;
&lt;span class="pl-s"&gt;          --full-versions \&lt;/span&gt;
&lt;span class="pl-s"&gt;          --id id&lt;/span&gt;
&lt;span class="pl-s"&gt;        git-history file hmb.db two.json \&lt;/span&gt;
&lt;span class="pl-s"&gt;        --convert '&lt;/span&gt;
&lt;span class="pl-s"&gt;        try:&lt;/span&gt;
&lt;span class="pl-s"&gt;            duration_in_traffic = json.loads(content)["routes"][0]["legs"][0]["duration_in_traffic"]["value"]&lt;/span&gt;
&lt;span class="pl-s"&gt;            return [{"id": "two", "duration_in_traffic": duration_in_traffic}]&lt;/span&gt;
&lt;span class="pl-s"&gt;        except Exception as ex:&lt;/span&gt;
&lt;span class="pl-s"&gt;            return []&lt;/span&gt;
&lt;span class="pl-s"&gt;        ' \&lt;/span&gt;
&lt;span class="pl-s"&gt;          --full-versions \&lt;/span&gt;
&lt;span class="pl-s"&gt;          --id id --namespace item2&lt;/span&gt;
&lt;span class="pl-s"&gt;&lt;/span&gt;    - &lt;span class="pl-ent"&gt;name&lt;/span&gt;: &lt;span class="pl-s"&gt;Commit and push&lt;/span&gt;
      &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Same as before...&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;I also added a &lt;code&gt;requirements.txt&lt;/code&gt; file containing just &lt;code&gt;git-history&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Note how the &lt;code&gt;actions/checkout@v3&lt;/code&gt; step now has &lt;code&gt;fetch-depth: 0&lt;/code&gt; - this is necessary because &lt;code&gt;git-history&lt;/code&gt; needs to loop through the entire repository history, but &lt;code&gt;actions/checkout@v3&lt;/code&gt; defaults to only fetching the most recent commit.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;setup-python&lt;/code&gt; step uses &lt;code&gt;cache: "pip"&lt;/code&gt;, which causes it to cache installed dependencies from &lt;code&gt;requirements.txt&lt;/code&gt; between runs.&lt;/p&gt;
&lt;p&gt;Because that big &lt;code&gt;git-history&lt;/code&gt; step creates a &lt;code&gt;hmb.db&lt;/code&gt; SQLite database, the "Commit and push" step now includes that file in the push to the repository. So every time the workflow runs, a new binary SQLite database file is committed.&lt;/p&gt;
&lt;p&gt;Normally I wouldn't do this, because Git isn't a great place to keep constantly changing binary files... but in this case the SQLite database is only 100KB and won't continue to be updated beyond the end of the pumpkin festival.&lt;/p&gt;
&lt;p&gt;End result: &lt;a href="https://github.com/simonw/scrape-hmb-traffic/blob/main/hmb.db"&gt;hmb.db is available&lt;/a&gt; in the GitHub repository.&lt;/p&gt;
&lt;h4&gt;&lt;a id="user-content-querying-it-using-datasette-lite" class="anchor" aria-hidden="true" href="#querying-it-using-datasette-lite"&gt;&lt;span aria-hidden="true" class="octicon octicon-link"&gt;&lt;/span&gt;&lt;/a&gt;Querying it using Datasette Lite&lt;/h4&gt;
&lt;p&gt;&lt;a href="https://simonwillison.net/2022/May/4/datasette-lite/" rel="nofollow"&gt;Datasette Lite&lt;/a&gt; is my repackaged version of Datasette, my server-side Python application, which runs entirely in the user's browser using WebAssembly.&lt;/p&gt;
&lt;p&gt;A neat feature of Datasette Lite is that you can pass it the URL to a SQLite database file and it will load that database in your browser and let you run queries against it.&lt;/p&gt;
&lt;p&gt;These database files need to be served with CORS headers. Every file served by GitHub includes these headers!&lt;/p&gt;
&lt;p&gt;Which means the following URL can be used to open up the latest &lt;code&gt;hmb.db&lt;/code&gt; file directly in Datasette in the browser:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://lite.datasette.io/?url=https://github.com/simonw/scrape-hmb-traffic/blob/main/hmb.db" rel="nofollow"&gt;https://lite.datasette.io/?url=https://github.com/simonw/scrape-hmb-traffic/blob/main/hmb.db&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;(This takes advantage of a &lt;a href="https://simonwillison.net/2022/Sep/16/weeknotes/" rel="nofollow"&gt;feature I added&lt;/a&gt; to Datasette Lite where it knows how to convert the URL to the HTML page about a file on GitHub to the URL to the raw file itself.)&lt;/p&gt;
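&lt;p&gt;To illustrate the conversion (this is my understanding of the mapping, shown here as an example rather than documentation of the feature):&lt;/p&gt;
&lt;pre&gt;https://github.com/simonw/scrape-hmb-traffic/blob/main/hmb.db
  -&amp;gt; https://raw.githubusercontent.com/simonw/scrape-hmb-traffic/main/hmb.db&lt;/pre&gt;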
&lt;p&gt;URLs to SQL queries work too. This URL will open Datasette Lite, load the SQLite database AND execute the query I constructed above:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://lite.datasette.io/?url=https://github.com/simonw/scrape-hmb-traffic/blob/main/hmb.db#/hmb?sql=with+item1+as+(%0A++select%0A++++time(datetime(commits.commit_at%2C+'-7+hours'))+as+t%2C%0A++++duration_in_traffic+%2F+60+as+mins_in_traffic%0A++from%0A++++item_version%0A++++join+commits+on+item_version._commit+%3D+commits.id%0A++order+by%0A++++commits.commit_at%0A)%2C%0Aitem2+as+(%0A++select%0A++++time(datetime(commits.commit_at%2C+'-7+hours'))+as+t%2C%0A++++duration_in_traffic+%2F+60+as+mins_in_traffic%0A++from%0A++++item2_version%0A++++join+commits+on+item2_version._commit+%3D+commits.id%0A++order+by%0A++++commits.commit_at%0A)%0Aselect%0A++item1.*%2C%0A++item2.mins_in_traffic+as+mins_in_traffic_other_way%0Afrom%0A++item1%0A++join+item2+on+item1.t+%3D+item2.t" rel="nofollow"&gt;https://lite.datasette.io/?url=https://github.com/simonw/scrape-hmb-traffic/blob/main/hmb.db#/hmb?sql=with+item1+as+(%0A++select%0A++++time(datetime(commits.commit_at%2C+'-7+hours'))+as+t%2C%0A++++duration_in_traffic+%2F+60+as+mins_in_traffic%0A++from%0A++++item_version%0A++++join+commits+on+item_version._commit+%3D+commits.id%0A++order+by%0A++++commits.commit_at%0A)%2C%0Aitem2+as+(%0A++select%0A++++time(datetime(commits.commit_at%2C+'-7+hours'))+as+t%2C%0A++++duration_in_traffic+%2F+60+as+mins_in_traffic%0A++from%0A++++item2_version%0A++++join+commits+on+item2_version._commit+%3D+commits.id%0A++order+by%0A++++commits.commit_at%0A)%0Aselect%0A++item1.*%2C%0A++item2.mins_in_traffic+as+mins_in_traffic_other_way%0Afrom%0A++item1%0A++join+item2+on+item1.t+%3D+item2.t&lt;/a&gt;&lt;/p&gt;
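&lt;p&gt;Decoded from that URL, the query itself looks like this:&lt;/p&gt;
&lt;pre&gt;with item1 as (
  select
    time(datetime(commits.commit_at, '-7 hours')) as t,
    duration_in_traffic / 60 as mins_in_traffic
  from
    item_version
    join commits on item_version._commit = commits.id
  order by
    commits.commit_at
),
item2 as (
  select
    time(datetime(commits.commit_at, '-7 hours')) as t,
    duration_in_traffic / 60 as mins_in_traffic
  from
    item2_version
    join commits on item2_version._commit = commits.id
  order by
    commits.commit_at
)
select
  item1.*,
  item2.mins_in_traffic as mins_in_traffic_other_way
from
  item1
  join item2 on item1.t = item2.t&lt;/pre&gt;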
&lt;p&gt;And finally... Datasette Lite &lt;a href="https://simonwillison.net/2022/Aug/17/datasette-lite-plugins/" rel="nofollow"&gt;has plugin support&lt;/a&gt;. Adding &lt;code&gt;&amp;amp;install=datasette-copyable&lt;/code&gt; to the URL adds the &lt;a href="https://datasette.io/plugins/datasette-copyable" rel="nofollow"&gt;datasette-copyable&lt;/a&gt; plugin, which adds a page for easily copying out the query results as TSV (useful for pasting into a spreadsheet) or even as GitHub-flavored Markdown (which I used to add results to this blog post).&lt;/p&gt;
&lt;p&gt;Here's &lt;a href="https://lite.datasette.io/?url=https://github.com/simonw/scrape-hmb-traffic/blob/main/hmb.db&amp;amp;install=datasette-copyable#/hmb.copyable?sql=with+item1+as+%28%0A++select%0A++++time%28datetime%28commits.commit_at%2C+%27-7+hours%27%29%29+as+t%2C%0A++++duration_in_traffic+%2F+60+as+mins_in_traffic%0A++from%0A++++item_version%0A++++join+commits+on+item_version._commit+%3D+commits.id%0A++order+by%0A++++commits.commit_at%0A%29%2C%0Aitem2+as+%28%0A++select%0A++++time%28datetime%28commits.commit_at%2C+%27-7+hours%27%29%29+as+t%2C%0A++++duration_in_traffic+%2F+60+as+mins_in_traffic%0A++from%0A++++item2_version%0A++++join+commits+on+item2_version._commit+%3D+commits.id%0A++order+by%0A++++commits.commit_at%0A%29%0Aselect%0A++item1.%2A%2C%0A++item2.mins_in_traffic+as+mins_in_traffic_other_way%0Afrom%0A++item1%0A++join+item2+on+item1.t+%3D+item2.t&amp;amp;_table_format=github" rel="nofollow"&gt;an example&lt;/a&gt; of that plugin in action.&lt;/p&gt;
&lt;p&gt;This was a fun little project that brought together a whole bunch of things I've been working on over the past few years. Here's some more of my writing on these different techniques and tools:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://simonwillison.net/series/git-scraping/" rel="nofollow"&gt;Git scraping&lt;/a&gt; is the key technique I'm using here to collect the data&lt;/li&gt;
&lt;li&gt;I've written a lot about &lt;a href="https://simonwillison.net/tags/githubactions/" rel="nofollow"&gt;GitHub Actions&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;These are my notes about &lt;a href="https://simonwillison.net/tags/githistory/" rel="nofollow"&gt;git-history&lt;/a&gt;, the tool I used to turn a commit history into a SQLite database&lt;/li&gt;
&lt;li&gt;Here's my series of posts about &lt;a href="https://simonwillison.net/series/datasette-lite/" rel="nofollow"&gt;Datasette Lite&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/data-journalism"&gt;data-journalism&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sqlite"&gt;sqlite&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/git-scraping"&gt;git-scraping&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/git-history"&gt;git-history&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette-lite"&gt;datasette-lite&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/half-moon-bay"&gt;half-moon-bay&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="data-journalism"/><category term="natalie-downe"/><category term="projects"/><category term="sqlite"/><category term="datasette"/><category term="git-scraping"/><category term="git-history"/><category term="datasette-lite"/><category term="half-moon-bay"/></entry><entry><title>Weeknotes: Camping, a road trip and two new museums</title><link href="https://simonwillison.net/2022/May/16/weeknotes/#atom-tag" rel="alternate"/><published>2022-05-16T05:43:36+00:00</published><updated>2022-05-16T05:43:36+00:00</updated><id>https://simonwillison.net/2022/May/16/weeknotes/#atom-tag</id><summary type="html">
    &lt;p&gt;Natalie and I took a week-long road trip and camping holiday. The plan was to camp on Santa Rosa Island in the California Channel Islands, but the boat to the island was cancelled due to bad weather. We treated ourselves to a Central Californian road trip instead.&lt;/p&gt;
&lt;h4&gt;The Madonna Inn&lt;/h4&gt;
&lt;p&gt;If you're driving down from San Francisco to Santa Barbara and you don't stay a night at the Madonna Inn in San Luis Obispo, you're missing out.&lt;/p&gt;
&lt;p&gt;This legendary hotel/motel built 110 guest rooms in the 1960s, each of them with a different theme. We ended up staying two nights thanks to our boat cancellation - one in the &lt;a href="https://www.madonnainn.com/room-131-kona-rock"&gt;Kona Rock&lt;/a&gt; room (Hawaii themed, mostly carved out of solid rock, the shower has a waterfall) and one in &lt;a href="https://www.madonnainn.com/room-193-safari"&gt;Safari&lt;/a&gt;. Epic.&lt;/p&gt;

&lt;p&gt;
    &lt;a class="photoswipe-img" data-pswp-height="900" data-pswp-width="1200" href="https://simonwillisonnet.imgix.net/static/2022/madonna-inn-1.jpeg?w=1200&amp;amp;auto=compress"&gt;&lt;img alt="The Kona Rock room - the walls are all maed of rocks" src="https://simonwillisonnet.imgix.net/static/2022/madonna-inn-1.jpeg?w=1200&amp;amp;auto=compress" style="width: 32%" /&gt;&lt;/a&gt;
    &lt;a class="photoswipe-img" data-pswp-height="900" data-pswp-width="1200" href="https://simonwillisonnet.imgix.net/static/2022/madonna-inn-2.jpeg?w=1200&amp;amp;auto=compress"&gt;&lt;img alt="The bathroom in Kona Rock is made ofmrocks too - the sink is a huge uneven piece of rock" src="https://simonwillisonnet.imgix.net/static/2022/madonna-inn-2.jpeg?w=1200&amp;amp;auto=compress" style="width: 32%" /&gt;&lt;/a&gt;
    &lt;a class="photoswipe-img" data-pswp-height="900" data-pswp-width="1200" href="https://simonwillisonnet.imgix.net/static/2022/madonna-inn-3.jpeg?w=1200&amp;amp;auto=compress"&gt;&lt;img alt="The Safari room has a beautiful four poster bed and exciting wallpaper" src="https://simonwillisonnet.imgix.net/static/2022/madonna-inn-3.jpeg?w=1200&amp;amp;auto=compress" style="width: 32%" /&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;h4&gt;Camping&lt;/h4&gt;
&lt;p&gt;Camping in California generally requires booking a site, often months in advance. Our travel companions knew what they were doing and managed to grab us last minute spots for one night at Islay Creek near Los Osos and two nights in the beautiful Los Padres National Forest.&lt;/p&gt;
&lt;h4&gt;The Victorian Mansion&lt;/h4&gt;
&lt;p&gt;I have a habit of dropping labels on Google Maps with tips that people have given me about different places. Labels have quite a strict length limit, which means my tips are often devoid of context - including when and from whom the tip came.&lt;/p&gt;
&lt;p&gt;This means I'm constantly stumbling across little tips from my past self, with no recollection of where the tip came from. This is delightful.&lt;/p&gt;
&lt;p&gt;As we were planning the last leg of our trip, I spotted a label north of Santa Barbara which just said "6 rooms puts Madonna Inn to shame".&lt;/p&gt;
&lt;p&gt;I have no recollection of saving this tip. I had attached it to the &lt;a href="https://thevick.com/"&gt;Victorian Mansion Bed &amp;amp; Breakfast&lt;/a&gt; in Los Alamos, California - an old Victorian house with six uniquely themed rooms.&lt;/p&gt;
&lt;p&gt;We stayed in the 1950s suite. It was full of neon, and the bed was a 1956 Cadillac convertible that the house had been reconstructed around when the building was moved to its present location. We watched &lt;a href="https://en.wikipedia.org/wiki/Sideways"&gt;Sideways&lt;/a&gt;, a movie set in the area, on a projector that simulated a drive-in movie theater on a screen in front of the car.&lt;/p&gt;
&lt;p&gt;
    &lt;a class="photoswipe-img" data-pswp-height="900" data-pswp-width="1200" href="https://simonwillisonnet.imgix.net/static/2022/victorian-mansion-1.jpeg?w=1200&amp;amp;auto=compress"&gt;&lt;img alt="The outside of the Victorian Mansion is a beautiful, well, Victorian mansion - with suspiciously boarded up windows" src="https://simonwillisonnet.imgix.net/static/2022/victorian-mansion-1.jpeg?w=1200&amp;amp;auto=compress" style="width: 32%" /&gt;&lt;/a&gt;
    &lt;a class="photoswipe-img" data-pswp-height="900" data-pswp-width="1200" href="https://simonwillisonnet.imgix.net/static/2022/victorian-mansion-2.jpeg?w=1200&amp;amp;auto=compress"&gt;&lt;img alt="The 1950s suite with neon lights and a car for a bed" src="https://simonwillisonnet.imgix.net/static/2022/victorian-mansion-2.jpeg?w=1200&amp;amp;auto=compress" style="width: 32%" /&gt;&lt;/a&gt;
&lt;/p&gt;
&lt;h4&gt;And some museums&lt;/h4&gt;
&lt;p&gt;On the way down to San Luis Obispo we stumbled across the &lt;a href="https://www.niche-museums.com/106"&gt;Paso Robles Pioneer Museum&lt;/a&gt;. This was the best kind of local history museum - entirely run by volunteers, and with an eclectic accumulation of donated exhibits covering all kinds of details of the history of the surrounding area. I particularly enjoyed the Swift Jewell Barbed Wire Collection - the fourth largest collection of barbed wire on public display in the world!&lt;/p&gt;
&lt;p&gt;(This raised the obvious question: what are the top three? From &lt;a href="https://www.atlasobscura.com/categories/barbed-wire"&gt;this category on Atlas Obscura&lt;/a&gt; it looks like there are two in Kansas and one in Texas.)&lt;/p&gt;
&lt;p&gt;
    &lt;a class="photoswipe-img" data-pswp-height="900" data-pswp-width="1200" href="https://simonwillisonnet.imgix.net/static/2022/pioneer-1.jpeg?w=1200&amp;amp;auto=compress"&gt;&lt;img alt="The museum has an indoor street with recreations of historic businesses from the local town" src="https://simonwillisonnet.imgix.net/static/2022/pioneer-1.jpeg?w=1200&amp;amp;auto=compress" style="width: 32%" /&gt;&lt;/a&gt;
    &lt;a class="photoswipe-img" data-pswp-height="900" data-pswp-width="1200" href="https://simonwillisonnet.imgix.net/static/2022/pioneer-2.jpeg?w=1200&amp;amp;auto=compress"&gt;&lt;img alt="A sign says: Swift Jewell Barbed Wire Collection - above a wall full of barbed wire samples" src="https://simonwillisonnet.imgix.net/static/2022/pioneer-2.jpeg?w=1200&amp;amp;auto=compress" style="width: 32%" /&gt;&lt;/a&gt;
&lt;/p&gt;
&lt;p&gt;Then on the way back up we checked &lt;a href="https://www.roadsideamerica.com/"&gt;Roadside America&lt;/a&gt; and found &lt;a href="https://www.roadsideamerica.com/tip/41757"&gt;its listing&lt;/a&gt; for Mendenhall's Museum of Gasoline Pumps &amp;amp; Petroliana. This was the absolute best kind of niche museum: an obsessive collection, in someone's home, available to view by appointment only.&lt;/p&gt;
&lt;p&gt;We got lucky: one of the museum's operators spotted us lurking around the perimeter looking optimistic and let us have a look around despite not having pre-booked.&lt;/p&gt;
&lt;p&gt;The museum features neon, dozens of gas pumps, more than 400 porcelain gas pump globes, thousands of gas station signs plus classic and historic racing cars too. My write-up and photos are &lt;a href="https://www.niche-museums.com/107"&gt;available on Niche Museums&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;
    &lt;a class="photoswipe-img" data-pswp-height="900" data-pswp-width="1200" href="https://simonwillisonnet.imgix.net/static/2022/mendenhalls-1.jpeg?w=1200&amp;amp;auto=compress"&gt;&lt;img alt="A wall outside the museum covered in signs and neon" src="https://simonwillisonnet.imgix.net/static/2022/mendenhalls-1.jpeg?w=1200&amp;amp;auto=compress" style="width: 32%" /&gt;&lt;/a&gt;
    &lt;a class="photoswipe-img" data-pswp-height="900" data-pswp-width="1200" href="https://simonwillisonnet.imgix.net/static/2022/mendenhalls-2.jpeg?w=1200&amp;amp;auto=compress"&gt;&lt;img alt="Beautiful old historic gas pumps" src="https://simonwillisonnet.imgix.net/static/2022/mendenhalls-2.jpeg?w=1200&amp;amp;auto=compress" style="width: 32%" /&gt;&lt;/a&gt;
    &lt;a class="photoswipe-img" data-pswp-height="900" data-pswp-width="1200" href="https://simonwillisonnet.imgix.net/static/2022/mendenhalls-3.jpeg?w=1200&amp;amp;auto=compress"&gt;&lt;img alt="A very exciting classic racing car in a garage covered in more signs" src="https://simonwillisonnet.imgix.net/static/2022/mendenhalls-3.jpeg?w=1200&amp;amp;auto=compress" style="width: 32%" /&gt;&lt;/a&gt;
    &lt;a class="photoswipe-img" data-pswp-height="900" data-pswp-width="1200" href="https://simonwillisonnet.imgix.net/static/2022/mendenhalls-4.jpeg?w=1200&amp;amp;auto=compress"&gt;&lt;img alt="A bar area, with signs covering every inch of the walls and ceiling" src="https://simonwillisonnet.imgix.net/static/2022/mendenhalls-4.jpeg?w=1200&amp;amp;auto=compress" style="width: 32%" /&gt;&lt;/a&gt;
&lt;/p&gt;
&lt;h4&gt;Museums this week&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.niche-museums.com/106"&gt;Paso Robles Pioneer Museum&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.niche-museums.com/107"&gt;Mendenhall's Museum of Gasoline Pumps &amp;amp; Petroliana&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;TIL this week&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://til.simonwillison.net/python/copy-file"&gt;Efficiently copying a file&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/travel"&gt;travel&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/weeknotes"&gt;weeknotes&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="natalie-downe"/><category term="travel"/><category term="weeknotes"/></entry><entry><title>Pillar Point Stewards, pypi-to-sqlite, improvements to shot-scraper and appreciating datasette-dashboards</title><link href="https://simonwillison.net/2022/Apr/8/weeknotes/#atom-tag" rel="alternate"/><published>2022-04-08T20:26:36+00:00</published><updated>2022-04-08T20:26:36+00:00</updated><id>https://simonwillison.net/2022/Apr/8/weeknotes/#atom-tag</id><summary type="html">
    &lt;p&gt;This week I helped Natalie launch the Pillar Point Stewards website and built a new tool for loading PyPI package data into SQLite. I also want to help promote the excellent datasette-dashboards plugin by Romain Clement.&lt;/p&gt;
&lt;h4 id="pillar-point-stewards"&gt;Pillar Point Stewards&lt;/h4&gt;
&lt;p&gt;I've been helping my wife Natalie Downe build the website for the &lt;a href="https://www.pillarpointstewards.com/"&gt;Pillar Point Stewards&lt;/a&gt; initiative that she is organizing on behalf of the San Mateo MPA Collaborative and California Academy of Sciences.&lt;/p&gt;
&lt;p&gt;We live in El Granada, CA - home to the Pillar Point reef.&lt;/p&gt;
&lt;p&gt;The reef has always been mixed-use, with harvesting of sea life such as mussels and clams legal provided the harvesters have an inexpensive fishing license.&lt;/p&gt;
&lt;p&gt;Unfortunately, during the pandemic the number of people harvesting the reef rose by an order of magnitude - up to over a thousand people in just a single weekend. This had a major impact on the biodiversity of the reef, as described in &lt;a href="https://baynature.org/2021/01/12/packed-at-pillar-point/"&gt;Packed at Pillar Point by Anne Marshall-Chalmers for Bay Nature&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Pillar Point Stewards is an initiative to recruit volunteer stewards to go out on the reef during low tides, talking to people and trying to inspire curiosity and discourage unsustainable harvesting practices.&lt;/p&gt;
&lt;p&gt;A very small part of the project is the website to support it, which helps volunteers sign up for shifts at low tides.&lt;/p&gt;
&lt;p&gt;We re-used some of the work we had previously done &lt;a href="https://simonwillison.net/2020/Aug/21/weeknotes-rocky-beaches/"&gt;for Rocky Beaches&lt;/a&gt;, in particular the logic for working with tide times &lt;a href="https://tidesandcurrents.noaa.gov/web_services_info.html"&gt;from NOAA&lt;/a&gt; to decide when the shifts should be.&lt;/p&gt;
&lt;p&gt;Natalie designed the site and built the front-end. I implemented the Django backend and integrated with &lt;a href="https://auth0.com/"&gt;Auth0&lt;/a&gt; in order to avoid running our own signup and registration flow. This was the inspiration for the &lt;a href="https://datasette.io/plugins/datasette-auth0"&gt;datasette-auth0&lt;/a&gt; plugin I &lt;a href="https://simonwillison.net/2022/Mar/28/datasette-auth0/"&gt;released last week&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Signed in volunteers can select their shift times from a calendar:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2022/www-pillarpointstewards-com.jpg" alt="The signed in homepage, showing a list of upcoming shifts and a calendar view." style="max-width:100%;" /&gt;&lt;/p&gt;
&lt;p&gt;We also included an SVG tide chart on each shift page using the tide data from NOAA, which looks like this:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2022/www-pillarpointstewards-com-shifts-182.png" alt="The tide chart shows the tide level throughout the day, highlighting the low tide and showing which portion of the day is covered by the shift" style="max-width:100%;" /&gt;&lt;/p&gt;
&lt;p&gt;We've been building the site in public. You can see how everything works in the &lt;a href="https://github.com/natbat/pillarpointstewards"&gt;natbat/pillarpointstewards&lt;/a&gt; GitHub repository, including how the site uses &lt;a href="https://github.com/natbat/pillarpointstewards/blob/165cdcfe8b87cb15742e2729d0077202102fc751/.github/workflows/test.yml#L39-L45"&gt;continuous deployment&lt;/a&gt; against &lt;a href="https://fly.io/"&gt;Fly&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id="datasette-dashboards"&gt;datasette-dashboards&lt;/h4&gt;
&lt;p&gt;This is not my project, but I'm writing about it here because I only just found out about it and it's really cool.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://romain-clement.net/"&gt;Romain Clement&lt;/a&gt; built a plugin for Datasette called &lt;a href="https://github.com/rclement/datasette-dashboards"&gt;datasette-dashboards&lt;/a&gt;. It's best explained by checking out his &lt;a href="https://datasette-dashboards-demo.vercel.app/-/dashboards/job-offers-stats"&gt;live demo&lt;/a&gt;, which looks like this:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2022/datasette-dashboards-with-filters.png" alt="A dashboard, showing Job offers statistics - with a line chart, a big number, a donut chart, a nested bar chart and a choropleth map. The elements are arranged in a visually pleasing grid, with the line chart taking up two columns while everything else takes up one." style="max-width:100%;" /&gt;&lt;/p&gt;
&lt;p&gt;There are a bunch of clever ideas in this plugin.&lt;/p&gt;
&lt;p&gt;It uses YAML syntax to define the different dashboard panels, outsourcing the actual visualization elements to Vega. You can see &lt;a href="https://github.com/rclement/datasette-dashboards/blob/202fb2fcbc9efe4848fe940fae435eff75bb4f59/demo/metadata.yml"&gt;the YAML for the demo here&lt;/a&gt;. Here's an edited subset of the YAML illustrating some interesting points:&lt;/p&gt;
&lt;div class="highlight highlight-source-yaml"&gt;&lt;pre&gt;&lt;span class="pl-ent"&gt;plugins&lt;/span&gt;:
  &lt;span class="pl-ent"&gt;datasette-dashboards&lt;/span&gt;:
    &lt;span class="pl-ent"&gt;job-offers-stats&lt;/span&gt;:
      &lt;span class="pl-ent"&gt;title&lt;/span&gt;: &lt;span class="pl-s"&gt;Job offers statistics&lt;/span&gt;
      &lt;span class="pl-ent"&gt;description&lt;/span&gt;: &lt;span class="pl-s"&gt;Gather metrics about job offers&lt;/span&gt;
      &lt;span class="pl-ent"&gt;layout&lt;/span&gt;:
        - &lt;span class="pl-s"&gt;[analysis-note, offers-day, offers-day, offers-count]&lt;/span&gt;
        - &lt;span class="pl-s"&gt;[analysis-note, offers-source, offers-day-source, offers-region]&lt;/span&gt;
      &lt;span class="pl-ent"&gt;filters&lt;/span&gt;:
        &lt;span class="pl-ent"&gt;date_start&lt;/span&gt;:
          &lt;span class="pl-ent"&gt;name&lt;/span&gt;: &lt;span class="pl-s"&gt;Date Start&lt;/span&gt;
          &lt;span class="pl-ent"&gt;type&lt;/span&gt;: &lt;span class="pl-s"&gt;date&lt;/span&gt;
          &lt;span class="pl-ent"&gt;default&lt;/span&gt;: &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;2021-01-01&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;
        &lt;span class="pl-ent"&gt;date_end&lt;/span&gt;:
          &lt;span class="pl-ent"&gt;name&lt;/span&gt;: &lt;span class="pl-s"&gt;Date End&lt;/span&gt;
          &lt;span class="pl-ent"&gt;type&lt;/span&gt;: &lt;span class="pl-s"&gt;date&lt;/span&gt;
      &lt;span class="pl-ent"&gt;charts&lt;/span&gt;:
        &lt;span class="pl-ent"&gt;analysis-note&lt;/span&gt;:
          &lt;span class="pl-ent"&gt;library&lt;/span&gt;: &lt;span class="pl-s"&gt;markdown&lt;/span&gt;
          &lt;span class="pl-ent"&gt;display&lt;/span&gt;: &lt;span class="pl-s"&gt;|-&lt;/span&gt;
&lt;span class="pl-s"&gt;            # Analysis details&lt;/span&gt;
&lt;span class="pl-s"&gt;            ...&lt;/span&gt;
&lt;span class="pl-s"&gt;&lt;/span&gt;        &lt;span class="pl-ent"&gt;offers-count&lt;/span&gt;:
          &lt;span class="pl-ent"&gt;title&lt;/span&gt;: &lt;span class="pl-s"&gt;Total number of offers&lt;/span&gt;
          &lt;span class="pl-ent"&gt;db&lt;/span&gt;: &lt;span class="pl-s"&gt;jobs&lt;/span&gt;
          &lt;span class="pl-ent"&gt;query&lt;/span&gt;: &lt;span class="pl-s"&gt;SELECT count(*) as count FROM offers_view WHERE TRUE [[ AND date &amp;gt;= date(:date_start) ]] [[ AND date &amp;lt;= date(:date_end) ]];&lt;/span&gt;
          &lt;span class="pl-ent"&gt;library&lt;/span&gt;: &lt;span class="pl-s"&gt;metric&lt;/span&gt;
          &lt;span class="pl-ent"&gt;display&lt;/span&gt;:
            &lt;span class="pl-ent"&gt;field&lt;/span&gt;: &lt;span class="pl-s"&gt;count&lt;/span&gt;
            &lt;span class="pl-ent"&gt;prefix&lt;/span&gt;:
            &lt;span class="pl-ent"&gt;suffix&lt;/span&gt;: &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt; offers&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;
        &lt;span class="pl-ent"&gt;offers-day&lt;/span&gt;:
          &lt;span class="pl-ent"&gt;title&lt;/span&gt;: &lt;span class="pl-s"&gt;Number of offers by day&lt;/span&gt;
          &lt;span class="pl-ent"&gt;db&lt;/span&gt;: &lt;span class="pl-s"&gt;jobs&lt;/span&gt;
          &lt;span class="pl-ent"&gt;query&lt;/span&gt;: &lt;span class="pl-s"&gt;SELECT date(date) as day, count(*) as count FROM offers_view WHERE TRUE [[ AND date &amp;gt;= date(:date_start) ]] [[ AND date &amp;lt;= date(:date_end) ]] GROUP BY day ORDER BY day&lt;/span&gt;
          &lt;span class="pl-ent"&gt;library&lt;/span&gt;: &lt;span class="pl-s"&gt;vega&lt;/span&gt;
          &lt;span class="pl-ent"&gt;display&lt;/span&gt;:
            &lt;span class="pl-ent"&gt;mark&lt;/span&gt;: &lt;span class="pl-s"&gt;{ type: line, tooltip: true }&lt;/span&gt;
            &lt;span class="pl-ent"&gt;encoding&lt;/span&gt;:
              &lt;span class="pl-ent"&gt;x&lt;/span&gt;: &lt;span class="pl-s"&gt;{ field: day, type: temporal }&lt;/span&gt;
              &lt;span class="pl-ent"&gt;y&lt;/span&gt;: &lt;span class="pl-s"&gt;{ field: count, type: quantitative }&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The SQL query for each panel is defined as &lt;code&gt;query:&lt;/code&gt; - and can take parameters such as &lt;code&gt;:date_end&lt;/code&gt; which are defined by the &lt;code&gt;filters:&lt;/code&gt; section. Note that here one of the filters has a type of &lt;code&gt;date&lt;/code&gt;, which turns into a &lt;code&gt;&amp;lt;input type="date"&amp;gt;&lt;/code&gt; in the filter interface.&lt;/p&gt;
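&lt;p&gt;If I'm reading the &lt;code&gt;[[ ... ]]&lt;/code&gt; blocks correctly, each one is dropped entirely when its parameter is empty - which is why the queries start with &lt;code&gt;WHERE TRUE&lt;/code&gt;, so that any combination of &lt;code&gt;AND&lt;/code&gt; clauses can be appended or omitted. With only &lt;code&gt;date_start&lt;/code&gt; set, the metric panel's query would resolve to something like:&lt;/p&gt;
&lt;pre&gt;SELECT count(*) as count FROM offers_view
WHERE TRUE AND date &amp;gt;= date(:date_start);&lt;/pre&gt;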
&lt;p&gt;For &lt;code&gt;library: vega&lt;/code&gt; panels the &lt;code&gt;display:&lt;/code&gt; key holds the raw &lt;a href="https://vega.github.io/vega/docs/specification/"&gt;Vega specification&lt;/a&gt;, so anything the Vega visualization library can do is available to the plugin.&lt;/p&gt;
&lt;p&gt;I didn't know Vega could render choropleth maps! That map there is defined by this YAML, which loads a GeoJSON file of the regions in France from the &lt;a href="https://github.com/gregoiredavid/france-geojson"&gt;gregoiredavid/france-geojson&lt;/a&gt; GitHub repository.&lt;/p&gt;
&lt;div class="highlight highlight-source-yaml"&gt;&lt;pre&gt;&lt;span class="pl-ent"&gt;display&lt;/span&gt;:
  &lt;span class="pl-ent"&gt;mark&lt;/span&gt;: &lt;span class="pl-s"&gt;geoshape&lt;/span&gt;
  &lt;span class="pl-ent"&gt;projection&lt;/span&gt;: &lt;span class="pl-s"&gt;{ type: mercator }&lt;/span&gt;
  &lt;span class="pl-ent"&gt;transform&lt;/span&gt;:
    - &lt;span class="pl-ent"&gt;lookup&lt;/span&gt;: &lt;span class="pl-s"&gt;region&lt;/span&gt;
      &lt;span class="pl-ent"&gt;from&lt;/span&gt;:
        &lt;span class="pl-ent"&gt;data&lt;/span&gt;:
          &lt;span class="pl-ent"&gt;url&lt;/span&gt;: &lt;span class="pl-s"&gt;https://raw.githubusercontent.com/gregoiredavid/france-geojson/master/regions.geojson&lt;/span&gt;
          &lt;span class="pl-ent"&gt;format&lt;/span&gt;: &lt;span class="pl-s"&gt;{ type: json, property: features }&lt;/span&gt;
        &lt;span class="pl-ent"&gt;key&lt;/span&gt;: &lt;span class="pl-s"&gt;properties.nom&lt;/span&gt;
        &lt;span class="pl-ent"&gt;fields&lt;/span&gt;: &lt;span class="pl-s"&gt;[type, geometry]&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;I think my favourite trick though is the way it handles layout. The layout for the demo is defined thus:&lt;/p&gt;
&lt;div class="highlight highlight-source-yaml"&gt;&lt;pre&gt;&lt;span class="pl-ent"&gt;layout&lt;/span&gt;:
  - &lt;span class="pl-s"&gt;[analysis-note, offers-day, offers-day, offers-count]&lt;/span&gt;
  - &lt;span class="pl-s"&gt;[analysis-note, offers-source, offers-day-source, offers-region]&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This is then implemented &lt;a href="https://github.com/rclement/datasette-dashboards/blob/202fb2fcbc9efe4848fe940fae435eff75bb4f59/datasette_dashboards/templates/dashboard_view.html#L8-L26"&gt;using CSS grids&lt;/a&gt;! Here's the template fragment that does the work:&lt;/p&gt;
&lt;div class="highlight highlight-text-html-django"&gt;&lt;pre&gt;&amp;lt;&lt;span class="pl-ent"&gt;style&lt;/span&gt;&amp;gt;&lt;span class="pl-s1"&gt;&lt;/span&gt;
&lt;span class="pl-s1"&gt;  &lt;span class="pl-k"&gt;@media&lt;/span&gt; (&lt;span class="pl-c1"&gt;min-width&lt;/span&gt;: &lt;span class="pl-c1"&gt;800&lt;span class="pl-k"&gt;px&lt;/span&gt;&lt;/span&gt;) {&lt;/span&gt;
&lt;span class="pl-s1"&gt;    &lt;span class="pl-e"&gt;.dashboard-grid&lt;/span&gt; {&lt;/span&gt;
&lt;span class="pl-s1"&gt;      {% &lt;span class="pl-c1"&gt;if&lt;/span&gt; &lt;span class="pl-c1"&gt;dashboard&lt;/span&gt;.&lt;span class="pl-c1"&gt;layout&lt;/span&gt; %}&lt;/span&gt;
&lt;span class="pl-s1"&gt;      &lt;span class="pl-ent"&gt;grid-template-areas&lt;/span&gt;: {% &lt;span class="pl-c1"&gt;for&lt;/span&gt; &lt;span class="pl-c1"&gt;row&lt;/span&gt; &lt;span class="pl-c1"&gt;in&lt;/span&gt; &lt;span class="pl-c1"&gt;dashboard&lt;/span&gt;.&lt;span class="pl-c1"&gt;layout&lt;/span&gt; %}"{% &lt;span class="pl-c1"&gt;for&lt;/span&gt; &lt;span class="pl-c1"&gt;col&lt;/span&gt; &lt;span class="pl-c1"&gt;in&lt;/span&gt; &lt;span class="pl-c1"&gt;row&lt;/span&gt; %}{{ &lt;span class="pl-c1"&gt;col&lt;/span&gt; }} {% &lt;span class="pl-c1"&gt;endfor&lt;/span&gt; %}" {% &lt;span class="pl-c1"&gt;endfor&lt;/span&gt; %};&lt;/span&gt;
&lt;span class="pl-s1"&gt;      {% &lt;span class="pl-c1"&gt;else&lt;/span&gt; %}&lt;/span&gt;
&lt;span class="pl-s1"&gt;      &lt;span class="pl-ent"&gt;grid-template-columns&lt;/span&gt;: repeat(2, 1fr);&lt;/span&gt;
&lt;span class="pl-s1"&gt;      {% &lt;span class="pl-c1"&gt;endif&lt;/span&gt; %}&lt;/span&gt;
&lt;span class="pl-s1"&gt;    }&lt;/span&gt;
&lt;span class="pl-s1"&gt;&lt;/span&gt;
&lt;span class="pl-s1"&gt;    {% &lt;span class="pl-c1"&gt;if&lt;/span&gt; &lt;span class="pl-c1"&gt;dashboard&lt;/span&gt;.&lt;span class="pl-c1"&gt;layout&lt;/span&gt; %}&lt;/span&gt;
&lt;span class="pl-s1"&gt;    {% &lt;span class="pl-c1"&gt;for&lt;/span&gt; &lt;span class="pl-c1"&gt;chart&lt;/span&gt;_&lt;span class="pl-c1"&gt;slug&lt;/span&gt;, &lt;span class="pl-c1"&gt;chart&lt;/span&gt; &lt;span class="pl-c1"&gt;in&lt;/span&gt; &lt;span class="pl-c1"&gt;dashboard&lt;/span&gt;.&lt;span class="pl-c1"&gt;charts&lt;/span&gt;.&lt;span class="pl-c1"&gt;items&lt;/span&gt;() %}&lt;/span&gt;
&lt;span class="pl-s1"&gt;    &lt;span class="pl-e"&gt;#card-&lt;/span&gt;{{ &lt;span class="pl-c1"&gt;chart&lt;/span&gt;_&lt;span class="pl-c1"&gt;slug&lt;/span&gt; }} {&lt;/span&gt;
&lt;span class="pl-s1"&gt;      &lt;span class="pl-c1"&gt;&lt;span class="pl-c1"&gt;grid-area&lt;/span&gt;&lt;/span&gt;: {{ chart_slug }};&lt;/span&gt;
&lt;span class="pl-s1"&gt;    }&lt;/span&gt;
&lt;span class="pl-s1"&gt;    {% &lt;span class="pl-c1"&gt;endfor&lt;/span&gt; %}&lt;/span&gt;
&lt;span class="pl-s1"&gt;    {% &lt;span class="pl-c1"&gt;endif&lt;/span&gt; %}&lt;/span&gt;
&lt;span class="pl-s1"&gt;  }&lt;/span&gt;
&lt;span class="pl-s1"&gt;&lt;/span&gt;&amp;lt;/&lt;span class="pl-ent"&gt;style&lt;/span&gt;&amp;gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Such a clever and elegant trick.&lt;/p&gt;
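That template loop boils down to a simple transformation. Here's a rough Python sketch (the function name is mine, not part of the plugin) of what the nested Django loops above render into a `grid-template-areas` value:

```python
def grid_template_areas(layout):
    """Build a CSS grid-template-areas value from a list of layout rows.

    A rough Python equivalent of the nested template loops above: each
    row becomes a quoted, space-separated string of named grid areas.
    """
    return " ".join('"{}"'.format(" ".join(row)) for row in layout)


# The layout from the demo dashboard:
layout = [
    ["analysis-note", "offers-day", "offers-day", "offers-count"],
    ["analysis-note", "offers-source", "offers-day-source", "offers-region"],
]
print(grid_template_areas(layout))
```

Each chart's card then just needs `grid-area: its-slug` to snap into place.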
&lt;h4 id="pypi-to-sqlite"&gt;pypi-to-sqlite&lt;/h4&gt;
&lt;p&gt;I wanted to add &lt;code&gt;datasette-dashboards&lt;/code&gt; to the official Datasette &lt;a href="https://datasette.io/plugins"&gt;plugins directory&lt;/a&gt;, but there was a catch: since most of the plugins listed there are written by me, the site has some baked-in expectations: in particular, it expects that plugins will all be using the GitHub releases feature (&lt;a href="https://github.com/simonw/datasette-graphql/releases"&gt;for example&lt;/a&gt;) to announce their releases.&lt;/p&gt;
&lt;p&gt;Romain's plugin wasn't using that feature, instead maintaining &lt;a href="https://github.com/rclement/datasette-dashboards/blob/master/CHANGELOG.md"&gt;its own changelog file&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I've been meaning to make the plugin directory more forgiving for a while. I decided to switch from using GitHub releases as the definitive source of release information to using releases published to &lt;a href="https://pypi.org/"&gt;PyPI (the Python package index)&lt;/a&gt; instead.&lt;/p&gt;
&lt;p&gt;PyPI offers a stable JSON API: &lt;a href="https://pypi.org/pypi/datasette-dashboards/json"&gt;https://pypi.org/pypi/datasette-dashboards/json&lt;/a&gt; - which includes information on the package and all of its releases.&lt;/p&gt;
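A minimal sketch of consuming that API from Python (the helper names here are mine, but the URL shape and the top-level `info` and `releases` keys are part of the documented response):

```python
import json
from urllib.request import urlopen


def pypi_json_url(package):
    """URL for PyPI's stable JSON API for a single package."""
    return "https://pypi.org/pypi/{}/json".format(package)


def release_versions(data):
    """All released version strings from a parsed API response."""
    return sorted(data["releases"].keys())


# Requires network access:
# data = json.load(urlopen(pypi_json_url("datasette-dashboards")))
# print(data["info"]["summary"])
# print(release_versions(data))
```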
&lt;p&gt;To reliably pull that information into &lt;a href="https://datasette.io"&gt;datasette.io&lt;/a&gt; I decided on a two-step process. First, I set up a &lt;a href="https://simonwillison.net/2020/Oct/9/git-scraping/"&gt;Git scraper&lt;/a&gt; to archive the data that I cared about into a new repository called &lt;a href="https://github.com/simonw/pypi-datasette-packages"&gt;pypi-datasette-packages&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;That repo stores the current PyPI JSON for every package listed on the Datasette website. This means I can see changes made to those files over time by browsing the commit history. It also means that if PyPI is unavailable I can still &lt;a href="https://simonwillison.net/2020/Dec/13/datasette-io/"&gt;build and deploy&lt;/a&gt; the site.&lt;/p&gt;
&lt;p&gt;Then I wrote a new tool called &lt;a href="https://datasette.io/tools/pypi-to-sqlite"&gt;pypi-to-sqlite&lt;/a&gt; to load that data into SQLite database tables. You can try that out like so:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;pip install pypi-to-sqlite
pypi-to-sqlite pypi.db datasette-dashboards pypi-to-sqlite --prefix pypi_
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That &lt;code&gt;--prefix&lt;/code&gt; option causes the tables to be created with the specified prefix in their names.&lt;/p&gt;
&lt;p&gt;Here are the three tables generated by that command:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://datasette.io/content/pypi_packages"&gt;pypi_packages&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://datasette.io/content/pypi_versions"&gt;pypi_versions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://datasette.io/content/pypi_releases"&gt;pypi_releases&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Using data from these tables I was able to &lt;a href="https://github.com/simonw/datasette.io/issues/98#issuecomment-1093144133"&gt;rework the SQL view&lt;/a&gt; that powers the plugins and tools directories on the site, and now &lt;a href="https://datasette.io/plugins/datasette-dashboards"&gt;datasette-dashboards has its own page there&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id="shot-scraper-10-11"&gt;shot-scraper 0.10 and 0.11&lt;/h4&gt;
&lt;p&gt;&lt;a href="https://github.com/simonw/shot-scraper"&gt;shot-scraper&lt;/a&gt; is my tool for taking automated screenshots of web pages, built on top of &lt;a href="https://playwright.dev/"&gt;Playwright&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://twitter.com/palewire"&gt;Ben Welsh&lt;/a&gt; has been a key early adopter of &lt;code&gt;shot-scraper&lt;/code&gt;, using it to power his &lt;a href="https://github.com/palewire/news-homepages"&gt;news-homepages&lt;/a&gt; project which takes screenshots of various news websites and then both &lt;a href="https://twitter.com/newshomepages"&gt;tweets the results&lt;/a&gt; and uploads them to the &lt;a href="https://archive.org/details/news-homepages"&gt;News Homepages collection&lt;/a&gt; on the Internet Archive.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/simonw/shot-scraper/releases/tag/0.10"&gt;shot-scraper 0.10&lt;/a&gt; is mostly Ben's work: he contributed both a &lt;code&gt;--timeout&lt;/code&gt; option and a &lt;code&gt;--browser&lt;/code&gt; option to let you install and use browsers other than the Chromium default!&lt;/p&gt;
&lt;p&gt;(Ben needed this because some news homepages were embedding videos in a format that &lt;a href="https://github.com/microsoft/playwright/issues/13093"&gt;wasn't supported by Chromium&lt;/a&gt; but did work fine in regular Chrome.)&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/ryancheley"&gt;Ryan Cheley&lt;/a&gt; also contributed to 0.10 - thanks to Ryan, the &lt;code&gt;shot-scraper multi&lt;/code&gt; command now continues taking shots even if one of them fails, unless you pass the &lt;code&gt;--fail-on-error&lt;/code&gt; flag.&lt;/p&gt;
&lt;p&gt;In writing my weeknotes, I decided to use &lt;code&gt;shot-scraper&lt;/code&gt; to take a screenshot of the signed in homepage of the &lt;a href="https://www.pillarpointstewards.com/"&gt;www.pillarpointstewards.com&lt;/a&gt; site.&lt;/p&gt;
&lt;p&gt;In doing so, I found out that Google SSO &lt;a href="https://github.com/simonw/shot-scraper/issues/61"&gt;refuses to work&lt;/a&gt; with the default Playwright Chromium! But it does continue to work with Firefox, so I fixed the &lt;code&gt;shot-scraper auth&lt;/code&gt; command to support the &lt;code&gt;--browser&lt;/code&gt; option.&lt;/p&gt;
&lt;p&gt;I took the screenshot like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;shot-scraper auth https://www.pillarpointstewards.com/ -b firefox auth.json
# Now manually sign in with Auth0 and Google
shot-scraper https://www.pillarpointstewards.com/ -b firefox -a auth.json \
  --javascript "
    Array.from(
      document.querySelectorAll('[href^=tel]')
    ).forEach(el =&amp;gt; el.innerHTML = '(xxx) xxx-xxxx')"
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That &lt;code&gt;--javascript&lt;/code&gt; line there redacts the phone numbers that are displayed on the page to signed in volunteers.&lt;/p&gt;
&lt;p&gt;I created the second screenshot of just the tide times chart using this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;shot-scraper https://www.pillarpointstewards.com/shifts/182/ \
  -b firefox -a auth.json \
  --selector '.primary h2:nth-child(8)' \
  --selector .day-alone --padding 15
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;a href="https://github.com/simonw/shot-scraper/releases/tag/0.11"&gt;shot-scraper 0.11&lt;/a&gt;, released a few minutes ago, contains the new &lt;code&gt;auth --browser&lt;/code&gt; feature plus some additional contributions from Ben Welsh, Ryan Murphy and Ian Wootten:&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;New &lt;code&gt;shot-scraper accessibility --timeout&lt;/code&gt; option, thanks &lt;a href="https://github.com/palewire"&gt;Ben Welsh&lt;/a&gt;. &lt;a href="https://github.com/simonw/shot-scraper/pull/59"&gt;#59&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;shot-scraper auth --browser&lt;/code&gt; option for authentication using a browser other than Chromium. &lt;a href="https://github.com/simonw/shot-scraper/issues/61"&gt;#61&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Using &lt;code&gt;--quality&lt;/code&gt; now results in a JPEG file with the correct &lt;code&gt;.jpg&lt;/code&gt; extension. Thanks, &lt;a href="https://github.com/iwootten"&gt;Ian Wootten&lt;/a&gt;. &lt;a href="https://github.com/simonw/shot-scraper/pull/58"&gt;#58&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;New &lt;code&gt;--reduced-motion&lt;/code&gt; flag for emulating the "prefers-reduced-motion" media feature. Thanks, &lt;a href="https://github.com/rdmurphy"&gt;Ryan Murphy&lt;/a&gt;. &lt;a href="https://github.com/simonw/shot-scraper/issues/49"&gt;#49&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;h4&gt;Releases this week&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/simonw/shot-scraper"&gt;shot-scraper&lt;/a&gt;&lt;/strong&gt;: &lt;a href="https://github.com/simonw/shot-scraper/releases/tag/0.11"&gt;0.11&lt;/a&gt; - (&lt;a href="https://github.com/simonw/shot-scraper/releases"&gt;12 releases total&lt;/a&gt;) - 2022-04-08
&lt;br /&gt;Tools for taking automated screenshots of websites&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/simonw/pypi-to-sqlite"&gt;pypi-to-sqlite&lt;/a&gt;&lt;/strong&gt;: &lt;a href="https://github.com/simonw/pypi-to-sqlite/releases/tag/0.2.2"&gt;0.2.2&lt;/a&gt; - (&lt;a href="https://github.com/simonw/pypi-to-sqlite/releases"&gt;3 releases total&lt;/a&gt;) - 2022-04-08
&lt;br /&gt;Load data about Python packages from PyPI into SQLite&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/simonw/shot-scraper"&gt;shot-scraper&lt;/a&gt;&lt;/strong&gt;: &lt;a href="https://github.com/simonw/shot-scraper/releases/tag/0.10"&gt;0.10&lt;/a&gt; - (&lt;a href="https://github.com/simonw/shot-scraper/releases"&gt;12 releases total&lt;/a&gt;) - 2022-03-29
&lt;br /&gt;Tools for taking automated screenshots of websites&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;TIL this week&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://til.simonwillison.net/webassembly/compile-to-wasm-llvm-macos"&gt;Compiling to WASM with llvm on macOS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://til.simonwillison.net/python/calendar-weeks"&gt;Generating a calendar week grid with the Python Calendar module&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://til.simonwillison.net/docker/docker-for-mac-container-to-postgresql-on-host"&gt;Allowing a container in Docker Desktop for Mac to talk to a PostgreSQL server on the host machine&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://til.simonwillison.net/pytest/treat-warnings-as-errors"&gt;Treating warnings as errors in pytest&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://til.simonwillison.net/auth0/auth0-logout"&gt;Logging users out of Auth0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://til.simonwillison.net/bash/use-awk-to-add-a-prefix"&gt;Using awk to add a prefix&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/continuous-deployment"&gt;continuous-deployment&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/continuous-integration"&gt;continuous-integration&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/plugins"&gt;plugins&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/pypi"&gt;pypi&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/weeknotes"&gt;weeknotes&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/shot-scraper"&gt;shot-scraper&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ben-welsh"&gt;ben-welsh&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="continuous-deployment"/><category term="continuous-integration"/><category term="natalie-downe"/><category term="plugins"/><category term="projects"/><category term="pypi"/><category term="datasette"/><category term="weeknotes"/><category term="shot-scraper"/><category term="ben-welsh"/></entry><entry><title>Datasette 0.51 (plus weeknotes)</title><link href="https://simonwillison.net/2020/Nov/1/datasette-0-51/#atom-tag" rel="alternate"/><published>2020-11-01T04:22:55+00:00</published><updated>2020-11-01T04:22:55+00:00</updated><id>https://simonwillison.net/2020/Nov/1/datasette-0-51/#atom-tag</id><summary type="html">
    &lt;p&gt;I shipped &lt;a href="https://docs.datasette.io/en/stable/changelog.html#v0-51"&gt;Datasette 0.51&lt;/a&gt; today, with a new visual design, plugin hooks for adding navigation options, better handling of binary data, URL building utility methods and better support for running Datasette behind a proxy. It's a lot of stuff! Here are the annotated release notes.&lt;/p&gt;
&lt;h4 id="weeknotes-57-new-design"&gt;New visual design&lt;/h4&gt;
&lt;blockquote&gt;
&lt;p&gt;Datasette is no longer white and grey with blue and purple links! &lt;a href="https://twitter.com/natbat"&gt;Natalie Downe&lt;/a&gt; has been working on a visual refresh, the first iteration of which is included in this release. (&lt;a href="https://github.com/simonw/datasette/pull/1056"&gt;#1056&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It's about time Datasette grew beyond its clearly-designed-by-a-mostly-backend-engineer roots. Natalie has been helping me start adding some visual polish: we've started with an update to the colour scheme and will be continuing to iterate on the visual design as the project evolves towards the 1.0 release.&lt;/p&gt;
&lt;p&gt;The new design makes the navigation bar much more obvious, which is important for this release since the new navigation menu (tucked away behind a three-bar icon) is a key new feature.&lt;/p&gt;
&lt;p&gt;&lt;img alt="A screenshot of the new Datasette visual design, also showcasing two new menu icons" src="https://static.simonwillison.net/static/2020/datasette-0.51.png" style="max-width:100%;" /&gt;&lt;/p&gt;
&lt;h4 id="weeknotes-57-plugins-links"&gt;Plugins can now add links within Datasette&lt;/h4&gt;
&lt;blockquote&gt;
&lt;p&gt;A number of existing Datasette plugins add new pages to the Datasette interface, providing tools for things like &lt;a href="https://github.com/simonw/datasette-upload-csvs"&gt;uploading CSVs&lt;/a&gt;, &lt;a href="https://github.com/simonw/datasette-edit-schema"&gt;editing table schemas&lt;/a&gt; or &lt;a href="https://github.com/simonw/datasette-configure-fts"&gt;configuring full-text search&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Plugins like this can now link to themselves from other parts of the Datasette interface. The &lt;a href="https://docs.datasette.io/en/stable/plugin_hooks.html#plugin-hook-menu-links"&gt;menu_links(datasette, actor)&lt;/a&gt; hook (&lt;a href="https://github.com/simonw/datasette/issues/1064"&gt;#1064&lt;/a&gt;) lets plugins add links to Datasette's new top-right application menu, and the &lt;a href="https://docs.datasette.io/en/stable/plugin_hooks.html#plugin-hook-table-actions"&gt;table_actions(datasette, actor, database, table)&lt;/a&gt; hook (&lt;a href="https://github.com/simonw/datasette/issues/1066"&gt;#1066&lt;/a&gt;) adds links to a new "table actions" menu on the table page.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This feature has been a long time coming. I've been writing an increasing number of plugins that add new pages to Datasette, and so far the main way of using them has been to memorise and type in their URLs!&lt;/p&gt;
&lt;p&gt;The new navigation menu (which only displays if it has something in it) provides a global location to add new links. I've already released several plugin updates that take advantage of this.&lt;/p&gt;
&lt;p&gt;The new "table actions" menu imitates Datasette's existing column header menu icon - it's a cog. Clicking it opens a menu of actions relating to the current table.&lt;/p&gt;
&lt;p&gt;Want to see a demo?&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The demo at &lt;a href="https://latest.datasette.io/"&gt;latest.datasette.io&lt;/a&gt; now includes some example plugins. To see the new table actions menu first &lt;a href="https://latest.datasette.io/login-as-root"&gt;sign into that demo as root&lt;/a&gt; and then visit the &lt;a href="https://latest.datasette.io/fixtures/facetable"&gt;facetable&lt;/a&gt; table to see the new cog icon menu at the top of the page.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Here's an animated GIF demo showing the new menus in action.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Using the nav menu to upload CSVs, then the table actions menu to edit the table schema and rename latitude and longitude columns so that the Central Park Squirrel Census is displayed on a map!" src="https://static.simonwillison.net/static/2020/datasette-0.51-menus.gif" style="max-width:100%;" /&gt;&lt;/p&gt;
&lt;h4 id="weeknotes-57-binary-data"&gt;Binary data&lt;/h4&gt;
&lt;blockquote&gt;
&lt;p&gt;SQLite tables can contain binary data in &lt;code&gt;BLOB&lt;/code&gt; columns. Datasette now provides links for users to download this data directly from Datasette, and uses those links to make binary data available from CSV exports. See &lt;a href="https://docs.datasette.io/en/stable/binary_data.html#binary"&gt;Binary data&lt;/a&gt; for more details. (&lt;a href="https://github.com/simonw/datasette/issues/1036"&gt;#1036&lt;/a&gt; and &lt;a href="https://github.com/simonw/datasette/issues/1034"&gt;#1034&lt;/a&gt;).&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I spent a ton of time on this over the past few weeks. The initial impetus was a realization that Datasette CSV exports included ugly Python &lt;code&gt;b'\x15\x1c\x02\xc7\xad\x05\xfe'&lt;/code&gt; strings, which felt like the worst possible way to display binary in a CSV file, out of universally bad options.&lt;/p&gt;
&lt;p&gt;Datasette's main interface punted on binary entirely - it would show a &lt;code&gt;&amp;lt;Binary data: 7 bytes&amp;gt;&lt;/code&gt; label which didn't help much either.&lt;/p&gt;
&lt;p&gt;The only way to get at binary data stored in a Datasette instance was to request the JSON version and then manually decode the Base-64 value within it!&lt;/p&gt;
&lt;p&gt;This is now fixed: binary columns can be downloaded directly to your computer, using a new &lt;code&gt;.blob&lt;/code&gt; output renderer. The approach is described on &lt;a href="https://docs.datasette.io/en/stable/binary_data.html#binary"&gt;this new page in the documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Security was a major consideration when building this feature. Allowing the download of arbitrary byte payloads from a web server is dangerous business: it can easily result in XSS holes where HTML with dangerous &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; content can end up hosted on the primary domain.&lt;/p&gt;
&lt;p&gt;After some research, I decided to serve up binary content for download using the following headers:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;content-type: application/binary
x-content-type-options: nosniff
content-disposition: attachment; filename="data-f30889.blob"
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;application/binary&lt;/code&gt; is a safer Content-Type option than the more common &lt;code&gt;application/octet-stream&lt;/code&gt;, according to Michal Zalewski's renowned web application security book The Tangled Web (&lt;a href="https://neexee.github.io/posts-en/the-tangled-web/"&gt;quoted here&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;&lt;code&gt;x-content-type-options: nosniff&lt;/code&gt; disables the XSS-tastic content sniffing feature in older versions of Internet Explorer, where IE would helpfully guess that you intended to serve HTML based on the first few bytes of the response.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;content-disposition: attachment&lt;/code&gt; header causes the browser to show a "download this file" dialog, using the suggested filename.&lt;/p&gt;
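Put together, building those response headers might look something like this. This is a sketch, not Datasette's actual code, and I'm assuming the filename embeds a short content hash, as the `data-f30889.blob` example above suggests:

```python
import hashlib


def blob_download_headers(content):
    """Response headers for safely serving an arbitrary binary payload.

    Combines the three defences described above: an inert Content-Type,
    content sniffing disabled, and a forced download with a filename
    derived from a short hash of the content (hashing scheme assumed).
    """
    digest = hashlib.sha256(content).hexdigest()[:6]
    return {
        "content-type": "application/binary",
        "x-content-type-options": "nosniff",
        "content-disposition": 'attachment; filename="data-{}.blob"'.format(digest),
    }
```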
&lt;p&gt;If you know of a reason that this isn't secure enough, please &lt;a href="https://twitter.com/simonw"&gt;let me know&lt;/a&gt;!&lt;/p&gt;
&lt;h4 id="weeknotes-57-url-building"&gt;URL building&lt;/h4&gt;
&lt;blockquote&gt;
&lt;p&gt;The new &lt;a href="https://docs.datasette.io/en/stable/internals.html#internals-datasette-urls"&gt;datasette.urls&lt;/a&gt; family of methods can be used to generate URLs to key pages within the Datasette interface, both within custom templates and Datasette plugins. See &lt;a href="https://docs.datasette.io/en/stable/writing_plugins.html#writing-plugins-building-urls"&gt;Building URLs within plugins&lt;/a&gt; for more details. (&lt;a href="https://github.com/simonw/datasette/issues/904"&gt;#904&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Datasette's &lt;a href="https://docs.datasette.io/en/stable/config.html#config-base-url"&gt;base_url&lt;/a&gt; configuration setting was the forcing factor around this piece of work.&lt;/p&gt;
&lt;p&gt;It allows you to configure Datasette to serve content starting at a path other than &lt;code&gt;/&lt;/code&gt; - for example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;datasette --config base_url:/path-to-datasette/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will serve all Datasette pages at locations starting with &lt;code&gt;/path-to-datasette/&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Why would you want to do this? It's useful if you are proxying traffic to Datasette from within the URL hierarchy of an existing website.&lt;/p&gt;
&lt;p&gt;The feature didn't work properly, and enough people care about it that I had a steady stream of bug reports. For 0.51 I gathered them all into a single giant &lt;a href="https://github.com/simonw/datasette/issues/1023"&gt;tracking issue&lt;/a&gt; and worked through them all one by one.&lt;/p&gt;
&lt;p&gt;It quickly became apparent that the key challenge was building URLs within Datasette - not just within HTML template pages, but also for things like HTTP redirects.&lt;/p&gt;
&lt;p&gt;Datasette itself needed to generate URLs that took the &lt;code&gt;base_url&lt;/code&gt; setting into account, but so do Datasette plugins. So I built &lt;a href="https://docs.datasette.io/en/stable/internals.html#datasette-urls"&gt;a new datasette.urls&lt;/a&gt; collection of helper methods and made them part of the documented internals API for plugins. The &lt;a href="https://docs.datasette.io/en/stable/writing_plugins.html#building-urls-within-plugins"&gt;Building URLs within plugins&lt;/a&gt; documentation shows how these should be used.&lt;/p&gt;
&lt;p&gt;I also added documentation on &lt;a href="https://docs.datasette.io/en/stable/deploying.html#deploying-proxy"&gt;Running Datasette behind a proxy&lt;/a&gt; with example configs (tested on my laptop) for both nginx and Apache.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://docs.datasette.io/en/stable/internals.html#datasette-client"&gt;datasette.client&lt;/a&gt; mechanism from Datasette 0.50 allows plugins to make calls to Datasette's internal JSON API without the overhead of an HTTP request. This is another place where plugins need to be able to construct valid URLs to internal Datasette pages.&lt;/p&gt;
&lt;p&gt;I added this example to the documentation showing how the two features can work together:&lt;/p&gt;
&lt;pre&gt;&lt;span class="pl-s1"&gt;table_json&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; (
    &lt;span class="pl-k"&gt;await&lt;/span&gt; &lt;span class="pl-s1"&gt;datasette&lt;/span&gt;.&lt;span class="pl-s1"&gt;client&lt;/span&gt;.&lt;span class="pl-en"&gt;get&lt;/span&gt;(
        &lt;span class="pl-s1"&gt;datasette&lt;/span&gt;.&lt;span class="pl-s1"&gt;urls&lt;/span&gt;.&lt;span class="pl-en"&gt;table&lt;/span&gt;(&lt;span class="pl-s"&gt;"fixtures"&lt;/span&gt;, &lt;span class="pl-s"&gt;"facetable"&lt;/span&gt;, &lt;span class="pl-s1"&gt;format&lt;/span&gt;&lt;span class="pl-c1"&gt;=&lt;/span&gt;&lt;span class="pl-s"&gt;"json"&lt;/span&gt;)
    )
).&lt;span class="pl-en"&gt;json&lt;/span&gt;()&lt;/pre&gt;
&lt;p&gt;One final weird detail on this: Datasette now has various methods that automatically add the &lt;code&gt;base_url&lt;/code&gt; prefix to a URL. I got worried about what would happen if these were applied more than once (as above, where &lt;code&gt;datasette.urls.table()&lt;/code&gt; applies the prefix and so does &lt;code&gt;datasette.client.get()&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;I fixed this using the same trick that Django and Jinja use to avoid applying auto-escaping twice to content that will be displayed in HTML: the &lt;code&gt;datasette.urls&lt;/code&gt; methods actually return a &lt;code&gt;PrefixedUrlString&lt;/code&gt; object which is a subclass of &lt;code&gt;str&lt;/code&gt; that knows that the prefix has been applied! Code for that &lt;a href="https://github.com/simonw/datasette/blob/main/datasette/utils/__init__.py#L1015"&gt;lives here&lt;/a&gt;.&lt;/p&gt;
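The idea boils down to a few lines. This is a simplified sketch (names other than `PrefixedUrlString` are mine), not Datasette's actual implementation:

```python
class PrefixedUrlString(str):
    """A str subclass marking a URL that already carries the base_url prefix."""


def prefix_url(base_url, path):
    """Apply base_url to path, but never twice.

    The return type records that the work has been done, the same trick
    Django and Jinja use to avoid double auto-escaping HTML.
    """
    if isinstance(path, PrefixedUrlString):
        return path
    return PrefixedUrlString(base_url.rstrip("/") + "/" + str(path).lstrip("/"))
```

Applying it twice is then a no-op: `prefix_url("/path-to-datasette/", prefix_url("/path-to-datasette/", "/fixtures"))` still gives `/path-to-datasette/fixtures`.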
&lt;h4 id="weeknotes-57-weeknotes-smaller-changes"&gt;Smaller changes&lt;/h4&gt;
&lt;p&gt;A few highlights from the "smaller changes" in Datasette 0.51:&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Wide tables shown within Datasette now scroll horizontally (&lt;a href="https://github.com/simonw/datasette/issues/998"&gt;#998&lt;/a&gt;). This is achieved using a new &lt;code&gt;&amp;lt;div class="table-wrapper"&amp;gt;&lt;/code&gt; element which may impact the implementation of some plugins (for example &lt;a href="https://github.com/simonw/datasette-cluster-map/commit/fcb4abbe7df9071c5ab57defd39147de7145b34e"&gt;this change to datasette-cluster-map&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;I think this is a big improvement: if your database table is too wide, it now scrolls horizontally on the page (rather than blowing the entire page out to a wider width). You can see that in action on the &lt;a href="https://global-power-plants.datasettes.com/global-power-plants/global-power-plants"&gt;global-power-plants demo&lt;/a&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;New &lt;a href="https://docs.datasette.io/en/stable/authentication.html#permissions-debug-menu"&gt;debug-menu&lt;/a&gt; permission. (&lt;a href="https://github.com/simonw/datasette/issues/1068"&gt;#1068&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;If you are signed in as &lt;code&gt;root&lt;/code&gt; the new navigation menu links to a whole plethora of previously-undiscoverable Datasette debugging tools. This new permission controls the display of those items.&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Link:&lt;/code&gt; HTTP header pagination. (&lt;a href="https://github.com/simonw/datasette/issues/1014"&gt;#1014&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;Inspired by GitHub and WordPress, which both use the HTTP &lt;code&gt;Link&lt;/code&gt; header in this way. It's an optional extra though: Datasette will always offer in-JSON pagination information.&lt;/p&gt;
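The header itself is simple. A sketch of emitting one (the helper name is mine), following the `rel="next"` convention GitHub and WordPress use:

```python
def link_next_header(next_url):
    """Value for an HTTP Link header advertising the next page of results."""
    return '<{}>; rel="next"'.format(next_url)
```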
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Edit SQL button on canned queries. (&lt;a href="https://github.com/simonw/datasette/issues/1019"&gt;#1019&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;Suggested by Jacob Fenton &lt;a href="https://github.com/simonw/datasette/issues/1019"&gt;in this issue&lt;/a&gt;. The implementation had quite a few edge cases since there are certain categories of canned query that can't be executed as custom SQL by the user. See the &lt;a href="https://github.com/simonw/datasette/issues/1019#issuecomment-708128286"&gt;issue comments&lt;/a&gt; for details and a demo.&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--load-extension=spatialite&lt;/code&gt; shortcut. (&lt;a href="https://github.com/simonw/datasette/issues/1028"&gt;#1028&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;Inspired by &lt;a href="https://github.com/simonw/sqlite-utils/issues/136"&gt;a similar feature&lt;/a&gt; in &lt;code&gt;sqlite-utils&lt;/code&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;datasette -o&lt;/code&gt; option now opens the most relevant page. (&lt;a href="https://github.com/simonw/datasette/issues/976"&gt;#976&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is a fun little feature. If your Datasette only loads a single database, and that database only has a single table (common if you've just run a single CSV import) then running this will open your browser directly to that table page:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;datasette data.db -o
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;datasette --cors&lt;/code&gt; option now enables access to &lt;code&gt;/database.db&lt;/code&gt; downloads. (&lt;a href="https://github.com/simonw/datasette/issues/1057"&gt;#1057&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;This was inspired by Mike Bostock's &lt;a href="https://observablehq.com/@mbostock/sqlite"&gt;Observable Notebook&lt;/a&gt; that uses the Emscripten-compiled &lt;a href="https://github.com/sql-js/sql.js"&gt;JavaScript version&lt;/a&gt; of SQLite to run queries against SQLite database files.&lt;/p&gt;
&lt;p&gt;It turned out you couldn't use that notebook against SQLite files hosted in Datasette because they weren't covered by Datasette's CORS option. Now they are!&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;New documentation on &lt;a href="https://docs.datasette.io/en/stable/writing_plugins.html#writing-plugins-designing-urls"&gt;Designing URLs for your plugin&lt;/a&gt;. (&lt;a href="https://github.com/simonw/datasette/issues/1053"&gt;#1053&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;Recommendations for plugin authors, inspired by a question &lt;a href="https://twitter.com/kanedr/status/1320653434895347713"&gt;from David Kane&lt;/a&gt; on Twitter. David has been building &lt;a href="https://github.com/drkane/datasette-reconcile"&gt;datasette-reconcile&lt;/a&gt;, a Datasette plugin that offers a reconciliation API endpoint that can be used with &lt;a href="https://openrefine.org/"&gt;OpenRefine&lt;/a&gt;. What a brilliant idea!&lt;/p&gt;
&lt;h4 id="weeknotes-57-datasette-edit-templates"&gt;datasette-edit-templates (almost)&lt;/h4&gt;
&lt;p&gt;Inspired by a conversation with &lt;a href="https://twitter.com/obra"&gt;Jesse Vincent&lt;/a&gt;, I also spent some time experimenting with the idea of a plugin that can load and edit templates from the database - which would turn a personal Datasette into a really fun interface hacking environment. I nearly got this working, and even shipped a preview of a &lt;code&gt;load_template()&lt;/code&gt; plugin hook in the &lt;a href="https://github.com/simonw/datasette/releases/tag/0.51a2"&gt;Datasette 0.51a2&lt;/a&gt; alpha... before hitting a roadblock when I realized that it also needed to work with Jinja's &lt;code&gt;{% extends %}&lt;/code&gt; and &lt;code&gt;{% include %}&lt;/code&gt; template tags, and the Jinja loaders for those &lt;a href="https://github.com/pallets/jinja/issues/1304"&gt;don't currently support async functions&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In exploring this I also realized that my &lt;code&gt;load_template()&lt;/code&gt; plugin hook wasn't actually necessary - if I'm going to solve this problem with Jinja loaders I can do so using the existing &lt;a href="https://docs.datasette.io/en/stable/plugin_hooks.html#prepare-jinja2-environment-env"&gt;prepare_jinja2_environment(env)&lt;/a&gt; hook.&lt;/p&gt;
&lt;p&gt;My not-yet-functional prototype for this is called &lt;a href="https://github.com/simonw/datasette-edit-templates"&gt;datasette-edit-templates&lt;/a&gt;. I'm pretty confident I can get it working against the existing plugin hook with a little more work.&lt;/p&gt;
&lt;h4 id="weeknotes-57-other-weeknotes"&gt;Other weeknotes&lt;/h4&gt;
&lt;p&gt;Most of my time this week was spent on Datasette 0.51 - but I did find a little bit of time for other projects.&lt;/p&gt;
&lt;p&gt;I finished recording my talk for PyCon Argentina. It will air on November 20th.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/simonw/sqlite-utils/releases/tag/2.23"&gt;sqlite-utils 2.23&lt;/a&gt; is out, with a &lt;code&gt;.m2m()&lt;/code&gt; bug fix from Adam Wolf and the new ability to display progress bars when importing TSV and CSV files.&lt;/p&gt;
&lt;h4 id="weeknotes-57-releases"&gt;Releases this week&lt;/h4&gt;
&lt;p&gt;Several of these are updates to take advantage of the new navigation plugin hooks introduced in Datasette 0.51.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/simonw/datasette-configure-fts/releases/tag/1.1"&gt;datasette-configure-fts 1.1&lt;/a&gt; - 2020-11-01&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/simonw/datasette-graphql/releases/tag/1.1"&gt;datasette-graphql 1.1&lt;/a&gt; - 2020-11-01&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/simonw/datasette-edit-schema/releases/tag/0.4"&gt;datasette-edit-schema 0.4&lt;/a&gt; - 2020-10-31&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/simonw/datasette-upload-csvs/releases/tag/0.6"&gt;datasette-upload-csvs 0.6&lt;/a&gt; - 2020-10-31&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/simonw/datasette/releases/tag/0.51"&gt;datasette 0.51&lt;/a&gt; - 2020-10-31&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/simonw/datasette/releases/tag/0.51a2"&gt;datasette 0.51a2&lt;/a&gt; - 2020-10-30&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/simonw/datasette-edit-schema/releases/tag/0.4a0"&gt;datasette-edit-schema 0.4a0&lt;/a&gt; - 2020-10-30&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/simonw/datasette/releases/tag/0.51a1"&gt;datasette 0.51a1&lt;/a&gt; - 2020-10-30&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/simonw/datasette-render-markdown/releases/tag/1.2"&gt;datasette-render-markdown 1.2&lt;/a&gt; - 2020-10-28&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/simonw/sqlite-utils/releases/tag/2.23"&gt;sqlite-utils 2.23&lt;/a&gt; - 2020-10-28&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="weeknotes-57-til"&gt;TIL this week&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://til.simonwillison.net/til/til/python_decorators-with-optional-arguments.md"&gt;Decorators with optional arguments&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://til.simonwillison.net/til/til/javascript_dropdown-menu-with-details-summary.md"&gt;Dropdown menu with details summary&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/urls"&gt;urls&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/xss"&gt;xss&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/weeknotes"&gt;weeknotes&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/annotated-release-notes"&gt;annotated-release-notes&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="natalie-downe"/><category term="projects"/><category term="security"/><category term="urls"/><category term="xss"/><category term="datasette"/><category term="weeknotes"/><category term="annotated-release-notes"/></entry><entry><title>Weeknotes: Rocky Beaches, Datasette 0.48, a commit history of my database</title><link href="https://simonwillison.net/2020/Aug/21/weeknotes-rocky-beaches/#atom-tag" rel="alternate"/><published>2020-08-21T00:52:16+00:00</published><updated>2020-08-21T00:52:16+00:00</updated><id>https://simonwillison.net/2020/Aug/21/weeknotes-rocky-beaches/#atom-tag</id><summary type="html">
    &lt;p&gt;This week I helped Natalie launch &lt;a href="https://www.rockybeaches.com/"&gt;Rocky Beaches&lt;/a&gt;, shipped Datasette 0.48 and several releases of &lt;code&gt;datasette-graphql&lt;/code&gt;, upgraded the CSRF protection for &lt;code&gt;datasette-upload-csvs&lt;/code&gt; and figured out how to get a commit log of changes to my blog by backing up its database to a GitHub repository.&lt;/p&gt;
&lt;h4 id="rocky-beaches"&gt;Rocky Beaches&lt;/h4&gt;
&lt;p&gt;&lt;a href="https://twitter.com/natbat"&gt;Natalie&lt;/a&gt; released the first version of &lt;a href="https://www.rockybeaches.com/"&gt;rockybeaches.com&lt;/a&gt; this week. It's a site that helps you find places to go tidepooling (known as rockpooling in the UK) and figure out the best times to go based on low tide times.&lt;/p&gt;

&lt;p&gt;&lt;img style="max-width: 100%" src="https://static.simonwillison.net/static/2020/Rocky_Beaches__Pillar_Point_Harbor_CA.jpg" alt="Screenshot of the Pillar Point page for Rocky Beaches" /&gt;&lt;/p&gt;

&lt;p&gt;I helped out with the backend for the site, mainly as an excuse to further explore the idea of using Datasette to power full websites (previously explored with &lt;a href="https://simonwillison.net/2019/Nov/25/niche-museums/"&gt;Niche Museums&lt;/a&gt; and &lt;a href="https://simonwillison.net/2020/Apr/20/self-rewriting-readme/"&gt;my TILs&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;The site uses a pattern I've been really enjoying: it's essentially a static dynamic site. Pages are dynamically rendered by Datasette using Jinja templates and a SQLite database, but the database itself is treated as a static asset: it's built at deploy time by &lt;a href="https://github.com/natbat/rockybeaches/blob/main/.github/workflows/deploy.yml"&gt;this GitHub Actions workflow&lt;/a&gt; and deployed (currently to &lt;a href="https://www.vercel.com/"&gt;Vercel&lt;/a&gt;) as a binary asset along with the code.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://github.com/natbat/rockybeaches/blob/main/scripts/build.sh"&gt;build script&lt;/a&gt; uses &lt;a href="https://github.com/simonw/yaml-to-sqlite"&gt;yaml-to-sqlite&lt;/a&gt; to load two YAML files - &lt;a href="https://github.com/natbat/rockybeaches/blob/4127c0f0539178664cefed4aca00db2b5c00c855/data/places.yml"&gt;places.yml&lt;/a&gt; and &lt;a href="https://github.com/natbat/rockybeaches/blob/4127c0f0539178664cefed4aca00db2b5c00c855/data/stations.yml"&gt;stations.yml&lt;/a&gt; - and create the &lt;code&gt;stations&lt;/code&gt; and &lt;code&gt;places&lt;/code&gt; database tables.&lt;/p&gt;
&lt;p&gt;It then runs two custom Python scripts to fetch relevant data for those places from &lt;a href="https://www.inaturalist.org/"&gt;iNaturalist&lt;/a&gt; and the &lt;a href="https://tidesandcurrents.noaa.gov/web_services_info.html"&gt;NOAA Tides &amp;amp; Currents API&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The data all ends up in the Datasette instance that powers the site - you can browse it at &lt;a href="http://www.rockybeaches.com/data"&gt;www.rockybeaches.com/data&lt;/a&gt; or interact with it using the GraphQL API at &lt;a href="http://www.rockybeaches.com/graphql"&gt;www.rockybeaches.com/graphql&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The code is a little convoluted at the moment - I'm still iterating towards the best patterns for building websites like this using Datasette - but I'm very pleased with the productivity and performance that this approach produced.&lt;/p&gt;
&lt;h4 id="datasette-048"&gt;Datasette 0.48&lt;/h4&gt;
&lt;p&gt;Highlights from &lt;a href="https://docs.datasette.io/en/stable/changelog.html#v0-48"&gt;Datasette 0.48&lt;/a&gt; release notes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Datasette documentation now lives at &lt;a href="https://docs.datasette.io/"&gt;docs.datasette.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;extra_template_vars&lt;/code&gt;, &lt;code&gt;extra_css_urls&lt;/code&gt;, &lt;code&gt;extra_js_urls&lt;/code&gt; and &lt;code&gt;extra_body_script&lt;/code&gt; plugin hooks now all accept the same arguments. See &lt;a href="https://docs.datasette.io/en/stable/plugin_hooks.html#plugin-hook-extra-template-vars"&gt;extra_template_vars(template, database, table, columns, view_name, request, datasette)&lt;/a&gt; for details. (&lt;a href="https://github.com/simonw/datasette/issues/939"&gt;#939&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Those hooks now accept a new &lt;code&gt;columns&lt;/code&gt; argument detailing the table columns that will be rendered on that page. (&lt;a href="https://github.com/simonw/datasette/issues/938"&gt;#938&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I released a new version of &lt;a href="https://github.com/simonw/datasette-cluster-map"&gt;datasette-cluster-map&lt;/a&gt; that takes advantage of the new &lt;code&gt;columns&lt;/code&gt; argument to only inject Leaflet maps JavaScript onto the page if the table being rendered includes latitude and longitude columns - previously the plugin would load extra code on pages that weren't going to render a map at all. That's now running on &lt;a href="https://global-power-plants.datasettes.com/"&gt;https://global-power-plants.datasettes.com/&lt;/a&gt;.&lt;/p&gt;
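&lt;p&gt;The shape of that check is simple to sketch. This is an illustration, not the actual &lt;code&gt;datasette-cluster-map&lt;/code&gt; code - a real plugin would decorate the function with &lt;code&gt;@hookimpl&lt;/code&gt; from Datasette, and the script URL here is hypothetical:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def extra_js_urls(columns):
    # Only inject the mapping JavaScript when the table being
    # rendered actually has coordinates to plot
    if columns and "latitude" in columns and "longitude" in columns:
        return ["/-/static-plugins/example/map.js"]  # hypothetical URL
    return []
&lt;/code&gt;&lt;/pre&gt;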
&lt;h4 id="datasette-graphql"&gt;datasette-graphql&lt;/h4&gt;
&lt;p&gt;Using &lt;a href="https://github.com/simonw/datasette-graphql"&gt;datasette-graphql&lt;/a&gt; for Rocky Beaches inspired me to add two new features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A new &lt;code&gt;graphql()&lt;/code&gt; Jinja custom template function that lets you execute custom GraphQL queries inside a Datasette template page - which turns out to be a pretty elegant way for the template to load exactly the data that it needs in order to render the page. Here's &lt;a href="https://github.com/natbat/rockybeaches/blob/70039f18b3d3823a4f069deca513e950a3aaba4f/templates/row-data-places.html#L29-L46"&gt;how Rocky Beaches uses that&lt;/a&gt;. &lt;a href="https://github.com/simonw/datasette-graphql/issues/50"&gt;Issue 50&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Some of the iNaturalist data that Rocky Beaches uses is stored as JSON data in text columns in SQLite - mainly because I was too lazy to model it out as tables. This was coming out of the GraphQL API as strings-containing-JSON, so I added a &lt;code&gt;json_columns&lt;/code&gt; plugin configuration mechanism for turning those into Graphene &lt;code&gt;GenericScalar&lt;/code&gt; fields - see &lt;a href="https://github.com/simonw/datasette-graphql/issues/53"&gt;issue 53&lt;/a&gt; for details.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I also landed a big performance improvement. The plugin works by introspecting the database and generating a GraphQL schema that represents its tables, columns and views. For databases with a lot of tables this can get expensive, and the introspection was being run on every request.&lt;/p&gt;
&lt;p&gt;I didn't want to require a server restart any time the schema changed, so I couldn't just cache the schema in-memory indefinitely. Ideally the schema would be cached, but the cache would become invalid any time the underlying schema changed.&lt;/p&gt;
&lt;p&gt;It turns out SQLite has a mechanism for this: the &lt;code&gt;PRAGMA schema_version&lt;/code&gt; statement, which returns an integer version number that changes any time the underlying schema is changed (e.g. a table is added or modified).&lt;/p&gt;
&lt;p&gt;I built a quick &lt;a href="https://github.com/simonw/datasette-schema-versions"&gt;datasette-schema-versions&lt;/a&gt; plugin to try this feature out (in less than twenty minutes thanks to my &lt;a href="https://simonwillison.net/2020/Jun/20/cookiecutter-plugins/"&gt;datasette-plugin cookiecutter template&lt;/a&gt;) and prove to myself that it works. Then I built a caching mechanism for &lt;code&gt;datasette-graphql&lt;/code&gt; that uses the current &lt;code&gt;schema_version&lt;/code&gt; as the cache key. See &lt;a href="https://github.com/simonw/datasette-graphql/issues/51"&gt;issue 51&lt;/a&gt; for details.&lt;/p&gt;
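&lt;p&gt;The shape of that caching trick can be sketched in a few lines of Python - this is an illustration with made-up function names, not the plugin's actual code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import sqlite3

_cached_schema = None
_cached_version = None

def get_schema(conn):
    # PRAGMA schema_version increments whenever the schema changes,
    # so it works as a cheap cache key for expensive introspection
    global _cached_schema, _cached_version
    version = conn.execute("PRAGMA schema_version").fetchone()[0]
    if version != _cached_version:
        _cached_schema = introspect(conn)
        _cached_version = version
    return _cached_schema

def introspect(conn):
    # Stand-in for the expensive schema-building work
    return [row[0] for row in conn.execute(
        "select name from sqlite_master where type = 'table'")]
&lt;/code&gt;&lt;/pre&gt;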
&lt;h4 id="asgi-csrf-and-datasette-upload-csvs"&gt;asgi-csrf and datasette-upload-csvs&lt;/h4&gt;
&lt;p&gt;&lt;a href="https://github.com/simonw/datasette-upload-csvs"&gt;datasette-upload-csvs&lt;/a&gt; is a Datasette plugin that adds a form for uploading CSV files and converting them to SQLite tables.&lt;/p&gt;
&lt;p&gt;Datasette 0.44 &lt;a href="https://docs.datasette.io/en/latest/changelog.html#csrf-protection"&gt;added CSRF protection&lt;/a&gt;, which broke the plugin. I fixed that this week, but it took some extra work because file uploads use the &lt;code&gt;multipart/form-data&lt;/code&gt; HTTP mechanism and my &lt;a href="https://github.com/simonw/asgi-csrf"&gt;asgi-csrf&lt;/a&gt; library didn't support that.&lt;/p&gt;
&lt;p&gt;I &lt;a href="https://github.com/simonw/asgi-csrf/issues/1"&gt;added that support&lt;/a&gt; this week, but the code was quite complicated. Since &lt;code&gt;asgi-csrf&lt;/code&gt; is a security library I decided to aim for 100% code coverage, the first time I've done that for one of my projects.&lt;/p&gt;
&lt;p&gt;I got there with the help of codecov.io and &lt;a href="https://pypi.org/project/pytest-cov/"&gt;pytest-cov&lt;/a&gt;. I wrote up what I learned about those tools in &lt;a href="https://github.com/simonw/til/blob/main/pytest/pytest-code-coverage.md"&gt;a TIL&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id="backing-up-my-blog-database-to-a-github-repository"&gt;Backing up my blog database to a GitHub repository&lt;/h4&gt;
&lt;p&gt;I really like keeping content in a git repository (see Rocky Beaches and Niche Museums). Every content management system I've ever worked on has eventually needed revision control, and modeling that in a database and adding it to an existing project is always a huge pain.&lt;/p&gt;
&lt;p&gt;I have 18 years of content on this blog. I want that backed up to git - and this week I realized I have the tools to do that already.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/simonw/db-to-sqlite"&gt;db-to-sqlite&lt;/a&gt; is my tool for taking any SQL Alchemy supported database (so far tested with MySQL and PostgreSQL) and exporting it into a SQLite database.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/simonw/sqlite-diffable"&gt;sqlite-diffable&lt;/a&gt; is a very early stage tool I built last year. The idea is to dump a SQLite database out to disk in a way that is designed to work well with git diffs. Each table is dumped out as newline-delimited JSON, one row per line.&lt;/p&gt;
&lt;p&gt;So... how about converting my blog's PostgreSQL database to SQLite, then dumping it to disk with &lt;code&gt;sqlite-diffable&lt;/code&gt; and committing the result to a git repository? And then running that in a GitHub Action?&lt;/p&gt;
&lt;p&gt;Here's &lt;a href="https://github.com/simonw/simonwillisonblog-backup/blob/main/.github/workflows/backup.yml"&gt;the workflow&lt;/a&gt;. It does exactly that, with a few extra steps: it only grabs a subset of my tables, and it redacts the &lt;code&gt;password&lt;/code&gt; column from my &lt;code&gt;auth_user&lt;/code&gt; table so that my hashed password isn't exposed in the backup.&lt;/p&gt;
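&lt;p&gt;The redaction step boils down to a single UPDATE run against the exported copy before it is committed - this is a sketch of the idea, not the workflow's actual commands:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import sqlite3

def redact_column(db_path, table, column, placeholder="REDACTED"):
    # Overwrite sensitive values in the backup copy only -
    # the production database is untouched
    conn = sqlite3.connect(db_path)
    conn.execute("update [{}] set [{}] = ?".format(table, column),
                 (placeholder,))
    conn.commit()
    conn.close()
&lt;/code&gt;&lt;/pre&gt;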
&lt;p&gt;I now have &lt;a href="https://github.com/simonw/simonwillisonblog-backup/commits/main"&gt;a commit log&lt;/a&gt; of changes to my blog's database!&lt;/p&gt;
&lt;p&gt;I've set it to run nightly, but I can trigger it manually by clicking a button too.&lt;/p&gt;
&lt;h4 id="til-this-week-46"&gt;TIL this week&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/til/blob/main/readthedocs/custom-subdomain.md"&gt;Pointing a custom subdomain at Read the Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/til/blob/main/pytest/pytest-code-coverage.md"&gt;Code coverage using pytest and codecov.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/til/blob/main/readthedocs/readthedocs-search-api.md"&gt;Read the Docs Search API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/til/blob/main/heroku/programatic-access-postgresql.md"&gt;Programatically accessing Heroku PostgreSQL from GitHub Actions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/til/blob/main/macos/find-largest-sqlite.md"&gt;Finding the largest SQLite files on a Mac&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/til/blob/main/github-actions/grep-tests.md"&gt;Using grep to write tests in CI&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="releases-this-week-46"&gt;Releases this week&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/datasette-graphql/releases/tag/0.14"&gt;datasette-graphql 0.14&lt;/a&gt; - 2020-08-20&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/datasette-graphql/releases/tag/0.13"&gt;datasette-graphql 0.13&lt;/a&gt; - 2020-08-19&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/datasette-schema-versions/releases/tag/0.1"&gt;datasette-schema-versions 0.1&lt;/a&gt; - 2020-08-19&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/datasette-graphql/releases/tag/0.12.3"&gt;datasette-graphql 0.12.3&lt;/a&gt; - 2020-08-19&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/dogsheep/github-to-sqlite/releases/tag/2.5"&gt;github-to-sqlite 2.5&lt;/a&gt; - 2020-08-18&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/datasette-publish-vercel/releases/tag/0.8"&gt;datasette-publish-vercel 0.8&lt;/a&gt; - 2020-08-17&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/datasette-cluster-map/releases/tag/0.12"&gt;datasette-cluster-map 0.12&lt;/a&gt; - 2020-08-16&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/datasette/releases/tag/0.48"&gt;datasette 0.48&lt;/a&gt; - 2020-08-16&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/datasette-graphql/releases/tag/0.12.2"&gt;datasette-graphql 0.12.2&lt;/a&gt; - 2020-08-16&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/datasette-saved-queries/releases/tag/0.2.1"&gt;datasette-saved-queries 0.2.1&lt;/a&gt; - 2020-08-15&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/datasette/releases/tag/0.47.3"&gt;datasette 0.47.3&lt;/a&gt; - 2020-08-15&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/datasette-upload-csvs/releases/tag/0.5"&gt;datasette-upload-csvs 0.5&lt;/a&gt; - 2020-08-15&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/asgi-csrf/releases/tag/0.7"&gt;asgi-csrf 0.7&lt;/a&gt; - 2020-08-15&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/asgi-csrf/releases/tag/0.7a0"&gt;asgi-csrf 0.7a0&lt;/a&gt; - 2020-08-15&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/asgi-csrf/releases/tag/0.7a0"&gt;asgi-csrf 0.7a0&lt;/a&gt; - 2020-08-15&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/datasette-cluster-map/releases/tag/0.11.1"&gt;datasette-cluster-map 0.11.1&lt;/a&gt; - 2020-08-14&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/datasette-cluster-map/releases/tag/0.11"&gt;datasette-cluster-map 0.11&lt;/a&gt; - 2020-08-14&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/simonw/datasette-graphql/releases/tag/0.12.1"&gt;datasette-graphql 0.12.1&lt;/a&gt; - 2020-08-13&lt;/li&gt;
&lt;/ul&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/csrf"&gt;csrf&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/databases"&gt;databases&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/git"&gt;git&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/github"&gt;github&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/graphql"&gt;graphql&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/inaturalist"&gt;inaturalist&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/weeknotes"&gt;weeknotes&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="csrf"/><category term="databases"/><category term="git"/><category term="github"/><category term="natalie-downe"/><category term="projects"/><category term="graphql"/><category term="datasette"/><category term="inaturalist"/><category term="weeknotes"/></entry><entry><title>Weeknotes: Datasette alphas for testing new plugin hooks</title><link href="https://simonwillison.net/2020/Jun/19/datasette-alphas/#atom-tag" rel="alternate"/><published>2020-06-19T00:30:57+00:00</published><updated>2020-06-19T00:30:57+00:00</updated><id>https://simonwillison.net/2020/Jun/19/datasette-alphas/#atom-tag</id><summary type="html">
    &lt;p&gt;A relatively quiet week this week, compared to last week's massive push to &lt;a href="https://simonwillison.net/2020/Jun/12/annotated-release-notes/"&gt;ship Datasette 0.44&lt;/a&gt; with authentication, permissions and writable canned queries. I can now ship alpha releases, such as today's &lt;a href="https://github.com/simonw/datasette/releases/tag/0.45a1"&gt;Datasette 0.45a1&lt;/a&gt;, which means I can preview new plugin features before they are completely ready and stable.&lt;/p&gt;

&lt;h4&gt;Datasette alphas and betas&lt;/h4&gt;

&lt;p&gt;I want to be able to iterate on plugin hooks more effectively, which means embracing release early, release often. I also want plugin authors to be able to trust Datasette not to break their work - a major focus for &lt;a href="https://github.com/simonw/datasette/issues/519#issuecomment-636330023"&gt;Datasette 1.0&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Releasing alpha preview versions can really help here. I have two plugin hooks in the pipeline for Datasette 0.45 already: &lt;a href="https://datasette.readthedocs.io/en/latest/plugins.html#startup-datasette"&gt;startup&lt;/a&gt; and &lt;a href="https://datasette.readthedocs.io/en/latest/plugins.html#canned-queries-datasette-database-actor"&gt;canned_queries&lt;/a&gt;. These are still under development but are now available to preview in Datasette 0.45a1. Install it like so:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;pip install datasette==0.45a1&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Please post any feedback on the design of these hooks to the &lt;a href="https://github.com/simonw/datasette/issues/834"&gt;startup hook&lt;/a&gt; or &lt;a href="https://github.com/simonw/datasette/issues/852"&gt;canned_queries hook&lt;/a&gt; issue threads.&lt;/p&gt;

&lt;p&gt;Another compelling reason to ship alphas: it means I can release alpha versions of my own plugins that themselves depend on a Datasette alpha, and still have their unit tests pass in continuous integration. I expect to take advantage of that ability a lot in the future.&lt;/p&gt;

&lt;p&gt;Figuring out how to safely ship an alpha took a little bit of work, because I wanted to make sure that alphas wouldn't become the default version installed from PyPI, mess up my existing CI configuration or end up accidentally published to Docker Hub. You can follow my research on this &lt;a href="https://github.com/simonw/datasette/issues/807"&gt;in this issue&lt;/a&gt;, including my experiments shipping an alpha for &lt;a href="https://github.com/simonw/datasette-render-images"&gt;datasette-render-images&lt;/a&gt; first.&lt;/p&gt;

&lt;h4&gt;Those new plugin hooks&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;startup()&lt;/code&gt; (&lt;a href="https://datasette.readthedocs.io/en/latest/plugins.html#startup-datasette"&gt;documentation&lt;/a&gt;) is a hook that gets called on Datasette server startup, and passed the &lt;code&gt;datasette&lt;/code&gt; object. The initial use-case was plugins that might want to validate their own configuration, but I imagine other interesting uses for it will emerge over time.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;canned_queries()&lt;/code&gt; (&lt;a href="https://datasette.readthedocs.io/en/latest/plugins.html#canned-queries-datasette-database-actor"&gt;documentation&lt;/a&gt;) lets plugin authors dynamically generate new &lt;a href="https://datasette.readthedocs.io/en/latest/sql_queries.html#canned-queries"&gt;canned queries&lt;/a&gt; for a given database. It gets passed the datasette object, the current database name &lt;em&gt;and&lt;/em&gt; the currently authenticated actor, if there is one. This means you can do fun things like include a user's own saved queries loaded from another database table:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from datasette import hookimpl

@hookimpl
def canned_queries(datasette, database, actor):
    async def inner():
        db = datasette.get_database(database)
        if actor is not None and await db.table_exists("saved_queries"):
            results = await db.execute(
                "select name, sql from saved_queries where actor_id = :id", {
                    "id": actor["id"]
                }
            )
            return {result["name"]: {
                "sql": result["sql"]
            } for result in results}
    return inner&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I'm using a pattern here that's shared by a number of other Datasette plugin hooks: rather than returning the results directly, this plugin function returns an &lt;code&gt;async def inner()&lt;/code&gt; function.&lt;/p&gt;

&lt;p&gt;The code that calls the hook knows that if an &lt;code&gt;asyncio&lt;/code&gt; awaitable function is returned it should &lt;code&gt;await&lt;/code&gt; it. This is my trick for using awaitable functions with &lt;a href="https://pluggy.readthedocs.io/en/latest/"&gt;Pluggy&lt;/a&gt;, which wasn't initially built with async in mind.&lt;/p&gt;
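&lt;p&gt;The calling side of that trick can be sketched like this - a simplified stand-in for Datasette's real dispatch code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import asyncio
import inspect

async def call_hook(hook_result):
    # A hook may return a plain value, or a callable that produces
    # an awaitable - call and await as needed
    if callable(hook_result):
        hook_result = hook_result()
    if inspect.isawaitable(hook_result):
        hook_result = await hook_result
    return hook_result
&lt;/code&gt;&lt;/pre&gt;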

&lt;h4&gt;Shooting naturalist videos with Natalie&lt;/h4&gt;

&lt;p&gt;Natalie has started a &lt;a href="https://www.youtube.com/channel/UCu_N3UP2o8JdnNmGAhYRijQ"&gt;new YouTube channel&lt;/a&gt; to put her various science communication courses at Stanford into action. I've been helping out as camera-person, which has been really interesting. I'm currently shooting with FiLMiC running on an iPhone 11 Pro on a tripod, using audio from an AirPod (until we can get our hands on something better).&lt;/p&gt;

&lt;p&gt;Natalie's been editing the videos on her iPhone and these early results are really good! Here's the video we shot for Sea Lion Birthday on 15th June, a day when 50% of all California Sea Lions celebrate their birthday. Watch &lt;a href="https://www.youtube.com/watch?v=Z2B_cbp34D8"&gt;the video&lt;/a&gt; to find out why.&lt;/p&gt;

&lt;p&gt;The close-up footage of the sea lions was shot by Natalie on a Canon DSLR with a 100-400mm lens. I love that lens so much for wildlife photography.&lt;/p&gt;

&lt;iframe style="max-width: 100%" width="560" height="315" src="https://www.youtube-nocookie.com/embed/Z2B_cbp34D8" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="allowfullscreen"&gt;&amp;#160;&lt;/iframe&gt;

&lt;h4&gt;TIL this week&lt;/h4&gt;

&lt;p&gt;Just one new TIL this week but it's a good one: &lt;a href="https://github.com/simonw/til/blob/master/sqlite/ld-preload.md"&gt;Using LD_PRELOAD to run any version of SQLite with Python&lt;/a&gt;. I've been wanting to figure out a good way to replace the SQLite version used by the Python standard library for ages - &lt;a href="https://github.com/coleifer/pysqlite3"&gt;pysqlite3&lt;/a&gt; helps a lot here, but I also need the ability to run arbitrary older versions to help respond to bug reports. The &lt;code&gt;LD_PRELOAD&lt;/code&gt; trick works perfectly for that.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/plugins"&gt;plugins&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/weeknotes"&gt;weeknotes&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="natalie-downe"/><category term="plugins"/><category term="projects"/><category term="datasette"/><category term="weeknotes"/></entry><entry><title>Happy Birthday Sea Lions!</title><link href="https://simonwillison.net/2020/Jun/15/happy-birthday-sea-lions/#atom-tag" rel="alternate"/><published>2020-06-15T19:08:56+00:00</published><updated>2020-06-15T19:08:56+00:00</updated><id>https://simonwillison.net/2020/Jun/15/happy-birthday-sea-lions/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.youtube.com/watch?v=Z2B_cbp34D8"&gt;Happy Birthday Sea Lions!&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Today, June 15th, is Sea Lion birthday—half of all California Sea Lions are born today thanks to clever co-ordinated delayed implantation by Sea Lion females. Natalie has started making nature videos and I’ve been tagging along as her camera-person—this three minute video, shot at Pier 39 in San Francisco, celebrates Sea Lion birthday and explains how it works.

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://twitter.com/Natbat/status/1272558614922719232"&gt;@natbat&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/wildlife"&gt;wildlife&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/youtube"&gt;youtube&lt;/a&gt;&lt;/p&gt;



</summary><category term="natalie-downe"/><category term="wildlife"/><category term="youtube"/></entry><entry><title>Develop Your Naturalist Superpowers with Observable Notebooks and iNaturalist</title><link href="https://simonwillison.net/2018/Dec/18/develop-your-naturalist-superpowers/#atom-tag" rel="alternate"/><published>2018-12-18T22:39:19+00:00</published><updated>2018-12-18T22:39:19+00:00</updated><id>https://simonwillison.net/2018/Dec/18/develop-your-naturalist-superpowers/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://24ways.org/2018/observable-notebooks-and-inaturalist/"&gt;Develop Your Naturalist Superpowers with Observable Notebooks and iNaturalist&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Natalie’s article for this year’s 24 ways advent calendar shows how you can use Observable notebooks to quickly build interactive visualizations against web APIs. She uses the iNaturalist API to show species of Nudibranchs that you might see in a given month, plus a Vega-powered graph of sightings over the course of the year. This really inspired me to think harder about how I can use Observable to solve some of my API debugging needs, and I’ve already spun up a couple of private Notebooks to exercise new APIs that I’m building at work. It’s a huge productivity boost.

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://twitter.com/Natbat/status/1074820561509859328"&gt;@natbat&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/webapis"&gt;webapis&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/inaturalist"&gt;inaturalist&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/observable"&gt;observable&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/nudibranchs"&gt;nudibranchs&lt;/a&gt;&lt;/p&gt;



</summary><category term="natalie-downe"/><category term="webapis"/><category term="inaturalist"/><category term="observable"/><category term="nudibranchs"/></entry><entry><title>Automatically playing science communication games with transfer learning and fastai</title><link href="https://simonwillison.net/2018/Oct/29/transfer-learning/#atom-tag" rel="alternate"/><published>2018-10-29T03:16:33+00:00</published><updated>2018-10-29T03:16:33+00:00</updated><id>https://simonwillison.net/2018/Oct/29/transfer-learning/#atom-tag</id><summary type="html">
    &lt;p&gt;This weekend was the 9th annual &lt;a href="https://sf.sciencehackday.org/"&gt;Science Hack Day San Francisco&lt;/a&gt;, which was also the 100th Science Hack Day held worldwide.&lt;/p&gt;
&lt;p&gt;Natalie and I decided to combine our interests and build something fun.&lt;/p&gt;
&lt;p&gt;I’m currently enrolled in Jeremy Howard’s &lt;a href="http://course.fast.ai/"&gt;Deep Learning course&lt;/a&gt; so I figured this was a great opportunity to try out some computer vision.&lt;/p&gt;
&lt;p&gt;Natalie runs the &lt;a href="https://natbat.github.io/scicomm-calendar/"&gt;SciComm Games calendar&lt;/a&gt; and accompanying &lt;a href="https://twitter.com/SciCommGames"&gt;@SciCommGames&lt;/a&gt; bot to promote and catalogue science communication hashtag games on Twitter.&lt;/p&gt;
&lt;p&gt;Hashtag games? Natalie &lt;a href="https://natbat.github.io/scicomm-calendar/"&gt;explains them here&lt;/a&gt; - essentially they are games run by scientists on Twitter to foster public engagement around an animal or topic by challenging people to identify if a photo is a #cougarOrNot or participate in a #TrickyBirdID or identify #CrowOrNo or many others.&lt;/p&gt;
&lt;p&gt;Combining the two… we decided to build a bot that automatically plays these games using computer vision. So far it’s just trying #cougarOrNot - you can see the bot in action at &lt;a href="https://twitter.com/critter_vision/with_replies"&gt;@critter_vision&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;&lt;a id="Training_data_from_iNaturalist_14"&gt;&lt;/a&gt;Training data from iNaturalist&lt;/h3&gt;
&lt;p&gt;In order to build a machine learning model, you need to start out with some training data.&lt;/p&gt;
&lt;p&gt;I’m a big fan of &lt;a href="https://www.inaturalist.org/"&gt;iNaturalist&lt;/a&gt;, a citizen science project that encourages users to upload photographs of wildlife (and plants) they have seen and have their observations verified by a community. Natalie and I used it to build &lt;a href="https://www.owlsnearme.com/"&gt;owlsnearme.com&lt;/a&gt; earlier this year - the API in particular is fantastic.&lt;/p&gt;
&lt;p&gt;iNaturalist has &lt;a href="https://www.inaturalist.org/observations?place_id=1&amp;amp;taxon_id=41944"&gt;over 5,000 verified sightings&lt;/a&gt; of felines (cougars, bobcats, domestic cats and more) in the USA.&lt;/p&gt;
&lt;p&gt;The raw data is available as &lt;a href="http://api.inaturalist.org/v1/observations?identified=true&amp;amp;photos=true&amp;amp;identifications=most_agree&amp;amp;quality_grade=research&amp;amp;order=desc&amp;amp;order_by=created_at&amp;amp;taxon_id=41944&amp;amp;place_id=1&amp;amp;per_page=200"&gt;a paginated JSON API&lt;/a&gt;. The &lt;a href="https://static.inaturalist.org/photos/27333309/medium.jpg"&gt;medium sized photos&lt;/a&gt; are just the right size for training a neural network.&lt;/p&gt;
&lt;p&gt;I started by grabbing 5,000 images and saving them to disk with a filename that reflected their identified species:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Bobcat_9005106.jpg
Domestic-Cat_10068710.jpg
Bobcat_15713672.jpg
Domestic-Cat_6755280.jpg
Mountain-Lion_9075705.jpg
&lt;/code&gt;&lt;/pre&gt;
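That download step can be sketched as follows. This is my reconstruction, not the notebook's actual scraping code: the endpoint is the v1 observations API linked above, but treat the exact JSON field names (`preferred_common_name`, the `square`-to-`medium` URL substitution) as assumptions about the response shape.

```python
import json
import re
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://api.inaturalist.org/v1/observations"

def filename_for(common_name, photo_id):
    # "Domestic Cat", 10068710 becomes "Domestic-Cat_10068710.jpg",
    # so the species label can later be read back out of the filename.
    return re.sub(r"\s+", "-", common_name) + f"_{photo_id}.jpg"

def fetch_feline_photos(page=1, per_page=200):
    # Fetch one page of research-grade feline observations in the USA
    # (taxon 41944, place 1), yielding (filename, medium photo URL) pairs.
    params = urlencode({
        "taxon_id": 41944,
        "place_id": 1,
        "quality_grade": "research",
        "photos": "true",
        "per_page": per_page,
        "page": page,
    })
    with urlopen(API + "?" + params) as resp:
        results = json.load(resp)["results"]
    for obs in results:
        name = obs["taxon"]["preferred_common_name"]
        for photo in obs["photos"]:
            # iNaturalist hands back square thumbnail URLs by default
            yield filename_for(name, photo["id"]), photo["url"].replace("square", "medium")
```

Each photo then gets saved to disk under its generated filename, which is what lets the training code below recover the label with a regular expression.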
&lt;h3&gt;&lt;a id="Building_a_model_32"&gt;&lt;/a&gt;Building a model&lt;/h3&gt;
&lt;p&gt;I’m only one week into the &lt;a href="http://www.fast.ai/"&gt;fast.ai&lt;/a&gt; course so this really isn’t particularly sophisticated yet, but it was just about good enough to power our hack.&lt;/p&gt;
&lt;p&gt;The main technique we are learning in the course is called &lt;a href="https://machinelearningmastery.com/transfer-learning-for-deep-learning/"&gt;transfer learning&lt;/a&gt;, and it really is shockingly effective. Instead of training a model from scratch you start out with a pre-trained model and use some extra labelled images to train a small number of extra layers.&lt;/p&gt;
&lt;p&gt;The initial model we are using is &lt;a href="https://www.kaggle.com/pytorch/resnet34"&gt;ResNet-34&lt;/a&gt;, a 34-layer neural network trained on 1,000 labelled categories in the &lt;a href="http://www.image-net.org/"&gt;ImageNet&lt;/a&gt; corpus.&lt;/p&gt;
&lt;p&gt;In class, we learned to use this technique to get 94% accuracy against the &lt;a href="http://www.robots.ox.ac.uk/~vgg/data/pets/"&gt;Oxford-IIIT Pet Dataset&lt;/a&gt; - around 7,000 images covering 12 cat breeds and 25 dog breeds. In 2012 the researchers at Oxford were able to get 59.21% using a sophisticated model - in 2018 we can get 94% with transfer learning and just a few lines of code.&lt;/p&gt;
&lt;p&gt;I started with an example provided in class, which loads and trains images from files on disk using a regular expression that extracts the labels from the filenames.&lt;/p&gt;
&lt;p&gt;My full Jupyter notebook is &lt;a href="https://github.com/simonw/cougar-or-not/blob/master/inaturalist-cats.ipynb"&gt;inaturalist-cats.ipynb&lt;/a&gt; - the key training code is as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from fastai import *
from fastai.vision import *
cat_images_path = Path('/home/jupyter/.fastai/data/inaturalist-usa-cats/images')
cat_fnames = get_image_files(cat_images_path)
cat_data = ImageDataBunch.from_name_re(
    cat_images_path,
    cat_fnames,
    r'/([^/]+)_\d+.jpg$',
    ds_tfms=get_transforms(),
    size=224
)
cat_data.normalize(imagenet_stats)
cat_learn = ConvLearner(cat_data, models.resnet34, metrics=error_rate)
cat_learn.fit_one_cycle(4)
# Save the generated model to disk
cat_learn.save(&amp;quot;usa-inaturalist-cats&amp;quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Calling &lt;code&gt;cat_learn.save(&amp;quot;usa-inaturalist-cats&amp;quot;)&lt;/code&gt; created an 84MB file on disk at &lt;code&gt;/home/jupyter/.fastai/data/inaturalist-usa-cats/images/models/usa-inaturalist-cats.pth&lt;/code&gt; - I used &lt;code&gt;scp&lt;/code&gt; to copy that model down to my laptop.&lt;/p&gt;
&lt;p&gt;This model gave me a 24% error rate which is pretty terrible - others on the course have been getting error rates less than 10% for all kinds of interesting problems. My focus was to get a model deployed as an API though so I haven’t spent any additional time fine-tuning things yet.&lt;/p&gt;
&lt;h3&gt;&lt;a id="Deploying_the_model_as_an_API_67"&gt;&lt;/a&gt;Deploying the model as an API&lt;/h3&gt;
&lt;p&gt;The &lt;a href="https://github.com/fastai/fastai"&gt;fastai library&lt;/a&gt; strongly encourages training against a GPU, using &lt;a href="https://pytorch.org/"&gt;pytorch&lt;/a&gt; and &lt;a href="https://mathema.tician.de/software/pycuda/"&gt;PyCUDA&lt;/a&gt;. I’ve been using n1-highmem-8 Google Cloud Platform instance with an attached Tesla P4, then running everything in a Jupyter notebook there. This costs around $0.38 an hour - fine for a few hours of training, but way too expensive to permanently host a model.&lt;/p&gt;
&lt;p&gt;Thankfully, while a GPU is essential for productively training models it’s not nearly as important for evaluating them against new data. pytorch can run in CPU mode for that just fine on standard hardware, and the &lt;a href="https://github.com/fastai/fastai/blob/master/README.md"&gt;fastai README&lt;/a&gt; includes instructions on installing it for a CPU using pip.&lt;/p&gt;
&lt;p&gt;I started out by ensuring I could execute my generated model on my own laptop (since pytorch doesn’t yet work with the GPU built into the Macbook Pro). Once I had that working, I used the resulting code to write a tiny Starlette-powered API server. The code for that can be found &lt;a href="https://github.com/simonw/cougar-or-not/blob/8adafac571aad3385317c76bd229448b3cdaa0ac/cougar.py"&gt;in cougar.py&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;fastai is under very heavy development and the latest version doesn’t quite have a clean way of loading a model from disk without also including the initial training images, so I had to hack around quite a bit to get this working using clues from &lt;a href="https://forums.fast.ai/"&gt;the fastai forums&lt;/a&gt;. I expect this to get much easier over the next few weeks as the library continues to evolve based on feedback from the current course.&lt;/p&gt;
&lt;p&gt;To deploy the API I wrote &lt;a href="https://github.com/simonw/cougar-or-not/blob/8adafac571aad3385317c76bd229448b3cdaa0ac/Dockerfile"&gt;a Dockerfile&lt;/a&gt; and shipped it to &lt;a href="https://zeit.co/now"&gt;Zeit Now&lt;/a&gt;. Now remains my go-to choice for this kind of project, though unfortunately their new (and brilliant) v2 platform imposes &lt;a href="https://github.com/zeit/now-cli/issues/1523"&gt;a 100MB image size limit&lt;/a&gt; - not nearly enough when the model file itself weighs in at 83 MB. Thankfully it’s still possible to &lt;a href="https://github.com/simonw/cougar-or-not/commit/5ad3d5b49c6419e4c2440291bc5fb204625aae83"&gt;specify their v1 cloud&lt;/a&gt; which is more forgiving for larger applications.&lt;/p&gt;
&lt;p&gt;Here’s the result: an API which can accept either the URL to an image or an uploaded image file: &lt;a href="https://cougar-or-not.now.sh/"&gt;https://cougar-or-not.now.sh/&lt;/a&gt; - try it out with &lt;a href="https://cougar-or-not.now.sh/classify-url?url=https://upload.wikimedia.org/wikipedia/commons/9/9a/Oregon_Cougar_ODFW.JPG"&gt;a cougar&lt;/a&gt; and &lt;a href="https://cougar-or-not.now.sh/classify-url?url=https://upload.wikimedia.org/wikipedia/commons/thumb/d/dc/Bobcat2.jpg/1200px-Bobcat2.jpg"&gt;a bobcat&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;&lt;a id="The_Twitter_Bot_81"&gt;&lt;/a&gt;The Twitter Bot&lt;/h3&gt;
&lt;p&gt;Natalie built &lt;a href="https://github.com/natbat/CritterVision"&gt;the Twitter bot&lt;/a&gt;. It runs as a scheduled task on Heroku and works by checking for new #cougarOrNot tweets from &lt;a href="https://twitter.com/drmichellelarue"&gt;Dr. Michelle LaRue&lt;/a&gt;, extracting any images, passing them to my API and replying with a tweet that summarizes the results. Take a look at &lt;a href="https://twitter.com/critter_vision/with_replies"&gt;its recent replies&lt;/a&gt; to get a feel for how well it is doing.&lt;/p&gt;
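The reply step reduces to a small pure function over the probabilities the classifier API returns. The label set and wording below are illustrative only - the bot's real output format lives in the CritterVision repo:

```python
def format_reply(probabilities):
    # Pick the most likely label from the classifier's {label: probability}
    # output and phrase it as a tweet-sized reply.
    best, p = max(probabilities.items(), key=lambda kv: kv[1])
    label = best.replace("-", " ").lower()
    return f"I think this is a {label} ({p:.0%} confident)"
```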
&lt;p&gt;Amusingly, Dr. LaRue frequently tweets memes to promote upcoming competitions and marks them with the same hashtag. The bot appears to think that most of the memes are bobcats! I should definitely spend some time tuning that model.&lt;/p&gt;
&lt;p&gt;Science Hack Day was great fun. A big thanks to the organizing team, and congrats to all of the other participants. I’m really looking forward to the next one.&lt;/p&gt;
&lt;p&gt;Plus… we won a medal!&lt;/p&gt;
&lt;blockquote class="twitter-tweet" data-lang="en"&gt;&lt;p lang="en" dir="ltr"&gt;Enjoyed &lt;a href="https://twitter.com/hashtag/scienceHackday?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#scienceHackday&lt;/a&gt; this weekend, made &amp;amp; launched a cool machine learning hack to process images &amp;amp; work out if they have a cougar in them or not! &lt;a href="https://twitter.com/hashtag/CougarOrNot?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#CougarOrNot&lt;/a&gt; &lt;a href="https://twitter.com/critter_vision?ref_src=twsrc%5Etfw"&gt;@critter_vision&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;... we won a medal!&lt;br /&gt;&lt;br /&gt;Bot code: &lt;a href="https://t.co/W2jZcGCnFr"&gt;https://t.co/W2jZcGCnFr&lt;/a&gt;&lt;br /&gt;Machine learning API: &lt;a href="https://t.co/swNiKlcTp0"&gt;https://t.co/swNiKlcTp0&lt;/a&gt; &lt;a href="https://t.co/dcdIhNZy63"&gt;pic.twitter.com/dcdIhNZy63&lt;/a&gt;&lt;/p&gt;&amp;#8212; Natbat (@Natbat) &lt;a href="https://twitter.com/Natbat/status/1056717060116369410?ref_src=twsrc%5Etfw"&gt;October 29, 2018&lt;/a&gt;&lt;/blockquote&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/computer-vision"&gt;computer-vision&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/machine-learning"&gt;machine-learning&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/inaturalist"&gt;inaturalist&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/fastai"&gt;fastai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/transferlearning"&gt;transferlearning&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/jeremy-howard"&gt;jeremy-howard&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/starlette"&gt;starlette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="computer-vision"/><category term="machine-learning"/><category term="natalie-downe"/><category term="inaturalist"/><category term="fastai"/><category term="transferlearning"/><category term="jeremy-howard"/><category term="starlette"/></entry><entry><title>Letterboxing on Lundy</title><link href="https://simonwillison.net/2018/Sep/18/letterboxing-lundy/#atom-tag" rel="alternate"/><published>2018-09-18T17:09:21+00:00</published><updated>2018-09-18T17:09:21+00:00</updated><id>https://simonwillison.net/2018/Sep/18/letterboxing-lundy/#atom-tag</id><summary type="html">
    &lt;p&gt;Last week Natalie and I spent a delightful two days with our friends Hannah and Adam on the beautiful island of &lt;a href="https://en.wikipedia.org/wiki/Lundy"&gt;Lundy&lt;/a&gt; in the Bristol Channel, 12 miles off the coast of North Devon.&lt;/p&gt;
&lt;p&gt;I’ve been wanting to visit Lundy for years. The island is managed by the &lt;a href="https://www.landmarktrust.org.uk/"&gt;Landmark Trust&lt;/a&gt;, a UK charity who look after historic buildings and make them available as holiday rentals.&lt;/p&gt;
&lt;p&gt;Our first experience with the Landmark Trust was the original /dev/fort &lt;a href="https://devfort.com/cohort/1/the-first-dev-fort"&gt;back in 2008&lt;/a&gt; when we rented &lt;a href="https://www.landmarktrust.org.uk/search-and-book/properties/fort-clonque-7423"&gt;a Napoleonic Sea Fortress&lt;/a&gt; on Alderney in the Channel Islands. Ever since then I’ve been keeping an eye out for opportunities to try out more of their properties: just two weeks ago we stayed in &lt;a href="https://www.landmarktrust.org.uk/search-and-book/properties/wortham-manor-13309"&gt;Wortham Manor&lt;/a&gt; and used it as a staging ground to help prepare a family wedding.&lt;/p&gt;

&lt;div class="gallery"&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_6897.jpg?w=200&amp;amp;auto=compress" alt="Wortham Manor" title="Wortham Manor" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_6777.jpg?w=200&amp;amp;auto=compress" alt="Dining room" title="The dining room" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_6782.jpg?w=200&amp;amp;auto=compress" alt="A group in the kitchen" title="The kitchen" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_6950.jpg?w=200&amp;amp;auto=compress" alt="Wedding preparations outside the manor" title="Wedding preparations outside the manor" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_6982.jpg?w=200&amp;amp;auto=compress" alt="Open doors" title="Doors" /&gt;
&lt;/div&gt;

&lt;p&gt;I cannot recommend the Landmark Trust experience strongly enough: each property is unique and fascinating, they are kept in great condition and if you split the cost of a larger rental among a group of friends the price can be comparable to a youth hostel.&lt;/p&gt;
&lt;p&gt;Lundy is their Crown Jewels: they’ve been looking after the island since the 1960s and now offer &lt;a href="https://www.landmarktrust.org.uk/Search-and-book/location/lundy/"&gt;23 self-catering properties there&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We took the ferry out on Tuesday morning (a truly horrific two hour voyage) and back again on Thursday evening (thankfully much calmer). Once on Lundy we stayed in &lt;a href="https://www.landmarktrust.org.uk/search-and-book/properties/castle-keep-south-5720"&gt;Castle Keep South&lt;/a&gt;, a two-bedroom house in the keep of a castle built in the 13th century by Henry III, after he retook the island from the apparently traitorous William de Marisco (who was then hanged, drawn and quartered for good measure - apparently one of the &lt;a href="https://en.wikipedia.org/wiki/Hanged,_drawn_and_quartered#Treason_in_England"&gt;first ever uses&lt;/a&gt; of that punishment). Lundy has some &lt;em&gt;very&lt;/em&gt; interesting history attached to it.&lt;/p&gt;

&lt;div class="gallery"&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_0029.JPG?w=200&amp;amp;auto=compress" alt="Marisco Castle" title="Marisco Castle" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_1278.JPG?w=200&amp;amp;auto=compress" alt="Rocks outside the castle" title="Rocks outside the castle" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_1939.JPG?w=200&amp;amp;auto=compress" alt="Inside the castle" title="Inside the castle" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_5999.JPG?w=200&amp;amp;auto=compress" alt="The castle" title="The castle" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_8959.JPG?w=200&amp;amp;auto=compress" alt="The castle on the cliffs" title="The castle on the cliffs" /&gt;
&lt;/div&gt;

&lt;p&gt;The island itself is utterly spectacular. Three miles long, half a mile wide, surrounded by craggy cliffs and mostly topped with ferns and bracken. Not a lot of trees except for the more sheltered eastern side. A charming population of sheep, goats, Lundy Ponies and some highland cattle with extremely intimidating horns.&lt;/p&gt;

&lt;div class="gallery"&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_9502.jpg?w=200&amp;amp;auto=compress" alt="Scenery" title="Scenery" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_0275.JPG?w=200&amp;amp;auto=compress" alt="Scenery" title="Scenery" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_4340.JPG?w=200&amp;amp;auto=compress" alt="A fine highland cow" title="A fine highland cow" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_4346.JPG?w=200&amp;amp;auto=compress" alt="A Lundy pony" title="A Lundy pony" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_5270.JPG?w=200&amp;amp;auto=compress" alt="Dozy sheep" title="Dozy sheep" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_5358.JPG?w=200&amp;amp;auto=compress" alt="Goat" title="Goat" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_9368.jpg?w=200&amp;amp;auto=compress" alt="Bracken" title="Bracken" /&gt;
&lt;/div&gt;

&lt;div class="big-image"&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_4319.jpg?w=800" alt="A highland cow that looks like Boris Johnson" title="Boris" style="max-width: 100%" /&gt;
&lt;/div&gt;

&lt;p&gt;(“They’re complete softies. We call that one Boris because he looks like Boris Johnson” - a lady who works in the Tavern)&lt;/p&gt;
&lt;p&gt;Lundy has three light houses (two operational, one retired), the aforementioned castle, a charming little village, a church and numerous fascinating ruins and isolated buildings, many of which you can stay in. It has the remains of two crashed WWII German Heinkel He 111 bombers (which we eventually tracked down).&lt;/p&gt;

&lt;div class="gallery"&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_2618.JPG?w=200&amp;amp;auto=compress" alt="Stairs" title="Stairs" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_7600.jpg?w=200&amp;amp;auto=compress" alt="An old lighthouse" title="The Old Light" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_0787.JPG?w=200&amp;amp;auto=compress" alt="A newer lighthouse" title="North Light" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_4349.JPG?w=200&amp;amp;auto=compress" alt="Rusty aircraft remains" title="Remains of a Heinkel" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_4351.JPG?w=200&amp;amp;auto=compress" alt="Isolated building" title="An isolated building" /&gt;
&lt;/div&gt;

&lt;p&gt;It also hosts what is quite possibly the world’s best Letterboxing trail.&lt;/p&gt;
&lt;h3&gt;&lt;a id="Letterboxing_37"&gt;&lt;/a&gt;Letterboxing?&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Letterboxing_(hobby)"&gt;Letterboxing&lt;/a&gt; is an outdoor activity that is primarily pursued in the UK. It consists of weatherproof boxes hidden in remote locations, usually under a pile of rocks, containing a notebook and a custom stamp. The location of the boxes is provided by a set of clues. Given the clues, your challenge is to find all of the boxes and collect their stamps in your notebook.&lt;/p&gt;
&lt;p&gt;On Lundy the clues can be purchased from the village shop.&lt;/p&gt;
&lt;p&gt;I had dabbled with Letterboxing a tiny bit in the past but it hadn’t really clicked with me until Natalie (a keen letterboxer) encouraged us to give it a go on Lundy.&lt;/p&gt;
&lt;p&gt;It ended up occupying almost every waking moment of our time there, and taking us to every far-flung corner of the island.&lt;/p&gt;
&lt;p&gt;There are 28 letterboxes on Lundy. We managed to get 27 of them - and we would have got them all, if the last one hadn’t been located on a beach that was shut off from the public due to grey seals using it to raise their newly born pups! The pups were cute enough that we forgave them.&lt;/p&gt;

&lt;div class="gallery"&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_2907.JPG?w=200&amp;amp;auto=compress" alt="A baby seal on a beach" title="Baby seal" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_3455.JPG?w=200&amp;amp;auto=compress" alt="No access: Seal Breeding Area sign" title="No access - Seal breeding area" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_3978.JPG?w=200&amp;amp;auto=compress" alt="A baby seal blocks the road" title="Baby seal blocking the road" /&gt;
&lt;/div&gt;

&lt;p&gt;To give you an idea of how it works, here’s the clue for letterbox 27, “The Ugly”:&lt;/p&gt;

&lt;div class="big-image"&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_7769.jpg?w=800" alt="The Ugly: From the lookout hut near the flagpole on the east side of Millcombe, walk in the direction of the South Light until it is on a bearing of 130°, Bramble Villa 210° and Millcombe due west. The letterbox is beneah you." style="max-width: 100%" /&gt;
&lt;/div&gt;
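Clues like this one triangulate a spot from compass bearings to known landmarks. Here's a rough sketch of that geometry, using hypothetical local coordinates - my own illustration, not anything from the official clue sheet. A flat x/y approximation in metres is plenty accurate over the few hundred metres a clue covers:

```python
import math

def intersect_bearings(landmark_a, bearing_a, landmark_b, bearing_b):
    # Locate the observer who sees landmark_a on compass bearing bearing_a
    # and landmark_b on bearing_b (degrees, 0 = north, clockwise).
    # Coordinates are local metres: x east, y north.
    def reverse_dir(bearing):
        # The observer lies along the *reverse* bearing from each landmark.
        t = math.radians(bearing + 180.0)
        return math.sin(t), math.cos(t)

    (ax, ay), (bx, by) = landmark_a, landmark_b
    dax, day = reverse_dir(bearing_a)
    dbx, dby = reverse_dir(bearing_b)
    # Solve landmark_a + s*da == landmark_b + u*db for s (Cramer's rule)
    det = -dax * dby + dbx * day
    s = (-(bx - ax) * dby + dbx * (by - ay)) / det
    return ax + s * dax, ay + s * day
```

With the South Light on 130° and Bramble Villa on 210°, the two reverse-bearing rays cross at a single point: where you dig.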

&lt;p&gt;There were letterboxes in lighthouses, letterboxes in ruins, letterboxes perilously close to cliff-faces, letterboxes in church pews, letterboxes in quarries, letterboxes in caves. If you thought that letterboxing was for kids, after scrabbling down more perilous cliff paths than I can count I can assure you it isn’t!&lt;/p&gt;

&lt;div class="gallery"&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_2179.JPG?w=200&amp;amp;auto=compress" alt="Natalie finds a letterbox" title="Natalie finds a letterbox" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_6548.JPG?w=200&amp;amp;auto=compress" alt="A letterbox near a cave" title="A letterbox near a cave" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_7661.jpg?w=200&amp;amp;auto=compress" alt="Stamping a letterbox" title="Stamping a letterbox" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_7727.jpg?w=200&amp;amp;auto=compress" alt="Hunting high and low" title="Hunting high and low" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_7783.jpg?w=200&amp;amp;auto=compress" alt="Hannah found another one" title="Hannah found another one" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_7820.jpg?w=200&amp;amp;auto=compress" alt="Adam finds one on a slope" title="Adam finds one on a slope" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_7843.jpg?w=200&amp;amp;auto=compress" alt="Stamping a letterbox together" title="Stamping a letterbox together" /&gt;
&lt;/div&gt;

&lt;p&gt;On Thursday I clocked up 24,000 steps walking 11 miles and burned 1,643 calories. For comparison, when I ran the half marathon last year I only burned 1,222. These GPS tracks from my Apple Watch give a good impression of how far we ended up walking on our second day of searching.&lt;/p&gt;

&lt;div class="gallery" data-row-height="200"&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_7935.PNG?w=200&amp;amp;auto=compress" alt="Apple watch GPS trace" title="GPS trace #1" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_7936.PNG?w=200&amp;amp;auto=compress" alt="Apple watch GPS trace" title="GPS trace #2" /&gt;
    &lt;img src="https://simonwillisonnet.imgix.net/static/2018/lundy/IMG_7937.PNG?w=200&amp;amp;auto=compress" alt="Apple watch GPS trace" title="GPS trace #3" /&gt;
&lt;/div&gt;

&lt;p&gt;When we checked the letterboxing log book in the Tavern on Wednesday evening we found most people who attempt to hit all 28 letterboxes spread it out over a much more sensible timeframe. I’m not sure that I would recommend trying to fit it in to just two days, but it’s hard to imagine a better way of adding extra purpose to an exploration of the island.&lt;/p&gt;
&lt;p&gt;Should you attempt letterboxing on Lundy (and if you can get out there you really should consider it), a few tips:&lt;/p&gt;
&lt;ul&gt;
    &lt;li&gt;If in doubt, look for the paths. Most of the harder to find letterboxes were at least located near an obvious worn path.&lt;/li&gt;
    &lt;li&gt;“Earthquake” is a nightmare. The clue really didn’t help us - we ended up performing a vigorous search of most of the area next to (not inside) the earthquake fault.&lt;/li&gt;
    &lt;li&gt;The iPhone compass app is really useful for finding bearings. We didn’t use a regular compass at all.&lt;/li&gt;
    &lt;li&gt;If you get stuck, check for extra clues in the letterboxing log book in the tavern. This helped us crack Earthquake.&lt;/li&gt;
    &lt;li&gt;There’s more than one pond. The quarry pond is very obvious once you find it.&lt;/li&gt;
    &lt;li&gt;Take as many different maps as you can find - many of the clues reference named landmarks that may not appear on the letterboxing clue map. We forgot to grab an offline copy of Lundy in the Google Maps app and regretted it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you find yourself in Ilfracombe on the way to or from Lundy, the &lt;a href="http://www.ilfracombemuseum.co.uk/"&gt;Ilfracombe Museum&lt;/a&gt; is well worth your time. It’s a classic example in the genre of “eccentric collects a wide variety of things, builds a museum for them”. Highlights include a cupboard full of pickled bats and a drawer full of 100-year-old wedding cake samples.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/travel"&gt;travel&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="natalie-downe"/><category term="travel"/></entry><entry><title>Photos from our tour of the amazing bone collection of Ray Bandar</title><link href="https://simonwillison.net/2018/Feb/21/photos-our-tour-amazing-bone-collection-ray-bandar/#atom-tag" rel="alternate"/><published>2018-02-21T04:58:35+00:00</published><updated>2018-02-21T04:58:35+00:00</updated><id>https://simonwillison.net/2018/Feb/21/photos-our-tour-amazing-bone-collection-ray-bandar/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://twitter.com/simonw/status/966174716061876224"&gt;Photos from our tour of the amazing bone collection of Ray Bandar&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Ray Bandar (1927-2017) was an artist, scientist, naturalist and an incredibly prolific collector of bones. His collection is in the process of moving to the California Academy of Sciences but Natalie managed to land us a private tour led by his great-nephew. The collection is truly awe-inspiring, and a testament to an extraordinary life lived following a very particular passion.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/photos"&gt;photos&lt;/a&gt;&lt;/p&gt;



</summary><category term="natalie-downe"/><category term="photos"/></entry><entry><title>owlsnearme source code on GitHub</title><link href="https://simonwillison.net/2018/Feb/4/owlsnearme-source/#atom-tag" rel="alternate"/><published>2018-02-04T22:33:34+00:00</published><updated>2018-02-04T22:33:34+00:00</updated><id>https://simonwillison.net/2018/Feb/4/owlsnearme-source/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/simonw/owlsnearme"&gt;owlsnearme source code on GitHub&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Here’s the source code for our new owlsnearme.com project. It’s a single-page React application that pulls all of its data from the iNaturalist API. We built it this weekend with the SuperbOwl kick-off as a hard deadline so it’s not the most beautiful React code, but it’s a nice demonstration of how React (and create-react-app in particular) can be used for rapid development.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/github"&gt;github&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/javascript"&gt;javascript&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/react"&gt;react&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/inaturalist"&gt;inaturalist&lt;/a&gt;&lt;/p&gt;



</summary><category term="github"/><category term="javascript"/><category term="natalie-downe"/><category term="projects"/><category term="react"/><category term="inaturalist"/></entry><entry><title>Owls Near Me</title><link href="https://simonwillison.net/2018/Feb/4/owlsnearme/#atom-tag" rel="alternate"/><published>2018-02-04T22:26:29+00:00</published><updated>2018-02-04T22:26:29+00:00</updated><id>https://simonwillison.net/2018/Feb/4/owlsnearme/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.owlsnearme.com/"&gt;Owls Near Me&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Back in 2010 Natalie and I shipped owlsnearyou.com—a website for finding your nearest owls, using data from the sadly deceased WildlifeNearYou (RIP). To celebrate #SuperbOwl Sunday we rebuilt the same concept on top of the excellent iNaturalist API. Search for a place to see which owls have been spotted there, or click the magic button to geolocate your device and see which owls have been spotted near you!


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/wildlifenearyou"&gt;wildlifenearyou&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/inaturalist"&gt;inaturalist&lt;/a&gt;&lt;/p&gt;



</summary><category term="natalie-downe"/><category term="projects"/><category term="wildlifenearyou"/><category term="inaturalist"/></entry><entry><title>Deploying an asynchronous Python microservice with Sanic and Zeit Now</title><link href="https://simonwillison.net/2017/Oct/14/async-python-sanic-now/#atom-tag" rel="alternate"/><published>2017-10-14T21:46:38+00:00</published><updated>2017-10-14T21:46:38+00:00</updated><id>https://simonwillison.net/2017/Oct/14/async-python-sanic-now/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;a href="https://simonwillison.net/tags/jsonhead/"&gt;Back in 2008&lt;/a&gt; Natalie Downe and I deployed what today we would call a microservice: &lt;a href="https://github.com/simonw/json-head"&gt;json-head&lt;/a&gt;, a tiny Google App Engine app that allowed you to make an HTTP head request against a URL and get back the HTTP headers as JSON. One of our initial use-scase for this was &lt;a href="https://gist.github.com/natbat/8406b8e5a8ed22d6a2e1bbd75771bc97"&gt;Natalie’s addSizes.js&lt;/a&gt;, an unobtrusive jQuery script that could annotate links to PDFs and other large files with their corresponding file size pulled from the &lt;code&gt;Content-Length&lt;/code&gt; header. Another potential use-case is detecting broken links, since the API can be used to spot 404 status codes (&lt;a href="https://json-head.now.sh/?url=https://simonwillison.net/page-does-not-exist"&gt;as in this example&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;At some point in the following decade &lt;code&gt;json-head.appspot.com&lt;/code&gt; stopped working. Today I’m bringing it back, mainly as an excuse to try out the combination of Python 3.5 async, the &lt;a href="https://github.com/channelcat/sanic/"&gt;Sanic&lt;/a&gt; microframework and Zeit’s brilliant &lt;a href="https://zeit.co/now"&gt;Now&lt;/a&gt; deployment platform.&lt;/p&gt;
&lt;p&gt;First, a demo. &lt;a href="https://json-head.now.sh/?url=https://simonwillison.net/"&gt;https://json-head.now.sh/?url=https://simonwillison.net/&lt;/a&gt; returns the following:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[
    {
        &amp;quot;ok&amp;quot;: true,
        &amp;quot;headers&amp;quot;: {
            &amp;quot;Date&amp;quot;: &amp;quot;Sat, 14 Oct 2017 18:37:52 GMT&amp;quot;,
            &amp;quot;Content-Type&amp;quot;: &amp;quot;text/html; charset=utf-8&amp;quot;,
            &amp;quot;Connection&amp;quot;: &amp;quot;keep-alive&amp;quot;,
            &amp;quot;Set-Cookie&amp;quot;: &amp;quot;__cfduid=dd0b71b4e89bbaca5b27fa06c0b95af4a1508006272; expires=Sun, 14-Oct-18 18:37:52 GMT; path=/; domain=.simonwillison.net; HttpOnly; Secure&amp;quot;,
            &amp;quot;Cache-Control&amp;quot;: &amp;quot;s-maxage=200&amp;quot;,
            &amp;quot;X-Frame-Options&amp;quot;: &amp;quot;SAMEORIGIN&amp;quot;,
            &amp;quot;Via&amp;quot;: &amp;quot;1.1 vegur&amp;quot;,
            &amp;quot;CF-Cache-Status&amp;quot;: &amp;quot;HIT&amp;quot;,
            &amp;quot;Vary&amp;quot;: &amp;quot;Accept-Encoding&amp;quot;,
            &amp;quot;Server&amp;quot;: &amp;quot;cloudflare-nginx&amp;quot;,
            &amp;quot;CF-RAY&amp;quot;: &amp;quot;3adca70269a51e8f-SJC&amp;quot;,
            &amp;quot;Content-Encoding&amp;quot;: &amp;quot;gzip&amp;quot;
        },
        &amp;quot;status&amp;quot;: 200,
        &amp;quot;url&amp;quot;: &amp;quot;https://simonwillison.net/&amp;quot;
    }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Given a URL, &lt;code&gt;json-head.now.sh&lt;/code&gt; performs an HTTP HEAD request and returns the resulting status code and the HTTP headers. Results are returned with the &lt;code&gt;Access-Control-Allow-Origin: *&lt;/code&gt; header so you can call the API using &lt;code&gt;fetch()&lt;/code&gt; or &lt;code&gt;XMLHttpRequest&lt;/code&gt; from JavaScript running on any page.&lt;/p&gt;
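&lt;p&gt;To make that response shape concrete, here is a minimal synchronous sketch of the same idea using only the Python standard library. This &lt;code&gt;head_info()&lt;/code&gt; helper is hypothetical and purely for illustration - the real service uses aiohttp inside an async coroutine, as shown later in this post.&lt;/p&gt;

```python
import json
import urllib.request


def head_info(url):
    """Perform an HTTP HEAD request and return the json-head response shape.

    Synchronous stdlib sketch for illustration only - not the actual
    json-head implementation.
    """
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request) as response:
            return {
                "ok": True,
                "headers": dict(response.headers),
                "status": response.status,
                "url": url,
            }
    except Exception as e:
        # Mirror the service's error shape for broken links, bad DNS, etc.
        return {"ok": False, "error": str(e), "url": url}


print(json.dumps(head_info("https://simonwillison.net/"), indent=2))
```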
&lt;h2&gt;&lt;a id="Sanic_and_Python_asyncawait_32"&gt;&lt;/a&gt;Sanic and Python async/await&lt;/h2&gt;
&lt;p&gt;A key new feature &lt;a href="https://docs.python.org/3/whatsnew/3.5.html"&gt;added to Python 3.5&lt;/a&gt; back in September 2015 was built-in syntactic support for coroutine control via the async/await statements. Python now has some serious credibility as a platform for asynchronous I/O (the concept that got me &lt;a href="https://simonwillison.net/2009/Nov/23/node/"&gt;so excited about Node.js back in 2009&lt;/a&gt;). This has led to an explosion of asynchronous innovation across the Python community.&lt;/p&gt;
&lt;p&gt;json-head is the perfect application for async - it’s little more than a dumbed-down HTTP proxy, accepting incoming HTTP requests, making its own requests elsewhere and then returning the results.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/channelcat/sanic/"&gt;Sanic&lt;/a&gt; is a Flask-like web framework built specifically to take advantage of async/await in Python 3.5. It’s designed for speed - built on top of &lt;a href="https://github.com/MagicStack/uvloop"&gt;uvloop&lt;/a&gt;, a Python wrapper for &lt;a href="https://github.com/libuv/libuv"&gt;libuv&lt;/a&gt; (which itself was originally built to power Node.js). uvloop’s self-selected benchmarks are &lt;a href="https://magic.io/blog/uvloop-blazing-fast-python-networking/"&gt;extremely impressive&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;&lt;a id="Zeit_Now_40"&gt;&lt;/a&gt;Zeit Now&lt;/h2&gt;
&lt;p&gt;To host this new microservice, I chose &lt;a href="https://zeit.co/now"&gt;Zeit Now&lt;/a&gt;. It’s a truly beautiful piece of software design.&lt;/p&gt;
&lt;p&gt;Now lets you treat deployments as immutable. Every time you deploy you get a brand new URL. You can then interact with your deployment directly, or point an existing alias to it if you want a persistent URL for your project.&lt;/p&gt;
&lt;p&gt;Deployments are free, and deployed code stays available forever due to &lt;a href="https://github.com/zeit/now-cli/issues/189"&gt;some clever engineering&lt;/a&gt; behind the scenes.&lt;/p&gt;
&lt;p&gt;Best of all: deploying a project takes just a single command: type &lt;code&gt;now&lt;/code&gt; and the code in your current directory will be deployed to their cloud and assigned a unique URL.&lt;/p&gt;
&lt;p&gt;Now was originally built for Node.js projects, but last August &lt;a href="https://zeit.co/blog/now-dockerfile"&gt;Zeit added Docker support&lt;/a&gt;. If the directory you run it in contains a Dockerfile, running &lt;code&gt;now&lt;/code&gt; will upload, build and run the corresponding image.&lt;/p&gt;
&lt;p&gt;There’s just one thing missing: good examples of how to deploy Python projects to Now using Docker. I’m hoping this article can help fill that gap.&lt;/p&gt;
&lt;p&gt;Here’s the &lt;a href="https://github.com/simonw/json-head/blob/master/Dockerfile"&gt;complete Dockerfile&lt;/a&gt; I’m using for json-head:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;FROM python:3
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 8006
CMD [&amp;quot;python&amp;quot;, &amp;quot;json_head.py&amp;quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I’m using the &lt;a href="https://hub.docker.com/_/python/"&gt;official Docker Python image&lt;/a&gt; as a base, copying the current directory into the image, using &lt;code&gt;pip install&lt;/code&gt; to install dependencies, then exposing port 8006 (for no reason other than that it’s the port I use for my local development environment) and running the &lt;a href="https://github.com/simonw/json-head/blob/master/json_head.py"&gt;json_head.py&lt;/a&gt; script. Now is smart enough to forward incoming HTTP traffic on port 80 to the port that was exposed by the container.&lt;/p&gt;
&lt;p&gt;If you set up Now yourself (&lt;code&gt;npm install -g now&lt;/code&gt; or use &lt;a href="https://zeit.co/download"&gt;one of their installers&lt;/a&gt;) you can deploy my code directly from GitHub to your own instance with a single command:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ now simonw/json-head
&amp;gt; Didn't find directory. Searching on GitHub...
&amp;gt; Deploying GitHub repository &amp;quot;simonw/json-head&amp;quot; under simonw
&amp;gt; Ready! https://simonw-json-head-xqkfgorgei.now.sh (copied to clipboard) [1s]
&amp;gt; Initializing…
&amp;gt; Building
&amp;gt; ▲ docker build
Sending build context to Docker daemon 7.168 kB
&amp;gt; Step 1 : FROM python:3
&amp;gt; 3: Pulling from library/python
&amp;gt; ... lots more stuff here ...
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;&lt;a id="Initial_implementation_79"&gt;&lt;/a&gt;Initial implementation&lt;/h2&gt;
&lt;p&gt;Here’s my first working version of json-head using Sanic:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from sanic import Sanic
from sanic import response
import aiohttp

app = Sanic(__name__)

async def head(session, url):
    try:
        async with session.head(url) as response:
            return {
                'ok': True,
                'headers': dict(response.headers),
                'status': response.status,
                'url': url,
            }
    except Exception as e:
        return {
            'ok': False,
            'error': str(e),
            'url': url,
        }

@app.route('/')
async def handle_request(request):
    url = request.args.get('url')
    if url:
        async with aiohttp.ClientSession() as session:
            head_info = await head(session, url)
            return response.json(
                head_info,
                headers={
                    'Access-Control-Allow-Origin': '*'
                },
            )
    else:
        return response.html('Try /?url=xxx')

if __name__ == '__main__':
    app.run(host=&amp;quot;0.0.0.0&amp;quot;, port=8006)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This exact code is deployed at &lt;a href="https://json-head-thlbstmwfi.now.sh/"&gt;https://json-head-thlbstmwfi.now.sh/&lt;/a&gt; - since Now deployments are free, there’s no reason not to leave work-in-progress examples hosted as throwaway deployments.&lt;/p&gt;
&lt;p&gt;In addition to Sanic, I’m also using the handy &lt;a href="https://github.com/aio-libs/aiohttp"&gt;aiohttp&lt;/a&gt; asynchronous HTTP library - which features API design clearly inspired by my all-time favourite HTTP library, &lt;a href="https://github.com/requests/requests"&gt;requests&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The key new pieces of syntax to understand in the above code are the async and await statements. &lt;code&gt;async def&lt;/code&gt; is used to declare a function that acts as a coroutine. Coroutines need to be executed inside an event loop (which Sanic handles for us), but gain the ability to use the &lt;code&gt;await&lt;/code&gt; statement.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;await&lt;/code&gt; statement is the real magic here: it suspends the current coroutine until the coroutine it is calling has finished executing. It is this that allows us to write asynchronous code without descending into a messy hell of callback functions.&lt;/p&gt;
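&lt;p&gt;A stripped-down example (unrelated to json-head, just to show the mechanics): here &lt;code&gt;asyncio.sleep()&lt;/code&gt; stands in for slow network I/O, and &lt;code&gt;await&lt;/code&gt; hands control back to the event loop until it completes. Note that &lt;code&gt;asyncio.run()&lt;/code&gt; is a later convenience added in Python 3.7 - on 3.5 you would use &lt;code&gt;loop.run_until_complete()&lt;/code&gt; instead.&lt;/p&gt;

```python
import asyncio


async def pretend_fetch(name, delay):
    # Stand-in for slow network I/O; await suspends this coroutine
    # and lets the event loop run other work in the meantime.
    await asyncio.sleep(delay)
    return name


async def main():
    # Reads like sequential code - no callback functions required.
    return await pretend_fetch("example", 0.01)


result = asyncio.run(main())
print(result)
```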
&lt;h2&gt;&lt;a id="Adding_parallel_requests_131"&gt;&lt;/a&gt;Adding parallel requests&lt;/h2&gt;
&lt;p&gt;So far we haven’t really taken advantage of what async I/O can do - if every incoming HTTP request results in a single outgoing HTTP request, then async may help us scale to serve more incoming requests at once, but it’s not really giving us any new functionality.&lt;/p&gt;
&lt;p&gt;Executing multiple outbound HTTP requests in parallel is a much more interesting use-case. Let’s add support for multiple &lt;code&gt;?url=&lt;/code&gt; parameters, such as the following:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://json-head.now.sh/?url=https://simonwillison.net/&amp;amp;url=https://www.google.com/"&gt;https://json-head.now.sh/?url=https://simonwillison.net/&amp;amp;url=https://www.google.com/&lt;/a&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[
    {
        &amp;quot;ok&amp;quot;: true,
        &amp;quot;headers&amp;quot;: {
            &amp;quot;Date&amp;quot;: &amp;quot;Sat, 14 Oct 2017 19:35:29 GMT&amp;quot;,
            &amp;quot;Content-Type&amp;quot;: &amp;quot;text/html; charset=utf-8&amp;quot;,
            &amp;quot;Connection&amp;quot;: &amp;quot;keep-alive&amp;quot;,
            &amp;quot;Set-Cookie&amp;quot;: &amp;quot;__cfduid=ded486c1faaac166e8ae72a87979c02101508009729; expires=Sun, 14-Oct-18 19:35:29 GMT; path=/; domain=.simonwillison.net; HttpOnly; Secure&amp;quot;,
            &amp;quot;Cache-Control&amp;quot;: &amp;quot;s-maxage=200&amp;quot;,
            &amp;quot;X-Frame-Options&amp;quot;: &amp;quot;SAMEORIGIN&amp;quot;,
            &amp;quot;Via&amp;quot;: &amp;quot;1.1 vegur&amp;quot;,
            &amp;quot;CF-Cache-Status&amp;quot;: &amp;quot;EXPIRED&amp;quot;,
            &amp;quot;Vary&amp;quot;: &amp;quot;Accept-Encoding&amp;quot;,
            &amp;quot;Server&amp;quot;: &amp;quot;cloudflare-nginx&amp;quot;,
            &amp;quot;CF-RAY&amp;quot;: &amp;quot;3adcfb671c862888-SJC&amp;quot;,
            &amp;quot;Content-Encoding&amp;quot;: &amp;quot;gzip&amp;quot;
        },
        &amp;quot;status&amp;quot;: 200,
        &amp;quot;url&amp;quot;: &amp;quot;https://simonwillison.net/&amp;quot;
    },
    {
        &amp;quot;ok&amp;quot;: true,
        &amp;quot;headers&amp;quot;: {
            &amp;quot;Date&amp;quot;: &amp;quot;Sat, 14 Oct 2017 19:35:29 GMT&amp;quot;,
            &amp;quot;Expires&amp;quot;: &amp;quot;-1&amp;quot;,
            &amp;quot;Cache-Control&amp;quot;: &amp;quot;private, max-age=0&amp;quot;,
            &amp;quot;Content-Type&amp;quot;: &amp;quot;text/html; charset=ISO-8859-1&amp;quot;,
            &amp;quot;P3P&amp;quot;: &amp;quot;CP=\&amp;quot;This is not a P3P policy! See g.co/p3phelp for more info.\&amp;quot;&amp;quot;,
            &amp;quot;Content-Encoding&amp;quot;: &amp;quot;gzip&amp;quot;,
            &amp;quot;Server&amp;quot;: &amp;quot;gws&amp;quot;,
            &amp;quot;X-XSS-Protection&amp;quot;: &amp;quot;1; mode=block&amp;quot;,
            &amp;quot;X-Frame-Options&amp;quot;: &amp;quot;SAMEORIGIN&amp;quot;,
            &amp;quot;Set-Cookie&amp;quot;: &amp;quot;1P_JAR=2017-10-14-19; expires=Sat, 21-Oct-2017 19:35:29 GMT; path=/; domain=.google.com&amp;quot;,
            &amp;quot;Alt-Svc&amp;quot;: &amp;quot;quic=\&amp;quot;:443\&amp;quot;; ma=2592000; v=\&amp;quot;39,38,37,35\&amp;quot;&amp;quot;,
            &amp;quot;Transfer-Encoding&amp;quot;: &amp;quot;chunked&amp;quot;
        },
        &amp;quot;status&amp;quot;: 200,
        &amp;quot;url&amp;quot;: &amp;quot;https://www.google.com/&amp;quot;
    }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We’re now accepting multiple URLs and executing multiple HEAD requests. Python 3.5 async makes it easy to run those requests in parallel, so our overall response time should match that of the single slowest HEAD request we trigger.&lt;/p&gt;
&lt;p&gt;Here’s an implementation that adds support for multiple, parallel outbound HTTP requests:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import asyncio

@app.route('/')
async def handle_request(request):
    urls = request.args.getlist('url')
    if urls:
        async with aiohttp.ClientSession() as session:
            head_infos = await asyncio.gather(*[
                head(session, url) for url in urls
            ])
            return response.json(
                head_infos,
                headers={'Access-Control-Allow-Origin': '*'},
            )
    else:
        return response.html(INDEX)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We’re using the &lt;code&gt;asyncio&lt;/code&gt; module from the Python 3.5 standard library here - in particular the &lt;code&gt;gather&lt;/code&gt; function. &lt;a href="https://docs.python.org/3/library/asyncio-task.html#asyncio.gather"&gt;&lt;code&gt;asyncio.gather&lt;/code&gt;&lt;/a&gt; takes a list of coroutines and returns a future aggregating their results. This future will resolve (and return to a corresponding &lt;code&gt;await&lt;/code&gt; statement) as soon as all of those coroutines have returned their values.&lt;/p&gt;
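&lt;p&gt;The parallelism is easy to verify with a toy example (again a hypothetical sketch, not the json-head code itself): three coroutines that each sleep for 0.05 seconds complete in roughly 0.05 seconds total when run through &lt;code&gt;asyncio.gather()&lt;/code&gt;, rather than the 0.15 seconds a sequential version would take.&lt;/p&gt;

```python
import asyncio
import time


async def slow(value, delay):
    # Stand-in for one outbound HEAD request.
    await asyncio.sleep(delay)
    return value


async def main():
    start = time.monotonic()
    # gather() schedules all three coroutines concurrently and resolves
    # once every one of them has returned; results keep argument order.
    results = await asyncio.gather(
        slow("a", 0.05), slow("b", 0.05), slow("c", 0.05)
    )
    return results, time.monotonic() - start


results, elapsed = asyncio.run(main())
print(results, round(elapsed, 2))
```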
&lt;p&gt;My final code for json-head &lt;a href="https://github.com/simonw/json-head"&gt;can be found on GitHub&lt;/a&gt;. As I hope I’ve demonstrated, the combination of Python 3.5+, Sanic and Now makes deploying asynchronous Python microservices trivially easy.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/async"&gt;async&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/jsonhead"&gt;jsonhead&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sanic"&gt;sanic&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/zeit-now"&gt;zeit-now&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/docker"&gt;docker&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="async"/><category term="jsonhead"/><category term="natalie-downe"/><category term="python"/><category term="sanic"/><category term="zeit-now"/><category term="docker"/></entry><entry><title>Getting married and going travelling</title><link href="https://simonwillison.net/2010/Jun/21/married/#atom-tag" rel="alternate"/><published>2010-06-21T23:01:00+00:00</published><updated>2010-06-21T23:01:00+00:00</updated><id>https://simonwillison.net/2010/Jun/21/married/#atom-tag</id><summary type="html">
    &lt;p&gt;It’s been a busy month. On Saturday the 5th of June I married the wonderful &lt;a href="http://natbat.net/"&gt;Natalie Downe&lt;/a&gt; in a beautiful ceremony at Roedean School in Brighton. The reception had &lt;a href="http://www.flickr.com/search/groups/?q=owl&amp;amp;m=pool&amp;amp;w=1445512%40N23&amp;amp;s=int"&gt;owls&lt;/a&gt;, &lt;a href="http://www.flickr.com/search/groups/?q=cheese&amp;amp;m=pool&amp;amp;w=1445512%40N23&amp;amp;s=int"&gt;cheese&lt;/a&gt;, &lt;a href="http://www.flickr.com/photos/tags/pugwashtheferret/"&gt;a ferret&lt;/a&gt;, &lt;a href="http://www.flickr.com/search/groups/?q=eagle&amp;amp;m=pool&amp;amp;w=1445512%40N23&amp;amp;s=int"&gt;a golden eagle&lt;/a&gt;, amazing Turkish food, &lt;a href="http://www.flickr.com/photos/ianlloyd/4676083928/"&gt;Jewish chair dancing&lt;/a&gt; and &lt;a href="http://www.flickr.com/photos/drewm/4675203480/" title="The big group photo"&gt;lovely guests&lt;/a&gt;. It was the happiest day of my life.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.flickr.com/photos/sebleedelisle/4673804871/"&gt;&lt;img alt="Natalie, Me and a Golden Eagle" height="281" src="http://simonwillison.net/static/2010/me-nat-eagle.jpg" width="450" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The &lt;a href="http://www.flickr.com/photos/drewm/collections/72157624091523559/" title="Collection: Natalie &amp;amp; Simon's Wedding"&gt;official wedding photos&lt;/a&gt; were taken by &lt;a href="http://allinthehead.com/"&gt;Drew McLellan&lt;/a&gt;, and there’s &lt;a href="http://www.flickr.com/groups/natalie-simon-wedding/pool/"&gt;a Flickr group pool&lt;/a&gt; as well. The day after the wedding Natalie’s sister Louise took &lt;a href="http://www.flickr.com/photos/simon/sets/72157624107650141/"&gt;some fun photos&lt;/a&gt; of us running around Brighton in our wedding clothes.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.flickr.com/photos/simon/4682807900/"&gt;&lt;img alt="Bride and Groom on the Carousel" height="300" src="http://simonwillison.net/static/2010/me-nat-carousel.jpg" width="450" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Thanks to everyone who helped out with the preparations, and also to everyone who came along to share the special day with us. And a big thanks to Tom Coates, my best man.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.flickr.com/photos/drewm/4674731543/"&gt;&lt;img alt="Best man and Groom" height="300" src="http://simonwillison.net/static/2010/bestman.jpg" width="450" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Yesterday afternoon, we set out on our honeymoon. I’m writing this from the beach in Nice, on the south coast of France. Tomorrow we take the ferry to Corsica for a week in relative luxury. After that, we’re backpacking around Europe, then Africa, then the rest of the world. We’ve given up our flat and put our stuff into storage, and the plan is to keep on travelling until we get fed up or run out of money. We expect to be gone for at least 18 months.&lt;/p&gt;
&lt;p&gt;Since we’re both web developers, we’re lucky to be able to take some of our work with us. I’ll still be doing some work for the Guardian and Natalie is available for freelance work. If you have something you think we can help you with, &lt;a href="mailto:contact@natimon.com"&gt;drop us a line&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Naturally we’ll be blogging, tweeting and &lt;a href="http://www.flickr.com/groups/sparkabout/pool/"&gt;Flickring&lt;/a&gt; our adventures. You can follow our updates at &lt;a href="http://sparkabout.net/"&gt;http://sparkabout.net/&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.flickr.com/photos/drewm/4675496221/"&gt;&lt;img alt="Photobomb!" height="545" src="http://simonwillison.net/static/2010/photobomb.jpg" width="450" /&gt;&lt;/a&gt;&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/brighton"&gt;brighton&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cheese"&gt;cheese&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/tom-coates"&gt;tom-coates&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/travel"&gt;travel&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/recovered"&gt;recovered&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natbat"&gt;natbat&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sparkabout"&gt;sparkabout&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/wedding"&gt;wedding&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/personal-news"&gt;personal-news&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="brighton"/><category term="cheese"/><category term="natalie-downe"/><category term="tom-coates"/><category term="travel"/><category term="recovered"/><category term="natbat"/><category term="sparkabout"/><category term="wedding"/><category term="personal-news"/></entry><entry><title>WildlifeNearYou talk at £5 app, and being Wired (not Tired)</title><link href="https://simonwillison.net/2010/Apr/11/wired/#atom-tag" rel="alternate"/><published>2010-04-11T20:42:11+00:00</published><updated>2010-04-11T20:42:11+00:00</updated><id>https://simonwillison.net/2010/Apr/11/wired/#atom-tag</id><summary type="html">
    &lt;p&gt;Two quick updates about &lt;a href="http://www.wildlifenearyou.com/"&gt;WildlifeNearYou&lt;/a&gt;. First up, I gave a talk about the site at &lt;a href="http://fivepoundapp.com/"&gt;£5 app&lt;/a&gt;, my favourite Brighton evening event which celebrates side projects and the joy of Making Stuff. I talked about the site's &lt;a href="http://simonwillison.net/2010/Jan/12/wildlifenearyou/"&gt;genesis on a fort&lt;/a&gt;, &lt;a href="http://www.wildlifenearyou.com/best/"&gt;crowdsourcing photo ratings&lt;/a&gt;, how we use &lt;a href="http://www.freebase.com/"&gt;Freebase&lt;/a&gt; and &lt;a href="http://dbpedia.org/"&gt;DBpedia&lt;/a&gt; and how integrating with Flickr's machine tags gave us &lt;a href="http://code.flickr.com/blog/2010/02/10/5-questions-for-simon-willison/"&gt;a powerful location API for free&lt;/a&gt;. Here's the video of the talk, courtesy of &lt;a href="http://ianozsvald.com/2010/03/31/22nd-5-app-write-up-for-wildlife-plaques-robots-go-and-golf-gadgets/" title="22nd £5 App Write-up for WildLife, Plaques, Robots, Go and Golf Gadgets"&gt;Ian Ozsvald&lt;/a&gt;:&lt;/p&gt;

&lt;object width="450" height="255"&gt;&lt;param name="allowfullscreen" value="true" /&gt;&lt;param name="allowscriptaccess" value="always" /&gt;&lt;param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=10578232&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=00ADEF&amp;amp;fullscreen=1" /&gt;&lt;embed src="http://vimeo.com/moogaloop.swf?clip_id=10578232&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=00ADEF&amp;amp;fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="450" height="255"&gt;&lt;/embed&gt;&lt;/object&gt;&lt;p&gt;&lt;a href="http://vimeo.com/10578232"&gt;£5 App #22 WildLifeNearYou by Simon Willison and Natalie Downe&lt;/a&gt; from &lt;a href="http://vimeo.com/user707645"&gt;IanProCastsCoUk&lt;/a&gt; on &lt;a href="http://vimeo.com"&gt;Vimeo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Secondly, I'm excited to note that WildlifeNearYou spin-off &lt;a href="http://owlsnearyou.com/"&gt;OwlsNearYou.com&lt;/a&gt; is featured in UK Wired magazine's Wired / Tired / Expired column... and we're Wired!&lt;/p&gt;

&lt;p&gt;&lt;a href="http://www.flickr.com/photos/simon/4511451405/"&gt;&lt;img src="http://simonwillison.net/static/2010/wired-owls-small.jpg" alt="Wired / Tired / Expired column from May 2010 Wired UK" width="450" height="176" /&gt;&lt;/a&gt;&lt;/p&gt;

    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/api"&gt;api&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/crowdsourcing"&gt;crowdsourcing&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/fivepoundapp"&gt;fivepoundapp&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/flickr"&gt;flickr&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/freebase"&gt;freebase&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/owlsnearyou"&gt;owlsnearyou&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/my-talks"&gt;my-talks&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/wildlifenearyou"&gt;wildlifenearyou&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/wired"&gt;wired&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="api"/><category term="crowdsourcing"/><category term="fivepoundapp"/><category term="flickr"/><category term="freebase"/><category term="natalie-downe"/><category term="owlsnearyou"/><category term="my-talks"/><category term="wildlifenearyou"/><category term="wired"/></entry><entry><title>Going Nuts with CSS Transitions</title><link href="https://simonwillison.net/2009/Dec/14/transitions/#atom-tag" rel="alternate"/><published>2009-12-14T13:16:34+00:00</published><updated>2009-12-14T13:16:34+00:00</updated><id>https://simonwillison.net/2009/Dec/14/transitions/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://24ways.org/2009/going-nuts-with-css-transitions"&gt;Going Nuts with CSS Transitions&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Nat’s article for this year’s 24ways—adding special effects to images using CSS rotation, box shadows and the magical -webkit-transition property.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/24-ways"&gt;24-ways&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/animation"&gt;animation&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/css"&gt;css&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/rotation"&gt;rotation&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/webkit"&gt;webkit&lt;/a&gt;&lt;/p&gt;



</summary><category term="24-ways"/><category term="animation"/><category term="css"/><category term="natalie-downe"/><category term="rotation"/><category term="webkit"/></entry><entry><title>Styling buttons to look like links</title><link href="https://simonwillison.net/2009/Jun/10/styling/#atom-tag" rel="alternate"/><published>2009-06-10T02:11:30+00:00</published><updated>2009-06-10T02:11:30+00:00</updated><id>https://simonwillison.net/2009/Jun/10/styling/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://natbat.net/2009/Jun/10/styling-buttons-as-links/"&gt;Styling buttons to look like links&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Nat has a neat trick for styling submit buttons to look like regular links—so there’s absolutely no excuse for using a “delete” link when you should be using a POST request.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/buttons"&gt;buttons&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/css"&gt;css&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/forms"&gt;forms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/getpost"&gt;getpost&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/http"&gt;http&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/links"&gt;links&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/post"&gt;post&lt;/a&gt;&lt;/p&gt;



</summary><category term="buttons"/><category term="css"/><category term="forms"/><category term="getpost"/><category term="http"/><category term="links"/><category term="natalie-downe"/><category term="post"/></entry><entry><title>TiddlyPocketBook</title><link href="https://simonwillison.net/2009/May/28/tiddlypocketbook/#atom-tag" rel="alternate"/><published>2009-05-28T01:24:45+00:00</published><updated>2009-05-28T01:24:45+00:00</updated><id>https://simonwillison.net/2009/May/28/tiddlypocketbook/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://tiddlypocketbook.com/"&gt;TiddlyPocketBook&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Paul Downey took Nat’s dinky pocketbooks CSS and combined it with TiddlyWiki to create a single page pocketbook editor.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/css"&gt;css&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/javascript"&gt;javascript&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/paul-downey"&gt;paul-downey&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/pocketbook"&gt;pocketbook&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/tiddlywiki"&gt;tiddlywiki&lt;/a&gt;&lt;/p&gt;



</summary><category term="css"/><category term="javascript"/><category term="natalie-downe"/><category term="paul-downey"/><category term="pocketbook"/><category term="tiddlywiki"/></entry><entry><title>Dinky pocketbooks with WebKit transforms</title><link href="https://simonwillison.net/2009/May/22/dinky/#atom-tag" rel="alternate"/><published>2009-05-22T00:33:49+00:00</published><updated>2009-05-22T00:33:49+00:00</updated><id>https://simonwillison.net/2009/May/22/dinky/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://natbat.net/2009/May/21/pocketbooks/"&gt;Dinky pocketbooks with WebKit transforms&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Nat used 90 degree CSS transform rotations in print stylesheets for WebKit and Safari to create printable cut-out-and-fold pocketbooks from A4 pages. Very neat.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/css"&gt;css&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/csstransforms"&gt;csstransforms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/pocketbooks"&gt;pocketbooks&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/printstyles"&gt;printstyles&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/rotation"&gt;rotation&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/safari"&gt;safari&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/webkit"&gt;webkit&lt;/a&gt;&lt;/p&gt;



</summary><category term="css"/><category term="csstransforms"/><category term="natalie-downe"/><category term="pocketbooks"/><category term="printstyles"/><category term="rotation"/><category term="safari"/><category term="webkit"/></entry><entry><title>djangopeople.net on GitHub</title><link href="https://simonwillison.net/2009/May/4/djangopeople/#atom-tag" rel="alternate"/><published>2009-05-04T18:12:15+00:00</published><updated>2009-05-04T18:12:15+00:00</updated><id>https://simonwillison.net/2009/May/4/djangopeople/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://github.com/simonw/djangopeople.net/tree/master"&gt;djangopeople.net on GitHub&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
I’ve released the source code for Django People, the geographical community site Natalie Downe and I developed last year (it hasn’t otherwise been touched since April last year, so it needs porting to Django 1.1). If you want a new feature on the site, implement it and I’ll see about merging it in.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/django"&gt;django&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/django-people"&gt;django-people&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/git"&gt;git&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/github"&gt;github&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/open-source"&gt;open-source&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;&lt;/p&gt;



</summary><category term="django"/><category term="django-people"/><category term="git"/><category term="github"/><category term="natalie-downe"/><category term="open-source"/><category term="projects"/><category term="python"/></entry><entry><title>Practical, maintainable CSS</title><link href="https://simonwillison.net/2009/Mar/12/practical/#atom-tag" rel="alternate"/><published>2009-03-12T00:46:51+00:00</published><updated>2009-03-12T00:46:51+00:00</updated><id>https://simonwillison.net/2009/Mar/12/practical/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://natbat.net/2009/Mar/10/practical-maintainable-css/"&gt;Practical, maintainable CSS&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Nat’s posted slides and a video from her latest talk at last week’s Brighton Girl Geeks evening.

    &lt;p&gt;&lt;small&gt;Via &lt;a href="http://natbat.net/"&gt;Natalie Downe&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/css"&gt;css&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/girlgeeks"&gt;girlgeeks&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;&lt;/p&gt;



</summary><category term="css"/><category term="girlgeeks"/><category term="natalie-downe"/></entry><entry><title>Tweetersation</title><link href="https://simonwillison.net/2008/Oct/2/tweetersation/#atom-tag" rel="alternate"/><published>2008-10-02T17:08:19+00:00</published><updated>2008-10-02T17:08:19+00:00</updated><id>https://simonwillison.net/2008/Oct/2/tweetersation/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://tweetersation.com/"&gt;Tweetersation&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Nat’s and my latest side project: a JSONP API-powered tool for following conversations between people on Twitter more easily, by combining their tweets into a single timeline.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/api"&gt;api&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/javascript"&gt;javascript&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/jsonp"&gt;jsonp&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/tweetersation"&gt;tweetersation&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/twitter"&gt;twitter&lt;/a&gt;&lt;/p&gt;



</summary><category term="api"/><category term="javascript"/><category term="jsonp"/><category term="natalie-downe"/><category term="projects"/><category term="tweetersation"/><category term="twitter"/></entry><entry><title>CSS Systems for writing maintainable CSS</title><link href="https://simonwillison.net/2008/Sep/28/systems/#atom-tag" rel="alternate"/><published>2008-09-28T23:30:39+00:00</published><updated>2008-09-28T23:30:39+00:00</updated><id>https://simonwillison.net/2008/Sep/28/systems/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://natbat.net/2008/Sep/28/css-systems/"&gt;CSS Systems for writing maintainable CSS&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Nat has published the slides and notes from her BarCamp presentation this morning. I’m really excited about her approach, which involves designing a “CSS system” of markup patterns and CSS that embodies the design of an individual site. Future maintenance can then take this overall system into account, assisted by a defined ordering system and shared vocabulary.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/barcamplondon5"&gt;barcamplondon5&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/css"&gt;css&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/csspatterns"&gt;csspatterns&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/maintainability"&gt;maintainability&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/markup"&gt;markup&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;&lt;/p&gt;



</summary><category term="barcamplondon5"/><category term="css"/><category term="csspatterns"/><category term="maintainability"/><category term="markup"/><category term="natalie-downe"/></entry><entry><title>addSizes.js: Snazzy automatic link file-size generation</title><link href="https://simonwillison.net/2008/Aug/30/addsizesjs/#atom-tag" rel="alternate"/><published>2008-08-30T10:39:35+00:00</published><updated>2008-08-30T10:39:35+00:00</updated><id>https://simonwillison.net/2008/Aug/30/addsizesjs/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://natbat.net/2008/Aug/27/addSizes/"&gt;addSizes.js: Snazzy automatic link file-size generation&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Posted to Nat’s snazzy new blog: a script that uses my json-head API to grab the file sizes of linked documents on a page and insert those sizes into the document.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/addsizes"&gt;addsizes&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/javascript"&gt;javascript&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/json"&gt;json&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/jsonhead"&gt;jsonhead&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/jsonp"&gt;jsonp&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;&lt;/p&gt;



</summary><category term="addsizes"/><category term="javascript"/><category term="json"/><category term="jsonhead"/><category term="jsonp"/><category term="natalie-downe"/></entry><entry><title>Django People</title><link href="https://simonwillison.net/2008/Jan/23/djangopeople/#atom-tag" rel="alternate"/><published>2008-01-23T02:00:58+00:00</published><updated>2008-01-23T02:00:58+00:00</updated><id>https://simonwillison.net/2008/Jan/23/djangopeople/#atom-tag</id><summary type="html">
    &lt;p&gt;I'm constantly surprised by the number of people I run into at conferences (or even in one case on the train) who are using &lt;a href="http://www.djangoproject.com/"&gt;Django&lt;/a&gt; but are completely invisible to the Django community. It seems that this is the downside of having good documentation: many people just read it and start building, without ever showing their face on the mailing lists or IRC.&lt;/p&gt;

&lt;p&gt;So, &lt;a href="http://notes.natbat.net/"&gt;Natalie&lt;/a&gt; and I have just launched &lt;a href="http://djangopeople.net/"&gt;djangopeople.net&lt;/a&gt; - a site that encourages Django developers to create a profile and stamp themselves on a map. Every country in the world gets its &lt;a href="http://djangopeople.net/gb/"&gt;own page&lt;/a&gt;, as does &lt;a href="http://djangopeople.net/us/ks/"&gt;every US state&lt;/a&gt; (for the moment; I may add other countries' regional subdivisions in the future). The focus of the site is firmly on location, and I'm hoping to add features that encourage people to get involved with local Django user groups and meet like-minded developers.&lt;/p&gt;

&lt;p&gt;So, if you develop sites in Django head over there and create yourself a profile. If possible, upload a photo too - it makes the site look nice! Please leave any feedback or bug reports as comments on this post.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/community"&gt;community&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/django"&gt;django&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/django-people"&gt;django-people&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="community"/><category term="django"/><category term="django-people"/><category term="natalie-downe"/><category term="projects"/></entry><entry><title>Poorly Macbook, ineffective error message design</title><link href="https://simonwillison.net/2008/Jan/13/errors/#atom-tag" rel="alternate"/><published>2008-01-13T23:31:49+00:00</published><updated>2008-01-13T23:31:49+00:00</updated><id>https://simonwillison.net/2008/Jan/13/errors/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://notes.natbat.net/2008/01/11/poorly-macbook/"&gt;Poorly Macbook, ineffective error message design&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Nat’s MacBook died the other day, throwing out some impressively meaningless error symbols. How exactly are you meant to Google for a circle with a line through it?


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/apple"&gt;apple&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/errors"&gt;errors&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/google"&gt;google&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/macbook"&gt;macbook&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/searchability"&gt;searchability&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/usability"&gt;usability&lt;/a&gt;&lt;/p&gt;



</summary><category term="apple"/><category term="errors"/><category term="google"/><category term="macbook"/><category term="natalie-downe"/><category term="searchability"/><category term="usability"/></entry><entry><title>Back To The Future of Print</title><link href="https://simonwillison.net/2007/Dec/9/print/#atom-tag" rel="alternate"/><published>2007-12-09T00:56:45+00:00</published><updated>2007-12-09T00:56:45+00:00</updated><id>https://simonwillison.net/2007/Dec/9/print/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://24ways.org/2007/back-to-the-future-of-print"&gt;Back To The Future of Print&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Nat’s contribution to 24 ways: a long-needed update on the state of the art in print stylesheets.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/24-ways"&gt;24-ways&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/css"&gt;css&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/print"&gt;print&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/printstylesheets"&gt;printstylesheets&lt;/a&gt;&lt;/p&gt;



</summary><category term="24-ways"/><category term="css"/><category term="natalie-downe"/><category term="print"/><category term="printstylesheets"/></entry><entry><title>Brighton geek venues</title><link href="https://simonwillison.net/2007/Aug/16/brighton/#atom-tag" rel="alternate"/><published>2007-08-16T01:38:29+00:00</published><updated>2007-08-16T01:38:29+00:00</updated><id>https://simonwillison.net/2007/Aug/16/brighton/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://brighton.geekvenues.com/"&gt;Brighton geek venues&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Nat’s latest project: a neat Google Maps mashup listing venues for geek events in Brighton, managed using Google MyMaps to edit a KML file.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/brighton"&gt;brighton&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/geek"&gt;geek&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/google"&gt;google&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/google-maps"&gt;google-maps&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/googlemymaps"&gt;googlemymaps&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/kml"&gt;kml&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/mashup"&gt;mashup&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/mymaps"&gt;mymaps&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/natalie-downe"&gt;natalie-downe&lt;/a&gt;&lt;/p&gt;



</summary><category term="brighton"/><category term="geek"/><category term="google"/><category term="google-maps"/><category term="googlemymaps"/><category term="kml"/><category term="mashup"/><category term="mymaps"/><category term="natalie-downe"/></entry></feed>