<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en-us" xmlns="http://www.w3.org/2005/Atom"><title>Simon Willison's Weblog: latimes</title><link href="http://simonwillison.net/" rel="alternate"/><link href="http://simonwillison.net/tags/latimes.atom" rel="self"/><id>http://simonwillison.net/</id><updated>2019-05-30T04:35:42+00:00</updated><author><name>Simon Willison</name></author><entry><title>Los Angeles Weedmaps analysis</title><link href="https://simonwillison.net/2019/May/30/los-angeles-weedmaps-analysis/#atom-tag" rel="alternate"/><published>2019-05-30T04:35:42+00:00</published><updated>2019-05-30T04:35:42+00:00</updated><id>https://simonwillison.net/2019/May/30/los-angeles-weedmaps-analysis/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://nbviewer.jupyter.org/github/datadesk/la-weedmaps-analysis/blob/master/notebook.ipynb"&gt;Los Angeles Weedmaps analysis&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Ben Welsh at the LA Times published this Jupyter notebook showing the full working behind a story they published about LA’s black market weed dispensaries. I picked up several useful tricks from it—including how to load points into a geopandas GeoDataFrame (in epsg:4326 aka WGS 84) and how to then join that against the LA Times neighborhoods GeoJSON boundaries file.
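The pattern described above can be sketched roughly like this (a minimal, hypothetical example, not the notebook's actual code — the data, names and the inline stand-in polygon are illustrative; the notebook reads the real LA Times neighborhoods GeoJSON with gpd.read_file()):

```python
import geopandas as gpd
import pandas as pd
from shapely.geometry import Polygon

# Point data in plain pandas (made-up coordinates near downtown LA)
df = pd.DataFrame({
    "name": ["Shop A", "Shop B"],
    "longitude": [-118.25, -118.10],
    "latitude": [34.05, 34.00],
})

# Load the points into a GeoDataFrame in EPSG:4326 (WGS 84)
points = gpd.GeoDataFrame(
    df,
    geometry=gpd.points_from_xy(df.longitude, df.latitude),
    crs="EPSG:4326",
)

# Stand-in for the neighborhoods boundaries file; in practice this
# would be gpd.read_file("neighborhoods.geojson") or similar
neighborhoods = gpd.GeoDataFrame(
    {"neighborhood": ["Downtown"]},
    geometry=[Polygon([(-118.3, 34.0), (-118.2, 34.0),
                       (-118.2, 34.1), (-118.3, 34.1)])],
    crs="EPSG:4326",
)

# Spatial join: tag each point with the polygon containing it
# (older geopandas versions spell the keyword op="within")
joined = gpd.sjoin(points, neighborhoods, how="left", predicate="within")
print(joined[["name", "neighborhood"]])
```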

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://twitter.com/palewire/status/1133723284116086784"&gt;Ben Welsh&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/data-journalism"&gt;data-journalism&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/geospatial"&gt;geospatial&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/latimes"&gt;latimes&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/pandas"&gt;pandas&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/jupyter"&gt;jupyter&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ben-welsh"&gt;ben-welsh&lt;/a&gt;&lt;/p&gt;



</summary><category term="data-journalism"/><category term="geospatial"/><category term="latimes"/><category term="pandas"/><category term="jupyter"/><category term="ben-welsh"/></entry><entry><title>Train Crash Leads LA Times to Create Django Database on Deadline</title><link href="https://simonwillison.net/2009/Jan/21/poynter/#atom-tag" rel="alternate"/><published>2009-01-21T17:19:30+00:00</published><updated>2009-01-21T17:19:30+00:00</updated><id>https://simonwillison.net/2009/Jan/21/poynter/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://www.poynter.org/column.asp?id=52&amp;amp;aid=150818"&gt;Train Crash Leads LA Times to Create Django Database on Deadline&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
A story from last September. I didn’t know the LA Times used Django. UPDATE: Yes I did, I introduced their panel about it at DjangoCon. Sorry, mind like a sieve sometimes.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/data-journalism"&gt;data-journalism&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/django"&gt;django&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/latimes"&gt;latimes&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/newspapers"&gt;newspapers&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;&lt;/p&gt;



</summary><category term="data-journalism"/><category term="django"/><category term="latimes"/><category term="newspapers"/><category term="python"/></entry></feed>