Google Maps and OpenStreetMap Data Views – Find The 10 Differences


Google Maps had The Atlantic over for a chat about how they work up their ‘deep map’ from various sources. It’s interesting to read about how Google invests incredible amounts of money and manpower to try and do the best job possible of capturing ground truth without people on the ground.

The article contains some ‘data views’ of Google Maps data in various stages of being worked up. I don’t know whether these are actual screenshots of an editing environment, but regardless, it’s an interesting peek behind the scenes that I had not had before.

This is from the article:

The Google Maps editing environment. Source: The Atlantic

This is about the same area loaded into the OpenStreetMap desktop editor JOSM:

The same area in the OpenStreetMap editor JOSM

Now you can look long and hard to try and make out ten or maybe a hundred differences in the data, but there’s one difference between these two views that reaches much deeper. The data behind Google Maps you will never get to see, let alone touch. The data in OpenStreetMap, on the other hand, is there for anyone to download, use, make great products out of and, most importantly, edit and improve. That difference marks a cardinal characteristic of the Google Maps platform that the article failed to raise. Consider that itch scratched.

Post-Redaction Repair Tools: Maps and the Remap-A-Tron


Although the OpenStreetMap data license change from CC-BY-SA to ODbL is not formally done, the redaction bot has done its thing. It was activated in mid-July, after rigorous testing and months of development, to go over the entire planet, square degree by square degree, and delete any tainted node, way and relation or revert it to an untainted state. (An object is tainted, simply put, when it was at some point edited by a contributor who did not agree to the license change and the new contributor terms.)

OpenStreetMap Redaction bot progress map screenshot taken on July 17th as the bot was just progressing to Switzerland and re-running London.

Although less than 1% of all data was removed in the redaction process, some areas are much more strongly affected than others, depending on the number and previous activity of local ‘decliners’ as well as on the amount of pre-emptive remapping of tainted features. That remapping was made possible by tools that identified potentially tainted objects, such as Simon Poole’s Cleanmap and a JOSM plugin. (These tools have since been discontinued.)

Looking at the situation in the US, what we’re left with after the license change redaction is a map that has issues. Well, let’s say it has more issues now than it had before the redaction. Just looking at the street network, which is the single most important dimension of OpenStreetMap if you ask me, we don’t have to look very hard to find missing street segments, entire missing streets, and messy geometry. We had messy geometry before, because of the inherent messiness of the TIGER/Line data on which the US road network is based, but the redaction process left us with some really, ehm, interesting spaghetti in places.

Messy way geometries as a result of the redaction, near Los Angeles, CA.

Missing way segment as a result of the redaction, north of Honolulu, HI

And then there are the invisible results of the redaction: lost attributes such as speed limits, lane count, direction of flow – but I would suggest we fix the road geometries and topological integrity of the main road network first, so that the OpenStreetMap data is routable and the routes make sense. In order to do that effectively, the community needs tools. Fortunately, we have some available already. There’s Toby Murray’s map showing suspicious way segments last touched by the redaction bot. Kai Krueger provided a routing grid page that calculates route times and distances between major US cities, and compares those to reference data to expose potential routing issues. And there’s also the venerable OSM Inspector tool that provides a layer showing objects affected by the redaction process.

I also took a stab at a small set of tools that I hope will be helpful in identifying the remaining issues in the road network for the US: a Redacted / Deleted Ways map, and the Remap-A-Tron.

Redacted / Deleted Ways Map

Not unlike the redaction layer in OSM Inspector, this map duo exposes features affected by the redaction process, with a few notable differences. The main difference is that it has a clear focus on the US road network data. The Redacted Ways Map differentiates between Motorway, Trunk, Primary, Secondary and lower class roads, in an attempt to make it easier for remappers to seek out and prioritize the more important roads.

The Redacted Ways Map showing different road classes in appropriate coloring for easy prioritization of remapping efforts

The Deleted Ways map complements the Redacted Ways map. It only shows the way segments that were completely obliterated by the redaction bot. The focus on the road network means that only ways with a highway tag are visualized here.

The Deleted Ways map differentiates between ways that are likely to already have been remapped, and those that still need to be remapped.

If you look at this map, you will notice that there are two different classes of deleted ways, displayed in red and green respectively. The difference is in remapping status. The red ways are probably not remapped yet, while the green ways are likely to have been remapped already. I make this distinction by periodically comparing the deleted way geometries with newly mapped way geometries in the database. Using the Hausdorff distance as the main ingredient, I devised a heuristic that predicts reasonably accurately whether a way has already been remapped.

Remap-A-Tron

I came up with another way to leverage the knowledge of remappedness of the OpenStreetMap street network. Building on an idea I already had semi-developed a while ago, I built a web application that serves up one deleted way geometry overlaid onto a current OpenStreetMap basemap. Using a simple menu or handy keyboard shortcuts, you can flag the segment as already remapped, skip it for whatever reason, or load that area into JOSM or Potlatch right away to remap it.

The Remap-A-Tron serves up one non-remapped important way segment at a time for remappers to work on.

The data backend is refreshed every (US) night, so there should not be too much of a lag. Currently, the app serves up only non-remapped important segments: motorways down to tertiary class roads. There are about 1900 segments in this category as of the time of writing. If a handful of people spend an evening with the Remap-A-Tron, this should be down to zero in a matter of days. Once we complete the repairs on the important roads, I can tweak the app to include all other highways (~15K segments not remapped) and ultimately all ways (~52K segments not remapped).

I built this tool with versatility in mind. After the street remapping is done, it could easily be repurposed to surface other issues in OSM that are visible on the Mapnik map: self-intersecting ways, road connectivity issues, missing bridge tags, maybe even missing traffic lights? I would love to hear your ideas.

If this turns out to be useful and used, I will try and increase the coverage to the entire world. For that to happen, the Remap-A-Tron will need to find a home that is not the server next to my desk, though.

Happy remapping!

Life After Redaction: Detecting Remapped Ways


There are some pretty awesome tools out there to help with the remapping effort after the redaction bot made its sweep across the OpenStreetMap database. (Does this sound like Latin to you? Read up on the license change and the redaction process here.) Geofabrik’s OSM Inspector shows all the objects affected by the redaction. It is likely the most comprehensive view of the result of the license change redaction. Numerous other tools are listed on the Remapping wiki page. Most of these tools will show you, in some shape or form, the effects of the redaction process: which nodes, ways and relations have been deleted or reverted to a previous, ‘ODbL Clean’ version of the object.

I want to see if we can take it a step further and determine whether an object has already been remapped. This is useful for monitoring remapping progress, as well as for deciding where to focus when you want to contribute to the remapping effort.

For now, I am going to stick with ways. I think maintaining, or reinstating, a good quality routable road network is an important objective for OSM anyway, and especially at this point in time, when many roads are broken due to redaction.

Let’s start by locating a deleted way here in the US using my own Redaction Affected / Deleted Ways Map. That’s easy enough around severely affected Austin, TX:

 

I am going to use three comparison parameters to determine whether this way is likely to already have been remapped:

  1. The Hausdorff distance between the deleted geometry and any new geometries in that area
  2. The highway type of the deleted and any new geometries in that area
  3. The length of the deleted and any new geometries in that area

For this to work, I will need a table with all the ways deleted by the redaction bot. This is easy enough to compile by looking at the changesets created by the redaction account, but Frederik Ramm was kind enough to send the list of OSM IDs to me, so all I had to do was extract the deleted ways by ID from a pre-redaction database. The comparison can then be run on that table and a ways table from a current planet:

 
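As an illustration (not the actual query I ran), the core of the comparison could look something like this. It assumes a deleted_ways table (id, hstore tags, linestring geometry) extracted from the pre-redaction database, next to the regular osmosis snapshot ways table with the linestring extension loaded; distances and lengths are in degrees here, which is good enough for ranking candidates:

-- Sketch: for one deleted way, rank nearby current ways by Hausdorff
-- distance and compare highway type and length.
SELECT
    w.id                                              AS candidate_id,
    ST_HausdorffDistance(d.linestring, w.linestring)  AS hausdorff_dist,
    d.tags->'highway'                                 AS deleted_highway,
    w.tags->'highway'                                 AS candidate_highway,
    ST_Length(d.linestring)                           AS deleted_length,
    ST_Length(w.linestring)                           AS candidate_length
FROM
    deleted_ways d,
    ways w
WHERE
    d.id = 30760760                                    -- the deleted way from this example
    AND w.tags ? 'highway'
    AND ST_DWithin(d.linestring, w.linestring, 0.001)  -- only consider nearby ways
ORDER BY
    hausdorff_dist
LIMIT 5;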

It is immediately clear that this way is very likely already remapped if we look at the top candidate, with object ID 172171755. It has a very small Hausdorff distance to the deleted way 30760760, it is tagged with the same highway= type, and the lengths are almost identical.

Sure enough, when I fire up JOSM and load this area, it is clear that it has been remapped:

 

(Selected objects are version 1 and created after July 18, 2012).

I need to do some more testing and tweaking on the query, but I will soon integrate this into the Redaction Affected / Deleted Ways Map.

 

A Look At Stale OpenStreetMap Data


Lazy people go straight here and here. But you’re not that person, are you?

The Wikimania conference is around the corner, and it’s close to home this year – in Washington, DC. DC already has a lot of resident geo geeks and mappers. With all the open, collaborative knowledge minded people in town, there is huge momentum for an OpenStreetMap Mapping Party, and I am excited to help run it! The party is taking place on Sunday, July 15 – the unconference day of Wikimania 2012. (There will also be lots of other open mapping things going on, so do check the program. The entry for the mapping party is kind of sparse, but hey – it’s a mapping party. What more is there to say?)

The question of where to direct the eager mappers quickly arose. In the beginning, that would have been an easy one, as the map was without form and void. Nowadays, with the level of maturity the map and the OpenStreetMap community have reached, it can be a lot harder. DC, with all its past mapping parties, well-curated data imports and active mapping community, looks to be handsomely mapped. Picking a good destination for a mapping party requires a look under the hood.

A good indicator for areas that may need some mapping love is data staleness, defined loosely as the amount of time that has passed since someone last touched the data. A neighborhood with lots of stale data may have had one or more active mappers in the past, but they may have moved away or on to other things. While staleness is not a measure of completeness, it can point us to weaker areas and neighborhoods.

I did a staleness analysis for a selection of DC nodes and ways, keeping only the nodes that have tags associated with them and the ways that are not building outlines. (DC has seen a significant import of building outlines, which would mess up my analysis and the visualization.) And because today was procrastination day, I went the extra mile, made the visualization into a web map and put the thing on GitHub. I documented the (pretty straightforward) process step by step on the project wiki, for those who want to roll their own maps, and those interested in doing something useful with OpenStreetMap data other than just making a standard map.
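The wiki has the actual recipe; purely to make the idea concrete, here is a rough sketch (mine, not taken from the wiki) of how staleness could be bucketed for ways in an osmosis pgsnapshot database, using the same 90-day, 1-year and 2-year breaks as the maps below:

-- Sketch: classify non-building ways by time since their last edit,
-- using the tstamp column of the osmosis pgsnapshot schema.
SELECT
    id,
    linestring,
    CASE
        WHEN tstamp > now() - interval '90 days' THEN 'fresh'      -- touched in the last 90 days
        WHEN tstamp > now() - interval '1 year'  THEN 'aging'
        WHEN tstamp > now() - interval '2 years' THEN 'stale'
        ELSE 'very stale'
    END AS staleness
FROM
    ways
WHERE
    NOT (tags ? 'building');  -- leave out the imported building outlines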

Below are two screenshots – one for DC and another for Amsterdam, another city for which I did the analysis. (A brief explanation of what you see is below the images.) It takes all of 15 minutes from downloading the data to publishing the HTML page, so I could easily do more. But procrastination day is over. Buy me a beer or an Aperol spritz in DC and I’ll see what I can do.

About these screenshots: The top one shows the Mall and surroundings in DC, where we see that the area around the Capitol has not been touched much in the last few years, hence the dark purple color of a lot of the linear features there. The area around the White House on the other hand has received some mapping love lately, with quite a few ways bright green, meaning they have been touched in the last 90 days.

Similar differences in the Amsterdam screenshot below the DC one. The Vondelpark area was updated very recently, while the (arguably much nicer) Rembrandtpark is pale purple – last updates between 1 and 2 years ago.

Note that the individual tagged nodes are not visible in these screenshots. They would clutter up the visualization too much at this scale. In the interactive maps, you could zoom in to see those.

As always, I love to talk about this with you, so share your thoughts, ideas for improvements, and any ol’ comment.

Detecting Highway Trouble in OpenStreetMap


For the impatient: do you want to get to work solving highway trouble in OpenStreetMap right away? Download the Trouble File here!

Making pretty and useful maps with freely available OpenStreetMap data has never been so easy and so much fun to do. The website Switch2OSM is an excellent starting point, and with great tools like MapBox’s TileMill at your disposal, experimenting with digital cartography is almost effortless. Design bureau Stamen shows us some beautiful examples of digital cartography based on OpenStreetMap data. Google starting to charge for using their maps API provides a compelling push factor for some to start down this road, and the likes of Foursquare and Apple lead the way.

With all the eyes on OpenStreetMap as a source of pretty maps now, you would almost forget that the usefulness of freely available OpenStreetMap data extends way beyond that. One of the more compelling uses of OpenStreetMap data is routing and navigation, and things have been moving in that space. Skobbler has succeeded in making a tangible dent in the turn-by-turn navigation market for mobile devices in some countries, offering similar functionality to TomTom but at a much, much lower price point, using freely available OpenStreetMap data. MapQuest and CloudMade offer routing APIs based on OpenStreetMap. The new open source routing projects OSRM and MoNav show promise with very fast route calculation and a full feature set, and both are built from the ground up to work with OpenStreetMap data.

Routing puts very different, much stricter requirements on the source data than map rendering does. For a pretty map, it does not matter much if roads in the source data do not always connect or lack usage or turn restriction information. For routing, this makes all the difference. Topological errors and missing usage restriction metadata make for incorrect routes: they will direct you to turn left onto a one-way street, or to get off the highway for no apparent reason, even where there is no exit. That may seem funny if you read about it in a British tabloid, but it’s annoying when you’re on a road trip, and totally unacceptable if you depend on routing software for your business. So unless the data is pretty much flawless, we won’t see major providers of routing and navigation products make the switch to OpenStreetMap that some have so eagerly made for their base maps.

It turns out the data is not flawless. A study done at the University of Heidelberg shows that even for Germany, the country with the most prolific OpenStreetMap community by some distance, the data is not on par with commercial road network data when compared on key characteristics for routing. (The study does predict that it will be within a few months, though.)

Turning to the US, the situation is bound to be much worse. With a much smaller community that is spread pretty thin geographically (and in some regions, almost nonexistent), and the TIGER import as a very challenging starting point, there is no way that any routing based on OpenStreetMap data in the US is going to be anywhere near perfect. Sure, the most obvious routing related problems with the TIGER data were identified and weeded out in an early effort (led by the aforementioned CloudMade) shortly after the import, but many challenges still remain.

In an effort to make OpenStreetMap data more useful for routing in the US, I started to identify some of those challenges. Routing is most severely affected by problems with the primary road network, so I decided to start from there. Using some modest PostGIS magic, I isolated a set of Highway Trouble Points. The Trouble breaks down into four main classes:

Bridge Trouble

This is the case where a road crossing over or under a highway is not tagged as a bridge, and even worse, shares vertices with the highway, as illustrated below. This tricks routing software into thinking there is a turn opportunity there when there is not. This is bad enough if there actually is an exit, like in the example, but it gets really disastrous when there is not.

These cases take some practice to repair. It involves either deleting or ungluing the shared nodes, splitting the road that should be a bridge, and tagging that segment with bridge=yes and layer=1.

Imaginary Exit Trouble

Sometimes, a local road or track will be connected to a highway, tricking routing software into possibly taking a shortcut. Repairing these is simple: unglue the shared node and move the end of the local road to where it actually ends, looking at the aerial imagery.

Service Road Trouble

The separate roadways of a highway are sometimes connected to allow emergency vehicles to make a U-turn. Regular traffic is not allowed to use these connector service ways, but during the TIGER import they were usually tagged as public access roads, again potentially tricking routing software into taking a shortcut. I repair these by tagging them as highway=service and adding access restrictions (access=official, or access=no with emergency=yes).

Rest Area Trouble

This is of secondary importance, as rest areas are usually not connected to the road network except through their on- and off-ramps. Finding these Trouble points was an unexpected by-product of the query I ran on the data. What we have here are rest areas that are not tagged as such, instead just existing as a group of ‘residential’ roads connecting to the highway features, without a motorway_link. While we’re at it, we can clean these up nicely by tagging the on- and off-ramps as motorway_link, tagging the other road features as highway=service, adding the necessary oneway=yes tags, and identifying a node as highway=rest_area. It’s usually obvious from the aerial imagery whether to add toilets=yes, too.

I have done test runs of the query on OSM data for Vermont and Missouri. The query is performed on a PostGIS database with the osmosis snapshot schema, optionally with the linestring extension, and goes like this:

DROP TABLE IF EXISTS candidates;
CREATE TABLE candidates AS
    WITH agg_intersections AS
    (
        WITH intersection_nodes_wayrefs AS
        (
            WITH intersection_nodes AS
            (
                SELECT
                    a.id AS node_id,
                    b.way_id,
                    a.geom
                FROM
                    nodes a,
                    way_nodes b
                WHERE
                    a.id = b.node_id AND
                    a.id IN
                    (
                        SELECT 
                            DISTINCT node_id
                        FROM 
                            way_nodes
                        GROUP BY 
                            node_id
                        HAVING 
                            COUNT(1) = 2
                    )
            )
            SELECT
                DISTINCT a.node_id AS node_id,
                b.id AS way_id,
                b.tags->'highway' AS osm_highway,
                a.geom AS geom,
                b.tags->'ref' AS osm_ref
            FROM
                intersection_nodes a,
                ways b
            WHERE
                a.way_id = b.id
        )
        SELECT
            node_id,
            array_agg(way_id) AS way_ids,
            array_agg(osm_highway) AS osm_highways,
            array_agg(osm_ref) AS osm_refs
        FROM 
            intersection_nodes_wayrefs
        GROUP BY 
            node_id
    )
    SELECT
        a.* ,
        b.geom AS node_geom,
        -- COMMENT NEXT LINE OUT IF YOU DON'T HAVE
        -- OR WANT WAY GEOMETRIES
        c.linestring AS way_geom
    FROM 
        agg_intersections a, 
        nodes b,
        ways c
    WHERE
        (
            'motorway' = ANY(osm_highways)
            AND NOT
            (
                'motorway_link' = ANY(osm_highways)
                OR
                'service' = ANY(osm_highways)
                OR 
                'motorway' = ALL(osm_highways)
                OR 
                'construction' = ANY(osm_highways)
            )
        )    
    AND
        a.node_id = b.id
    AND
        c.id = ANY(a.way_ids);

The query took about a minute to run for Vermont and about 5 minutes for Missouri. For Vermont, it yielded 77 points and for Missouri 193 points. You can download these files here, but note that I have already done much of the cleanup work in these states since, as part of my thinking on how to improve the query. It still yields some false positives, notably points where a highway=motorway turns into a highway=trunk or highway=primary, see below.

UPDATE: The following query filters out these false positives; it uses the ST_StartPoint and ST_EndPoint PostGIS functions to determine whether two line features merely ‘meet’ at their endpoints:

DROP TABLE IF EXISTS candidates_noendpoints;
CREATE TABLE candidates_noendpoints AS

SELECT 
    DISTINCT c.node_id,
    c.node_geom
FROM
    ways a,
    ways b,
    candidates c
WHERE
    ST_Intersects(c.node_geom, a.linestring)
AND
    ST_Intersects(c.node_geom, b.linestring)    
AND NOT
(
    ST_Intersects(c.node_geom, ST_Union(ST_StartPoint(a.linestring),ST_Endpoint(a.linestring))) 
    AND
    ST_Intersects(c.node_geom, ST_Union(ST_StartPoint(b.linestring),ST_Endpoint(b.linestring))) 
)
;

This query requires the availability of line geometries for the ways, obviously.
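If your database does not have line geometries yet, they come from the optional linestring extension to the osmosis snapshot schema, which is loaded when the database is created; something along these lines, with paths and the database name adjusted to your setup:

psql -U osm -d osm -f /path/to/osmosis/script/pgsnapshot_schema_0.6.sql
psql -U osm -d osm -f /path/to/osmosis/script/pgsnapshot_schema_0.6_linestring.sql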

UPDATE 2: The query as-is made the PostgreSQL server croak because it ran out of memory, so I had to redesign it to rely much less on in-memory tables. I will provide the updated query to anyone interested. I’m going to leave the original SQL up there; it was meant to convey the approach and it still does. The whole US trouble file is available as an OSM XML file from here.

I plan to make the Highway Trouble files available on a regular basis for all 50 states if there’s interest in them. And as always I’m very interested to hear your opinion: any Trouble I am missing? Ways to improve the query? Let me know.

A self-updating OpenStreetMap database of US bridges – a step-by-step guide.


I had what I thought was a pretty straightforward use case for OpenStreetMap data:

I want all bridges in the US that are mapped in OpenStreetMap in a PostGIS database.

There are about 125,000 of them – for now loosely defined as ways that have the ‘bridge’ tag. So on the scale of OpenStreetMap data it’s a really small subset. In terms of the tools and processes needed, the task seems easy enough, and as long as you are satisfied with a one-off solution, it really is. You would need only four things:

  1. A planet file
  2. A boundary polygon for the United States
  3. A PostGIS database loaded with the osmosis snapshot schema and the linestring extension
  4. osmosis, the OpenStreetMap ETL swiss army tool.

That, and a single well-placed osmosis command:

bzcat planet-latest.osm.bz2 | \
osmosis --rx - \
--bp us.poly \
--tf accept-ways bridge=* \
--tf reject-relations \
--used-node \
--wp database=bridges user=osm password=osm

This decompresses the planet file and pipes the output to osmosis. Osmosis’s --read-xml task consumes the XML stream and passes it to a --bounding-polygon task that clips the data using the US bounding polygon, a couple of --tag-filter tasks that throw out all relations and all ways except those tagged bridge=* (there’s a negligible number of ways tagged bridge=no, but catching all the different ways of tagging a ‘true’ value here is more work than it’s worth, if you ask me), a --used-node task that throws out all the nodes except those that are used by the ways we are keeping, and finally a --write-pgsql task that writes all the objects to the PostGIS database. (Osmosis can be overwhelming at first with its plethora of tasks and arguments, but if you break it down it’s really quite straightforward. There’s also a graphical wrapper around osmosis called OSMembrane that may help to make this tool easier to understand and master.)

But for me, it didn’t end there.

OpenStreetMap data is continuously updated by more than half a million contributors around the world. People are adding, removing and changing features in OpenStreetMap around the clock. And those changes go straight into the live database. There’s no release schedule, no quality assurance. Every time one of those half a million people clicks ‘save’ in one of the OpenStreetMap editors there is, for all intents and purposes, a new OpenStreetMap version. That means the bridges database I just built is already obsolete even before the import is complete. For my yet-to-be-disclosed purpose, that would not be acceptable. So let me specify my goal a little more precisely:

I want all bridges in the US that are mapped in OpenStreetMap in a PostGIS database that stays as up-to-date as possible, reflecting all the latest changes.

There is not one single ready-made solution for this, it turned out, so let me describe how I ended up doing it. It may not be the most common OpenStreetMap data processing use case out there, but it’s going to be useful for thematic overlay maps, for example, if nothing else – even though the final step of importing into a geospatial database may need some tweaking.

After some less successful attempts I settled on the following workflow:

Strategy for keeping an up-to-date geographical and functional OpenStreetMap extract

This workflow uses a handful of specialized tools:

  1. osmosis, that we’re already familiar with
  2. osmconvert – a fast, comprehensive OpenStreetMap data file patching, converting and processing tool
  3. osmfilter – a tool to filter OpenStreetMap data by tags, tag values or feature type
  4. osmupdate – a tool to automate patching local OpenStreetMap data files, including downloading the change files from the server.

The trio osmconvert / osmfilter / osmupdate can together do most of the things osmosis can do, but does so a heck of a lot faster, and is more flexible in a few key aspects that we will see shortly.

Let’s go through the numbered steps in the diagram one by one, explaining how each step is executed and how it works.

1. Planet file – The complete, raw OpenStreetMap data. A fresh one is made available every week on the main OpenStreetMap server, but your area of interest may be covered at one of the many mirrors, which can save you some download time and bandwidth. There is no planet mirror for the entire US, so I started with the global planet file. If you have a planet file that matches your area of interest, you can skip step 3 (but not the next step).

2. Bounding polygon – regardless whether you find an initial planet file that matches your area of interest nicely, you will need a bounding polygon in OSM POLY format for the incremental updates. You’ll find ready-made POLY files in several places, including the OpenStreetMap SVN tree and GeoFabrik (read the README though), or you can create them yourself from any file that OGR can read using ogr2poly.

3. Filter area of interest – to save on disk space, memory usage and processing time, we’re going to work only with the data that is inside our area of interest. There are quite a few ways to create geographical extracts from a planet file, but we’re going to use osmconvert for two reasons: a) it’s fast! (osmosis takes about 4 hours and 45 minutes to do this, osmconvert takes 2 hours. This is on an AMD Phenom II X4 965 machine with 16GB RAM) b) it outputs the o5m format for which the next tool in the chain, osmfilter, is optimized.

bzcat planet-latest.osm.bz2 | ./osmconvert - -B=us.poly -o=us.o5m

4. Filter features of interest – The next step is creating a file that holds only the features that we are interested in. We could have done this together with the previous step in one go, but as the diagram shows we will need the output file of step 3 (the US planet file) for the incremental update process. Here, osmfilter comes into play:

osmfilter us.o5m --keep= --keep-ways="bridge=" --out-o5m > us-bridges.o5m

osmfilter works in much the same way as the osmosis --tag-filter task. It accepts arguments to drop specific feature types, or to keep features that have specific tags. In this case, we want to drop everything (--keep=) but the ways that have the key ‘bridge’ (--keep-ways="bridge="). We have osmfilter output the result in the efficient o5m format. (o5m lies in between the OSM XML and PBF formats in terms of file size, and was designed as a compromise between the two. One of the design goals for the o5m format was the ability to merge two files really fast, something we will be relying on in this process.)

5. Convert to pbf – The trio osmconvert / osmfilter / osmupdate is designed to handle file-based data and has no interface for PostGIS, so we need to fall back on osmosis for this step. As osmosis cannot read o5m files, we need to convert to pbf first:

osmconvert us-bridges.o5m -b=-180,-90,180,90 --drop-broken-refs -o=us-bridges.osm.pbf

Wait a minute. A lot more happened there than just a format conversion. Let’s take a step back. Because we’re working with a geographical extract of the planet file, we need to be concerned about referential integrity. Because way objects in OpenStreetMap don’t have an inherent geometry attached to them, any process looking to filter ways based on a bounding box or polygon needs to go back to the referenced nodes and see if they are within the bounds. It then needs to decide what to do with ways that are partly within the bounds: either cut them at the bounds, dropping all nodes that lie outside the bounds (‘hard cut’), include the entire way (‘soft cut’), or drop the entire way. As the --drop-broken-refs argument name suggests, we are doing the latter here. This means that data is potentially lost near the bounds, which is not what we actually want. We need to do it this way though, because the planet update (step 7) cannot take referential integrity into account without resorting to additional (expensive) API calls. (Consider this case: a way lies entirely outside the bounds at t0. Between updates, one of its nodes is moved inside the bounds, so the way should now be included in the extract. But the old file does not contain the rest of the nodes comprising that way, nor are they in the delta files that are used in the update process – so the full geometry of the new way cannot be known.)

One way to compensate for the data loss is to buffer the bounding polygon. That yields false positives near the border, but that may be acceptable; it is how I solved it. What’s best for your case depends on your scenario.
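How much to buffer depends on your data and your tolerance for false positives. As one possible approach (an illustration only, assuming a GDAL build with SpatiaLite support and a source boundary shapefile called us.shp), the buffering could be done with ogr2ogr before converting the result to a POLY file with ogr2poly:

# Buffer the boundary by roughly half a degree (tune to taste),
# then feed the result to ogr2poly as usual.
ogr2ogr -f "ESRI Shapefile" us_buffered.shp us.shp -dialect sqlite -sql "SELECT ST_Buffer(geometry, 0.5) AS geometry FROM us"

Whether half a degree is enough (or far too much) depends on how long the ways near your border typically are.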

The -b=-180,-90,180,90 option defining a global bounding box seems superfluous, but is actually necessary to circumvent a bug in the --drop-broken-refs task that would leave only nodes in the data.

6. Initial database import – This is a straightforward step that can be done with a simple osmosis command:

osmosis --rb us-bridges.osm.pbf --wp database=bridges user=osm password=osm

This reads the pbf file we just created (--rb) and writes it directly to the database ‘bridges’ using the credentials provided (--wp). If you want way geometries, be sure to load the linestring schema extension in addition to the snapshot schema when creating the database:

psql -U osm -d bridges -f /path/to/osmosis/script/pgsnapshot_schema_0.6.sql
psql -U osm -d bridges -f /path/to/osmosis/script/pgsnapshot_schema_0.6_linestring.sql

osmosis will detect this on import; there is no need to tell it to create the line geometries.

Note that we are using the direct write task (--wp), which is fine for smaller datasets. If your dataset is much larger, you’re going to see real performance benefits from using the dump task (--wpd) and loading the dumps into the database using the load script provided with osmosis.
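For reference, a dump-and-load run might look roughly like this (a sketch; task and script names are taken from the osmosis distribution, so double-check them against your version):

osmosis --rb us-bridges.osm.pbf --wpd directory=/tmp/bridges-dump
cd /tmp/bridges-dump
psql -U osm -d bridges -f /path/to/osmosis/script/pgsnapshot_load_0.6.sql

The load script reads the dump files from the current directory, hence the cd.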

Now that we have the initial import done, we can start the incremental updates. This is where the real fun is!

7. Updating the planet file – This is where osmupdate really excels in flexibility over osmosis. I had not used this tool before and was amazed by how it Just Works. What osmupdate does is look at the input file’s timestamp, intelligently grab all the daily, hourly and minutely diff files from the OpenStreetMap server, and apply them to generate an up-to-date output file. It relies on the osmconvert program that we used before to do the actual patching of the data files, so osmconvert needs to be on your path for it to function. You can pass osmconvert options in, which allows us to apply the bounding polygon in one go:

osmupdate us.o5m us-new.o5m -B=us.poly

8. Filter features of interest for the updated planet file – This is a repetition of step 4, but applied to the updated planet file:

osmfilter us-new.o5m --keep= --keep-ways="bridge=" --out-o5m > us-bridges-new.o5m

We also drop the broken references from this new data file:

osmconvert us-bridges-new.o5m -b=-180,-90,180,90 --drop-broken-refs -o=us-bridges-new-nbr.o5m

9. Derive a diff file – We now have our original bridges data file, derived from the planet we downloaded, and the new bridges file derived from the updated planet file. What we need next is a diff file we can apply to our database. This file should be in the OSM Change (osc) file format, the same format in which the planet diffs we applied in step 7 are published. This is another task at which osmconvert excels: it can derive a change file from two o5m input files really fast:

osmconvert us-bridges.o5m us-bridges-new-nbr.o5m --diff --fake-lonlat -o=diff-bridges.osc

Again, there’s a little more going on than just deriving a diff file, isn’t there? What is that --fake-lonlat argument? As it turns out, osmconvert creates osc files that don’t have coordinate attributes for nodes that are to be deleted. Including them would be unnecessary: you really only need a node ID to know which node to delete, so there is no need to repeat the node’s other attributes. But some processing software, including osmosis, requires these attributes to be present, even if the node is in a <delete> block.

10. Update the database – With the osc file defining all the changes since the initial import available, we can instruct osmosis to update the database:

osmosis --rxc diff-bridges.osc --wpc database=bridges user=osm password=osm

..And we’re done. Almost. To keep the database up-to-date, we need to automate steps 7 through 10, and add some logic to move and delete a few files to create a consistent initial state for the replication process. I ended up creating a shell script for this and adding a crontab entry to have it run every three hours. This interval seemed like a good trade-off between server load and data freshness. The incremental update script takes about 11 minutes to complete: about 6 minutes for updating the US planet file, 4 minutes for filtering the bridges, and less than a minute to derive the changes, patch the database and clean up. Here’s some log output from the script (which, by the way, I’d be happy to share with anyone interested in using or improving it), followed by a simplified sketch of what it does:

Tue Mar  6 03:00:01 MST 2012: update us bridges script 20120304v5 starting...
Tue Mar  6 03:00:01 MST 2012: updating US planet...
Tue Mar  6 03:06:28 MST 2012: filtering US planet...
Tue Mar  6 03:10:11 MST 2012: dropping broken references...
Tue Mar  6 03:10:12 MST 2012: deriving changes...
Tue Mar  6 03:10:13 MST 2012: updating database...
Tue Mar  6 03:10:16 MST 2012: cleaning up...
Tue Mar  6 03:10:30 MST 2012: finished successfully in 629 seconds!
Tue Mar  6 03:10:30 MST 2012:  215744 bridges in the database
Tue Mar  6 06:00:01 MST 2012: update us bridges script 20120304v5 starting...
Tue Mar  6 06:00:01 MST 2012: updating US planet...
Tue Mar  6 06:06:10 MST 2012: filtering US planet...
Tue Mar  6 06:10:38 MST 2012: dropping broken references...
Tue Mar  6 06:10:40 MST 2012: deriving changes...
Tue Mar  6 06:10:40 MST 2012: updating database...
Tue Mar  6 06:10:43 MST 2012: cleaning up...
Tue Mar  6 06:10:53 MST 2012: finished successfully in 652 seconds!
Tue Mar  6 06:10:53 MST 2012:  215748 bridges in the database
Tue Mar  6 09:00:02 MST 2012: update us bridges script 20120304v5 starting...
Tue Mar  6 09:00:02 MST 2012: updating US planet...
Tue Mar  6 09:06:47 MST 2012: filtering US planet...
Tue Mar  6 09:11:23 MST 2012: dropping broken references...
Tue Mar  6 09:11:24 MST 2012: deriving changes...
Tue Mar  6 09:11:26 MST 2012: updating database...
Tue Mar  6 09:11:29 MST 2012: cleaning up...
Tue Mar  6 09:11:44 MST 2012: finished successfully in 702 seconds!
Tue Mar  6 09:11:44 MST 2012:  215749 bridges in the database

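A heavily simplified sketch of what the script does (the real one adds logging, locking, error handling and the initial file shuffling mentioned above):

#!/bin/sh
set -e

# 7. Bring the US extract up to date (osmconvert needs to be on the PATH).
osmupdate us.o5m us-new.o5m -B=us.poly

# 8. Filter the bridges and drop broken references.
osmfilter us-new.o5m --keep= --keep-ways="bridge=" --out-o5m > us-bridges-new.o5m
osmconvert us-bridges-new.o5m -b=-180,-90,180,90 --drop-broken-refs -o=us-bridges-new-nbr.o5m

# 9. Derive a change file between the previous and the new bridges extract.
osmconvert us-bridges.o5m us-bridges-new-nbr.o5m --diff --fake-lonlat -o=diff-bridges.osc

# 10. Apply the change file to the database.
osmosis --rxc diff-bridges.osc --wpc database=bridges user=osm password=osm

# Rotate files so the next run diffs against what is now in the database.
mv us-new.o5m us.o5m
mv us-bridges-new-nbr.o5m us-bridges.o5m
rm -f us-bridges-new.o5m diff-bridges.osc

The crontab entry to run something like this every three hours could be as simple as 0 */3 * * * /path/to/update-us-bridges.sh (the path is hypothetical, of course).
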
Wrapping up

I’ll spend another blog post on my purpose of having this self-updating bridges database sometime soon. It has something to do with comparing and conflating bridges between OpenStreetMap and the National Bridge Inventory. The truth is I am not quite sure how that should be done just yet. I already did some preliminary work on conflation queries in PostGIS and that looks quite promising, but not promising enough (by far) to automate the process of importing NBI data into OSM. Given that NBI is a point database, and bridges in OSM are typically linear features, this would be hard to do anyway.

I’d like to thank Markus Weber, the principal author of osmupdate / osmconvert / osmfilter, for his kind and patient help with refining the process, and for creating a great tool set!

The State Of The OpenStreetMap Road Network In The US


Looks can be deceiving – we all know that. Did you know it also applies to maps? To OpenStreetMap? Let me give you an example.

Head over to osm.org and zoom in to an area outside the major metros in the United States. What you’re likely to see is an OK looking map. It may not be the most beautiful thing you’ve ever seen, but the basics are there: place names, the roads, railroads, lakes, rivers and streams, maybe some land use. Pretty good for a crowdsourced map!

What you’re actually likely looking at is a bunch of data that is imported – not crowdsourced – from a variety of sources ranging from the National Hydrography Dataset to TIGER. This data is at best a few years old and, in the case of TIGER, a topological mess with sometimes very little bearing on the actual ground truth.

TIGER alignment example

The horrible alignment of TIGER ways, shown on top of an aerial imagery base layer. Click on the image for an animation of how this particular case was fixed in OSM. Image from the OSM Wiki.

For most users of OpenStreetMap (not the contributors), the only thing they will ever see is the rendered map. Even for those who are going to use the raw data, the first thing they’ll refer to to get a sense of the quality is the rendered map on osm.org. The only thing that the rendered map really tells you about the data quality, however, is that it has good national coverage for the road network, hydrography and a handful of other feature classes.

To get a better idea of the data quality that underlies the rendered map, we have to look at the data itself. I have done this before in some detail for selected metropolitan areas, but not yet on a national level. This post marks the beginning of that endeavour.

I purposefully kept the first iteration of the analysis simple, focusing on the quality of the road network, using the TIGER import as a baseline. I did opt for a fine geographical granularity, choosing counties (and equivalent) as the geographical unit. I designed the following analysis metrics (a rough sketch of how they could be computed follows the list):

  • Number of users involved in editing OSM ways – this metric tells us something about the amount of peer validation. If more people are involved in the local road network, there is a better chance that contributors are checking each other’s work. Note that this metric covers all linear features found, not only actual roads.
  • Average version increase over the TIGER imported roads – this metric provides insight into the amount of work done on improving TIGER roads. A value close to zero means that very little TIGER improvement was done for the study area, which means that the alignment and topology problems are likely mostly still there.
  • Percentage of TIGER roads – this says something about contributor activity entering new roads (and paths). A lower value means more new roads added after the TIGER import. This is a sign that more committed mappers have been active in the area — entering new roads arguably requires more effort and knowledge than editing existing TIGER roads. A lower value here does not necessarily mean that the TIGER-imported road network has been supplemented with things like bike and footpaths – it can also be caused by mappers replacing TIGER roads with new features, for example as part of a remapping effort. That will typically not be a significant proportion, though.
  • Percentage of untouched TIGER roads – together with the average version increase, this metric shows us the effort that has gone into improving the TIGER import. A high percentage here means lots of untouched, original TIGER roads, which is almost always a bad thing.

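The numbers below were produced with osmjs (see the technical background at the end of this post), but to make the metrics concrete, here is roughly how the same four values could be computed for a county extract loaded into an osmosis pgsnapshot database. Identifying TIGER-imported ways by their tiger:cfcc tag is an assumption that holds for most, but not all, of the import, and the denominator choices are mine:

-- Sketch: the four metrics for one study area, computed over the 'ways'
-- table of a pgsnapshot database loaded with a county extract.
SELECT
    COUNT(DISTINCT user_id) AS users_involved,
    -- average number of edits on top of the imported version 1
    AVG(CASE WHEN tags ? 'tiger:cfcc' THEN version - 1 END) AS avg_tiger_version_increase,
    100.0 * SUM(CASE WHEN tags ? 'tiger:cfcc' THEN 1 ELSE 0 END)
          / COUNT(*) AS pct_tiger_ways,
    -- untouched means still at version 1, relative to all TIGER ways
    100.0 * SUM(CASE WHEN (tags ? 'tiger:cfcc') AND version = 1 THEN 1 ELSE 0 END)
          / NULLIF(SUM(CASE WHEN tags ? 'tiger:cfcc' THEN 1 ELSE 0 END), 0) AS pct_untouched_tiger_ways
FROM ways;
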
Analysis Results

Below are map visualizations of the analysis results for these four metrics, on both the US State and County levels. I used the State and County (and equivalent) borders from the TIGER 2010 dataset for defining the study areas. These files contain 52 state features and 3221 county (and equivalent) features. Hawaii is not on the map, but the analysis was run on all 52 areas (the 50 states plus DC and Puerto Rico – although the planet file I used did not contain Puerto Rico data, so technically there are valid results for 51 study areas on the state level).

I will let the maps mostly speak for themselves. Below the results visualisations, I will discuss ideas for further work building on this, as well as some technical background.

Map showing the number of contributors to ways, by state

Map showing the average version increase over TIGER imported ways, by state

Map showing the percentage of TIGER ways, by state

Map showing the percentage of untouched TIGER ways, by state

Map showing the number of users involved in ways, by county

Map showing the average version increase over TIGER imported ways, by county

Map showing the percentage of TIGER ways, by county

Map showing the percentage of untouched TIGER ways, by county

Further work

This initial stats run for the US motivates me to do more with the technical framework I built for it. With that in place, other metrics are relatively straightforward to add to the mix. I would love to hear your ideas; here are some of my own.

Breakdown by road type – It would be interesting to break the analysis down by way type: highways / interstates, primary roads, other roads. The latter category accounts for the majority of the road features, but does not necessarily see the most intensive maintenance by mappers. A breakdown of the analysis will shed some light on this.

Full history – For this analysis, I used a snapshot Planet file from February 2, 2012. A snapshot planet does not contain any historical information about the features – only the current feature version is represented. In a next iteration of this analysis, I would like to use the full history planets that have been available for a while now. Using full history enables me to see how many users have been involved in creating and maintaining ways through time, and how many of them have been active in the last month / year. It also offers an opportunity to identify periods in time when the local community has been particularly active.

Relate users to population / land area – The absolute number of users who contributed to OSM in an area is only mildly instructive. It’d be more interesting if that number were related to the population of that area, or to the land area. Or a combination. We might just find out how many mappers it takes to ‘cover’ an area (i.e. get and keep the other metrics above certain thresholds).

Routing specific metrics – One of the most promising applications of OSM data, and one of the most interesting commercially, is routing. Analyzing the quality of the road network is an essential part of assessing the ‘cost’ of using OpenStreetMap in lieu of other road network data that costs real money. A shallow analysis like I’ve done here is not going to cut it for that purpose, though. We will need to know about topological consistency, correct and complete mapping of turn restrictions, grade separations, lanes, traffic lights, and other salient features. There is only so much of that we can do without resorting to comparative analysis, but we can at least devise some quantitative metrics.

Technical Background

  • I used the State and County (and equivalent) borders from the TIGER 2010 dataset to determine the study areas.
  • I used osm-history-splitter (by Peter Körner) to do the actual splitting. For this, I needed to convert the TIGER shapefiles to OSM POLY files, for which I used ogr2poly, written by Josh Doe.
  • I used Jochen Topf’s osmium, more specifically osmjs, for the data processing. The script I ran on all the study areas lives on GitHub.
  • I collated all the results using some python and bash hacking. I used the PostgreSQL COPY function to import the results into a PostgreSQL table.
  • Using a PostgreSQL view, I combined the analysis result data with the geometry tables (which I previously imported into Postgis using shp2pgsql).
  • I exported the views as shapefiles using ogr2ogr, which also offers the option of simplifying the geometries in one step (useful because the non-generalized counties shapefile is 230MB and takes a long time to load in a GIS).
  • I created the visualizations in Quantum GIS, using its excellent styling tool. I mostly used a quantiles distribution (for classes with roughly equal feature counts), which I tweaked to get prettier class breaks.

I’m planning to do an informal session on this process (focusing on the osmjs / osmium bit) at the upcoming OpenStreetMap hack weekend in DC. I hope to see you there!