Crumpled City Maps, Made With OpenStreetMap (And Other Data?)

We got this freaky crumpled map as a gift to bring on our upcoming trip to Rome. It’s made with OpenStreetMap data.

Crumpled City Map - Rome

I wonder if they used any additional data and if so, did the publishers follow the directives of the CC-BY-SA license of the OpenStreetMap data? I see things appear on my crumpled map that are not currently in OpenStreetMap. An example is Parco Savello:

Crumpled Map - detail

OpenStreetMap does not have this park at all:

Map data (c) OpenStreetMap and contributors.

If they did use other sources together with OpenStreetMap to make these maps, the resulting derived database should also be licensed under CC-BY-SA. Does anyone know if that is the case?

Looking ahead – Important Topics For SOTM US 2012

I posted this message to several lists this morning on behalf of the State Of The Map US 2012 Committee and the OpenStreetMap US Chapter Board:

Call for bids to host the second State Of The Map US Conference

The OpenStreetMap US Chapter is currently soliciting bids for 
hosting and organizing the State Of The Map US 2012 conference
(SOTMUS12), to be held in the second half of the year. We invite
you to put in a bid for SOTMUS12, considering the criteria
outlined on the OpenStreetMap wiki - see the links below.
The US Chapter board will work closely with the selected bid team
to make SOTMUS12 a success.
Please enter your bids as a sub-page on
The bid criteria are here:

The deadline for entering your bid is 31 January 2012. The winning
bid will be announced by the SOTMUS12 committee on 10 February 2012.

If you have any questions, do not hesitate to contact the bid
committee through Martijn van Exel,

We look forward to receiving your bids!

The SOTMUS12 bid committee and the OSM US Chapter

That means we are going to have another SOTM US this year! I could not be more excited and am looking forward to seeing the bids for hosting this wonderful event come in.

The first and, so far, only proper SOTM US was held in Atlanta in 2010. The ‘main’ SOTM conference was held in Denver last September, so there was no real need for a separate SOTM US then – but now we’re in 2012, so a new regional State Of The Map is called for. Off the top of my head I can identify three key challenges more or less specific to the US that I would personally love to see addressed at this conference:

  • Community Expansion – Although the US community has some really committed members who spend countless hours improving the US map, we need to think about expanding the community. First and foremost, we need many more local communities and thus more local community leaders. There are still major cities without a real OSM community (people getting together, organizing local events) and it is my belief that without that, you will never get the best map there can be. Second, we need to find ways to leverage more casual mapping. This is a global challenge for OpenStreetMap, but particularly in the US, where we lean so heavily on imported data, there is lots of room for microtasking and single-purpose editing tools to allow people who don’t want to invest a huge amount of time in learning new skills to contribute by fixing small things.
  • Corporate Interest – Early corporate adopters of OpenStreetMap data in the US – CloudMade, MapQuest, Microsoft – have all contributed back to the community in awesome ways – with community support, sponsoring, free tools, data mirrors, aerial imagery and lots more. 2012 may very well see more corporate interest. How is OpenStreetMap going to channel that interest and ensure that we will continue to keep that spirit of mutual benefit alive? What can we, as the OpenStreetMap community, do to accommodate large scale data consumers? Is it our job to do that? SOTM US will be a great place to address those questions.
  • Government Collaborations – With pretty much all US government produced data being in the public domain, the mutual relevance of OpenStreetMap and the US government institutions tasked with maintaining geospatial data is almost self-evident. The USGS has extensive experience with crowdsourcing techniques and has been working with OpenStreetMap for some time now. With the government budget situation being as it is (do I hear someone mention Iowa?), I expect an increasing number of government agencies will start looking into crowdsourcing as a way to keep up with changes in the real world. Collaborating with OpenStreetMap would be a good way for them to jumpstart crowdsourcing initiatives, but there are many open questions around authoritativeness and licensing. Again, State Of The Map would be an excellent platform to discuss them.

I am just scratching the surface with these three topics. I am sure that we will see a varied and interesting program that will attract community members, techies, delegates from (mapping) companies, academia and government alike, and I hope that you will be among them at SOTM US 2012 – wherever it will turn out to be!

Happy bidding!

Aerial Imagery for OpenStreetMap

Anyone can contribute to OpenStreetMap, and there are several complementary techniques for mapping. One of them is to use aerial or satellite imagery that has been cleared for use as reference material for OpenStreetMap. Yahoo! was the first company to grant such a license on their worldwide imagery to OpenStreetMap. More recently, OpenStreetMappers have been able to trace over Bing aerial and satellite imagery, which meant an increase in resolution and coverage for most regions in the world.

Editing OpenStreetMap with the help of Bing imagery

There have also been special cases where companies have provided imagery resources for smaller geographical areas, as was the case after the Haiti earthquake. Many national, regional and local governments have aerial imagery available as well, and some have provided their imagery to OpenStreetMap for mapping purposes (here‘s an overview).

Why are local OpenStreetMap groups and individuals engaging with their governments to gain access to aerial imagery when OpenStreetMap already has Bing as a great, worldwide resource? There are many reasons. Bing does not cover the whole world in high resolution imagery, which is needed to map roads, buildings, sports facilities and most other features you see on the map. Where high resolution imagery is not available from Bing, local resources may be able to fill the gap.

Another reason for wanting to use local aerial imagery is currency. Bing imagery can be years old – the age of the imagery varies greatly. Without local knowledge it can be hard to establish what the age of the Bing imagery for your region is, although this tool can help. If the Bing imagery is more than a few years old for the region you want to map, you will not be able to derive the latest changes – new roads, a new residential area, a torn down baseball stadium – from that imagery. Mappers who are unaware of the age of the imagery may put in features that are no longer there, or destroy more current information thinking they are actually ‘improving’ the map!

So even though Bing is a great resource for OpenStreetMap, access to local aerial imagery may help improve OpenStreetMap a lot! So if you’re an OpenStreetMap contributor, check with your local government’s GIS department and see what they have. They may be willing to grant OpenStreetMap access to it. If you represent a local, regional or national government and have imagery available, contact a local OpenStreetMap group (or me if you can’t find one) and see how you can help make the free and open world map even better!


I got in touch with the GIS department of the state of Utah right after I moved here, and it turns out they have some great resources. They have 1m imagery from 2011 covering the entire state and 25cm imagery from 2009 covering the major urban areas. You can see the importance of current imagery in this side-by-side comparison of Bing imagery (top) and the state imagery (bottom):

How to add an imagery layer to JOSM

The screenshots are taken right from the advanced OpenStreetMap editor JOSM. It’s really quite straightforward to add a new background image resource to JOSM, provided it’s available as either a WMS service or preferably a tiled map service following the TMS protocol. To add an image resource, go to Edit > Preferences… (F12) and click on the imagery tab (the green one that says WMS TMS). The top part represents all the imagery layers that are built into JOSM by default. The bottom part represents the active layers, and you can add your own there by clicking the grey + next to the list. A smaller window pops up; this is where you add the imagery layer details: its name (you can make that up) and the base URL of the WMS or TMS service.

For WMS, that’s the URL of the service without any of the WMS parameters. Clicking ‘Get Layers’ will retrieve the available layers from the service and present them. All you need to do is select the layer you want to add and press OK. The layer should now appear in the Imagery menu of JOSM. In the case of the Utah imagery, the URL is

For TMS, the process is a little different: you need to figure out which parts of the URL represent the zoom level and the x and y location of the image tile, and replace them with {zoom}, {x} and {y}. Look at the default layers if you get confused. The provider of the service will also be able to help you figure it out!
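To make the template mechanics concrete, here is a small Python sketch of how such a {zoom}/{x}/{y} template expands into an actual tile request. The service URL is invented for illustration; the lat/lon-to-tile conversion is the standard slippy-map formula.

```python
import math

def deg2tile(lat, lon, zoom):
    """Standard slippy-map conversion from lat/lon to tile x/y at a zoom level."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

def expand_template(template, zoom, x, y):
    """Fill a JOSM-style TMS template with concrete tile coordinates."""
    return (template.replace("{zoom}", str(zoom))
                    .replace("{x}", str(x))
                    .replace("{y}", str(y)))

# Hypothetical imagery service, for illustration only
template = "http://tiles.example.org/imagery/{zoom}/{x}/{y}.png"
x, y = deg2tile(40.77, -111.89, 12)  # somewhere around Salt Lake City
print(expand_template(template, 12, x, y))
```

JOSM does this substitution for you for every tile it requests; the sketch just shows what the placeholders stand for.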

Don’t forget to always get permission from the service provider first! The fact that the service is publicly available does not mean that it’s OK to use it for mapping in OpenStreetMap! Discuss with the provider of the imagery, and if in doubt, get in touch with the community through the legal-talk mailing list or IRC. And don’t forget to document the imagery on the wiki: edit the relevant page for your region, town or country, and add the resource to the Imagery Overview page.

Voting for the 2012 OpenStreetMap US Chapter Board is open

UPDATE: I got elected! Looking forward to this challenge a lot. I wish there was a stronger mandate from the community – there really weren’t very many votes. This is one of the things we will focus on: getting more people to join the OpenStreetMap US Chapter, and give them good reasons why they should.

The board of the US Chapter has run its term and it’s time for elections! After a nomination period which ended a second before midnight today, the voting period has now begun. This means that any paid up member of the OpenStreetMap US Chapter can cast their vote for up to five of the nominees. The details are all here, including the list of nominees.

Needless to say, if you care about OpenStreetMap in the US, be it as a community member or as a (potential) user, it is in your interest to have a US Chapter board in place that serves to support the US OpenStreetMap community and its user base. If you’re not a member of the US Chapter yet, you can sign up now and be eligible to vote immediately – voting is open until this Sunday.

If you want to learn more about the US Chapter, head over to the web site. The OpenStreetMap Wiki has some more information about the proposed relationship between the local chapters and the OpenStreetMap Foundation. The Foundation also runs a Working Group that is looking into formalizing the Local Chapter idea.

Surprising – to me at least – is that only one of the current board members is running again. With so little experience carried over, it’s going to be a big challenge for the new board to settle into a good routine, picking up best practices from the current and past boards.

Also, I would have liked to see more candidates to choose between.

As I am running for a seat on the board as well, I will not discuss the individual nominees. Luckily, most of them have responded to my call to put up a manifesto, so you can make an informed voting decision if you – like me – do not know all the candidates personally.

What are you waiting for? Sign up if you haven’t already, and vote!

KU versus KSU – the OpenStreetMap showdown

Kate Chapman asked on the OpenStreetMap US Facebook page where we would all be mapping this Labor Day weekend. I’m in Lawrence, KS right now, so that’s where I’m mapping. I was surprised to see that the KU campus is not very well represented in OpenStreetMap at all – unlike most university campuses I’ve seen. I was lucky enough to get here just in time for the start of the college football season to at least put in Memorial Stadium and the surrounding lots, but there’s lots more to do. As fellow OpenStreetMapper Toby Murray pointed out, neighboring KSU is much better mapped. I’m sure I would feel more compelled to improve the situation if I lived here, but it does make me stop and think about how a city with a major university apparently does not have many active mappers.

I am always curious about local OpenStreetMap dynamics, so I decided to do a visual and numerical comparison between these two Kansas university towns. I made two visual comparisons using a 0.01 degree grid. For each grid cell, I calculated the mean version and the mean age of the way features – age being the time that passed between the last edit and Sept 1st. On top of the grid, I projected the way features themselves in thicker, transparent lines to create a sense of feature density. Here are the results:

Mean version:

Mean age:

The KSU campus, with much detail in the buildings and footways.

A visual inspection of these images shows that Manhattan has received more recent OpenStreetMap contributions overall. Lawrence seems to have slightly more green areas in terms of mean version, although not in the parts of town that matter most: downtown and the campus area. In terms of way feature density, Manhattan shows much more density, especially around campus. This really shows when you zoom in on the campus, where the building outlines and footways reveal a lot of detail.
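The per-cell aggregation behind the mean version images can be sketched in a few lines of Python. This is an illustration only – the sample coordinates and versions below are invented; the real analysis ran against the OpenStreetMap database.

```python
import math
from collections import defaultdict

CELL = 0.01  # grid cell size in degrees

def grid_mean_version(features):
    """Bucket (lon, lat, version) tuples into 0.01-degree cells and
    return the mean version per cell, keyed by cell index."""
    cells = defaultdict(list)
    for lon, lat, version in features:
        key = (math.floor(lon / CELL), math.floor(lat / CELL))
        cells[key].append(version)
    return {key: sum(v) / len(v) for key, v in cells.items()}

# made-up sample: two ways in one cell, one way in another
sample = [(-95.2351, 38.9717, 2), (-95.2362, 38.9743, 4), (-95.2851, 38.9522, 1)]
print(grid_mean_version(sample))
```

The mean age grid works the same way, with edit timestamps instead of versions as the per-feature value.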

This visual analysis is nice to get an initial impression, but you really also need a numerical analysis to back it up. I did a quick analysis run collecting some stats for both towns. It turns out that Lawrence has 0.4% non-TIGER ways, whereas Manhattan has 0.77%. These are ways that do not have any TIGER source tags, which most likely indicates a way that was not part of the TIGER import and was thus added by a human OpenStreetMap contributor. Of the ways that do have TIGER tags, Lawrence shows only an average 1.31 increase in version, while Manhattan shows an average 2.56 increase. This means that the TIGER-imported ways in Manhattan have received more subsequent attention from human contributors than those in Lawrence. Also, of the TIGER ways in Manhattan, only 8% are still in their untouched 2007 state. In Lawrence, that figure is 19%.
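One plausible way to compute these statistics, sketched in Python with an invented input format (each way as a dict with a version number and a tag dict; TIGER-imported ways start at version 1):

```python
def tiger_stats(ways):
    """Return (share of non-TIGER ways in %, average version increase of
    TIGER ways beyond the imported version 1, share of TIGER ways still
    untouched at version 1 in %)."""
    tiger = [w for w in ways if any(k.startswith("tiger:") for k in w["tags"])]
    non_tiger_pct = 100.0 * (len(ways) - len(tiger)) / len(ways)
    avg_increase = sum(w["version"] - 1 for w in tiger) / len(tiger)
    untouched_pct = 100.0 * sum(1 for w in tiger if w["version"] == 1) / len(tiger)
    return non_tiger_pct, avg_increase, untouched_pct

# made-up sample data
ways = [
    {"version": 1, "tags": {"tiger:cfcc": "A41", "highway": "residential"}},
    {"version": 3, "tags": {"tiger:cfcc": "A41", "highway": "residential"}},
    {"version": 2, "tags": {"tiger:cfcc": "A41", "highway": "residential"}},
    {"version": 2, "tags": {"highway": "footway"}},  # mapped by hand
]
print(tiger_stats(ways))
```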

This is just a quick analysis and as such does not answer the real question: why is Lawrence’s OpenStreetMap data so much worse than Manhattan’s? There are a lot of potential answers. They all boil down to an imbalance in local contributors: Lawrence is a sizeable town with a 30,000-student university, and it seems none of those students are actively mapping the KU campus – or any other local highlights, for that matter: I was at two local cafes today that seemed popular with the student crowd, and neither of them was in OpenStreetMap. Manhattan, 80 miles to the west, has a somewhat smaller university but is much better mapped.

Local OpenStreetMap quality still varies a lot. It only takes a handful of contributors to improve the local data, but if there’s nobody who generates awareness locally, the momentum will not be there. So someone here in Lawrence, in the KU Geography department for example, get on the task and bring OpenStreetMap to life here! There’s only so much I can map…

A New OpenStreetMap Visualization: Version Contour Lines

We just wrapped up a weekend of OpenStreetMap hacking at the wonderful LinuxHotel. In this post, I am going to share a visualization idea that I discussed in the car on the way here and that seemed like enough fun to spend a day hacking on: using contour lines to visualize quality metrics for OpenStreetMap. I’ve been wanting to implement something like this for a while now, even though some similar efforts already exist, notably the OSM Inspector and OSMatrix.

OSM Inspector and OSMatrix

Jochen Topf’s OSM Inspector is a great tool for visualizing potential sources of error in the OpenStreetMap database. Although the Inspector is extensible and quite powerful, it focuses on individual data elements rather than providing the bird’s eye view on quality that I want to provide.

The OSMatrix tool is a recent effort by the great folks at the Geography department of the University of Heidelberg. OSMatrix provides an overlay of hexagonal cells visualizing a range of metrics for the data in each cell. It looks great and tells the kind of story I want to tell, but the tool only takes into account the current planet data, whereas I want to be able to tap into the wealth of information contained in the full history. Incidentally, the wish for better ways of managing full history data was one of the motivations for having this hack weekend in the first place.

The visualizations I want to provide should tell the story behind OpenStreetMap data from a local perspective. This goes beyond a plain time lapse visualization of the growth of the map from its inception, because OpenStreetMap is not just a database – more than anything, it’s people. Any visualization attempting to tell the story of OpenStreetMap should be about people as well as data.

Evolution of Shevchenko village and Pivnichnjy residential area in Dnipropetrovsk on OpenStreetMap

Hacking my way to version contour lines

All these big plans notwithstanding, I am taking a very pragmatic approach here. My short-term goal is just to get something out there and collect feedback and generate ideas on how to take it from there, both on a technical level as well as on what to show. More than anything, I wanted something presentable at the end of a short weekend.

I decided to go with a small planet extract for the Amsterdam region, and attempt a contour lines visualization of the average version number. Here’s the approach I took in broad strokes:

  • Download NL planet
  • Extract Amsterdam bounding box
  • Import into PostgreSQL
  • Create a grid based on version numbers
  • Create contour lines from grid
  • Convert lines to polygons
  • Styling
  • Creating WMS-T layer
  • Putting it all together in an OpenLayers application

Let’s see how it worked out. First a screen shot of the final result to give you an idea of where we’re headed:

Planet preparations

The Dutch OpenStreetMap servers provide a daily extract for BeNeLux (Belgium, the Netherlands, Luxembourg), available from here. The PBF format is strongly preferable to the legacy XML format: it’s smaller and processing is much, much faster, and it’s supported by the major OpenStreetMap data processing tools.

Once downloaded, it takes only a single osmosis command to cut an extract from a bounding box and load that into a PostgreSQL database:

osmosis --rb file="planet-benelux-latest.osm.pbf" --bb left=4.71 bottom=52.27 right=5.08 top=52.47 --sort --wp database="amsterdam" user="mvexel" 

I am assuming here that you are familiar with creating a PostgreSQL database and preparing it for Osmosis usage. This page on the OpenStreetMap wiki provides more background if needed.


Creating the grid

Next is creating a grid based on attribute data. There’s probably a plethora of ways to go about this, but I for one had never done it before. I guess I am a neogeographer after all. After looking into ways to do it in PostgreSQL, I stumbled on gdal_grid, a command line utility that is part of the GDAL suite. It exposes the grid creation functions of GDAL for command line use and seemed to do what I need. There is one snag though: it takes point data as an input source and as such can only handle the node data from OpenStreetMap. This is inherent in the process: grids are an aggregation of point data. For now, I am not going to care too much, but I will get back to this towards the end of this post.

gdal_grid offers several methods to calculate the cell values based on the individual point values. One is based on inverse square weighting, one on moving averages, and the simplest one just takes the value of the nearest point to the grid reference point (called the grid node). All methods operate on an ellipse centered at the grid node, taking into account all points within the ellipse for the calculation of the cell value. Some more background on the various methods and their math can be found here.
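To make the moving average method concrete, here is a toy Python version of the idea (not GDAL’s actual implementation): every grid node averages the values of all points that fall inside a search ellipse centered on it.

```python
import math

def moving_average_grid(points, xs, ys, radius1, radius2, angle_deg=0.0):
    """points: (x, y, value) tuples; xs, ys: grid node coordinates.
    Returns a dict mapping (x, y) grid nodes to the average value of all
    points inside the search ellipse, or None if the ellipse is empty."""
    theta = math.radians(angle_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    grid = {}
    for gx in xs:
        for gy in ys:
            values = []
            for px, py, v in points:
                dx, dy = px - gx, py - gy
                # rotate into the ellipse's own axes, then test containment
                rx = dx * cos_t + dy * sin_t
                ry = -dx * sin_t + dy * cos_t
                if (rx / radius1) ** 2 + (ry / radius2) ** 2 <= 1.0:
                    values.append(v)
            grid[(gx, gy)] = sum(values) / len(values) if values else None
    return grid
```

This brute-force loop is exactly why a bigger ellipse means longer processing times: every grid node has to consider every point.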

I experimented some with the first two methods, discarding the nearest neighbor method offhand as too coarse. The inverse square method is computationally much more intensive than the moving average method, resulting in much longer processing times. Turning gdal_grid with inverse square weighting loose on my Amsterdam extract, which contains just over 1.1 million nodes, did not complete within the hour I was prepared to wait for it. The moving average method completed in about 15 minutes. Don’t ask for machine specs because I have no idea. Bug the sysadmins at #osm-nl for that ;P

I did a little experimenting with the ellipse size as well. As expected, bigger ellipse sizes make for a much smoother result, with all extremes averaged away. It looks pretty, but does not tell me what I want to know. And bear with me: the end result is pretty in a way, too.

gdal_grid can output to any GDAL supported format, although I’m not sure all of them would make sense. I had it create a GeoTIFF using the following command:

gdal_grid -zfield "version" -l nodes PG:dbname=amsterdam -a average:radius1=0.01:radius2=0.008:angle=30 version_ways_001.tiff 

Contour lines

The grid could be used for visualization directly, but I believe it does not tell a good story. A square grid suggests too much technical abstraction to me. I want contour lines because they emphasize the dynamics of the people who create OpenStreetMap. Of course this is a subjective observation, but that’s what visualization is about – telling a story involves conveying a feeling. I want people from inside the project as well as outside to get a feeling for the map data.

GDAL also incorporates functions to generate contour lines from a grid. Luckily, these functions are wrapped in a command line tool, gdal_contour, so I can abstain from coding :). gdal_contour takes a small number of parameters, of which the nodata and interval ones are particularly relevant for how the visualization turns out. I ended up choosing an interval of 0.2, which I found generates the best signal-to-noise balance in the image.

gdal_contour -a elevation version_ways_001.tiff version_ways.shp -i 0.2 

It seemed unfortunate at first that the gdal_contour tool outputs linestrings and not closed polygons. Most linestrings are closed anyway, and it would be nice to be able to create colored areas. I made an attempt to convert the linestrings to polygons, first in a Python script using OGR functions (failed due to my lack of understanding of the OGR Python bindings), and later also in PostGIS (that worked), but ended up using the linestrings after all, for reasons that will become apparent soon.


Styling

Next time I promise I will do a Mapnik stylesheet, but for now I am resorting to the AtlasStyler – GeoServer combo that I know well. AtlasStyler is a visual style editor that takes a PostGIS table or some other vector data source, and provides a nice GUI for classification, symbology and labeling. The created style can be exported as an SLD, which can be used in GeoServer.

AtlasStyler does not let you select the geometry column you want to visualize if your table has more than one. I did not notice this at first, and started an attempt to style the linestring geometries using quantile classification and a red-green color range. This came so close to what I wanted to achieve that I decided not to bother sorting out that multiple geometry columns issue.

The Result

I imported the SLD saved in AtlasStyler as a new style in GeoServer, and applied it to a newly created version lines layer (creating a layer from a PostGIS table in Geoserver is really easy, refer to the documentation for more background).

Because I chose thicker lines, the lower zoom levels pretty much look like filled polygons, while on higher zoom levels you would still be able to see the map.

Next Steps and More Ideas

This is a hacky prototype and I chose not to make it publicly available just yet, firstly because it only covers a very small area and secondly because I will have very little time in the coming week(s) to respond to comments and improve it.

The first thing I’d want to do is extend the coverage to at least the whole of NL. Also, discussing the version layer with the other OpenStreetMap Hack Weekend people gave me some ideas for further development. Let me summarize those below to conclude this post.

My initial idea was to aggregate node, way and relation versions into one visualization. Due to technical limitations I ended up visualizing only node versions – but that may be a better way anyway. Way versions tell a different story than node versions. Ways change version much less often, so the average versions of ways are bound to be lower. Also, when a way does get a version increase, it’s more likely to be a significant contribution to the map. It would thus make sense to add way versions as a separate contour lines layer instead of finding a way to incorporate them into the node versions layer.

Also, the aesthetics of the visualization could be improved in a number of ways. First, filtering out the contour lines for version < 1 would deemphasize the holes in the data, for example large bodies of water or unused or unmapped land. As it is, the sharp red edges are distracting. Also, I’d love to smooth out the contour lines, but I couldn’t seem to find a way to do that in gdal_contour. Can’t be hard though? It would also be nice to have labels on some contour lines, for example the whole integer lines. I know Mapnik can do that, not so sure about GeoServer.

What’s Next

The first results also inspired some ideas for more visualizations. An interesting and easy one would be to visualize the time since the last edit. This would clearly show areas that have been abandoned by mappers. A somewhat more elaborate visualization would be the average number of edits in the lifespan of the feature. This would paint a clearer picture of the overall activity in a region. For that however, we need access to the full history, which is not available in the default planet. Full history is also a lot more data and on top of that – I may have mentioned this before – the current tools and data models cannot store full history in an easily accessible way.

That brings us full circle to the reason why we organized this Hack Weekend to begin with – to think about storing and retrieving OpenStreetMap history in a way that makes queries like ‘who has contributed to this feature or this area?’ or ‘what did the map for this area look like two years ago?’ possible. A lot of ideas were discussed to make this happen, both on the storage / database side of things as well as on the toolchain. I am glad that we got a group of people together who are all engaged with this topic on different levels, such as Jochen Topf bringing in his experience with osmium / osmjs, Peter Körner with his ongoing work on adapting the PBF format to allow history and creating a tool for making full history extracts, and Stefan de Konink with his strong background in database performance. I would like to say a heartfelt thanks to all who were there for making it a productive and fun weekend, to LinuxHotel for providing the perfect setting, and last but not least to our sponsor, OpenThesaurus, for helping to make this weekend possible!

Bing Aerial Photos for OpenStreetMap – Great, But Are They Recent?

Bing, Microsoft’s decision engine that also includes Bing Maps, the former Virtual Earth platform, announced a week ago that OpenStreetMap can use all their worldwide aerial imagery to improve their free and open wiki world map. As these things tend to go with OpenStreetMap, the ink of the blog post announcing this news was not even dry when new versions of the OpenStreetMap editors were ready that included the Bing Maps Aerial layer as a backdrop. As soon as the formal go-ahead was given by the Bing legal team, the new editors went live. Since then, active OpenStreetMap volunteers have been tracing, checking and completing data like there’s no tomorrow, kicking OpenStreetMap into a higher gear once again.

Data Quality

As the initial excitement is wearing off and mappers get more acquainted with the new Bing layer, a need arises for more profound insight into this emerging primary mapping resource. One question that almost immediately pops up when using the Bing Aerial layer for OpenStreetMap editing is: what is the quality of this imagery? This question pans out into a few different quality dimensions. Firstly, the resolution of the images, which seems to be generally better than the Yahoo aerial imagery that OpenStreetMap has been relying on as a background layer. Secondly, the spatial accuracy, which can be an issue for large scale mapping. Spatial accuracy can vary in aerial imagery due to the process of orthorectification. The error is usually not very big, but an error of even a few metres can be prohibitive for large scale mapping. Lastly, the age of the photo is a crucial quality component.

This temporal dimension of the Bing imagery data quality is particularly interesting when OpenStreetMap contributors work on an area without knowing the current ground reality. Another case where imagery age is of particular consequence is when an area has data from a previous import of known age and origin. It only makes sense to refine the map based on Bing imagery when that imagery is actually of a more recent date than the import. With the recent 3DShapes import in the Netherlands, we are dealing with that exact situation. It would be good to be able to easily ascertain whether the Bing imagery for a particular tile is more recent than the 3DShapes data. So I built a tool to do just that.

The Bing Date Map shows a full screen Bing Aerial map using the Bing Maps SDK. Every 256×256 tile of imagery is overlaid with a transparent tile showing the photo date for that tile. This information is extracted from the HTTP headers:

mvexel$ curl -I ''
HTTP/1.1 200 OK
Cache-Control: public, max-age=31536000
Content-Length: 7255
Content-Type: image/jpeg
Expires: Sat, 03 Dec 2011 17:02:31 GMT
Last-Modified: Mon, 23 Aug 2010 17:40:15 GMT
ETag: "7"
Server: Microsoft-IIS/7.5
X-VE-TFE: DB30022438
X-VE-TILEMETA-CaptureDatesRange: 7/1/2004-7/31/2004
X-VE-TILEMETA-Product-IDs: 3
X-VE-TBE: 0023726
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET
Date: Fri, 03 Dec 2010 17:02:31 GMT
Connection: keep-alive

The X-VE-TILEMETA-CaptureDatesRange key gives the capture date range that we are going to use. Usually the range seems to be one month, so that’s the resolution we’re going to work with. I wrote a PHP script that takes a tile quadkey – a unique geographical identifier – as input. It retrieves the HTTP HEAD as displayed above, parses the dates into a legible format and writes the date string onto a transparent, 256×256 pixel PNG image, which is the output of the script. Using the Bing Maps SDK, it is then very straightforward to overlay this pseudo-tileserver as a TileLayer onto the Aerial base map:

var tileSource = new Microsoft.Maps.TileSource({ uriConstructor: 'http://server/tile.php?t={quadkey}' });
var tileLayer = new Microsoft.Maps.TileLayer({ mercator: tileSource, opacity: 1 });
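For illustration, here is roughly what the PHP script does, sketched in Python. The function names are mine; the quadkey scheme is Bing’s standard bit-interleaving of tile x and y, and the date format follows the header shown above.

```python
from datetime import datetime

def tile_to_quadkey(x, y, zoom):
    """Interleave the bits of the tile x and y into a Bing Maps quadkey string."""
    digits = []
    for i in range(zoom, 0, -1):
        digit = 0
        mask = 1 << (i - 1)
        if x & mask:
            digit += 1
        if y & mask:
            digit += 2
        digits.append(str(digit))
    return "".join(digits)

def parse_capture_dates(header_value):
    """Parse an X-VE-TILEMETA-CaptureDatesRange value like
    '7/1/2004-7/31/2004' into a (start, end) pair of dates."""
    start, end = header_value.split("-")
    fmt = "%m/%d/%Y"
    return datetime.strptime(start, fmt).date(), datetime.strptime(end, fmt).date()

print(tile_to_quadkey(3, 5, 3))
print(parse_capture_dates("7/1/2004-7/31/2004"))
```

The real script additionally issues the HEAD request and renders the parsed range onto a transparent PNG tile.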

The result looks like this:

You can give it a try here. I might (be forced to) close it down as firing HTTP HEAD requests directly at Microsoft’s tile servers probably violates their terms of use.