The MT Blog

Finding More Summits

23 July 2017
Simon, G4TJC

How we really do it

Earlier I described how you can use flooding in Google Earth to find summits that meet your prominence criterion (usually 150m for SOTA). This is how I first started looking for summits, and I soon bagged a few previously unlisted ones on Tenerife in the Canary Islands. However, if I wanted to cover a larger area, use an elevation data set other than Google Earth's, or just stop staring at the screen for hours, I would need something else.

Software

There are many programs for tackling geographical problems. Just search online for "GIS" and you will find loads. Some good ones are even free, such as QGIS. However, only two programs that I know of will detect peaks based on their prominence: Winprom and Landserf.

Winprom was written specifically for prominence analysis by the mathematician Edward Earl, who tragically died in a hiking accident in Alaska. The program is, however, seeing renewed development.

Landserf was written by Jo Wood, Professor of Visual Analytics at City University, London. It addresses several analysis and visualisation problems, including prominence analysis, and is the approach the SOTA summits team uses most.

I should mention that as well as raw raster data it is also possible to use vector data in the form of elevation contours. This has been used for some associations, but I'm more familiar with the raster-based approach.

Data

So where do we get our data? The first place one would think to look is the national mapping agencies. For example, the Ordnance Survey in Britain and the USGS in the USA both now provide free digital elevation models. We could indeed use these, but each has a different provenance (different original sources and different processing methods). On the whole it's easier to stick with something that covers nearly the whole planet in a consistent manner.

Right now our preferred source is the Shuttle Radar Topography Mission. These data were collected from Space Shuttle Endeavour using a synthetic aperture RADAR system in 2000. Data at a resolution of 1 arc second are available up to high latitudes. Above 60° we have to find an alternative such as ASTER.

That's what we use now, but it's not the end of the story. In many parts of the world LIDAR surveys have been made, often for purposes of oil exploration. Free high-resolution LIDAR data are in use for parts of Spain, for example. Also, other satellite missions are in progress, promising incredibly high resolutions using RADAR again. We hope that better public data sets will eventually become available to us.

Using Landserf

Landserf is written in Java and can run under various operating systems. I swap between Linux and Windows according to whatever is convenient.

SRTM data (hgt files) can be loaded directly. Typically (unless you are examining a very small patch) you will need to load several tiles and combine them. In the example I have combined four tiles covering north-west Wales. These are actually Terrain5 tiles from OSGB.

OSGB Terrain5 tiles of Eryri with prominence analysis overlaid
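If you prefer to prepare a mosaic outside Landserf, the combining step is easy to script. Here is a minimal sketch using the rasterio library (a Python wrapper around GDAL, which reads SRTM hgt files directly); the file names and paths are just placeholders:

```python
import glob

import rasterio
from rasterio.merge import merge

# Open every SRTM tile in a directory and merge them into one mosaic.
sources = [rasterio.open(path) for path in glob.glob("tiles/*.hgt")]
mosaic, transform = merge(sources)  # (bands, rows, cols) array + georeferencing

# Write the mosaic out as a GeoTIFF, reusing the first tile's metadata.
profile = sources[0].profile
profile.update(driver="GTiff", height=mosaic.shape[1],
               width=mosaic.shape[2], transform=transform)
with rasterio.open("mosaic.tif", "w", **profile) as dst:
    dst.write(mosaic)
```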

The next step is to launch Landserf's Analyse -> Peak classification menu. Here you specify that you want a minimum drop of 150m. Once the analysis is complete you can add the vector overlay of summits, cols and ridge lines, as shown above.

Here the big blob just below centre is Snowdon, GW/NW-001. To the north-east of it lie first the Glyderau and then the Carneddau.
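Incidentally, the prominence rule itself is simple enough to compute directly, if you are curious about the mechanics. The sketch below is just an illustration, not how Landserf or Winprom actually work internally: it sweeps the DEM from its highest cell downwards, growing "islands" with a union-find structure, and when two islands meet at a saddle, the island with the lower peak dies there and that peak's prominence is fixed.

```python
import numpy as np

def prominences(dem, min_prom=150.0):
    """Return {(row, col): prominence} for peaks in a small DEM (metres)."""
    nrows, ncols = dem.shape
    flat = dem.ravel()
    parent = {}    # union-find parent pointers, keyed by flat cell index
    peak = {}      # root -> flat index of that island's highest cell
    prom = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for idx in np.argsort(flat)[::-1]:      # highest elevation first
        idx = int(idx)
        parent[idx] = idx
        peak[idx] = idx
        r, c = divmod(idx, ncols)
        # 4-connected neighbours for simplicity; only already-processed
        # (i.e. higher) cells are in `parent` yet
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            nidx = nr * ncols + nc
            if 0 <= nr < nrows and 0 <= nc < ncols and nidx in parent:
                a, b = find(idx), find(nidx)
                if a == b:
                    continue
                # Two islands merge at this saddle: the island with the
                # lower peak is absorbed, fixing that peak's prominence.
                lo, hi = sorted((a, b), key=lambda x: flat[peak[x]])
                drop = float(flat[peak[lo]] - flat[idx])
                if drop >= min_prom:
                    prom[divmod(peak[lo], ncols)] = drop
                parent[lo] = hi
    return prom
```

The overall high point never dies, so by convention its prominence is its full height. A dictionary entry per cell is only practical for small grids, but the idea scales with better data structures.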

Scripting

Using Landserf by hand is certainly viable for a small association. For anything much bigger than the example above, it's really nice to be able to automate the process.

Thankfully Landserf includes a scripting facility, so you can write out in advance which tiles to load, &c. However, this still requires a lot of typing. So what we do in the summits team is run another script to write the script for us! Here we are indebted to Csaba, YO6PIB - the brains behind a set of Python programs. Given an association code, Csaba's script will first translate that into a country name. This is then used to look up a set of polygons defining the outline of the country. The SRTM tiles required to cover the polygons are then pulled in from a USGS server and a Landserf script generated. Next we launch Landserf, loading the generated script. Once this has run we have a set of files describing the summit locations and prominences in various formats. Some more scripting helps us to compare these with any summit locations already in the database or provided for a new association.
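To give a flavour of the tile bookkeeping involved, here is a minimal sketch of just the tile-naming step, using a simple bounding box rather than the country polygons (and downloading and script generation) that Csaba's scripts actually handle:

```python
import math

def srtm_tiles(min_lat, min_lon, max_lat, max_lon):
    """Name the 1-degree SRTM tiles covering a bounding box.
    Tile N53W004, for instance, spans 53-54 N and 4-3 W."""
    tiles = []
    for lat in range(math.floor(min_lat), math.ceil(max_lat)):
        for lon in range(math.floor(min_lon), math.ceil(max_lon)):
            tiles.append("%s%02d%s%03d.hgt" % (
                "N" if lat >= 0 else "S", abs(lat),
                "E" if lon >= 0 else "W", abs(lon)))
    return tiles

# North-west Wales, roughly:
print(srtm_tiles(52.7, -4.7, 53.3, -3.6))
# ['N52W005.hgt', 'N52W004.hgt', 'N53W005.hgt', 'N53W004.hgt']
```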

The analysis stage can take a very long time, even on a powerful workstation with a fast multi-core processor and plenty of RAM. The program performs least-squares fitting of parabolic surfaces to the elevation data to find the maxima (summits) and saddle points, then follows the ridge lines linking these. Obviously covering a larger area takes more processing, but it seems that beyond about 25 to 30 one-degree-square 1-arc-second tiles (around 300-400 million data points) Landserf begins to get stuck and the analysis can take days to run, even on a high-spec PC. Therefore, to cover the larger associations it is necessary to split them up into smaller patches of tiles for analysis, stitching the results together afterwards. I prefer to avoid this if possible: even with overlap between patches, it can miss very distant key cols. However, we can watch for this and down-sample the data to process at lower resolution if necessary.
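To illustrate the kind of fitting involved, here is a rough numpy sketch that fits a quadratic surface to a small elevation window and classifies its centre point. Landserf's real implementation differs in detail; the window size and tolerances here are values I have made up:

```python
import numpy as np

def classify_centre(z, cell=30.0):
    """Least-squares fit z = ax^2 + by^2 + cxy + dx + ey + f over an
    n x n window (n odd) and classify the centre point.  `cell` is the
    ground spacing between samples in metres."""
    n = z.shape[0]
    half = n // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1] * cell
    A = np.column_stack([xs.ravel()**2, ys.ravel()**2, (xs * ys).ravel(),
                         xs.ravel(), ys.ravel(), np.ones(n * n)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z.ravel(), rcond=None)[0]
    if np.hypot(d, e) > 1e-3:     # significant slope (made-up tolerance)
        return "other"
    curv = np.linalg.eigvalsh([[2 * a, c], [c, 2 * b]])  # curvatures
    if curv[1] < 0:
        return "peak"             # curving down in every direction
    if curv[0] > 0:
        return "pit"              # curving up in every direction
    return "saddle"               # up one way, down the other: a col
```

Calling classify_centre(dem[r-2:r+3, c-2:c+3]) on each 5 x 5 window then yields candidate peaks and cols for the ridge-following stage to link up.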

Armed now with a putative summits list in various formats (tables in csv and various mapping formats such as kml for Google Earth) we start the main job! No data set is perfect - vertical errors are easily 10m or more. Sometimes this is down to random noise, but we can also see effects such as the early RADAR return from tree tops giving a higher elevation reading for wooded areas. So our preferred reference is the best available map for the area - typically the maps from the national mapping agency. We don't necessarily regard these as definitive, as discrepancies from our analyses can reveal certain errors in the map. But on the whole, and until something better comes along, we take them as the best guess at the way things really are. How to check the analysis against the map? Usually this is time for the Mk.I eyeball! It helps if you can load the map and the analysis into a program such as Google Earth or QGIS. But even then it still takes, say, a minute or two per summit, just to zoom to the summit and col positions and check that they look OK.
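Generating the kml overlay for that eyeballing is straightforward to script. Here is a minimal sketch using the simplekml library; the file name and column headings are assumptions for illustration, not our actual formats:

```python
import csv

import simplekml  # pip install simplekml

# Turn a putative summits table into a KML file so each candidate can
# be checked against the map in Google Earth.
kml = simplekml.Kml()
with open("candidate_summits.csv", newline="") as f:
    for row in csv.DictReader(f):
        kml.newpoint(
            name=f"{row['name']} ({row['elevation']}m, drop {row['drop']}m)",
            coords=[(float(row["lon"]), float(row["lat"]))])
kml.save("candidate_summits.kml")
```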

A snippet of our ZL3 analysis loaded into QGIS for checking.
LINZ map is Crown copyright reserved, licensed under Creative Commons 4.0

Often they're not OK. There may be a position error - that's an easy fix (just drag it in QGIS). Elevation can be more of a problem, as it might mean what you thought was the higher summit of a pair is actually the lower. Likewise with cols and the pairing of cols to summits. You can even end up with a whole chain of summits and cols doing a merry dance because one elevation was out of step! Whilst checking against the maps we also gather any summit names we can identify. This stage is one where a good local team can really help us out. Sometimes the best maps aren't available online, so we really depend on the AM (Association Manager) and RMs (Regional Managers) to scour their local resources to make best sense of the analysis.

So finally we have a table of summits and cols that is our best representation of the way the topography really stands; we've found all the names we can, and we can publish it. Hopefully the noise in the data won't have caused too many good summits to be dropped or (worse) false summits to be listed. Next we just have to agree on points bands, winter bonus, &c, &c...