I have written a just-about-working audio atlas using OSM and Marine Border data. I can control my longitude/latitude position with the cursor keys and get spoken feedback on the country/sea/ocean I am currently in. When I download all the admin_levels for a country I get spoken feedback on all the subdivisions. And I have a “What’s near me” feature which I can browse in order of either bearing or distance when I overlay data for, say, the cities in a country…
For example, I can go to somewhere in the UK, get my current location announced, and hear that Southampton is 5 kilometres away at a bearing of 135 degrees.
And if I go south 50 kilometres I end up in the English Channel.
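For anyone curious how distance-and-bearing announcements like these can be derived from two lat/lon positions, the standard great-circle (haversine) and forward-azimuth formulas are enough. A minimal sketch (the function name is my own, and this is not necessarily how the atlas computes it):

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) and initial bearing (degrees, clockwise from north)."""
    R = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    # Haversine formula for the distance
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    # Forward azimuth for the initial bearing
    y = math.sin(dlam) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlam)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing
```

One degree of latitude comes out at roughly 111 km, which is a handy sanity check for cursor-key step sizes.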
All this is utterly fantastic because, as a blind person, I have not been able to get such detailed geographic information for many years.
I am struggling, though, with a number of aspects of rendering an audio map based on the data I have. I am guessing that these problems may well have already been solved for graphical rendering.
Here are 3 examples for working out physical geography.
Territorial waters are included in admin-level boundary relations.
This makes it difficult to know exactly when a position is on land or at sea. How is this managed graphically?
For example, when rendering a pixel just off the coast but within the territorial waters of the mainland, how can you know whether you are on land or in water?
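For what it is worth, graphical renderers generally do not consult admin boundaries at all for the land/sea decision: they test the point against land polygons preprocessed from `natural=coastline` ways (ready-made sets are published at osmdata.openstreetmap.de). A minimal pure-Python sketch of that point-in-polygon test, with a hypothetical rectangle standing in for a real land polygon:

```python
def point_in_polygon(lon, lat, polygon):
    """Ray-casting point-in-polygon test.

    polygon is a list of (lon, lat) vertices; the ring is closed implicitly.
    Renderers answer "land or sea?" this way against coastline-derived land
    polygons, not against admin boundaries (which include territorial waters).
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (lon, lat) cross the edge (x1,y1)-(x2,y2)?
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical rectangle standing in for a land polygon around Southampton
land = [(-1.6, 50.7), (-1.0, 50.7), (-1.0, 51.0), (-1.6, 51.0)]
```

With many polygons you would first use a spatial index to find candidate polygons near the point, then run this exact test on those few.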
Mountain ranges do not have polygons defining their area. Mountain ranges are made up of ridge lines and a node to position a label.
A mountain node does not state the mountain range it is in so it is not possible to make a bounding box for all mountains of the same range.
How are mountain ranges rendered graphically?
The Himalayas are very long (as well as tall), so is it possible to work out when a point is inside or outside this mountain range?
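Since OSM offers no range polygon, one heuristic (an assumption on my part, not established practice): if you can assemble the member peaks or ridge-line nodes of a range from some external source such as Wikidata or manual curation, their convex hull gives a rough area for an in/out test. A sketch of the hull step using Andrew's monotone chain:

```python
def convex_hull(points):
    """Convex hull of (lon, lat) tuples via Andrew's monotone chain.

    Returns the hull vertices in counter-clockwise order. Fine for a rough
    range extent over short distances; it ignores the Earth's curvature.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Cross product sign: > 0 means o->a->b turns counter-clockwise.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, drop duplicates
```

The resulting ring can then feed the same point-in-polygon test used for land/sea. For a long thin range like the Himalayas, a buffer around the ridge ways would hug the shape better than a hull, but it needs more geometry machinery.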
From a brief look, there seems to be very little information on deserts.
How are the extents of deserts rendered graphically?
There is so much I don’t know that my assumptions might all be wrong and my lack of experience might be working against me. However, I would very much like to learn the general principles of rendering maps as well as the specifics above.
Some of the principles I’m thinking of are around coping with different zoom levels. Do I need to preprocess different versions of my data for different zoom levels? A global view is so different to manage compared to a 5 km² area. Do I need to do tiles? I think I’ll need an R*-tree soon as well.
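On tiles: the Web Mercator tiling scheme used by OSM-based renderers maps a lon/lat directly to a tile index at each zoom level, which is also a cheap way to bucket your own data per zoom before reaching for an R*-tree. The standard conversion, as documented on the OSM wiki's "Slippy map tilenames" page:

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Convert lon/lat (degrees) to slippy-map tile x/y at a zoom level.

    At zoom z the world is a 2**z by 2**z grid of tiles; each extra zoom
    level quadruples the tile count, which is why renderers preprocess
    simplified data per zoom rather than drawing everything everywhere.
    """
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

For an audio atlas, the same index could key a per-zoom cache of "what is in this tile", so a global browse and a 5 km² browse hit different, appropriately simplified datasets.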
Any help much appreciated.
This discussion topic accompanies the post at https://community.openstreetmap.org/t/rendering-an-audio-map-using-osm-data-can-lessons-from-graphical-rendering-help/108791