.
+ timeout
+ Defines a timeout in seconds for the request.
+ If not defined, the global default timeout setting will be used.
+
@@ -1312,6 +1316,14 @@
URL Template
Some WMS servers use the Referer request header to authenticate requests;
this parameter provides one.
+ source projection
+ Names a geographic projection, explained in Projections, into which
+ coordinates should be transformed for requests.
+
+ timeout
+ Defines a timeout in seconds for the request.
+ If not defined, the global default timeout setting will be used.
+
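Taken together, the two parameters documented above would sit alongside the other provider keys in a layer's configuration. A minimal sketch (the layer name, template URL, and exact key placement are illustrative assumptions, not taken verbatim from the docs):

```json
{
  "layers": {
    "example": {
      "provider": {
        "name": "url template",
        "template": "http://example.com/tiles?bbox=$xmin,$ymin,$xmax,$ymax",
        "referer": "http://example.com/",
        "source projection": "WGS84",
        "timeout": 10
      }
    }
  }
}
```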
diff --git a/CHANGELOG b/CHANGELOG.md
similarity index 70%
rename from CHANGELOG
rename to CHANGELOG.md
index 9bfef885..70b93a1b 100644
--- a/CHANGELOG
+++ b/CHANGELOG.md
@@ -1,3 +1,130 @@
+v0.10.1
+-------
+* Update building transforms to work with `_` separated properties. The queries upstream changed to return `_` as the separator instead of `:`. See [#806](https://github.com/mapzen/vector-datasource/issues/806).
+
+v0.10.0
+-------
+* Add `is_bicycle_related` transform. See [#152](https://github.com/mapzen/TileStache/pull/152).
+* Normalize cycling-related properties: shared values for `cycleway:left` and `cycleway:right` are deleted and projected onto `cycleway` directly, and if `cycleway:both` is present but `cycleway` is not, it is projected onto `cycleway`. [#150](https://github.com/mapzen/TileStache/pull/150).
+* Don't generate label placements for unnamed stone and rock features. See [#151](https://github.com/mapzen/TileStache/pull/151).
+* Don't generate label placements for point or line features (for islands); make this configurable. See [#153](https://github.com/mapzen/TileStache/pull/153) and [#154](https://github.com/mapzen/TileStache/pull/154).
+* Update road kind for `whitewater=portage_way`. See [#140](https://github.com/mapzen/TileStache/pull/140).
+* Remove junk `highway=minor` and `highway=footpath` from logic. See [#137](https://github.com/mapzen/TileStache/pull/137).
+* Remove outdated building kind calculation. See [#139](https://github.com/mapzen/TileStache/pull/139).
+* Remove outdated `road_sort_key` transform. See [#142](https://github.com/mapzen/TileStache/pull/142).
+* Remove outdated roads functions (logic is now carried in YAML queries), and fix duplicate points bug. See [#143](https://github.com/mapzen/TileStache/pull/143).
+* Remove outdated boundary transforms (logic is now carried in YAML queries). See [#138](https://github.com/mapzen/TileStache/pull/138).
+* Add transform to convert `admin_level` to an int. See [#136](https://github.com/mapzen/TileStache/pull/136).
+* Add transform to convert `capacity` in the pois layer to an int. See [#146](https://github.com/mapzen/TileStache/pull/146).
+* Add function to convert `height` values to meters. See [#145](https://github.com/mapzen/TileStache/pull/145).
+* Add function to convert `elevation` values to meters. See [#147](https://github.com/mapzen/TileStache/pull/147).
+* Add sort order for `peak` and `volcano` features. See [#148](https://github.com/mapzen/TileStache/pull/148).
+* Add an end zoom for the remove-duplicates function. See [#149](https://github.com/mapzen/TileStache/pull/149).
+* Add `is_empty` property for geometry proxy for Shapely's STRtree. See [#135](https://github.com/mapzen/TileStache/pull/135).
+* Delegate quantization to mapbox-vector-tile. See [#141](https://github.com/mapzen/TileStache/pull/141).
+
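The height/elevation conversions listed above can be illustrated with a standalone sketch; `to_float_meters` and the accepted input formats are assumptions for illustration, not the project's actual transform:

```python
import re

def to_float_meters(value):
    """Parse a height-like tag value into meters; return None on failure."""
    if value is None:
        return None
    value = str(value).strip()
    # plain number: assume it is already meters
    try:
        return float(value)
    except ValueError:
        pass
    # explicit meters suffix, e.g. "10 m"
    m = re.match(r'^([0-9.]+)\s*m$', value)
    if m:
        return float(m.group(1))
    # imperial feet/inches, e.g. "5' 11\""
    m = re.match(r"^([0-9.]+)\s*'(?:\s*([0-9.]+)\s*\")?$", value)
    if m:
        feet = float(m.group(1))
        inches = float(m.group(2)) if m.group(2) else 0.0
        return (feet + inches / 12.0) * 0.3048
    return None
```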
+v0.9.0
+------
+* After merging, simplify with a very small tolerance to remove duplicate and almost-collinear points
+* Add transform to update parenthetical properties
+* Add ability to drop parenthetical features below some given zoom level
+* Add function to add construction state to stations
+* Adjust tile rank score for stations to take into account the different types of routes
+* Remove temporary properties which shouldn't be public
+* Add 'root_relation_id' for linking related features together in a site or public transport 'stop area' or 'stop area group'
+* Remove now unused landuse kind mapping transform
+* Add uic_ref transform
+
+v0.8.0
+------
+* Allow code in `drop_features_where` function.
+* Add bounding box clipping to exterior boundaries transform to improve water boundary performance.
+* Add kind normalisation functions for social facilities and medical features.
+* Update road sort key value function.
+* Add CSV spreadsheet property matching functions to provide sort_key value lookups for landuse, roads, and most other layers.
+* Use more precision for json formatter on z16 and higher.
+
+v0.7.0
+------
+
+* Add function to normalise tourism kind and related properties.
+* Add function to drop properties from features under some configurable set of conditions.
+* Implement merging for linear features.
+* Add code to normalise leisure kinds for fitness-related POIs.
+* Add transform to include aeroway tag for `kind=gate`.
+
+v0.6.0
+------
+* Ensure that the `population` property, if present, is an integer.
+* Add IATA short (3-character) codes to airports.
+* Interpret road kinds for pistes, motorway junctions, racetracks and piers.
+* Normalise `is_link`, `is_tunnel` and `is_bridge` so that they are not present in the negative; each should only be present when its value is positive.
+* Re-order aerialways to be above all road types, including bridges, unless overridden by a `layer` property.
+* Add sort order properties for beaches and winter sports areas.
+* Add a function to remove abandoned pistes from the output.
+
+v0.5.1
+------
+* Protect against empty snapped geometries - resolves segfault
+
+v0.5.0
+------
+* Update sorts for `pois` and `places`
+* Update `kind` calculation for roads to include `aerialway`s
+* Add volume calculation transform
+* Update road sort key calculation
+* Remove `scalerank` for neighbourhoods
+* Allow exterior boundaries to be snapped to a grid
+* Improve exterior boundaries processing
+* Add post processing functions:
+ - generate address points from buildings
+ - drop certain features
+ - drop certain feature properties
+ - remove features with zero area
+ - remove duplicate features
+ - normalize duplicate stations
+ - only keep the first N features matching a criterion
+ - rank features based on a key
+ - normalize `aerialway`s
+ - numeric min filter
+ - copy features across `layers`
+ - replace geometry with a representative point
+
+v0.4.1
+------
+* Make admin boundaries post-processing filter work with boundary linestring fragments rather than needing an oriented polygon.
+
+v0.4.0
+------
+* Add transform to dynamically create polygon `label_placement` features using a representative point for existing layers (or exported in a new layer), as long as the input feature has a `name` or `sport` tag.
+* Ported PostGIS SQL logic from Vector Datasource to Python, with spatial intersection logic for `maritime` boundaries.
+* Use the input data's `kind` property, when provided, to determine a building feature's kind.
+* Add `ferry` lines to high roads logic.
+
+v0.3.0
+------
+* Add intracut (intersection) algorithm for cutting within layers.
+* Add smarts for dealing with maritime boundary attributes
+* Add transform for water `tunnel`s
+
+v0.2.0
+------
+* Stable
+
+--------------------------------------------------------------------------------
+
+Pre-fork changelog
+------------------
+
+2014-05-10: 1.49.10
+- Fixed Travis build.
+- Fixed import errors for case-insensitive filesystems.
+- Added TileStache Vagrant machine configuration.
+- Fixed some memcache testing problems.
+
+2014-05-10: 1.49.9
+- Moved everything to PyPI and fixed VERSION kerfuffle.
+
2013-07-02: 1.49.8
- Dropped Proxy provider srs=900913 check.
- Updated most JSON mime-types from text/json to application/json.
diff --git a/Makefile b/Makefile
index 11926d12..75d97120 100644
--- a/Makefile
+++ b/Makefile
@@ -1,111 +1,59 @@
-VERSION:=$(shell cat VERSION)
-PACKAGE=TileStache-$(VERSION)
-TARBALL=$(PACKAGE).tar.gz
+VERSION:=$(shell cat TileStache/VERSION)
DOCROOT=tilestache.org:public_html/tilestache/www
-all: $(TARBALL)
- #
-
-live: $(TARBALL) doc
- scp $(TARBALL) $(DOCROOT)/download/
+live: doc
rsync -Cr doc/ $(DOCROOT)/doc/
- python setup.py register
-
-$(TARBALL): doc
- mkdir $(PACKAGE)
- ln setup.py $(PACKAGE)/
- ln README.md $(PACKAGE)/
- ln VERSION $(PACKAGE)/
- ln tilestache.cfg $(PACKAGE)/
- ln tilestache.cgi $(PACKAGE)/
-
- mkdir $(PACKAGE)/TileStache
- ln TileStache/*.py $(PACKAGE)/TileStache/
-
- rm $(PACKAGE)/TileStache/__init__.py
- cp TileStache/__init__.py $(PACKAGE)/TileStache/__init__.py
- perl -pi -e 's#\bN\.N\.N\b#$(VERSION)#' $(PACKAGE)/TileStache/__init__.py
-
- mkdir $(PACKAGE)/TileStache/Vector
- ln TileStache/Vector/*.py $(PACKAGE)/TileStache/Vector/
-
- mkdir $(PACKAGE)/TileStache/Goodies
- ln TileStache/Goodies/*.py $(PACKAGE)/TileStache/Goodies/
-
- mkdir $(PACKAGE)/TileStache/Goodies/Caches
- ln TileStache/Goodies/Caches/*.py $(PACKAGE)/TileStache/Goodies/Caches/
-
- mkdir $(PACKAGE)/TileStache/Goodies/Providers
- ln TileStache/Goodies/Providers/*.py $(PACKAGE)/TileStache/Goodies/Providers/
- ln TileStache/Goodies/Providers/*.ttf $(PACKAGE)/TileStache/Goodies/Providers/
-
- mkdir $(PACKAGE)/TileStache/Goodies/VecTiles
- ln TileStache/Goodies/VecTiles/*.py $(PACKAGE)/TileStache/Goodies/VecTiles/
-
- mkdir $(PACKAGE)/scripts
- ln scripts/*.py $(PACKAGE)/scripts/
-
- mkdir $(PACKAGE)/examples
- #ln examples/*.py $(PACKAGE)/examples/
-
- mkdir $(PACKAGE)/doc
- ln doc/*.html $(PACKAGE)/doc/
-
- mkdir $(PACKAGE)/man
- ln man/*.1 $(PACKAGE)/man/
-
- tar -czf $(TARBALL) $(PACKAGE)
- rm -rf $(PACKAGE)
+ python setup.py sdist upload
doc:
mkdir doc
- pydoc -w TileStache
- pydoc -w TileStache.Core
- pydoc -w TileStache.Caches
- pydoc -w TileStache.Memcache
- pydoc -w TileStache.Redis
- pydoc -w TileStache.S3
- pydoc -w TileStache.Config
- pydoc -w TileStache.Vector
- pydoc -w TileStache.Vector.Arc
- pydoc -w TileStache.Geography
- pydoc -w TileStache.Providers
- pydoc -w TileStache.Mapnik
- pydoc -w TileStache.MBTiles
- pydoc -w TileStache.Sandwich
- pydoc -w TileStache.Pixels
- pydoc -w TileStache.Goodies
- pydoc -w TileStache.Goodies.Caches
- pydoc -w TileStache.Goodies.Caches.LimitedDisk
- pydoc -w TileStache.Goodies.Caches.GoogleCloud
- pydoc -w TileStache.Goodies.Providers
- pydoc -w TileStache.Goodies.Providers.Composite
- pydoc -w TileStache.Goodies.Providers.Cascadenik
- pydoc -w TileStache.Goodies.Providers.PostGeoJSON
- pydoc -w TileStache.Goodies.Providers.SolrGeoJSON
- pydoc -w TileStache.Goodies.Providers.MapnikGrid
- pydoc -w TileStache.Goodies.Providers.MirrorOSM
- pydoc -w TileStache.Goodies.Providers.Monkeycache
- pydoc -w TileStache.Goodies.Providers.UtfGridComposite
- pydoc -w TileStache.Goodies.Providers.UtfGridCompositeOverlap
- pydoc -w TileStache.Goodies.Providers.TileDataOSM
- pydoc -w TileStache.Goodies.Providers.Grid
- pydoc -w TileStache.Goodies.Providers.GDAL
- pydoc -w TileStache.Goodies.AreaServer
- pydoc -w TileStache.Goodies.StatusServer
- pydoc -w TileStache.Goodies.Proj4Projection
- pydoc -w TileStache.Goodies.ExternalConfigServer
- pydoc -w TileStache.Goodies.VecTiles
- pydoc -w TileStache.Goodies.VecTiles.server
- pydoc -w TileStache.Goodies.VecTiles.client
- pydoc -w TileStache.Goodies.VecTiles.geojson
- pydoc -w TileStache.Goodies.VecTiles.topojson
- pydoc -w TileStache.Goodies.VecTiles.mvt
- pydoc -w TileStache.Goodies.VecTiles.wkb
- pydoc -w TileStache.Goodies.VecTiles.ops
-
- pydoc -w scripts/tilestache-*.py
+ python -m pydoc -w TileStache
+ python -m pydoc -w TileStache.Core
+ python -m pydoc -w TileStache.Caches
+ python -m pydoc -w TileStache.Memcache
+ python -m pydoc -w TileStache.Redis
+ python -m pydoc -w TileStache.S3
+ python -m pydoc -w TileStache.Config
+ python -m pydoc -w TileStache.Vector
+ python -m pydoc -w TileStache.Vector.Arc
+ python -m pydoc -w TileStache.Geography
+ python -m pydoc -w TileStache.Providers
+ python -m pydoc -w TileStache.Mapnik
+ python -m pydoc -w TileStache.MBTiles
+ python -m pydoc -w TileStache.Sandwich
+ python -m pydoc -w TileStache.Pixels
+ python -m pydoc -w TileStache.Goodies
+ python -m pydoc -w TileStache.Goodies.Caches
+ python -m pydoc -w TileStache.Goodies.Caches.LimitedDisk
+ python -m pydoc -w TileStache.Goodies.Caches.GoogleCloud
+ python -m pydoc -w TileStache.Goodies.Providers
+ python -m pydoc -w TileStache.Goodies.Providers.Composite
+ python -m pydoc -w TileStache.Goodies.Providers.Cascadenik
+ python -m pydoc -w TileStache.Goodies.Providers.PostGeoJSON
+ python -m pydoc -w TileStache.Goodies.Providers.SolrGeoJSON
+ python -m pydoc -w TileStache.Goodies.Providers.MapnikGrid
+ python -m pydoc -w TileStache.Goodies.Providers.MirrorOSM
+ python -m pydoc -w TileStache.Goodies.Providers.Monkeycache
+ python -m pydoc -w TileStache.Goodies.Providers.UtfGridComposite
+ python -m pydoc -w TileStache.Goodies.Providers.UtfGridCompositeOverlap
+ python -m pydoc -w TileStache.Goodies.Providers.TileDataOSM
+ python -m pydoc -w TileStache.Goodies.Providers.Grid
+ python -m pydoc -w TileStache.Goodies.Providers.GDAL
+ python -m pydoc -w TileStache.Goodies.AreaServer
+ python -m pydoc -w TileStache.Goodies.StatusServer
+ python -m pydoc -w TileStache.Goodies.Proj4Projection
+ python -m pydoc -w TileStache.Goodies.ExternalConfigServer
+ python -m pydoc -w TileStache.Goodies.VecTiles
+ python -m pydoc -w TileStache.Goodies.VecTiles.server
+ python -m pydoc -w TileStache.Goodies.VecTiles.client
+ python -m pydoc -w TileStache.Goodies.VecTiles.geojson
+ python -m pydoc -w TileStache.Goodies.VecTiles.topojson
+ python -m pydoc -w TileStache.Goodies.VecTiles.mvt
+ python -m pydoc -w TileStache.Goodies.VecTiles.wkb
+ python -m pydoc -w TileStache.Goodies.VecTiles.ops
+
+ python -m pydoc -w scripts/tilestache-*.py
mv TileStache.html doc/
mv TileStache.*.html doc/
@@ -119,4 +67,4 @@ doc:
clean:
find TileStache -name '*.pyc' -delete
- rm -rf $(TARBALL) doc
+ rm -rf doc
diff --git a/README.md b/README.md
index 38e07a80..96310d7a 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,12 @@
-#TileStache
+# Note: This repository is deprecated and no longer supported.
+
+To serve tiles, please have a look at our [documentation for getting started](https://github.com/tilezen/vector-datasource/wiki/Mapzen-Vector-Tile-Service).
+
+## TileStache
_a stylish alternative for caching your map tiles_
-[](https://travis-ci.org/migurski/TileStache)
+[](https://travis-ci.org/TileStache/TileStache)
**TileStache** is a Python-based server application that can serve up map tiles
based on rendered geographic data. You might be familiar with [TileCache](http://tilecache.org),
@@ -52,7 +56,7 @@ Install the pure python modules with pip:
Install pip (http://www.pip-installer.org/) like:
- curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py
+ curl -O -L https://raw.github.com/pypa/pip/master/contrib/get-pip.py
sudo python get-pip.py
Install Mapnik via instructions at:
diff --git a/TileStache/Config.py b/TileStache/Config.py
index 456ceaef..9ce2d665 100644
--- a/TileStache/Config.py
+++ b/TileStache/Config.py
@@ -124,6 +124,10 @@ def __init__(self, cache, dirpath):
self.dirpath = dirpath
self.layers = {}
+ # add a custom layer so the MultiProvider can serve comma-separated layer names
+ self.custom_layer_name = ","
+ self.custom_layer_dict = {'provider': {'class': 'TileStache.Goodies.VecTiles:MultiProvider', 'kwargs': {'names': []}}}
+
self.index = 'text/plain', 'TileStache bellows hello.'
class Bounds:
@@ -204,7 +208,7 @@ def buildConfiguration(config_dict, dirpath='.'):
URL including the "file://" prefix.
"""
scheme, h, path, p, q, f = urlparse(dirpath)
-
+
if scheme in ('', 'file'):
sys.path.insert(0, path)
@@ -216,6 +220,8 @@ def buildConfiguration(config_dict, dirpath='.'):
for (name, layer_dict) in config_dict.get('layers', {}).items():
config.layers[name] = _parseConfigfileLayer(layer_dict, config, dirpath)
+ config.layers[config.custom_layer_name] = _parseConfigfileLayer(config.custom_layer_dict, config, dirpath)
+
if 'index' in config_dict:
index_href = urljoin(dirpath, config_dict['index'])
index_body = urlopen(index_href).read()
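The hard-coded `","` layer added in the Config.py change above acts as a catch-all entry so that a comma-joined layer list can be served by a MultiProvider. A standalone sketch of the routing idea (names and structure are illustrative, not TileStache's real request path):

```python
def resolve_layer(layers, requested):
    """Return (layer, names) for a request like 'water' or 'water,roads'."""
    if ',' in requested:
        names = requested.split(',')
        missing = [n for n in names if n not in layers]
        if missing:
            raise KeyError('unknown layers: %s' % ', '.join(missing))
        # the special ',' entry stands in for the MultiProvider layer
        return layers[','], names
    return layers[requested], [requested]

# hypothetical registry mirroring config.layers
layers = {'water': 'water-layer', 'roads': 'roads-layer', ',': 'multi-layer'}
```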
diff --git a/TileStache/Core.py b/TileStache/Core.py
index f9be3ccf..6013588c 100644
--- a/TileStache/Core.py
+++ b/TileStache/Core.py
@@ -347,13 +347,14 @@ def name(self):
return None
- def getTileResponse(self, coord, extension, ignore_cached=False):
+ def getTileResponse(self, coord, extension, ignore_cached=False, suppress_cache_write=False):
""" Get status code, headers, and a tile binary for a given request layer tile.
Arguments:
- coord: one ModestMaps.Core.Coordinate corresponding to a single tile.
- extension: filename extension to choose response type, e.g. "png" or "jpg".
- ignore_cached: always re-render the tile, whether it's in the cache or not.
+ - suppress_cache_write: don't save the tile to the cache
This is the main entry point, after site configuration has been loaded
and individual tiles need to be rendered.
@@ -393,7 +394,7 @@ def getTileResponse(self, coord, extension, ignore_cached=False):
try:
lockCoord = None
- if self.write_cache:
+ if (not suppress_cache_write) and self.write_cache:
# this is the coordinate that actually gets locked.
lockCoord = self.metatile.firstCoord(coord)
@@ -417,9 +418,9 @@ def getTileResponse(self, coord, extension, ignore_cached=False):
tile = e.tile
save = False
- if not self.write_cache:
+ if suppress_cache_write or (not self.write_cache):
save = False
-
+
if format.lower() == 'jpeg':
save_kwargs = self.jpeg_options
elif format.lower() == 'png':
@@ -655,6 +656,8 @@ def setSaveOptionsPNG(self, optimize=None, palette=None, palette256=None):
if palette256 is not None:
self.palette256 = bool(palette256)
+ else:
+ self.palette256 = None
class KnownUnknown(Exception):
""" There are known unknowns. That is to say, there are things that we now know we don't know.
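The `suppress_cache_write` gating added in the Core.py change above reduces to a single condition; a standalone sketch of it (the helper name is illustrative, not TileStache's API):

```python
def should_write_cache(write_cache, suppress_cache_write):
    """Mirror the condition guarding cache writes in getTileResponse:
    the tile is saved only when the layer allows cache writes AND the
    caller did not ask to suppress them for this request."""
    return (not suppress_cache_write) and write_cache
```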
diff --git a/TileStache/Goodies/VecTiles/OSciMap4/GeomEncoder.py b/TileStache/Goodies/VecTiles/OSciMap4/GeomEncoder.py
new file mode 100644
index 00000000..d3b05929
--- /dev/null
+++ b/TileStache/Goodies/VecTiles/OSciMap4/GeomEncoder.py
@@ -0,0 +1,353 @@
+
+################################################################################
+# Copyright (c) QinetiQ Plc 2003
+#
+# Licensed under the LGPL. For full license details see the LICENSE file.
+################################################################################
+
+"""
+A parser for the Well-Known Binary (WKB) format of OpenGIS types.
+"""
+#
+# 2.5d spec: http://gdal.velocet.ca/projects/opengis/twohalfdsf.html
+#
+
+import sys, traceback, struct
+
+
+# based on xdrlib.Unpacker
+class _ExtendedUnPacker:
+ """
+ A simple binary struct parser; it only implements the types that are needed for the WKB format.
+ """
+
+ def __init__(self,data):
+ self.reset(data)
+ self.setEndianness('XDR')
+
+ def reset(self, data):
+ self.__buf = data
+ self.__pos = 0
+
+ def get_position(self):
+ return self.__pos
+
+ def set_position(self, position):
+ self.__pos = position
+
+ def get_buffer(self):
+ return self.__buf
+
+ def done(self):
+ if self.__pos < len(self.__buf):
+ raise ExceptionWKBParser('unextracted data remains')
+
+ def setEndianness(self,endianness):
+ if endianness == 'XDR':
+ self._endflag = '>'
+ elif endianness == 'NDR':
+ self._endflag = '<'
+ else:
+ raise ExceptionWKBParser('Attempt to set unknown endianness in ExtendedUnPacker')
+
+ def unpack_byte(self):
+ i = self.__pos
+ self.__pos = j = i+1
+ data = self.__buf[i:j]
+ if len(data) < 1:
+ raise EOFError
+ byte = struct.unpack('%sB' % self._endflag, data)[0]
+ return byte
+
+ def unpack_uint32(self):
+ i = self.__pos
+ self.__pos = j = i+4
+ data = self.__buf[i:j]
+ if len(data) < 4:
+ raise EOFError
+ uint32 = struct.unpack('%sI' % self._endflag, data)[0]
+ return uint32
+
+ def unpack_short(self):
+ i = self.__pos
+ self.__pos = j = i+2
+ data = self.__buf[i:j]
+ if len(data) < 2:
+ raise EOFError
+ short = struct.unpack('%sH' % self._endflag, data)[0]
+ return short
+
+ def unpack_double(self):
+ i = self.__pos
+ self.__pos = j = i+8
+ data = self.__buf[i:j]
+ if len(data) < 8:
+ raise EOFError
+ return struct.unpack('%sd' % self._endflag, data)[0]
+
+class ExceptionWKBParser(Exception):
+ '''This is the WKB Parser Exception class.'''
+ def __init__(self, value):
+ self.value = value
+ def __str__(self):
+ return repr(self.value)
+
+class GeomEncoder:
+
+ _count = 0
+
+ def __init__(self, tileSize):
+ """
+ Initialise a new WKBParser.
+
+ """
+
+ self._typemap = {1: self.parsePoint,
+ 2: self.parseLineString,
+ 3: self.parsePolygon,
+ 4: self.parseMultiPoint,
+ 5: self.parseMultiLineString,
+ 6: self.parseMultiPolygon,
+ 7: self.parseGeometryCollection}
+ self.coordinates = []
+ self.index = []
+ self.position = 0
+ self.lastX = 0
+ self.lastY = 0
+ self.dropped = 0
+ self.num_points = 0
+ self.isPoint = True
+ self.tileSize = tileSize - 1
+ self.first = True
+
+ def parseGeometry(self, geometry):
+
+
+ """
+ A factory method for creating objects of the correct OpenGIS type.
+ """
+
+ self.coordinates = []
+ self.index = []
+ self.position = 0
+ self.lastX = 0
+ self.lastY = 0
+ self.isPoly = False
+ self.isPoint = True;
+ self.dropped = 0;
+ self.first = True
+ # Used for exception strings
+ self._current_string = geometry
+
+ reader = _ExtendedUnPacker(geometry)
+
+ # Start the parsing
+ self._dispatchNextType(reader)
+
+
+ def _dispatchNextType(self,reader):
+ """
+ Read a type id from the binary stream (reader) and call the correct method to parse it.
+ """
+
+ # Need to check endianess here!
+ endianness = reader.unpack_byte()
+ if endianness == 0:
+ reader.setEndianness('XDR')
+ elif endianness == 1:
+ reader.setEndianness('NDR')
+ else:
+ raise ExceptionWKBParser("Invalid endianness in WKB format.\n"\
+ "The parser can only cope with XDR/big endian WKB format.\n"\
+ "To force the WKB format to be in XDR use AsBinary(,'XDR')")
+
+
+ geotype = reader.unpack_uint32()
+
+ mask = geotype & 0x80000000 # This is used to mask off the dimension flag.
+
+ srid = geotype & 0x20000000
+ # ignore srid ...
+ if srid != 0:
+ reader.unpack_uint32()
+
+ dimensions = 2
+ if mask == 0:
+ dimensions = 2
+ else:
+ dimensions = 3
+
+ geotype = geotype & 0x1FFFFFFF
+ # Dispatch to a method based on the type id.
+ if self._typemap.has_key(geotype):
+ self._typemap[geotype](reader, dimensions)
+ else:
+ raise ExceptionWKBParser('Error type to dispatch with geotype = %s \n'\
+ 'Invalid geometry in WKB string: %s' % (str(geotype),
+ str(self._current_string),))
+
+ def parseGeometryCollection(self, reader, dimension):
+ try:
+ num_geoms = reader.unpack_uint32()
+
+ for _ in xrange(0,num_geoms):
+ self._dispatchNextType(reader)
+
+ except:
+ etype, value, tb = sys.exc_info()[:3]
+ error = ("%s , %s \n" % (etype, value))
+ for bits in traceback.format_exception(etype, value, tb):
+ error = error + bits + '\n'
+ del tb
+ raise ExceptionWKBParser("Caught unhandled exception parsing GeometryCollection: %s \n"\
+ "Traceback: %s\n" % (str(self._current_string),error))
+
+
+ def parseLineString(self, reader, dimensions):
+ self.isPoint = False;
+ try:
+ num_points = reader.unpack_uint32()
+
+ self.num_points = 0;
+
+ for _ in xrange(0,num_points):
+ self.parsePoint(reader,dimensions)
+
+ self.index.append(self.num_points)
+ #self.lastX = 0
+ #self.lastY = 0
+ self.first = True
+
+ except:
+ etype, value, tb = sys.exc_info()[:3]
+ error = ("%s , %s \n" % (etype, value))
+ for bits in traceback.format_exception(etype, value, tb):
+ error = error + bits + '\n'
+ del tb
+ print error
+ raise ExceptionWKBParser("Caught unhandled exception parsing Linestring: %s \n"\
+ "Traceback: %s\n" % (str(self._current_string),error))
+
+
+ def parseMultiLineString(self, reader, dimensions):
+ try:
+ num_linestrings = reader.unpack_uint32()
+
+ for _ in xrange(0,num_linestrings):
+ self._dispatchNextType(reader)
+
+ except:
+ etype, value, tb = sys.exc_info()[:3]
+ error = ("%s , %s \n" % (etype, value))
+ for bits in traceback.format_exception(etype, value, tb):
+ error = error + bits + '\n'
+ del tb
+ raise ExceptionWKBParser("Caught unhandled exception parsing MultiLineString: %s \n"\
+ "Traceback: %s\n" % (str(self._current_string),error))
+
+
+ def parseMultiPoint(self, reader, dimensions):
+ try:
+ num_points = reader.unpack_uint32()
+
+ for _ in xrange(0,num_points):
+ self._dispatchNextType(reader)
+ except:
+ etype, value, tb = sys.exc_info()[:3]
+ error = ("%s , %s \n" % (etype, value))
+ for bits in traceback.format_exception(etype, value, tb):
+ error = error + bits + '\n'
+ del tb
+ raise ExceptionWKBParser("Caught unhandled exception parsing MultiPoint: %s \n"\
+ "Traceback: %s\n" % (str(self._current_string),error))
+
+
+ def parseMultiPolygon(self, reader, dimensions):
+ try:
+ num_polygons = reader.unpack_uint32()
+ for n in xrange(0,num_polygons):
+ if n > 0:
+ self.index.append(0);
+
+ self._dispatchNextType(reader)
+ except:
+ etype, value, tb = sys.exc_info()[:3]
+ error = ("%s , %s \n" % (etype, value))
+ for bits in traceback.format_exception(etype, value, tb):
+ error = error + bits + '\n'
+ del tb
+ raise ExceptionWKBParser("Caught unhandled exception parsing MultiPolygon: %s \n"\
+ "Traceback: %s\n" % (str(self._current_string),error))
+
+
+ def parsePoint(self, reader, dimensions):
+ x = reader.unpack_double()
+ y = reader.unpack_double()
+
+ if dimensions == 3:
+ reader.unpack_double()
+
+ xx = int(round(x))
+ # flip upside down
+ yy = self.tileSize - int(round(y))
+
+ if self.first or xx - self.lastX != 0 or yy - self.lastY != 0:
+ self.coordinates.append(xx - self.lastX)
+ self.coordinates.append(yy - self.lastY)
+ self.num_points += 1
+ else:
+ self.dropped += 1;
+
+ self.first = False
+ self.lastX = xx
+ self.lastY = yy
+
+
+ def parsePolygon(self, reader, dimensions):
+ self.isPoint = False;
+ try:
+ num_rings = reader.unpack_uint32()
+
+ for _ in xrange(0,num_rings):
+ self.parseLinearRing(reader,dimensions)
+
+ self.isPoly = True
+
+ except:
+ etype, value, tb = sys.exc_info()[:3]
+ error = ("%s , %s \n" % (etype, value))
+ for bits in traceback.format_exception(etype, value, tb):
+ error = error + bits + '\n'
+ del tb
+ raise ExceptionWKBParser("Caught unhandled exception parsing Polygon: %s \n"\
+ "Traceback: %s\n" % (str(self._current_string),error))
+
+ def parseLinearRing(self, reader, dimensions):
+ self.isPoint = False;
+ try:
+ num_points = reader.unpack_uint32()
+
+ self.num_points = 0;
+
+ # read all but the last point
+ for _ in xrange(0,num_points-1):
+ self.parsePoint(reader,dimensions)
+
+ # skip the last point
+ reader.unpack_double()
+ reader.unpack_double()
+ if dimensions == 3:
+ reader.unpack_double()
+
+ self.index.append(self.num_points)
+
+ self.first = True
+
+ except:
+ etype, value, tb = sys.exc_info()[:3]
+ error = ("%s , %s \n" % (etype, value))
+ for bits in traceback.format_exception(etype, value, tb):
+ error = error + bits + '\n'
+ del tb
+ raise ExceptionWKBParser("Caught unhandled exception parsing LinearRing: %s \n"\
+ "Traceback: %s\n" % (str(self._current_string),error))
\ No newline at end of file
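The parser above reads a one-byte byte-order flag, a uint32 geometry type (with SRID/dimension flags masked off), and then coordinate doubles, delta-encoding each point against the previous one with the y axis flipped. A standalone round-trip sketch of that layout for a little-endian (NDR) LineString (duplicate-point dropping omitted; not the module's API):

```python
import struct

def pack_wkb_linestring(points):
    # NDR header: byte order 1, geometry type 2 (LineString), point count
    buf = struct.pack('<BII', 1, 2, len(points))
    for x, y in points:
        buf += struct.pack('<dd', x, y)
    return buf

def decode_deltas(wkb, tile_size):
    """Decode a NDR WKB LineString into delta-encoded tile coordinates."""
    (byte_order,) = struct.unpack_from('<B', wkb, 0)
    assert byte_order == 1  # NDR / little-endian only in this sketch
    geotype, num = struct.unpack_from('<II', wkb, 1)
    assert geotype & 0x1FFFFFFF == 2  # LineString
    coords, last_x, last_y = [], 0, 0
    for i in range(num):
        x, y = struct.unpack_from('<dd', wkb, 9 + 16 * i)
        xx = int(round(x))
        yy = (tile_size - 1) - int(round(y))  # flip upside down
        coords += [xx - last_x, yy - last_y]
        last_x, last_y = xx, yy
    return coords
```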
diff --git a/TileStache/Goodies/VecTiles/OSciMap4/StaticKeys/__init__.py b/TileStache/Goodies/VecTiles/OSciMap4/StaticKeys/__init__.py
new file mode 100644
index 00000000..9a594591
--- /dev/null
+++ b/TileStache/Goodies/VecTiles/OSciMap4/StaticKeys/__init__.py
@@ -0,0 +1,82 @@
+'''
+Created on Sep 13, 2012
+
+@author: jeff
+'''
+
+strings = [
+'access',
+'addr:housename',
+'addr:housenumber',
+'addr:interpolation',
+'admin_level',
+'aerialway',
+'aeroway',
+'amenity',
+'area',
+'barrier',
+'bicycle',
+'brand',
+'bridge',
+'boundary',
+'building',
+'construction',
+'covered',
+'culvert',
+'cutting',
+'denomination',
+'disused',
+'embankment',
+'foot',
+'generator:source',
+'harbour',
+'highway',
+'historic',
+'horse',
+'intermittent',
+'junction',
+'landuse',
+'layer',
+'leisure',
+'lock',
+'man_made',
+'military',
+'motorcar',
+'name',
+'natural',
+'oneway',
+'operator',
+'population',
+'power',
+'power_source',
+'place',
+'railway',
+'ref',
+'religion',
+'route',
+'service',
+'shop',
+'sport',
+'surface',
+'toll',
+'tourism',
+'tower:type',
+'tracktype',
+'tunnel',
+'water',
+'waterway',
+'wetland',
+'width',
+'wood',
+
+'height',
+'min_height',
+'roof:shape',
+'roof:height',
+'rank']
+
+keys = dict(zip(strings, range(0, len(strings))))
+
+def getKeys():
+ return keys
+
\ No newline at end of file
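The table above relies on `zip` to pair each key string with its index; since `zip` truncates to the shorter sequence, the `range` has to cover the whole list or the tail key is silently dropped. A standalone check with an abbreviated list:

```python
strings = ['access', 'addr:housename', 'height', 'rank']  # abbreviated

# zip truncates to the shorter sequence, so range(len(strings) - 1)
# silently leaves the last key ('rank') out of the table.
short = dict(zip(strings, range(len(strings) - 1)))
full = dict(zip(strings, range(len(strings))))
```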
diff --git a/TileStache/Goodies/VecTiles/OSciMap4/StaticVals/__init__.py b/TileStache/Goodies/VecTiles/OSciMap4/StaticVals/__init__.py
new file mode 100644
index 00000000..0c0689f2
--- /dev/null
+++ b/TileStache/Goodies/VecTiles/OSciMap4/StaticVals/__init__.py
@@ -0,0 +1,260 @@
+vals = {
+"yes" : 0,
+"residential" : 1,
+"service" : 2,
+"unclassified" : 3,
+"stream" : 4,
+"track" : 5,
+"water" : 6,
+"footway" : 7,
+"tertiary" : 8,
+"private" : 9,
+"tree" : 10,
+"path" : 11,
+"forest" : 12,
+"secondary" : 13,
+"house" : 14,
+"no" : 15,
+"asphalt" : 16,
+"wood" : 17,
+"grass" : 18,
+"paved" : 19,
+"primary" : 20,
+"unpaved" : 21,
+"bus_stop" : 22,
+"parking" : 23,
+"parking_aisle" : 24,
+"rail" : 25,
+"driveway" : 26,
+"8" : 27,
+"administrative" : 28,
+"locality" : 29,
+"turning_circle" : 30,
+"crossing" : 31,
+"village" : 32,
+"fence" : 33,
+"grade2" : 34,
+"coastline" : 35,
+"grade3" : 36,
+"farmland" : 37,
+"hamlet" : 38,
+"hut" : 39,
+"meadow" : 40,
+"wetland" : 41,
+"cycleway" : 42,
+"river" : 43,
+"school" : 44,
+"trunk" : 45,
+"gravel" : 46,
+"place_of_worship" : 47,
+"farm" : 48,
+"grade1" : 49,
+"traffic_signals" : 50,
+"wall" : 51,
+"garage" : 52,
+"gate" : 53,
+"motorway" : 54,
+"living_street" : 55,
+"pitch" : 56,
+"grade4" : 57,
+"industrial" : 58,
+"road" : 59,
+"ground" : 60,
+"scrub" : 61,
+"motorway_link" : 62,
+"steps" : 63,
+"ditch" : 64,
+"swimming_pool" : 65,
+"grade5" : 66,
+"park" : 67,
+"apartments" : 68,
+"restaurant" : 69,
+"designated" : 70,
+"bench" : 71,
+"survey_point" : 72,
+"pedestrian" : 73,
+"hedge" : 74,
+"reservoir" : 75,
+"riverbank" : 76,
+"alley" : 77,
+"farmyard" : 78,
+"peak" : 79,
+"level_crossing" : 80,
+"roof" : 81,
+"dirt" : 82,
+"drain" : 83,
+"garages" : 84,
+"entrance" : 85,
+"street_lamp" : 86,
+"deciduous" : 87,
+"fuel" : 88,
+"trunk_link" : 89,
+"information" : 90,
+"playground" : 91,
+"supermarket" : 92,
+"primary_link" : 93,
+"concrete" : 94,
+"mixed" : 95,
+"permissive" : 96,
+"orchard" : 97,
+"grave_yard" : 98,
+"canal" : 99,
+"garden" : 100,
+"spur" : 101,
+"paving_stones" : 102,
+"rock" : 103,
+"bollard" : 104,
+"convenience" : 105,
+"cemetery" : 106,
+"post_box" : 107,
+"commercial" : 108,
+"pier" : 109,
+"bank" : 110,
+"hotel" : 111,
+"cliff" : 112,
+"retail" : 113,
+"construction" : 114,
+"-1" : 115,
+"fast_food" : 116,
+"coniferous" : 117,
+"cafe" : 118,
+"6" : 119,
+"kindergarten" : 120,
+"tower" : 121,
+"hospital" : 122,
+"yard" : 123,
+"sand" : 124,
+"public_building" : 125,
+"cobblestone" : 126,
+"destination" : 127,
+"island" : 128,
+"abandoned" : 129,
+"vineyard" : 130,
+"recycling" : 131,
+"agricultural" : 132,
+"isolated_dwelling" : 133,
+"pharmacy" : 134,
+"post_office" : 135,
+"motorway_junction" : 136,
+"pub" : 137,
+"allotments" : 138,
+"dam" : 139,
+"secondary_link" : 140,
+"lift_gate" : 141,
+"siding" : 142,
+"stop" : 143,
+"main" : 144,
+"farm_auxiliary" : 145,
+"quarry" : 146,
+"10" : 147,
+"station" : 148,
+"platform" : 149,
+"taxiway" : 150,
+"limited" : 151,
+"sports_centre" : 152,
+"cutline" : 153,
+"detached" : 154,
+"storage_tank" : 155,
+"basin" : 156,
+"bicycle_parking" : 157,
+"telephone" : 158,
+"terrace" : 159,
+"town" : 160,
+"suburb" : 161,
+"bus" : 162,
+"compacted" : 163,
+"toilets" : 164,
+"heath" : 165,
+"works" : 166,
+"tram" : 167,
+"beach" : 168,
+"culvert" : 169,
+"fire_station" : 170,
+"recreation_ground" : 171,
+"bakery" : 172,
+"police" : 173,
+"atm" : 174,
+"clothes" : 175,
+"tertiary_link" : 176,
+"waste_basket" : 177,
+"attraction" : 178,
+"viewpoint" : 179,
+"bicycle" : 180,
+"church" : 181,
+"shelter" : 182,
+"drinking_water" : 183,
+"marsh" : 184,
+"picnic_site" : 185,
+"hairdresser" : 186,
+"bridleway" : 187,
+"retaining_wall" : 188,
+"buffer_stop" : 189,
+"nature_reserve" : 190,
+"village_green" : 191,
+"university" : 192,
+"1" : 193,
+"bar" : 194,
+"townhall" : 195,
+"mini_roundabout" : 196,
+"camp_site" : 197,
+"aerodrome" : 198,
+"stile" : 199,
+"9" : 200,
+"car_repair" : 201,
+"parking_space" : 202,
+"library" : 203,
+"pipeline" : 204,
+"true" : 205,
+"cycle_barrier" : 206,
+"4" : 207,
+"museum" : 208,
+"spring" : 209,
+"hunting_stand" : 210,
+"disused" : 211,
+"car" : 212,
+"tram_stop" : 213,
+"land" : 214,
+"fountain" : 215,
+"hiking" : 216,
+"manufacture" : 217,
+"vending_machine" : 218,
+"kiosk" : 219,
+"swamp" : 220,
+"unknown" : 221,
+"7" : 222,
+"islet" : 223,
+"shed" : 224,
+"switch" : 225,
+"rapids" : 226,
+"office" : 227,
+"bay" : 228,
+"proposed" : 229,
+"common" : 230,
+"weir" : 231,
+"grassland" : 232,
+"customers" : 233,
+"social_facility" : 234,
+"hangar" : 235,
+"doctors" : 236,
+"stadium" : 237,
+"give_way" : 238,
+"greenhouse" : 239,
+"guest_house" : 240,
+"viaduct" : 241,
+"doityourself" : 242,
+"runway" : 243,
+"bus_station" : 244,
+"water_tower" : 245,
+"golf_course" : 246,
+"conservation" : 247,
+"block" : 248,
+"college" : 249,
+"wastewater_plant" : 250,
+"subway" : 251,
+"halt" : 252,
+"forestry" : 253,
+"florist" : 254,
+"butcher" : 255}
+
+def getValues():
+ return vals
diff --git a/TileStache/Goodies/VecTiles/OSciMap4/TagRewrite/__init__.py b/TileStache/Goodies/VecTiles/OSciMap4/TagRewrite/__init__.py
new file mode 100644
index 00000000..3ce39679
--- /dev/null
+++ b/TileStache/Goodies/VecTiles/OSciMap4/TagRewrite/__init__.py
@@ -0,0 +1,109 @@
+import logging
+
+# TODO test the lua osm2pgsql for preprocessing !
+#
+# fix tags by looking them up in the wiki: where a value should be used with a
+# specific key, i.e. one key/value combination has a wiki page and more uses in
+# taginfo while the other does not
+# TODO add:
+# natural=>meadow
+# landuse=>greenhouse,public,scrub
+# aeroway=>aerobridge
+# leisure=>natural_reserve
+
+def fixTag(tag, zoomlevel):
+ drop = False
+
+ if tag[1] is None:
+ # bail out early: the .lower() calls below would fail on a None value
+ logging.debug('drop tag: %s %s' % (tag[0], tag[1]))
+ return None
+
+ key = tag[0].lower();
+
+ if key == 'highway':
+ # FIXME remove ; separated part of tags
+ return (key, tag[1].lower().split(';')[0])
+
+ # fixed in osm
+ #if key == 'leisure':
+ # value = tag[1].lower();
+ # if value in ('village_green', 'recreation_ground'):
+ # return ('landuse', value)
+ # else:
+ # return (key, value)
+
+ elif key == 'natural':
+ value = tag[1].lower();
+ #if zoomlevel <= 9 and not value in ('water', 'wood'):
+ # return None
+
+ if value in ('village_green', 'meadow'):
+ return ('landuse', value)
+ if value == 'mountain_range':
+ drop = True
+ else:
+ return (key, value)
+
+ elif key == 'landuse':
+ value = tag[1].lower();
+ #if zoomlevel <= 9 and not value in ('forest', 'military'):
+ # return None
+
+ # strange for natural_reserve: more common this way round...
+ if value in ('park', 'natural_reserve'):
+ return ('leisure', value)
+ elif value == 'field':
+ # wiki: Although this landuse is rendered by Mapnik, it is not an officially
+ # recognised OpenStreetMap tag. Please use landuse=farmland instead.
+ return (key, 'farmland')
+ elif value in ('grassland', 'scrub'):
+ return ('natural', value)
+ else:
+ return (key, value)
+
+ elif key == 'oneway':
+ value = tag[1].lower();
+ if value in ('yes', '1', 'true'):
+ return (key, 'yes')
+ else:
+ drop = True
+
+ elif key == 'area':
+ value = tag[1].lower();
+ if value in ('yes', '1', 'true'):
+ return (key, 'yes')
+ # might be used to indicate that a closed way is not an area
+ elif value == 'no':
+ return (key, 'no')
+ else:
+ drop = True
+
+ elif key == 'bridge':
+ value = tag[1].lower();
+ if value in ('yes', '1', 'true'):
+ return (key, 'yes')
+ elif value in ('no', '-1', '0', 'false'):
+ drop = True
+ else:
+ return (key, value)
+
+ elif key == 'tunnel':
+ value = tag[1].lower();
+ if value in ('yes', '1', 'true'):
+ return (key, 'yes')
+ elif value in ('no', '-1', '0', 'false'):
+ drop = True
+ else:
+ return (key, value)
+
+ elif key == 'water':
+ value = tag[1].lower();
+ if value == 'lake;pond':
+ return (key, 'pond')
+ else:
+ return (key, value)
+
+ if drop:
+ logging.debug('drop tag: %s %s' % (tag[0], tag[1]))
+ return None
+
+ return tag
+
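The boolean-style tags handled above (`oneway`, `area`, `bridge`, `tunnel`) all collapse OSM's truthy spellings into a canonical `yes` and drop falsy ones. A standalone sketch of that normalization (the function name is hypothetical, not part of the patch):

```python
def normalize_boolean_tag(key, value):
    """Collapse OSM truthy spellings ('yes', '1', 'true') to a canonical 'yes'."""
    value = value.lower()
    if value in ('yes', '1', 'true'):
        return (key, 'yes')
    if value in ('no', '0', '-1', 'false'):
        return None  # drop the tag entirely
    return (key, value)  # e.g. bridge=viaduct passes through unchanged

print(normalize_boolean_tag('bridge', 'TRUE'))  # ('bridge', 'yes')
```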
diff --git a/TileStache/Goodies/VecTiles/OSciMap4/TileData_v4.proto b/TileStache/Goodies/VecTiles/OSciMap4/TileData_v4.proto
new file mode 100644
index 00000000..183954d7
--- /dev/null
+++ b/TileStache/Goodies/VecTiles/OSciMap4/TileData_v4.proto
@@ -0,0 +1,92 @@
+// Protocol Version 4
+
+package org.oscim.database.oscimap4;
+
+message Data {
+ message Element {
+
+ // number of geometry 'indices'
+ optional uint32 num_indices = 1 [default = 1];
+
+ // number of 'tags'
+ optional uint32 num_tags = 2 [default = 1];
+
+ // elevation per coordinate
+ // (pixel relative to ground meters)
+ // optional bool has_elevation = 3 [default = false];
+
+ // reference to tile.tags
+ repeated uint32 tags = 11 [packed = true];
+
+ // A list of number of coordinates for each geometry.
+ // - polygons are separated by one '0' index
+ // - for single points this can be omitted.
+ // e.g 2,2 for two lines with two points each, or
+ // 4,3,0,4,3 for two polygons with four points in
+ // the outer ring and 3 points in the inner.
+
+ repeated uint32 indices = 12 [packed = true];
+
+ // single delta encoded coordinate x,y pairs scaled
+ // to a tile size of 4096
+ // note: geometries start at x,y = tile size / 2
+
+ repeated sint32 coordinates = 13 [packed = true];
+
+ //---------------- optional items ---------------
+ // osm layer [-5 .. 5] -> [0 .. 10]
+ optional uint32 layer = 21 [default = 5];
+
+ // intended for symbol and label placement, not used
+ //optional uint32 rank = 32 [packed = true];
+
+ // elevation per coordinate
+ // (pixel relative to ground meters)
+ // repeated sint32 elevation = 33 [packed = true];
+
+ // building height, precision 1/10m
+ //repeated sint32 height = 34 [packed = true];
+
+ // building height, precision 1/10m
+ //repeated sint32 min_height = 35 [packed = true];
+ }
+
+ required uint32 version = 1;
+
+ // tile creation time
+ optional uint64 timestamp = 2;
+
+ // tile is completely water (not used yet)
+ optional bool water = 3;
+
+ // number of 'tags'
+ required uint32 num_tags = 11;
+ optional uint32 num_keys = 12 [default = 0];
+ optional uint32 num_vals = 13 [default = 0];
+
+ // strings referenced by tags
+ repeated string keys = 14;
+ // separate common attributes from label to
+ // allow
+ repeated string values = 15;
+
+ // (key[0xfffffffc] | type[0x03]), value pairs
+ // key: uint32 -> reference to key-strings
+ // type 0: attribute -> uint32 reference to value-strings
+ // type 1: string -> uint32 reference to label-strings
+ // type 2: sint32
+ // type 3: float
+ // value: uint32 interpreted according to 'type'
+
+ repeated uint32 tags = 16 [packed = true];
+
+
+ // linestring
+ repeated Element lines = 21;
+
+ // polygons (MUST be implicitly closed)
+ repeated Element polygons = 22;
+
+ // points (POIs)
+ repeated Element points = 23;
+}
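The `coordinates` field above stores single-delta-encoded x,y pairs; a minimal Python sketch of the decoding side (illustrative only, not the project's decoder):

```python
def delta_decode(coords):
    """Turn delta-encoded x,y pairs back into absolute tile coordinates."""
    points, x, y = [], 0, 0
    for i in range(0, len(coords), 2):
        x += coords[i]
        y += coords[i + 1]
        points.append((x, y))
    return points

# two points: absolute (10, 10) then (14, 13), stored as deltas
print(delta_decode([10, 10, 4, 3]))  # [(10, 10), (14, 13)]
```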
diff --git a/TileStache/Goodies/VecTiles/OSciMap4/TileData_v4_pb2.py b/TileStache/Goodies/VecTiles/OSciMap4/TileData_v4_pb2.py
new file mode 100644
index 00000000..0b1ab288
--- /dev/null
+++ b/TileStache/Goodies/VecTiles/OSciMap4/TileData_v4_pb2.py
@@ -0,0 +1,203 @@
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+
+from google.protobuf import descriptor
+from google.protobuf import message
+from google.protobuf import reflection
+from google.protobuf import descriptor_pb2
+# @@protoc_insertion_point(imports)
+
+
+
+DESCRIPTOR = descriptor.FileDescriptor(
+ name='TileData_v4.proto',
+ package='org.oscim.database.oscimap4',
+ serialized_pb='\n\x11TileData_v4.proto\x12\x1borg.oscim.database.oscimap4\"\xe2\x03\n\x04\x44\x61ta\x12\x0f\n\x07version\x18\x01 \x02(\r\x12\x11\n\ttimestamp\x18\x02 \x01(\x04\x12\r\n\x05water\x18\x03 \x01(\x08\x12\x10\n\x08num_tags\x18\x0b \x02(\r\x12\x13\n\x08num_keys\x18\x0c \x01(\r:\x01\x30\x12\x13\n\x08num_vals\x18\r \x01(\r:\x01\x30\x12\x0c\n\x04keys\x18\x0e \x03(\t\x12\x0e\n\x06values\x18\x0f \x03(\t\x12\x10\n\x04tags\x18\x10 \x03(\rB\x02\x10\x01\x12\x38\n\x05lines\x18\x15 \x03(\x0b\x32).org.oscim.database.oscimap4.Data.Element\x12;\n\x08polygons\x18\x16 \x03(\x0b\x32).org.oscim.database.oscimap4.Data.Element\x12\x39\n\x06points\x18\x17 \x03(\x0b\x32).org.oscim.database.oscimap4.Data.Element\x1a\x88\x01\n\x07\x45lement\x12\x16\n\x0bnum_indices\x18\x01 \x01(\r:\x01\x31\x12\x13\n\x08num_tags\x18\x02 \x01(\r:\x01\x31\x12\x10\n\x04tags\x18\x0b \x03(\rB\x02\x10\x01\x12\x13\n\x07indices\x18\x0c \x03(\rB\x02\x10\x01\x12\x17\n\x0b\x63oordinates\x18\r \x03(\x11\x42\x02\x10\x01\x12\x10\n\x05layer\x18\x15 \x01(\r:\x01\x35')
+
+
+
+
+_DATA_ELEMENT = descriptor.Descriptor(
+ name='Element',
+ full_name='org.oscim.database.oscimap4.Data.Element',
+ filename=None,
+ file=DESCRIPTOR,
+ containing_type=None,
+ fields=[
+ descriptor.FieldDescriptor(
+ name='num_indices', full_name='org.oscim.database.oscimap4.Data.Element.num_indices', index=0,
+ number=1, type=13, cpp_type=3, label=1,
+ has_default_value=True, default_value=1,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=None),
+ descriptor.FieldDescriptor(
+ name='num_tags', full_name='org.oscim.database.oscimap4.Data.Element.num_tags', index=1,
+ number=2, type=13, cpp_type=3, label=1,
+ has_default_value=True, default_value=1,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=None),
+ descriptor.FieldDescriptor(
+ name='tags', full_name='org.oscim.database.oscimap4.Data.Element.tags', index=2,
+ number=11, type=13, cpp_type=3, label=3,
+ has_default_value=False, default_value=[],
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=descriptor._ParseOptions(descriptor_pb2.FieldOptions(), '\020\001')),
+ descriptor.FieldDescriptor(
+ name='indices', full_name='org.oscim.database.oscimap4.Data.Element.indices', index=3,
+ number=12, type=13, cpp_type=3, label=3,
+ has_default_value=False, default_value=[],
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=descriptor._ParseOptions(descriptor_pb2.FieldOptions(), '\020\001')),
+ descriptor.FieldDescriptor(
+ name='coordinates', full_name='org.oscim.database.oscimap4.Data.Element.coordinates', index=4,
+ number=13, type=17, cpp_type=1, label=3,
+ has_default_value=False, default_value=[],
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=descriptor._ParseOptions(descriptor_pb2.FieldOptions(), '\020\001')),
+ descriptor.FieldDescriptor(
+ name='layer', full_name='org.oscim.database.oscimap4.Data.Element.layer', index=5,
+ number=21, type=13, cpp_type=3, label=1,
+ has_default_value=True, default_value=5,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=None),
+ ],
+ extensions=[
+ ],
+ nested_types=[],
+ enum_types=[
+ ],
+ options=None,
+ is_extendable=False,
+ extension_ranges=[],
+ serialized_start=397,
+ serialized_end=533,
+)
+
+_DATA = descriptor.Descriptor(
+ name='Data',
+ full_name='org.oscim.database.oscimap4.Data',
+ filename=None,
+ file=DESCRIPTOR,
+ containing_type=None,
+ fields=[
+ descriptor.FieldDescriptor(
+ name='version', full_name='org.oscim.database.oscimap4.Data.version', index=0,
+ number=1, type=13, cpp_type=3, label=2,
+ has_default_value=False, default_value=0,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=None),
+ descriptor.FieldDescriptor(
+ name='timestamp', full_name='org.oscim.database.oscimap4.Data.timestamp', index=1,
+ number=2, type=4, cpp_type=4, label=1,
+ has_default_value=False, default_value=0,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=None),
+ descriptor.FieldDescriptor(
+ name='water', full_name='org.oscim.database.oscimap4.Data.water', index=2,
+ number=3, type=8, cpp_type=7, label=1,
+ has_default_value=False, default_value=False,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=None),
+ descriptor.FieldDescriptor(
+ name='num_tags', full_name='org.oscim.database.oscimap4.Data.num_tags', index=3,
+ number=11, type=13, cpp_type=3, label=2,
+ has_default_value=False, default_value=0,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=None),
+ descriptor.FieldDescriptor(
+ name='num_keys', full_name='org.oscim.database.oscimap4.Data.num_keys', index=4,
+ number=12, type=13, cpp_type=3, label=1,
+ has_default_value=True, default_value=0,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=None),
+ descriptor.FieldDescriptor(
+ name='num_vals', full_name='org.oscim.database.oscimap4.Data.num_vals', index=5,
+ number=13, type=13, cpp_type=3, label=1,
+ has_default_value=True, default_value=0,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=None),
+ descriptor.FieldDescriptor(
+ name='keys', full_name='org.oscim.database.oscimap4.Data.keys', index=6,
+ number=14, type=9, cpp_type=9, label=3,
+ has_default_value=False, default_value=[],
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=None),
+ descriptor.FieldDescriptor(
+ name='values', full_name='org.oscim.database.oscimap4.Data.values', index=7,
+ number=15, type=9, cpp_type=9, label=3,
+ has_default_value=False, default_value=[],
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=None),
+ descriptor.FieldDescriptor(
+ name='tags', full_name='org.oscim.database.oscimap4.Data.tags', index=8,
+ number=16, type=13, cpp_type=3, label=3,
+ has_default_value=False, default_value=[],
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=descriptor._ParseOptions(descriptor_pb2.FieldOptions(), '\020\001')),
+ descriptor.FieldDescriptor(
+ name='lines', full_name='org.oscim.database.oscimap4.Data.lines', index=9,
+ number=21, type=11, cpp_type=10, label=3,
+ has_default_value=False, default_value=[],
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=None),
+ descriptor.FieldDescriptor(
+ name='polygons', full_name='org.oscim.database.oscimap4.Data.polygons', index=10,
+ number=22, type=11, cpp_type=10, label=3,
+ has_default_value=False, default_value=[],
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=None),
+ descriptor.FieldDescriptor(
+ name='points', full_name='org.oscim.database.oscimap4.Data.points', index=11,
+ number=23, type=11, cpp_type=10, label=3,
+ has_default_value=False, default_value=[],
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=None),
+ ],
+ extensions=[
+ ],
+ nested_types=[_DATA_ELEMENT, ],
+ enum_types=[
+ ],
+ options=None,
+ is_extendable=False,
+ extension_ranges=[],
+ serialized_start=51,
+ serialized_end=533,
+)
+
+_DATA_ELEMENT.containing_type = _DATA;
+_DATA.fields_by_name['lines'].message_type = _DATA_ELEMENT
+_DATA.fields_by_name['polygons'].message_type = _DATA_ELEMENT
+_DATA.fields_by_name['points'].message_type = _DATA_ELEMENT
+DESCRIPTOR.message_types_by_name['Data'] = _DATA
+
+class Data(message.Message):
+ __metaclass__ = reflection.GeneratedProtocolMessageType
+
+ class Element(message.Message):
+ __metaclass__ = reflection.GeneratedProtocolMessageType
+ DESCRIPTOR = _DATA_ELEMENT
+
+ # @@protoc_insertion_point(class_scope:org.oscim.database.oscimap4.Data.Element)
+ DESCRIPTOR = _DATA
+
+ # @@protoc_insertion_point(class_scope:org.oscim.database.oscimap4.Data)
+
+# @@protoc_insertion_point(module_scope)
diff --git a/TileStache/Goodies/VecTiles/OSciMap4/__init__.py b/TileStache/Goodies/VecTiles/OSciMap4/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/TileStache/Goodies/VecTiles/OSciMap4/pbf_test.py b/TileStache/Goodies/VecTiles/OSciMap4/pbf_test.py
new file mode 100644
index 00000000..ef56ba43
--- /dev/null
+++ b/TileStache/Goodies/VecTiles/OSciMap4/pbf_test.py
@@ -0,0 +1,24 @@
+#!/usr/bin/env python
+
+#
+# protoc --python_out=. proto/TileData.proto
+#
+
+import sys
+import TileData_v4_pb2
+
+if __name__ == "__main__" :
+ if len(sys.argv) != 2 :
+ print>>sys.stderr, "Usage:", sys.argv[0], ""
+ sys.exit(1)
+
+ tile = TileData_v4_pb2.Data()
+
+ try:
+ f = open(sys.argv[1], "rb")
+ tile.ParseFromString(f.read()[4:])
+ f.close()
+ except IOError:
+ print sys.argv[1] + ": Could not open file."
+
+ print tile
\ No newline at end of file
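pbf_test.py skips the first four bytes of the file because OSciMap4 tiles are framed with a big-endian length prefix, matching the `struct.pack(">I", ...)` writes in `oscimap.py` below. A self-contained sketch of that framing:

```python
import struct

def frame(payload):
    """Prefix a tile payload with its 4-byte big-endian length."""
    return struct.pack('>I', len(payload)) + payload

def unframe(blob):
    """Recover the payload from a length-prefixed tile blob."""
    (size,) = struct.unpack('>I', blob[:4])
    return blob[4:4 + size]

print(unframe(frame(b'tiledata')))  # b'tiledata'
```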
diff --git a/TileStache/Goodies/VecTiles/__init__.py b/TileStache/Goodies/VecTiles/__init__.py
index d8761e73..808e6af9 100644
--- a/TileStache/Goodies/VecTiles/__init__.py
+++ b/TileStache/Goodies/VecTiles/__init__.py
@@ -1,9 +1,7 @@
''' VecTiles implements client and server support for efficient vector tiles.
VecTiles implements a TileStache Provider that returns tiles with contents
-simplified, precision reduced and often clipped. The MVT format in particular
-is designed for use in Mapnik with the VecTiles Datasource, which can read
-binary MVT tiles.
+simplified, precision reduced and often clipped.
VecTiles generates tiles in two JSON formats, GeoJSON and TopoJSON.
diff --git a/TileStache/Goodies/VecTiles/geojson.py b/TileStache/Goodies/VecTiles/geojson.py
index 66383889..5d18ff35 100644
--- a/TileStache/Goodies/VecTiles/geojson.py
+++ b/TileStache/Goodies/VecTiles/geojson.py
@@ -6,41 +6,15 @@
from shapely.wkb import loads
from shapely.geometry import asShape
-from ... import getTile
-from ...Core import KnownUnknown
from .ops import transform
float_pat = compile(r'^-?\d+\.\d+(e-?\d+)?$')
charfloat_pat = compile(r'^[\[,\,]-?\d+\.\d+(e-?\d+)?$')
# floating point lat/lon precision for each zoom level, good to ~1/4 pixel.
-precisions = [int(ceil(log(1 << zoom + 8) / log(10)) - 2) for zoom in range(19)]
-
- parts.extend([_pack('>I', len(wkb)), wkb, _pack('>I', len(prop)), prop])
-
- body = _compress(_pack('>I', len(features)) + ''.join(parts))
-
- file.write('\x89MVT')
- file.write(_pack('>I', len(body)))
- file.write(body)
-
-def _next_int(file):
- ''' Read the next big-endian 4-byte unsigned int from a file.
- '''
- return _unpack('!I', file.read(4))[0]
+ wkb, props, fid = feature
+ _features.append({
+ 'geometry': wkb,
+ 'properties': props,
+ 'id': fid,
+ })
+
+ return {
+ 'name': name or '',
+ 'features': _features
+ }
diff --git a/TileStache/Goodies/VecTiles/oscimap.py b/TileStache/Goodies/VecTiles/oscimap.py
new file mode 100644
index 00000000..7dfe6d93
--- /dev/null
+++ b/TileStache/Goodies/VecTiles/oscimap.py
@@ -0,0 +1,231 @@
+import types
+from OSciMap4 import TileData_v4_pb2
+from OSciMap4.GeomEncoder import GeomEncoder
+from OSciMap4.StaticVals import getValues
+from OSciMap4.StaticKeys import getKeys
+from OSciMap4.TagRewrite import fixTag
+from TileStache.Core import KnownUnknown
+import re
+import logging
+import struct
+
+statickeys = getKeys()
+staticvals = getValues()
+
+# custom keys/values start at attrib_offset
+attrib_offset = 256
+
+# coordinates are scaled to this range within the tile
+extents = 4096
+
+# tiles are padded by this number of pixels for the current zoom level (OSciMap uses this to cover up seams between tiles)
+padding = 5
+
+def encode(file, features, coord, layer_name=''):
+ layer_name = layer_name or ''
+ tile = VectorTile(extents)
+
+ for feature in features:
+ tile.addFeature(feature, coord, layer_name)
+
+ tile.complete()
+
+ data = tile.out.SerializeToString()
+ file.write(struct.pack(">I", len(data)))
+ file.write(data)
+
+def merge(file, feature_layers, coord):
+ ''' Merge multiple feature layers into a single OSciMap4 tile and write it to file.
+ '''
+ tile = VectorTile(extents)
+
+ for layer in feature_layers:
+ tile.addFeatures(layer['features'], coord, layer['name'])
+
+ tile.complete()
+
+ data = tile.out.SerializeToString()
+ file.write(struct.pack(">I", len(data)))
+ file.write(data)
+
+class VectorTile:
+ """
+ """
+ def __init__(self, extents):
+ self.geomencoder = GeomEncoder(extents)
+
+ # TODO count to sort by number of occurrences
+ self.keydict = {}
+ self.cur_key = attrib_offset
+
+ self.valdict = {}
+ self.cur_val = attrib_offset
+
+ self.tagdict = {}
+ self.num_tags = 0
+
+ self.out = TileData_v4_pb2.Data()
+ self.out.version = 4
+
+
+ def complete(self):
+ if self.num_tags == 0:
+ logging.info("empty tags")
+
+ self.out.num_tags = self.num_tags
+
+ if self.cur_key - attrib_offset > 0:
+ self.out.num_keys = self.cur_key - attrib_offset
+
+ if self.cur_val - attrib_offset > 0:
+ self.out.num_vals = self.cur_val - attrib_offset
+
+ def addFeatures(self, features, coord, this_layer):
+ for feature in features:
+ self.addFeature(feature, coord, this_layer)
+
+ def addFeature(self, row, coord, this_layer):
+ geom = self.geomencoder
+ tags = []
+
+ #height = None
+ layer = None
+ # add layer tag
+ tags.append(self.getTagId(('layer_name', this_layer)))
+ for k, v in row[1].iteritems():
+ if v is None:
+ continue
+
+ # the vtm stylesheet expects the heights to be an integer,
+ # multiplied by 100
+ if this_layer == 'buildings' and k in ('height', 'min_height'):
+ try:
+ v = int(v * 100)
+ except ValueError:
+ logging.warning('vtm: Invalid %s value: %s' % (k, v))
+
+ tag = str(k), str(v)
+
+ # use unsigned int for layer. i.e. map to 0..10
+ if "layer" == tag[0]:
+ layer = self.getLayer(tag[1])
+ continue
+
+ tag = fixTag(tag, coord.zoom)
+
+ if tag is None:
+ continue
+
+ tags.append(self.getTagId(tag))
+
+ if len(tags) == 0:
+ logging.debug('missing tags')
+ return
+
+ geom.parseGeometry(row[0])
+ feature = None;
+
+ geometry_type = None
+ if geom.isPoint:
+ geometry_type = 'Point'
+ feature = self.out.points.add()
+ # add number of points (for multi-point)
+ if len(geom.coordinates) > 2:
+ logging.info('points %s' %len(geom.coordinates))
+ feature.indices.append(len(geom.coordinates)/2)
+ else:
+ # empty geometry
+ if len(geom.index) == 0:
+ logging.debug('empty geom: %s' % row[1])
+ return
+
+ if geom.isPoly:
+ geometry_type = 'Polygon'
+ feature = self.out.polygons.add()
+ else:
+ geometry_type = 'LineString'
+ feature = self.out.lines.add()
+
+ # add coordinate index list (coordinates per geometry)
+ feature.indices.extend(geom.index)
+
+ # add index count (number of geometries)
+ if len(feature.indices) > 1:
+ feature.num_indices = len(feature.indices)
+
+ # add coordinates
+ feature.coordinates.extend(geom.coordinates)
+
+ # add geometry type to tags
+ geometry_type_tag = 'geometry_type', geometry_type
+ tags.append(self.getTagId(geometry_type_tag))
+
+ # add tags
+ feature.tags.extend(tags)
+ if len(tags) > 1:
+ feature.num_tags = len(tags)
+
+ # add osm layer
+ if layer is not None and layer != 5:
+ feature.layer = layer
+
+ #logging.debug('tags %d, indices %d' %(len(tags),len(feature.indices)))
+
+
+ def getLayer(self, val):
+ try:
+ l = max(min(10, int(val)) + 5, 0)
+ if l != 0:
+ return l
+ except ValueError:
+ logging.debug("layer invalid %s" %val)
+
+ return None
+
+ def getKeyId(self, key):
+ if key in statickeys:
+ return statickeys[key]
+
+ if key in self.keydict:
+ return self.keydict[key]
+
+ self.out.keys.append(key);
+
+ r = self.cur_key
+ self.keydict[key] = r
+ self.cur_key += 1
+ return r
+
+ def getAttribId(self, var):
+ if var in staticvals:
+ return staticvals[var]
+
+ if var in self.valdict:
+ return self.valdict[var]
+
+ self.out.values.append(var);
+
+ r = self.cur_val
+ self.valdict[var] = r
+ self.cur_val += 1
+ return r
+
+
+ def getTagId(self, tag):
+ # logging.debug(tag)
+
+ if self.tagdict.has_key(tag):
+ return self.tagdict[tag]
+
+ key = self.getKeyId(tag[0].decode('utf-8'))
+ val = self.getAttribId(tag[1].decode('utf-8'))
+
+ self.out.tags.append(key)
+ self.out.tags.append(val)
+ #logging.info("add tag %s - %d/%d" %(tag, key, val))
+ r = self.num_tags
+ self.tagdict[tag] = r
+ self.num_tags += 1
+ return r
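`getKeyId` and `getAttribId` above implement a two-level string table: well-known strings resolve through a shared static dictionary, while anything else is interned per tile with ids starting at `attrib_offset`. A standalone sketch of that scheme (the static table here is a made-up stand-in for `StaticKeys`/`StaticVals`):

```python
STATIC_KEYS = {'highway': 1, 'name': 2}  # hypothetical stand-in for the static table
ATTRIB_OFFSET = 256                      # per-tile ids start here

class StringTable:
    def __init__(self):
        self.dynamic = {}
        self.strings = []  # would be serialized into the tile's keys/values lists

    def get_id(self, s):
        if s in STATIC_KEYS:
            return STATIC_KEYS[s]        # shared static id, no string shipped
        if s not in self.dynamic:
            self.dynamic[s] = ATTRIB_OFFSET + len(self.strings)
            self.strings.append(s)       # shipped once per tile
        return self.dynamic[s]

table = StringTable()
print(table.get_id('highway'), table.get_id('surface'), table.get_id('surface'))  # 1 256 256
```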
diff --git a/TileStache/Goodies/VecTiles/server.py b/TileStache/Goodies/VecTiles/server.py
index c7f90c5d..3072aa62 100644
--- a/TileStache/Goodies/VecTiles/server.py
+++ b/TileStache/Goodies/VecTiles/server.py
@@ -1,8 +1,7 @@
''' Provider that returns PostGIS vector tiles in GeoJSON or MVT format.
VecTiles is intended for rendering, and returns tiles with contents simplified,
-precision reduced and often clipped. The MVT format in particular is designed
-for use in Mapnik with the VecTiles Datasource, which can read binary MVT tiles.
+precision reduced and often clipped.
For a more general implementation, try the Vector provider:
http://tilestache.org/doc/#vector-provider
@@ -11,10 +10,18 @@
from urlparse import urljoin, urlparse
from urllib import urlopen
from os.path import exists
+from shapely.wkb import dumps
+from shapely.wkb import loads
+
+import json
+from ... import getTile
+from ...Core import KnownUnknown
+from TileStache.Config import loadClassPath
try:
from psycopg2.extras import RealDictCursor
from psycopg2 import connect
+ from psycopg2.extensions import TransactionRollbackError
except ImportError, err:
# Still possible to build the documentation without psycopg2
@@ -22,11 +29,29 @@
def connect(*args, **kwargs):
raise err
-from . import mvt, geojson, topojson
+from . import mvt, geojson, topojson, oscimap
from ...Geography import SphericalMercator
from ModestMaps.Core import Point
-tolerances = [6378137 * 2 * pi / (2 ** (zoom + 8)) for zoom in range(20)]
+tolerances = [6378137 * 2 * pi / (2 ** (zoom + 8)) for zoom in range(22)]
+
+
+def make_transform_fn(transform_fns):
+ if not transform_fns:
+ return None
+
+ def transform_fn(shape, properties, fid, zoom):
+ for fn in transform_fns:
+ shape, properties, fid = fn(shape, properties, fid, zoom)
+ return shape, properties, fid
+ return transform_fn
+
+
+def resolve_transform_fns(fn_dotted_names):
+ if not fn_dotted_names:
+ return None
+ return map(loadClassPath, fn_dotted_names)
+
class Provider:
''' VecTiles provider for PostGIS data sources.
@@ -72,11 +97,30 @@ class Provider:
simplify_until:
Optional integer specifying a zoom level where no more geometry
simplification should occur. Default 16.
-
+
+ suppress_simplification:
+ Optional list of zoom levels where no dynamic simplification should
+ occur.
+
+ geometry_types:
+ Optional list of geometry types that constrains the results of what
+ kind of features are returned.
+
+ transform_fns:
+ Optional list of transformation functions. It will be
+ passed a shapely object, the properties dictionary, and
+ the feature id. The function should return a tuple
+ consisting of the new shapely object, properties
+ dictionary, and feature id for the feature.
+
+ sort_fn:
+ Optional function that will be used to sort features
+ fetched from the database.
+
Sample configuration, for a layer with no results at zooms 0-9, basic
selection of lines with names and highway tags for zoom 10, a remote
URL containing a query for zoom 11, and a local file for zooms 12+:
-
+
"provider":
{
"class": "TileStache.Goodies.VecTiles:Provider",
@@ -100,11 +144,11 @@ class Provider:
}
}
'''
- def __init__(self, layer, dbinfo, queries, clip=True, srid=900913, simplify=1.0, simplify_until=16):
+ def __init__(self, layer, dbinfo, queries, clip=True, srid=900913, simplify=1.0, simplify_until=16, suppress_simplification=(), geometry_types=None, transform_fns=None, sort_fn=None, simplify_before_intersect=False):
'''
'''
self.layer = layer
-
+
keys = 'host', 'user', 'password', 'database', 'port', 'dbname'
self.dbinfo = dict([(k, v) for (k, v) in dbinfo.items() if k in keys])
@@ -112,10 +156,21 @@ def __init__(self, layer, dbinfo, queries, clip=True, srid=900913, simplify=1.0,
self.srid = int(srid)
self.simplify = float(simplify)
self.simplify_until = int(simplify_until)
-
+ self.suppress_simplification = set(suppress_simplification)
+ self.geometry_types = None if geometry_types is None else set(geometry_types)
+ self.transform_fn_names = transform_fns
+ self.transform_fn = make_transform_fn(resolve_transform_fns(transform_fns))
+ if sort_fn:
+ self.sort_fn_name = sort_fn
+ self.sort_fn = loadClassPath(sort_fn)
+ else:
+ self.sort_fn_name = None
+ self.sort_fn = None
+ self.simplify_before_intersect = simplify_before_intersect
+
self.queries = []
self.columns = {}
-
+
for query in queries:
if query is None:
self.queries.append(None)
@@ -153,36 +208,46 @@ def renderTile(self, width, height, srs, coord):
if query not in self.columns:
self.columns[query] = query_columns(self.dbinfo, self.srid, query, bounds)
- tolerance = self.simplify * tolerances[coord.zoom] if coord.zoom < self.simplify_until else None
-
- return Response(self.dbinfo, self.srid, query, self.columns[query], bounds, tolerance, coord.zoom, self.clip)
+ if coord.zoom in self.suppress_simplification:
+ tolerance = None
+ else:
+ tolerance = self.simplify * tolerances[coord.zoom] if coord.zoom < self.simplify_until else None
+
+ return Response(self.dbinfo, self.srid, query, self.columns[query], bounds, tolerance, coord.zoom, self.clip, coord, self.layer.name(), self.geometry_types, self.transform_fn, self.sort_fn, self.simplify_before_intersect)
def getTypeByExtension(self, extension):
''' Get mime-type and format by file extension, one of "mvt", "json" or "topojson".
'''
if extension.lower() == 'mvt':
- return 'application/octet-stream+mvt', 'MVT'
+ return 'application/x-protobuf', 'MVT'
elif extension.lower() == 'json':
return 'application/json', 'JSON'
elif extension.lower() == 'topojson':
return 'application/json', 'TopoJSON'
-
+
+ elif extension.lower() == 'vtm':
+ return 'image/png', 'OpenScienceMap' # TODO: make this proper stream type, app only seems to work with png
+
else:
- raise ValueError(extension)
+ raise ValueError(extension + " is not a valid extension")
class MultiProvider:
''' VecTiles provider to gather PostGIS tiles into a single multi-response.
-
+
Returns a MultiResponse object for GeoJSON or TopoJSON requests.
-
+
names:
List of names of vector-generating layers from elsewhere in config.
-
+
+ ignore_cached_sublayers:
+ True if cache provider should not save intermediate layers
+ in cache.
+
Sample configuration, for a layer with combined data from water
and land areas, both assumed to be vector-returning layers:
-
+
"provider":
{
"class": "TileStache.Goodies.VecTiles:MultiProvider",
@@ -192,14 +257,20 @@ class MultiProvider:
}
}
'''
- def __init__(self, layer, names):
+ def __init__(self, layer, names, ignore_cached_sublayers=False):
self.layer = layer
self.names = names
-
+ self.ignore_cached_sublayers = ignore_cached_sublayers
+
+ def __call__(self, layer, names, ignore_cached_sublayers=False):
+ self.layer = layer
+ self.names = names
+ self.ignore_cached_sublayers = ignore_cached_sublayers
+
def renderTile(self, width, height, srs, coord):
''' Render a single tile, return a Response instance.
'''
- return MultiResponse(self.layer.config, self.names, coord)
+ return MultiResponse(self.layer.config, self.names, coord, self.ignore_cached_sublayers)
def getTypeByExtension(self, extension):
''' Get mime-type and format by file extension, "json" or "topojson" only.
@@ -209,79 +280,80 @@ def getTypeByExtension(self, extension):
elif extension.lower() == 'topojson':
return 'application/json', 'TopoJSON'
+
+ elif extension.lower() == 'vtm':
+ return 'image/png', 'OpenScienceMap' # TODO: make this proper stream type, app only seems to work with png
+ elif extension.lower() == 'mvt':
+ return 'application/x-protobuf', 'MVT'
+
else:
- raise ValueError(extension)
+ raise ValueError(extension + " is not a valid extension for responses with multiple layers")
class Connection:
''' Context manager for Postgres connections.
-
+
See http://www.python.org/dev/peps/pep-0343/
and http://effbot.org/zone/python-with-statement.htm
'''
def __init__(self, dbinfo):
self.dbinfo = dbinfo
-
+
def __enter__(self):
- self.db = connect(**self.dbinfo).cursor(cursor_factory=RealDictCursor)
+ conn = connect(**self.dbinfo)
+ conn.set_session(readonly=True, autocommit=True)
+ self.db = conn.cursor(cursor_factory=RealDictCursor)
return self.db
-
+
def __exit__(self, type, value, traceback):
self.db.connection.close()
class Response:
'''
'''
- def __init__(self, dbinfo, srid, subquery, columns, bounds, tolerance, zoom, clip):
+ def __init__(self, dbinfo, srid, subquery, columns, bounds, tolerance, zoom, clip, coord, layer_name, geometry_types, transform_fn, sort_fn, simplify_before_intersect):
''' Create a new response object with Postgres connection info and a query.
-
+
bounds argument is a 4-tuple with (xmin, ymin, xmax, ymax).
'''
self.dbinfo = dbinfo
self.bounds = bounds
self.zoom = zoom
self.clip = clip
-
- bbox = 'ST_MakeBox2D(ST_MakePoint(%.2f, %.2f), ST_MakePoint(%.2f, %.2f))' % bounds
- geo_query = build_query(srid, subquery, columns, bbox, tolerance, True, clip)
- merc_query = build_query(srid, subquery, columns, bbox, tolerance, False, clip)
- self.query = dict(TopoJSON=geo_query, JSON=geo_query, MVT=merc_query)
-
+ self.coord = coord
+ self.layer_name = layer_name
+ self.geometry_types = geometry_types
+ self.transform_fn = transform_fn
+ self.sort_fn = sort_fn
+
+ geo_query = build_query(srid, subquery, columns, bounds, tolerance, True, clip, simplify_before_intersect=simplify_before_intersect)
+ tol_idx = coord.zoom if 0 <= coord.zoom < len(tolerances) else -1
+ tol_val = tolerances[tol_idx]
+ oscimap_query = build_query(srid, subquery, columns, bounds, tolerance, False, clip, oscimap.padding * tol_val, oscimap.extents, simplify_before_intersect=simplify_before_intersect)
+ mvt_query = build_query(srid, subquery, columns, bounds, tolerance, False, clip, mvt.padding * tol_val, mvt.extents, simplify_before_intersect=simplify_before_intersect)
+ self.query = dict(TopoJSON=geo_query, JSON=geo_query, MVT=mvt_query, OpenScienceMap=oscimap_query)
+
def save(self, out, format):
'''
'''
- with Connection(self.dbinfo) as db:
- db.execute(self.query[format])
-
- features = []
-
- for row in db.fetchall():
- if row['__geometry__'] is None:
- continue
-
- wkb = bytes(row['__geometry__'])
- prop = dict([(k, v) for (k, v) in row.items()
- if k not in ('__geometry__', '__id__')])
-
- if '__id__' in row:
- features.append((wkb, prop, row['__id__']))
-
- else:
- features.append((wkb, prop))
+ features = get_features(self.dbinfo, self.query[format], self.geometry_types, self.transform_fn, self.sort_fn, self.coord.zoom)
if format == 'MVT':
- mvt.encode(out, features)
+ mvt.encode(out, features, self.coord, self.bounds, self.layer_name)
elif format == 'JSON':
- geojson.encode(out, features, self.zoom, self.clip)
+ geojson.encode(out, features, self.zoom)
elif format == 'TopoJSON':
ll = SphericalMercator().projLocation(Point(*self.bounds[0:2]))
ur = SphericalMercator().projLocation(Point(*self.bounds[2:4]))
- topojson.encode(out, features, (ll.lon, ll.lat, ur.lon, ur.lat), self.clip)
-
+ topojson.encode(out, features, (ll.lon, ll.lat, ur.lon, ur.lat))
+
+ elif format == 'OpenScienceMap':
+ oscimap.encode(out, features, self.coord, self.layer_name)
+
else:
- raise ValueError(format)
+ raise ValueError(format + " is not supported")
class EmptyResponse:
''' Simple empty response renders valid MVT or GeoJSON with no features.
@@ -293,86 +365,228 @@ def save(self, out, format):
'''
'''
if format == 'MVT':
- mvt.encode(out, [])
+ mvt.encode(out, [], None)
elif format == 'JSON':
- geojson.encode(out, [], 0, False)
+ geojson.encode(out, [], 0)
elif format == 'TopoJSON':
ll = SphericalMercator().projLocation(Point(*self.bounds[0:2]))
ur = SphericalMercator().projLocation(Point(*self.bounds[2:4]))
- topojson.encode(out, [], (ll.lon, ll.lat, ur.lon, ur.lat), False)
-
+ topojson.encode(out, [], (ll.lon, ll.lat, ur.lon, ur.lat))
+
+ elif format == 'OpenScienceMap':
+ oscimap.encode(out, [], None)
+
else:
- raise ValueError(format)
+ raise ValueError(format + " is not supported")
class MultiResponse:
'''
'''
- def __init__(self, config, names, coord):
+ def __init__(self, config, names, coord, ignore_cached_sublayers):
''' Create a new response object with TileStache config and layer names.
'''
self.config = config
self.names = names
self.coord = coord
-
+ self.ignore_cached_sublayers = ignore_cached_sublayers
+
def save(self, out, format):
'''
'''
if format == 'TopoJSON':
- topojson.merge(out, self.names, self.config, self.coord)
+ topojson.merge(out, self.names, self.get_tiles(format), self.config, self.coord)
elif format == 'JSON':
- geojson.merge(out, self.names, self.config, self.coord)
-
+ geojson.merge(out, self.names, self.get_tiles(format), self.config, self.coord)
+
+ elif format == 'OpenScienceMap':
+ feature_layers = []
+ layers = [self.config.layers[name] for name in self.names]
+ for layer in layers:
+ width, height = layer.dim, layer.dim
+ tile = layer.provider.renderTile(width, height, layer.projection.srs, self.coord)
+                if isinstance(tile, EmptyResponse): continue
+ feature_layers.append({'name': layer.name(), 'features': get_features(tile.dbinfo, tile.query["OpenScienceMap"], layer.provider.geometry_types, layer.provider.transform_fn, layer.provider.sort_fn, self.coord.zoom)})
+ oscimap.merge(out, feature_layers, self.coord)
+
+ elif format == 'MVT':
+ feature_layers = []
+ layers = [self.config.layers[name] for name in self.names]
+ for layer in layers:
+ width, height = layer.dim, layer.dim
+ tile = layer.provider.renderTile(width, height, layer.projection.srs, self.coord)
+                if isinstance(tile, EmptyResponse): continue
+ feature_layers.append({'name': layer.name(), 'features': get_features(tile.dbinfo, tile.query["MVT"], layer.provider.geometry_types, layer.provider.transform_fn, layer.provider.sort_fn, self.coord.zoom)})
+ mvt.merge(out, feature_layers, self.coord)
+
else:
- raise ValueError(format)
+ raise ValueError(format + " is not supported for responses with multiple layers")
+
+ def get_tiles(self, format):
+ unknown_layers = set(self.names) - set(self.config.layers.keys())
+
+ if unknown_layers:
+ raise KnownUnknown("%s.get_tiles didn't recognize %s when trying to load %s." % (__name__, ', '.join(unknown_layers), ', '.join(self.names)))
+
+ layers = [self.config.layers[name] for name in self.names]
+ mimes, bodies = zip(*[getTile(layer, self.coord, format.lower(), self.ignore_cached_sublayers, self.ignore_cached_sublayers) for layer in layers])
+ bad_mimes = [(name, mime) for (mime, name) in zip(mimes, self.names) if not mime.endswith('/json')]
+
+ if bad_mimes:
+ raise KnownUnknown('%s.get_tiles encountered a non-JSON mime-type in %s sub-layer: "%s"' % ((__name__, ) + bad_mimes[0]))
+
+ tiles = map(json.loads, bodies)
+ bad_types = [(name, topo['type']) for (topo, name) in zip(tiles, self.names) if topo['type'] != ('FeatureCollection' if (format.lower()=='json') else 'Topology')]
+
+ if bad_types:
+            raise KnownUnknown('%s.get_tiles encountered a non-%s type in %s sub-layer: "%s"' % ((__name__, ('FeatureCollection' if (format.lower()=='json') else 'Topology'), ) + bad_types[0]))
+
+ return tiles
+
def query_columns(dbinfo, srid, subquery, bounds):
''' Get information about the columns returned for a subquery.
'''
with Connection(dbinfo) as db:
- #
- # While bounds covers less than the full planet, look for just one feature.
- #
- while (abs(bounds[2] - bounds[0]) * abs(bounds[2] - bounds[0])) < 1.61e15:
- bbox = 'ST_MakeBox2D(ST_MakePoint(%f, %f), ST_MakePoint(%f, %f))' % bounds
- bbox = 'ST_SetSRID(%s, %d)' % (bbox, srid)
-
- query = subquery.replace('!bbox!', bbox)
-
- db.execute(query + '\n LIMIT 1') # newline is important here, to break out of comments.
- row = db.fetchone()
-
- if row is None:
- #
- # Try zooming out three levels (8x) to look for features.
- #
- bounds = (bounds[0] - (bounds[2] - bounds[0]) * 3.5,
- bounds[1] - (bounds[3] - bounds[1]) * 3.5,
- bounds[2] + (bounds[2] - bounds[0]) * 3.5,
- bounds[3] + (bounds[3] - bounds[1]) * 3.5)
-
- continue
-
- column_names = set(row.keys())
- return column_names
-
-def build_query(srid, subquery, subcolumns, bbox, tolerance, is_geo, is_clipped):
+ bbox = 'ST_MakeBox2D(ST_MakePoint(%f, %f), ST_MakePoint(%f, %f))' % bounds
+ bbox = 'ST_SetSRID(%s, %d)' % (bbox, srid)
+
+ query = subquery.replace('!bbox!', bbox)
+
+ # newline is important here, to break out of comments.
+ db.execute(query + '\n LIMIT 0')
+ column_names = set(x.name for x in db.description)
+ return column_names
+
+
+def get_features(dbinfo, query, geometry_types, transform_fn, sort_fn, zoom,
+ n_try=1):
+ features = []
+
+ with Connection(dbinfo) as db:
+ try:
+ db.execute(query)
+ except TransactionRollbackError:
+ if n_try >= 5:
+ print 'TransactionRollbackError occurred 5 times'
+ raise
+ else:
+ return get_features(dbinfo, query, geometry_types,
+ transform_fn, sort_fn, zoom,
+ n_try=n_try + 1)
+ for row in db.fetchall():
+ assert '__geometry__' in row, 'Missing __geometry__ in feature result'
+ assert '__id__' in row, 'Missing __id__ in feature result'
+
+ wkb = bytes(row.pop('__geometry__'))
+ id = row.pop('__id__')
+
+ shape = loads(wkb)
+ if geometry_types is not None:
+ if shape.type not in geometry_types:
+                #print 'found %s which is not in: %s' % (shape.type, geometry_types)
+ continue
+
+ props = dict((k, v) for k, v in row.items() if v is not None)
+
+ if transform_fn:
+ shape, props, id = transform_fn(shape, props, id, zoom)
+ wkb = dumps(shape)
+
+ features.append((wkb, props, id))
+
+ if sort_fn:
+ features = sort_fn(features, zoom)
+
+ return features
+
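Reviewer note: the `TransactionRollbackError` handling in `get_features` above is a bounded retry (recursing up to five attempts before re-raising). The same pattern can be sketched iteratively; `run_with_retries` below is a hypothetical helper for illustration, not part of this patch:

```python
def run_with_retries(fn, max_tries=5, retryable=(Exception,)):
    # call fn(), retrying when a retryable error occurs,
    # up to max_tries attempts; re-raise on the last failure
    for attempt in range(1, max_tries + 1):
        try:
            return fn()
        except retryable:
            if attempt == max_tries:
                raise
```

An iterative loop avoids growing the call stack on each retry, which the recursive form in `get_features` does (harmlessly, given the small bound).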
+def build_query(srid, subquery, subcolumns, bounds, tolerance, is_geo, is_clipped, padding=0, scale=None, simplify_before_intersect=False):
    ''' Build and return a PostGIS query.
'''
+
+ # bounds argument is a 4-tuple with (xmin, ymin, xmax, ymax).
+ bbox = 'ST_MakeBox2D(ST_MakePoint(%.12f, %.12f), ST_MakePoint(%.12f, %.12f))' % (bounds[0] - padding, bounds[1] - padding, bounds[2] + padding, bounds[3] + padding)
bbox = 'ST_SetSRID(%s, %d)' % (bbox, srid)
geom = 'q.__geometry__'
- if is_clipped:
- geom = 'ST_Intersection(%s, %s)' % (geom, bbox)
+ # Special care must be taken when simplifying certain geometries (like those
+ # in the earth/water layer) to prevent tile border "seams" from forming:
+ # these occur when a geometry is split across multiple tiles (like a
+ # continuous strip of land or body of water) and thus, for any such tile,
+ # the part of that geometry inside of it lines up along one or more of its
+ # edges. If there's any kind of fine geometric detail near one of these
+ # edges, simplification might remove it in a way that makes the edge of the
+ # geometry move off the edge of the tile. See this example of a tile
+ # pre-simplification:
+ # https://cloud.githubusercontent.com/assets/4467604/7937704/aef971b4-090f-11e5-91b9-d973ef98e5ef.png
+ # and post-simplification:
+ # https://cloud.githubusercontent.com/assets/4467604/7937705/b1129dc2-090f-11e5-9341-6893a6892a36.png
+ # at which point a seam formed.
+ #
+ # To get around this, for any given tile bounding box, we find the
+ # contained/overlapping geometries and simplify them BEFORE
+ # cutting out the precise tile bounding bbox (instead of cutting out the
+ # tile and then simplifying everything inside of it, as we do with all of
+ # the other layers).
+
+ if simplify_before_intersect:
+ # Simplify, then cut tile.
+
+ if tolerance is not None:
+ # The problem with simplifying all contained/overlapping geometries
+ # for a tile before cutting out the parts that actually lie inside
+ # of it is that we might end up simplifying a massive geometry just
+ # to extract a small portion of it (think simplifying the border of
+ # the US just to extract the New York City coastline). To reduce the
+ # performance hit, we actually identify all of the candidate
+ # geometries, then cut out a bounding box *slightly larger* than the
+ # tile bbox, THEN simplify, and only then cut out the tile itself.
+ # This still allows us to perform simplification of the geometry
+ # edges outside of the tile, which prevents any seams from forming
+ # when we cut it out, but means that we don't have to simplify the
+ # entire geometry (just the small bits lying right outside the
+ # desired tile).
+
+ simplification_padding = padding + (bounds[3] - bounds[1]) * 0.1
+ simplification_bbox = (
+ 'ST_MakeBox2D(ST_MakePoint(%.12f, %.12f), '
+ 'ST_MakePoint(%.12f, %.12f))' % (
+ bounds[0] - simplification_padding,
+ bounds[1] - simplification_padding,
+ bounds[2] + simplification_padding,
+ bounds[3] + simplification_padding))
+            simplification_bbox = 'ST_SetSRID(%s, %d)' % (
+ simplification_bbox, srid)
+
+ geom = 'ST_Intersection(%s, %s)' % (geom, simplification_bbox)
+ geom = 'ST_MakeValid(ST_SimplifyPreserveTopology(%s, %.12f))' % (
+ geom, tolerance)
- if tolerance is not None:
- geom = 'ST_SimplifyPreserveTopology(%s, %.2f)' % (geom, tolerance)
+ assert is_clipped, 'If simplify_before_intersect=True, ' \
+ 'is_clipped should be True as well'
+ geom = 'ST_Intersection(%s, %s)' % (geom, bbox)
+
+ else:
+ # Cut tile, then simplify.
+
+ if is_clipped:
+ geom = 'ST_Intersection(%s, %s)' % (geom, bbox)
+
+ if tolerance is not None:
+ geom = 'ST_SimplifyPreserveTopology(%s, %.12f)' % (geom, tolerance)
if is_geo:
geom = 'ST_Transform(%s, 4326)' % geom
-
+
+ if scale:
+        # scale applies to the un-padded bounds, i.e. geometry in the padding area "spills over" past the scale range
+ geom = ('ST_TransScale(%s, %.12f, %.12f, %.12f, %.12f)'
+ % (geom, -bounds[0], -bounds[1],
+ scale / (bounds[2] - bounds[0]),
+ scale / (bounds[3] - bounds[1])))
+
subquery = subquery.replace('!bbox!', bbox)
columns = ['q."%s"' % c for c in subcolumns if c not in ('__geometry__', )]
@@ -390,6 +604,5 @@ def build_query(srid, subquery, subcolumns, bbox, tolerance, is_geo, is_clipped)
%(subquery)s
) AS q
WHERE ST_IsValid(q.__geometry__)
- AND q.__geometry__ && %(bbox)s
AND ST_Intersects(q.__geometry__, %(bbox)s)''' \
% locals()
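Reviewer note: the `simplify_before_intersect` branch above composes nested PostGIS calls in a specific order (clip padded, simplify, then clip exact). A minimal Python sketch of that nesting follows; `nest_geom_expr` and its argument names are illustrative, not part of this patch:

```python
def nest_geom_expr(geom, bbox, padded_bbox, tolerance):
    # 1. clip to a slightly padded box, so simplification still sees
    #    geometry just outside the tile and no border seams form
    geom = 'ST_Intersection(%s, %s)' % (geom, padded_bbox)
    # 2. simplify, repairing any invalidity the operation introduces
    geom = 'ST_MakeValid(ST_SimplifyPreserveTopology(%s, %.12f))' % (
        geom, tolerance)
    # 3. only then cut out the exact tile bounds
    return 'ST_Intersection(%s, %s)' % (geom, bbox)
```

The resulting expression mirrors the operator nesting `build_query` emits when `simplify_before_intersect` is true with a non-None tolerance.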
diff --git a/TileStache/Goodies/VecTiles/sort.py b/TileStache/Goodies/VecTiles/sort.py
new file mode 100644
index 00000000..62fd21fe
--- /dev/null
+++ b/TileStache/Goodies/VecTiles/sort.py
@@ -0,0 +1,107 @@
+from util import to_float
+
+# sort functions to apply to features
+
+
+def _sort_features_by_key(features, key):
+ features.sort(key=key)
+ return features
+
+
+def _by_feature_property(property_name):
+ def _feature_sort_by_property(feature):
+ wkb, properties, fid = feature
+ return properties.get(property_name)
+ return _feature_sort_by_property
+
+
+_by_feature_id = _by_feature_property('id')
+
+
+def _by_area(feature):
+ wkb, properties, fid = feature
+ default_value = -1000
+ sort_key = properties.get('area', default_value)
+ return sort_key
+
+
+def _sort_by_area_then_id(features):
+ features.sort(key=_by_feature_id)
+ features.sort(key=_by_area, reverse=True)
+ return features
+
+
+def _by_scalerank(feature):
+ wkb, properties, fid = feature
+ value_for_none = 1000
+ scalerank = properties.get('scalerank', value_for_none)
+ return scalerank
+
+
+def _by_population(feature):
+ wkb, properties, fid = feature
+ default_value = -1000
+ # depends on a transform run to convert population to an integer
+ population = properties.get('population')
+ return default_value if population is None else population
+
+
+def _by_transit_score(feature):
+ wkb, props, fid = feature
+ return props.get('mz_transit_score', 0)
+
+
+def _by_peak_elevation(feature):
+ wkb, props, fid = feature
+ kind = props.get('kind')
+ if kind != 'peak' and kind != 'volcano':
+ return 0
+ return props.get('elevation', 0)
+
+
+def _sort_by_transit_score_then_elevation_then_feature_id(features):
+ features.sort(key=_by_feature_id)
+ features.sort(key=_by_peak_elevation, reverse=True)
+ features.sort(key=_by_transit_score, reverse=True)
+ return features
+
+
+def buildings(features, zoom):
+ return _sort_by_area_then_id(features)
+
+
+def earth(features, zoom):
+ return _sort_features_by_key(features, _by_feature_id)
+
+
+def landuse(features, zoom):
+ return _sort_by_area_then_id(features)
+
+
+def _place_key_desc(feature):
+ sort_key = _by_population(feature), _by_area(feature)
+ return sort_key
+
+
+def places(features, zoom):
+ features.sort(key=_place_key_desc, reverse=True)
+ features.sort(key=_by_scalerank)
+ features.sort(key=_by_feature_property('mz_n_photos'), reverse=True)
+ features.sort(key=_by_feature_property('min_zoom'))
+ return features
+
+
+def pois(features, zoom):
+ return _sort_by_transit_score_then_elevation_then_feature_id(features)
+
+
+def roads(features, zoom):
+ return _sort_features_by_key(features, _by_feature_property('sort_key'))
+
+
+def water(features, zoom):
+ return _sort_by_area_then_id(features)
+
+
+def transit(features, zoom):
+ return _sort_features_by_key(features, _by_feature_id)
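Reviewer note: these helpers lean on the stability of Python's sort: applying the secondary key first and the primary key second (with `reverse=True` for descending order) yields a multi-key ordering. A self-contained illustration with made-up `(wkb, properties, fid)` tuples (the helper below parallels `_sort_by_area_then_id`, it is not imported from this patch):

```python
def sort_by_area_then_id(features):
    # secondary key first: ascending feature id
    features.sort(key=lambda f: f[2])
    # primary key second: descending area; stable sort keeps
    # the id order for features with equal area
    features.sort(key=lambda f: f[1].get('area', -1000), reverse=True)
    return features
```

Largest area wins; ties fall back to ascending id, exactly because the earlier sort's order survives the later stable sort.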
diff --git a/TileStache/Goodies/VecTiles/topojson.py b/TileStache/Goodies/VecTiles/topojson.py
index 04c775bf..dc79401a 100644
--- a/TileStache/Goodies/VecTiles/topojson.py
+++ b/TileStache/Goodies/VecTiles/topojson.py
@@ -1,42 +1,8 @@
from shapely.wkb import loads
import json
-from ... import getTile
from ...Core import KnownUnknown
-def get_tiles(names, config, coord):
- ''' Retrieve a list of named TopoJSON layer tiles from a TileStache config.
-
- Check integrity and compatibility of each, looking at known layers,
- correct JSON mime-types, "Topology" in the type attributes, and
- matching affine transformations.
- '''
- unknown_layers = set(names) - set(config.layers.keys())
-
- if unknown_layers:
- raise KnownUnknown("%s.get_tiles didn't recognize %s when trying to load %s." % (__name__, ', '.join(unknown_layers), ', '.join(names)))
-
- layers = [config.layers[name] for name in names]
- mimes, bodies = zip(*[getTile(layer, coord, 'topojson') for layer in layers])
- bad_mimes = [(name, mime) for (mime, name) in zip(mimes, names) if not mime.endswith('/json')]
-
- if bad_mimes:
- raise KnownUnknown('%s.get_tiles encountered a non-JSON mime-type in %s sub-layer: "%s"' % ((__name__, ) + bad_mimes[0]))
-
- topojsons = map(json.loads, bodies)
- bad_types = [(name, topo['type']) for (topo, name) in zip(topojsons, names) if topo['type'] != 'Topology']
-
- if bad_types:
- raise KnownUnknown('%s.get_tiles encountered a non-Topology type in %s sub-layer: "%s"' % ((__name__, ) + bad_types[0]))
-
- transforms = [topo['transform'] for topo in topojsons]
- unique_xforms = set([tuple(xform['scale'] + xform['translate']) for xform in transforms])
-
- if len(unique_xforms) > 1:
- raise KnownUnknown('%s.get_tiles encountered incompatible transforms: %s' % (__name__, list(unique_xforms)))
-
- return topojsons
-
def update_arc_indexes(geometry, merged_arcs, old_arcs):
''' Updated geometry arc indexes, and add arcs to merged_arcs along the way.
@@ -104,33 +70,30 @@ def decode(file):
'''
raise NotImplementedError('topojson.decode() not yet written')
-def encode(file, features, bounds, is_clipped):
- ''' Encode a list of (WKB, property dict) features into a TopoJSON stream.
-
- Also accept three-element tuples as features: (WKB, property dict, id).
-
+def encode(file, features, bounds):
+ ''' Encode a list of (WKB, property dict, id) features into a TopoJSON stream.
+
+    If no id is available, pass in None.
+
Geometries in the features list are assumed to be unprojected lon, lats.
Bounds are given in geographic coordinates as (xmin, ymin, xmax, ymax).
'''
transform, forward = get_transform(bounds)
geometries, arcs = list(), list()
-
+
for feature in features:
- shape = loads(feature[0])
- geometry = dict(properties=feature[1])
+ wkb, props, fid = feature
+ shape = loads(wkb)
+ geometry = dict(properties=props)
geometries.append(geometry)
-
- if is_clipped:
- geometry.update(dict(clipped=True))
-
- if len(feature) >= 2:
- # ID is an optional third element in the feature tuple
- geometry.update(dict(id=feature[2]))
-
+
+ if fid is not None:
+ geometry['id'] = fid
+
if shape.type == 'GeometryCollection':
geometries.pop()
continue
-
+
elif shape.type == 'Point':
geometry.update(dict(type='Point', coordinates=forward(shape.x, shape.y)))
@@ -190,12 +153,16 @@ def encode(file, features, bounds, is_clipped):
json.dump(result, file, separators=(',', ':'))
-def merge(file, names, config, coord):
+def merge(file, names, inputs, config, coord):
''' Retrieve a list of TopoJSON tile responses and merge them into one.
get_tiles() retrieves data and performs basic integrity checks.
'''
- inputs = get_tiles(names, config, coord)
+ transforms = [topo['transform'] for topo in inputs]
+ unique_xforms = set([tuple(xform['scale'] + xform['translate']) for xform in transforms])
+
+ if len(unique_xforms) > 1:
+ raise KnownUnknown('%s.merge encountered incompatible transforms: %s' % (__name__, list(unique_xforms)))
output = {
'type': 'Topology',
diff --git a/TileStache/Goodies/VecTiles/transform.py b/TileStache/Goodies/VecTiles/transform.py
new file mode 100644
index 00000000..004515fb
--- /dev/null
+++ b/TileStache/Goodies/VecTiles/transform.py
@@ -0,0 +1,3289 @@
+# transformation functions to apply to features
+
+from numbers import Number
+from StreetNames import short_street_name
+from collections import defaultdict
+from shapely.strtree import STRtree
+from shapely.geometry.polygon import orient
+from shapely.ops import linemerge
+from shapely.geometry import Point
+from shapely.geometry import LineString
+from shapely.geometry import LinearRing
+from shapely.geometry import Polygon
+from shapely.geometry import box as Box
+from shapely.geometry.multipoint import MultiPoint
+from shapely.geometry.multilinestring import MultiLineString
+from shapely.geometry.multipolygon import MultiPolygon
+from shapely.geometry.collection import GeometryCollection
+from util import to_float
+from sort import pois as sort_pois
+from sys import float_info
+import pycountry
+import re
+import csv
+
+
+feet_pattern = re.compile('([+-]?[0-9.]+)\'(?: *([+-]?[0-9.]+)")?')
+number_pattern = re.compile('([+-]?[0-9.]+)')
+# pattern to detect numbers with units.
+# PLEASE: keep this in sync with the conversion factors below.
+unit_pattern = re.compile('([+-]?[0-9.]+) *(mi|km|m|nmi|ft)')
+
+# multiplicative conversion factor from the unit into meters.
+# PLEASE: keep this in sync with the unit_pattern above.
+unit_conversion_factor = {
+ 'mi': 1609.3440,
+ 'km': 1000.0000,
+ 'm': 1.0000,
+ 'nmi': 1852.0000,
+ 'ft': 0.3048
+}
+
+# used to detect if the "name" of a building is
+# actually a house number.
+digits_pattern = re.compile('^[0-9-]+$')
+
+# used to detect station names which are followed by a
+# parenthetical list of line names.
+station_pattern = re.compile('([^(]*)\(([^)]*)\).*')
+
+# used to detect if an airport's IATA code is the "short"
+# 3-character type. there are also longer codes, and ones
+# which include numbers, but those seem to be used for
+# less important airports.
+iata_short_code_pattern = re.compile('^[A-Z]{3}$')
+
+
+def _to_float_meters(x):
+ if x is None:
+ return None
+
+ as_float = to_float(x)
+ if as_float is not None:
+ return as_float
+
+ # trim whitespace to simplify further matching
+ x = x.strip()
+
+ # try looking for a unit
+ unit_match = unit_pattern.match(x)
+ if unit_match is not None:
+ value = unit_match.group(1)
+ units = unit_match.group(2)
+ value_as_float = to_float(value)
+ if value_as_float is not None:
+ return value_as_float * unit_conversion_factor[units]
+
+ # try if it looks like an expression in feet via ' "
+ feet_match = feet_pattern.match(x)
+ if feet_match is not None:
+ feet = feet_match.group(1)
+ inches = feet_match.group(2)
+ feet_as_float = to_float(feet)
+ inches_as_float = to_float(inches)
+
+ total_inches = 0.0
+ parsed_feet_or_inches = False
+ if feet_as_float is not None:
+ total_inches = feet_as_float * 12.0
+ parsed_feet_or_inches = True
+ if inches_as_float is not None:
+ total_inches += inches_as_float
+ parsed_feet_or_inches = True
+ if parsed_feet_or_inches:
+ # international inch is exactly 25.4mm
+ meters = total_inches * 0.0254
+ return meters
+
+ # try and match the first number that can be parsed
+ for number_match in number_pattern.finditer(x):
+ potential_number = number_match.group(1)
+ as_float = to_float(potential_number)
+ if as_float is not None:
+ return as_float
+
+ return None
+
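Reviewer note: a condensed, standalone sketch of the unit-handling path in `_to_float_meters` above, assuming the same pattern and conversion factors (`to_meters` is illustrative; the real function also handles feet-and-inches notation and bare-number fallbacks):

```python
import re

# keep the pattern and the factor table in sync, as the original warns
UNIT = re.compile(r'([+-]?[0-9.]+) *(mi|km|m|nmi|ft)')
FACTOR = {'mi': 1609.344, 'km': 1000.0, 'm': 1.0, 'nmi': 1852.0, 'ft': 0.3048}

def to_meters(x):
    try:
        return float(x)  # plain number: already meters
    except (TypeError, ValueError):
        pass
    m = UNIT.match(x.strip())
    if m:
        return float(m.group(1)) * FACTOR[m.group(2)]
    return None
```

Note the alternation order: `m` must come after `mi` and `km` so that `2 mi` is not read as 2 meters followed by junk.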
+
+def _coalesce(properties, *property_names):
+ for prop in property_names:
+ val = properties.get(prop)
+ if val:
+ return val
+ return None
+
+
+def _remove_properties(properties, *property_names):
+ for prop in property_names:
+ properties.pop(prop, None)
+ return properties
+
+
+def _building_calc_levels(levels):
+ levels = max(levels, 1)
+ levels = (levels * 3) + 2
+ return levels
+
+
+def _building_calc_min_levels(min_levels):
+ min_levels = max(min_levels, 0)
+ min_levels = min_levels * 3
+ return min_levels
+
+
+def _building_calc_height(height_val, levels_val, levels_calc_fn):
+ height = _to_float_meters(height_val)
+ if height is not None:
+ return height
+ levels = _to_float_meters(levels_val)
+ if levels is None:
+ return None
+ levels = levels_calc_fn(levels)
+ return levels
+
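Reviewer note: the levels fallback above amounts to roughly 3 m per storey plus a 2 m allowance, applied only when no explicit height is given. A standalone sketch under that assumption (`estimate_height` is illustrative, not part of this patch):

```python
def estimate_height(height, levels):
    # prefer an explicit height value when present
    if height is not None:
        return float(height)
    if levels is None:
        return None
    # assume ~3 m per storey plus a 2 m allowance, with a
    # one-storey floor, matching _building_calc_levels above
    return max(float(levels), 1) * 3 + 2
```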
+
+def add_id_to_properties(shape, properties, fid, zoom):
+ properties['id'] = fid
+ return shape, properties, fid
+
+
+def detect_osm_relation(shape, properties, fid, zoom):
+ # Assume all negative ids indicate the data was a relation. At the
+ # moment, this is true because only osm contains negative
+ # identifiers. Should this change, this logic would need to become
+ # more robust
+ if isinstance(fid, Number) and fid < 0:
+ properties['osm_relation'] = True
+ return shape, properties, fid
+
+
+def remove_feature_id(shape, properties, fid, zoom):
+ return shape, properties, None
+
+
+def building_height(shape, properties, fid, zoom):
+ height = _building_calc_height(
+ properties.get('height'), properties.get('building_levels'),
+ _building_calc_levels)
+ if height is not None:
+ properties['height'] = height
+ else:
+ properties.pop('height', None)
+ return shape, properties, fid
+
+
+def building_min_height(shape, properties, fid, zoom):
+ min_height = _building_calc_height(
+ properties.get('min_height'), properties.get('building_min_levels'),
+ _building_calc_min_levels)
+ if min_height is not None:
+ properties['min_height'] = min_height
+ else:
+ properties.pop('min_height', None)
+ return shape, properties, fid
+
+
+def synthesize_volume(shape, props, fid, zoom):
+ area = props.get('area')
+ height = props.get('height')
+ if area is not None and height is not None:
+ props['volume'] = int(area * height)
+ return shape, props, fid
+
+
+def building_trim_properties(shape, properties, fid, zoom):
+ properties = _remove_properties(
+ properties,
+ 'building', 'building_part',
+ 'building_levels', 'building_min_levels')
+ return shape, properties, fid
+
+
+def road_classifier(shape, properties, fid, zoom):
+ source = properties.get('source')
+ assert source, 'Missing source in road query'
+ if source == 'naturalearthdata.com':
+ return shape, properties, fid
+
+ properties.pop('is_link', None)
+ properties.pop('is_tunnel', None)
+ properties.pop('is_bridge', None)
+
+ highway = properties.get('highway', '')
+ tunnel = properties.get('tunnel', '')
+ bridge = properties.get('bridge', '')
+
+ if highway.endswith('_link'):
+ properties['is_link'] = True
+ if tunnel in ('yes', 'true'):
+ properties['is_tunnel'] = True
+ if bridge in ('yes', 'true'):
+ properties['is_bridge'] = True
+
+ return shape, properties, fid
+
+
+def road_trim_properties(shape, properties, fid, zoom):
+ properties = _remove_properties(properties, 'bridge', 'tunnel')
+ return shape, properties, fid
+
+
+def _reverse_line_direction(shape):
+ if shape.type != 'LineString':
+ return False
+ shape.coords = shape.coords[::-1]
+ return True
+
+
+def road_oneway(shape, properties, fid, zoom):
+ oneway = properties.get('oneway')
+ if oneway in ('-1', 'reverse'):
+ did_reverse = _reverse_line_direction(shape)
+ if did_reverse:
+ properties['oneway'] = 'yes'
+ elif oneway in ('true', '1'):
+ properties['oneway'] = 'yes'
+ elif oneway in ('false', '0'):
+ properties['oneway'] = 'no'
+ return shape, properties, fid
+
+
+def road_abbreviate_name(shape, properties, fid, zoom):
+ name = properties.get('name', None)
+ if not name:
+ return shape, properties, fid
+ short_name = short_street_name(name)
+ properties['name'] = short_name
+ return shape, properties, fid
+
+
+def route_name(shape, properties, fid, zoom):
+ rn = properties.get('route_name')
+ if rn:
+ name = properties.get('name')
+ if not name:
+ properties['name'] = rn
+ del properties['route_name']
+ elif rn == name:
+ del properties['route_name']
+ return shape, properties, fid
+
+
+def place_population_int(shape, properties, fid, zoom):
+ population_str = properties.pop('population', None)
+ population = to_float(population_str)
+ if population is not None:
+ properties['population'] = int(population)
+ return shape, properties, fid
+
+
+def pois_capacity_int(shape, properties, fid, zoom):
+ pois_capacity_str = properties.pop('capacity', None)
+ capacity = to_float(pois_capacity_str)
+ if capacity is not None:
+ properties['capacity'] = int(capacity)
+ return shape, properties, fid
+
+
+def water_tunnel(shape, properties, fid, zoom):
+ tunnel = properties.pop('tunnel', None)
+ if tunnel in (None, 'no', 'false', '0'):
+ properties.pop('is_tunnel', None)
+ else:
+ properties['is_tunnel'] = True
+ return shape, properties, fid
+
+
+def admin_level_as_int(shape, properties, fid, zoom):
+ admin_level_str = properties.pop('admin_level', None)
+ if admin_level_str is None:
+ return shape, properties, fid
+ try:
+ admin_level_int = int(admin_level_str)
+ except ValueError:
+ return shape, properties, fid
+ properties['admin_level'] = admin_level_int
+ return shape, properties, fid
+
+
+def tags_create_dict(shape, properties, fid, zoom):
+ tags_hstore = properties.get('tags')
+ if tags_hstore:
+ tags = dict(tags_hstore)
+ properties['tags'] = tags
+ return shape, properties, fid
+
+
+def tags_remove(shape, properties, fid, zoom):
+ properties.pop('tags', None)
+ return shape, properties, fid
+
+
+tag_name_alternates = (
+ 'int_name',
+ 'loc_name',
+ 'nat_name',
+ 'official_name',
+ 'old_name',
+ 'reg_name',
+ 'short_name',
+ 'name_left',
+ 'name_right',
+)
+
+
+def _convert_wof_l10n_name(x):
+ lang_str_iso_639_3 = x[:3]
+ if len(lang_str_iso_639_3) != 3:
+ return None
+ try:
+ pycountry.languages.get(iso639_3_code=lang_str_iso_639_3)
+ except KeyError:
+ return None
+ return lang_str_iso_639_3
+
+
+def _normalize_osm_lang_code(x):
+ # first try a 639-1 code
+ try:
+ lang = pycountry.languages.get(iso639_1_code=x)
+ except KeyError:
+ # next, try a 639-2 code
+ try:
+ lang = pycountry.languages.get(iso639_2T_code=x)
+ except KeyError:
+ # finally, try a 639-3 code
+ try:
+ lang = pycountry.languages.get(iso639_3_code=x)
+ except KeyError:
+ return None
+ iso639_3_code = lang.iso639_3_code.encode('utf-8')
+ return iso639_3_code
+
+
+def _normalize_country_code(x):
+ x = x.upper()
+ try:
+ c = pycountry.countries.get(alpha2=x)
+ except KeyError:
+ try:
+ c = pycountry.countries.get(alpha3=x)
+ except KeyError:
+ try:
+ c = pycountry.countries.get(numeric_code=x)
+ except KeyError:
+ return None
+ alpha2_code = c.alpha2
+ return alpha2_code
+
+
+osm_l10n_lookup = {
+ 'zh-min-nan': 'nan',
+ 'zh-yue': 'yue',
+}
+
+
+def osm_l10n_name_lookup(x):
+ lookup = osm_l10n_lookup.get(x)
+ if lookup is not None:
+ return lookup
+ else:
+ return x
+
+
+def _convert_osm_l10n_name(x):
+ x = osm_l10n_name_lookup(x)
+
+ if '_' not in x:
+ return _normalize_osm_lang_code(x)
+
+ fields_by_underscore = x.split('_', 1)
+ lang_code_candidate, country_candidate = fields_by_underscore
+
+ lang_code_result = _normalize_osm_lang_code(lang_code_candidate)
+ if lang_code_result is None:
+ return None
+
+ country_result = _normalize_country_code(country_candidate)
+ if country_result is None:
+ return None
+
+ result = '%s_%s' % (lang_code_result, country_result)
+ return result
+
+
+def tags_name_i18n(shape, properties, fid, zoom):
+ tags = properties.get('tags')
+ if not tags:
+ return shape, properties, fid
+
+ name = properties.get('name')
+ if not name:
+ return shape, properties, fid
+
+ source = properties.get('source')
+ is_wof = source == 'whosonfirst.mapzen.com'
+ is_osm = source == 'openstreetmap.org'
+
+ alt_name_prefix_candidates = []
+ convert_fn = lambda x: None
+ if is_osm:
+ alt_name_prefix_candidates = [
+ 'name:left:', 'name:right:', 'name:', 'alt_name:', 'old_name:'
+ ]
+ convert_fn = _convert_osm_l10n_name
+ elif is_wof:
+ alt_name_prefix_candidates = ['name:']
+ convert_fn = _convert_wof_l10n_name
+
+ for k, v in tags.items():
+ if v == name:
+ continue
+ for candidate in alt_name_prefix_candidates:
+ if k.startswith(candidate):
+ lang_code = k[len(candidate):]
+ normalized_lang_code = convert_fn(lang_code)
+ if normalized_lang_code:
+ lang_key = '%s%s' % (candidate, normalized_lang_code)
+ properties[lang_key] = v
+
+ for alt_tag_name_candidate in tag_name_alternates:
+ alt_tag_name_value = tags.get(alt_tag_name_candidate)
+ if alt_tag_name_value and alt_tag_name_value != name:
+ properties[alt_tag_name_candidate] = alt_tag_name_value
+
+ return shape, properties, fid
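The prefix-matching loop above can be sketched in isolation. Here `normalize` is a hypothetical stand-in for the pycountry-backed `_convert_osm_l10n_name`, backed by a tiny made-up table, so the sketch runs without pycountry:

```python
# Sketch of projecting localized name tags onto properties.
# normalize() is a toy stand-in for the real converters.
LANG_TABLE = {'en': 'eng', 'de': 'deu'}

def normalize(code):
    return LANG_TABLE.get(code)

tags = {'name:en': 'Vienna', 'name:de': 'Wien', 'name:xx': 'bogus'}
name = 'Wien'
properties = {'name': name}

for k, v in tags.items():
    if v == name:
        continue  # skip tags that duplicate the primary name
    if k.startswith('name:'):
        code = normalize(k[len('name:'):])
        if code:
            properties['name:' + code] = v

print(properties)
```

Only `name:en` survives: `name:de` duplicates the primary name and `name:xx` fails normalization, mirroring how the real transform drops duplicates and unknown language codes.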
+
+
+def _no_none_min(a, b):
+ """
+ Usually, `min(None, a)` will return None. This isn't
+ what we want, so this function returns the non-None
+ argument instead. This is equivalent to treating None
+ as greater than any other value.
+ """
+
+ if a is None:
+ return b
+ elif b is None:
+ return a
+ else:
+ return min(a, b)
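A quick standalone sketch of the None-as-greater behaviour described above (a reimplementation for illustration, not the module's own function):

```python
# min() that treats None as greater than any other value,
# so the non-None argument always wins.
def no_none_min(a, b):
    if a is None:
        return b
    if b is None:
        return a
    return min(a, b)

print(no_none_min(None, 3))   # 3
print(no_none_min(2, None))   # 2
print(no_none_min(2, 3))      # 2
print(no_none_min(None, None))  # None
```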
+
+
+def _sorted_attributes(features, attrs, attribute):
+ """
+ When the list of attributes is a dictionary, use the
+ sort key parameter to order the feature attributes.
+ evaluate it as a function and return it. If it's not
+ in the right format, attrs isn't a dict then returns
+ None.
+ """
+
+ sort_key = attrs.get('sort_key')
+ reverse = attrs.get('reverse')
+
+ assert sort_key is not None, "Configuration " + \
+ "parameter 'sort_key' is missing, please " + \
+ "check your configuration."
+
+ # first, we find the _minimum_ ordering over the
+ # group of key values. this is because we only do
+ # the intersection in groups by the cutting
+ # attribute, so can only sort in accordance with
+ # that.
+ group = dict()
+ for feature in features:
+ val = feature[1].get(sort_key)
+ key = feature[1].get(attribute)
+ val = _no_none_min(val, group.get(key))
+ group[key] = val
+
+ # extract the sorted list of attributes from the
+ # grouped (attribute, order) pairs, ordering by
+ # the order.
+ all_attrs = sorted(group.iteritems(),
+ key=lambda x: x[1], reverse=bool(reverse))
+
+ # strip out the sort key in return
+ return [x[0] for x in all_attrs]
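A minimal Python 3 sketch of the grouping-then-sorting step above, using plain tuples in the same `(shape, props, fid)` layout as the code; the sample attribute names and values are made up:

```python
# Group features by a cutting attribute, keeping the minimum
# sort_key seen for each attribute value, then order the
# attribute values by that minimum.
features = [
    (None, {'kind': 'park', 'min_zoom': 12}, 1),
    (None, {'kind': 'wood', 'min_zoom': 9}, 2),
    (None, {'kind': 'park', 'min_zoom': 8}, 3),
]

group = {}
for _shape, props, _fid in features:
    key = props.get('kind')
    val = props.get('min_zoom')
    prev = group.get(key)
    group[key] = val if prev is None else min(prev, val)

ordered = [k for k, _v in sorted(group.items(), key=lambda kv: kv[1])]
print(ordered)  # 'park' first: its minimum (8) is smallest
```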
+
+
+# the table of geometry dimensions indexed by geometry
+# type name. it would be better to use geometry type ID,
+# but it seems like that isn't exposed.
+#
+# each of these is a bit-mask, so zero dimensions is
+# represented by 1, one by 2, etc... this is to support
+# things like geometry collections where the type isn't
+# statically known.
+_NULL_DIMENSION = 0
+_POINT_DIMENSION = 1
+_LINE_DIMENSION = 2
+_POLYGON_DIMENSION = 4
+
+
+_GEOMETRY_DIMENSIONS = {
+ 'Point': _POINT_DIMENSION,
+ 'LineString': _LINE_DIMENSION,
+ 'LinearRing': _LINE_DIMENSION,
+ 'Polygon': _POLYGON_DIMENSION,
+ 'MultiPoint': _POINT_DIMENSION,
+ 'MultiLineString': _LINE_DIMENSION,
+ 'MultiPolygon': _POLYGON_DIMENSION,
+ 'GeometryCollection': _NULL_DIMENSION,
+}
+
+
+# returns the dimensionality of the object. so points have
+# zero dimensions, lines one, polygons two. multi* variants
+# have the same as their singular variant.
+#
+# geometry collections can hold many different types, so
+# we use a bit-mask of the dimensions and recurse down to
+# find the actual dimensionality of the stored set.
+#
+# returns a bit-mask, with these bits ORed together:
+# 1: contains a point / zero-dimensional object
+# 2: contains a linestring / one-dimensional object
+# 4: contains a polygon / two-dimensional object
+def _geom_dimensions(g):
+ dim = _GEOMETRY_DIMENSIONS.get(g.geom_type)
+ assert dim is not None, "Unknown geometry type " + \
+ "%s in transform._geom_dimensions." % \
+ repr(g.geom_type)
+
+ # recurse for geometry collections to find the true
+ # dimensionality of the geometry.
+ if dim == _NULL_DIMENSION:
+ for part in g.geoms:
+ dim = dim | _geom_dimensions(part)
+
+ return dim
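The bit-mask recursion above can be shown without shapely; this sketch ORs together the dimension masks of a collection's parts, using the same constants as the table:

```python
# Bit-mask dimensions: point=1, line=2, polygon=4. A
# collection's mask is the OR of its parts' masks, so a
# mixed point-and-line collection yields 1 | 2 == 3.
POINT, LINE, POLYGON = 1, 2, 4

def collection_dimensions(part_dims):
    dim = 0
    for d in part_dims:
        dim |= d
    return dim

print(collection_dimensions([POINT, LINE]))      # 3
print(collection_dimensions([POINT, POINT]))     # 1
print(collection_dimensions([LINE, POLYGON]))    # 6
```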
+
+
+def _flatten_geoms(shape):
+ """
+ Flatten a shape so that it is returned as a list
+ of single geometries.
+
+ >>> [g.wkt for g in _flatten_geoms(shapely.wkt.loads('GEOMETRYCOLLECTION (MULTIPOINT(-1 -1, 0 0), GEOMETRYCOLLECTION (POINT(1 1), POINT(2 2), GEOMETRYCOLLECTION (POINT(3 3))), LINESTRING(0 0, 1 1))'))]
+ ['POINT (-1 -1)', 'POINT (0 0)', 'POINT (1 1)', 'POINT (2 2)', 'POINT (3 3)', 'LINESTRING (0 0, 1 1)']
+ >>> _flatten_geoms(Polygon())
+ []
+ >>> _flatten_geoms(MultiPolygon())
+ []
+ """
+ if shape.geom_type.startswith('Multi'):
+ return shape.geoms
+
+ elif shape.is_empty:
+ return []
+
+ elif shape.type == 'GeometryCollection':
+ geoms = []
+
+ for g in shape.geoms:
+ geoms.extend(_flatten_geoms(g))
+
+ return geoms
+
+ else:
+ return [shape]
+
+
+def _filter_geom_types(shape, keep_dim):
+ """
+ Return a geometry which consists of the geometries in
+ the input shape filtered so that only those of the
+ given dimension remain. Collapses any structure (e.g:
+ of geometry collections) down to a single or multi-
+ geometry.
+
+ >>> _filter_geom_types(GeometryCollection(), _POINT_DIMENSION).wkt
+ 'GEOMETRYCOLLECTION EMPTY'
+ >>> _filter_geom_types(Point(0,0), _POINT_DIMENSION).wkt
+ 'POINT (0 0)'
+ >>> _filter_geom_types(Point(0,0), _LINE_DIMENSION).wkt
+ 'GEOMETRYCOLLECTION EMPTY'
+ >>> _filter_geom_types(Point(0,0), _POLYGON_DIMENSION).wkt
+ 'GEOMETRYCOLLECTION EMPTY'
+ >>> _filter_geom_types(LineString([(0,0),(1,1)]), _LINE_DIMENSION).wkt
+ 'LINESTRING (0 0, 1 1)'
+ >>> _filter_geom_types(Polygon([(0,0),(1,1),(1,0),(0,0)],[]), _POLYGON_DIMENSION).wkt
+ 'POLYGON ((0 0, 1 1, 1 0, 0 0))'
+ >>> _filter_geom_types(shapely.wkt.loads('GEOMETRYCOLLECTION (POINT(0 0), LINESTRING(0 0, 1 1))'), _POINT_DIMENSION).wkt
+ 'POINT (0 0)'
+ >>> _filter_geom_types(shapely.wkt.loads('GEOMETRYCOLLECTION (POINT(0 0), LINESTRING(0 0, 1 1))'), _LINE_DIMENSION).wkt
+ 'LINESTRING (0 0, 1 1)'
+ >>> _filter_geom_types(shapely.wkt.loads('GEOMETRYCOLLECTION (POINT(0 0), LINESTRING(0 0, 1 1))'), _POLYGON_DIMENSION).wkt
+ 'GEOMETRYCOLLECTION EMPTY'
+ >>> _filter_geom_types(shapely.wkt.loads('GEOMETRYCOLLECTION (POINT(0 0), GEOMETRYCOLLECTION (POINT(1 1), LINESTRING(0 0, 1 1)))'), _POINT_DIMENSION).wkt
+ 'MULTIPOINT (0 0, 1 1)'
+ >>> _filter_geom_types(shapely.wkt.loads('GEOMETRYCOLLECTION (MULTIPOINT(-1 -1, 0 0), GEOMETRYCOLLECTION (POINT(1 1), POINT(2 2), GEOMETRYCOLLECTION (POINT(3 3))), LINESTRING(0 0, 1 1))'), _POINT_DIMENSION).wkt
+ 'MULTIPOINT (-1 -1, 0 0, 1 1, 2 2, 3 3)'
+ >>> _filter_geom_types(shapely.wkt.loads('GEOMETRYCOLLECTION (LINESTRING(-1 -1, 0 0), GEOMETRYCOLLECTION (LINESTRING(1 1, 2 2), GEOMETRYCOLLECTION (POINT(3 3))), LINESTRING(0 0, 1 1))'), _LINE_DIMENSION).wkt
+ 'MULTILINESTRING ((-1 -1, 0 0), (1 1, 2 2), (0 0, 1 1))'
+ >>> _filter_geom_types(shapely.wkt.loads('GEOMETRYCOLLECTION (POLYGON((-2 -2, -2 2, 2 2, 2 -2, -2 -2)), GEOMETRYCOLLECTION (LINESTRING(1 1, 2 2), GEOMETRYCOLLECTION (POLYGON((3 3, 0 0, 1 0, 3 3)))), LINESTRING(0 0, 1 1))'), _POLYGON_DIMENSION).wkt
+ 'MULTIPOLYGON (((-2 -2, -2 2, 2 2, 2 -2, -2 -2)), ((3 3, 0 0, 1 0, 3 3)))'
+ """
+
+ # flatten the geometries, and keep the parts with the
+ # dimension that we want. each item in the parts list
+ # should be a single (non-multi) geometry.
+ parts = []
+ for g in _flatten_geoms(shape):
+ if _geom_dimensions(g) == keep_dim:
+ parts.append(g)
+
+ # figure out how to construct a multi-geometry of the
+ # dimension wanted.
+ if keep_dim == _POINT_DIMENSION:
+ constructor = MultiPoint
+
+ elif keep_dim == _LINE_DIMENSION:
+ constructor = MultiLineString
+
+ elif keep_dim == _POLYGON_DIMENSION:
+ constructor = MultiPolygon
+
+ else:
+ raise ValueError("Unknown dimension %d in _filter_geom_types" % keep_dim)
+
+ if len(parts) == 0:
+ return constructor()
+
+ elif len(parts) == 1:
+ # return the singular geometry
+ return parts[0]
+
+ else:
+ if keep_dim == _POINT_DIMENSION:
+ # not sure why the MultiPoint constructor wants
+ # its coordinates differently from MultiPolygon
+ # and MultiLineString...
+ coords = []
+ for p in parts:
+ coords.extend(p.coords)
+ return MultiPoint(coords)
+
+ else:
+ return constructor(parts)
+
+
+# creates a list of indexes, each one for a different cut
+# attribute value, in priority order.
+#
+# STRtree stores geometries and returns these from the query,
+# but doesn't appear to allow any other attributes to be
+# stored along with the geometries. this means we have to
+# separate the index out into several "layers", each having
+# the same attribute value. which isn't all that much of a
+# pain, as we need to cut the shapes in a certain order to
+# ensure priority anyway.
+#
+# intersect_func is a functor passed in to control how an
+# intersection is performed. it is passed the feature shape
+# and the cutting shape, and must return the (inside,
+# outside) parts as a pair.
+class _Cutter:
+ def __init__(self, features, attrs, attribute,
+ target_attribute, keep_geom_type,
+ intersect_func):
+ group = defaultdict(list)
+ for feature in features:
+ shape, props, fid = feature
+ attr = props.get(attribute)
+ group[attr].append(shape)
+
+ # if the user didn't supply any options for controlling
+ # the cutting priority, then just make some up based on
+ # the attributes which are present in the dataset.
+ if attrs is None:
+ all_attrs = set()
+ for feature in features:
+ all_attrs.add(feature[1].get(attribute))
+ attrs = list(all_attrs)
+
+ # alternatively, the user can specify an ordering
+ # function over the attributes.
+ elif isinstance(attrs, dict):
+ attrs = _sorted_attributes(features, attrs,
+ attribute)
+
+ cut_idxs = list()
+ for attr in attrs:
+ if attr in group:
+ cut_idxs.append((attr, STRtree(group[attr])))
+
+ self.attribute = attribute
+ self.target_attribute = target_attribute
+ self.cut_idxs = cut_idxs
+ self.keep_geom_type = keep_geom_type
+ self.intersect_func = intersect_func
+ self.new_features = []
+
+
+ # cut up the argument shape, projecting the configured
+ # attribute to the properties of the intersecting parts
+ # of the shape. adds all the selected bits to the
+ # new_features list.
+ def cut(self, shape, props, fid):
+ original_geom_dim = _geom_dimensions(shape)
+
+ for cutting_attr, cut_idx in self.cut_idxs:
+ cutting_shapes = cut_idx.query(shape)
+
+ for cutting_shape in cutting_shapes:
+ if cutting_shape.intersects(shape):
+ shape = self._intersect(
+ shape, props, fid, cutting_shape,
+ cutting_attr, original_geom_dim)
+
+ # if there's no geometry left outside the
+ # cutting shapes, then we can exit the
+ # function early, as nothing else will
+ # intersect.
+ if shape.is_empty:
+ return
+
+ # if there's still geometry left outside, then it
+ # keeps the old, unaltered properties.
+ self._add(shape, props, fid, original_geom_dim)
+
+
+ # only keep geometries where either the type is the
+ # same as the original, or we're not trying to keep the
+ # same type.
+ def _add(self, shape, props, fid, original_geom_dim):
+ # if keeping the same geometry type, then filter
+ # out anything that's different.
+ if self.keep_geom_type:
+ shape = _filter_geom_types(
+ shape, original_geom_dim)
+
+ # don't add empty shapes, they're completely
+ # useless. the previous step may also have created
+ # an empty geometry if there weren't any items of
+ # the type we're looking for.
+ if shape.is_empty:
+ return
+
+ # by this point the shape has survived any
+ # geometry type filtering above, so add it
+ # as-is.
+ self.new_features.append((shape, props, fid))
+
+
+ # intersects the shape with the cutting shape and
+ # handles attribute projection. anything "inside" is
+ # kept as it must have intersected the highest
+ # priority cutting shape already. the remainder is
+ # returned.
+ def _intersect(self, shape, props, fid, cutting_shape,
+ cutting_attr, original_geom_dim):
+ inside, outside = \
+ self.intersect_func(shape, cutting_shape)
+
+ # intersections are tricky, and it seems that the geos
+ # library (perhaps only certain versions of it) don't
+ # handle intersection of a polygon with its boundary
+ # very well. for example:
+ #
+ # >>> import shapely.geometry as g
+ # >>> p = g.Point(0,0).buffer(1.0, resolution=2)
+ # >>> b = p.boundary
+ # >>> b.intersection(p).wkt
+ # 'MULTILINESTRING ((1 0, 0.7071067811865481 -0.7071067811865469), (0.7071067811865481 -0.7071067811865469, 1.615544574432587e-15 -1), (1.615544574432587e-15 -1, -0.7071067811865459 -0.7071067811865491), (-0.7071067811865459 -0.7071067811865491, -1 -3.231089148865173e-15), (-1 -3.231089148865173e-15, -0.7071067811865505 0.7071067811865446), (-0.7071067811865505 0.7071067811865446, -4.624589118372729e-15 1), (-4.624589118372729e-15 1, 0.7071067811865436 0.7071067811865515), (0.7071067811865436 0.7071067811865515, 1 0))'
+ #
+ # the result multilinestring could be joined back into
+ # the original object. but because it has separate parts,
+ # each requires duplicating the start and end point, and
+ # each separate segment gets a different polygon buffer
+ # in Tangram - basically, it's a problem all round.
+ #
+ # two solutions to this: given that we're cutting, then
+ # the inside and outside should union back to the
+ # original shape - if either is empty then the whole
+ # object ought to be in the other.
+ #
+ # the second solution, for when there is actually some
+ # part cut, is that we can attempt to merge lines back
+ # together.
+ if outside.is_empty and not inside.is_empty:
+ inside = shape
+ elif inside.is_empty and not outside.is_empty:
+ outside = shape
+ elif original_geom_dim == _LINE_DIMENSION:
+ inside = _linemerge(inside)
+ outside = _linemerge(outside)
+
+ if cutting_attr is not None:
+ inside_props = props.copy()
+ inside_props[self.target_attribute] = cutting_attr
+ else:
+ inside_props = props
+
+ self._add(inside, inside_props, fid,
+ original_geom_dim)
+ return outside
+
+# intersect by cutting, so that the cutting shape defines
+# a part of the shape which is inside and a part which is
+# outside as two separate shapes.
+def _intersect_cut(shape, cutting_shape):
+ inside = shape.intersection(cutting_shape)
+ outside = shape.difference(cutting_shape)
+ return inside, outside
+
+
+# intersect by looking at the overlap size. we can define
+# a cut-off fraction and if that fraction or more of the
+# area of the shape is within the cutting shape, it's
+# inside, else outside.
+#
+# this is done using a closure so that we can curry away
+# the fraction parameter.
+def _intersect_overlap(min_fraction):
+ # the inner function is what will actually get
+ # called, but closing over min_fraction means it
+ # will have access to that.
+ def _f(shape, cutting_shape):
+ overlap = shape.intersection(cutting_shape).area
+ area = shape.area
+
+ # need an empty shape of the same type as the
+ # original shape, which should be possible, as
+ # it seems shapely geometries all have a default
+ # constructor to empty.
+ empty = type(shape)()
+
+ if ((area > 0) and
+ (overlap / area) >= min_fraction):
+ return shape, empty
+ else:
+ return empty, shape
+ return _f
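The closure above can be exercised without shapely by using stub shapes; `StubShape` is purely illustrative, carrying just an `area` and a fixed intersection result:

```python
# Stub shapes to show the overlap-fraction decision: if at
# least min_fraction of the shape's area is covered, the
# whole shape counts as "inside", otherwise as "outside".
class StubShape:
    def __init__(self, area, overlap_area=0.0):
        self.area = area
        self._overlap = overlap_area

    def intersection(self, other):
        return StubShape(self._overlap)

def intersect_overlap(min_fraction):
    def _f(shape, cutting_shape):
        overlap = shape.intersection(cutting_shape).area
        area = shape.area
        empty = type(shape)(0.0)
        if area > 0 and overlap / area >= min_fraction:
            return shape, empty   # wholly inside
        return empty, shape       # wholly outside
    return _f

f = intersect_overlap(0.8)
mostly_in = StubShape(10.0, overlap_area=9.0)
barely_in = StubShape(10.0, overlap_area=2.0)
inside, outside = f(mostly_in, None)
print(inside is mostly_in)    # True: 90% overlap >= 80%
inside2, outside2 = f(barely_in, None)
print(outside2 is barely_in)  # True: 20% overlap < 80%
```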
+
+
+# find a layer by iterating through all the layers. this
+# would be easier if the layers were in a dict(), but
+# that's a pretty invasive change.
+#
+# returns None if the layer can't be found.
+def _find_layer(feature_layers, name):
+
+ for feature_layer in feature_layers:
+ layer_datum = feature_layer['layer_datum']
+ layer_name = layer_datum['name']
+
+ if layer_name == name:
+ return feature_layer
+
+ return None
+
+
+# shared implementation of the intercut algorithm, used
+# both when cutting shapes and using overlap to determine
+# inside / outsideness.
+def _intercut_impl(intersect_func, feature_layers,
+ base_layer, cutting_layer, attribute,
+ target_attribute, cutting_attrs,
+ keep_geom_type):
+ # the target attribute can default to the attribute if
+ # they are distinct. but often they aren't, and that's
+ # why target_attribute is a separate parameter.
+ if target_attribute is None:
+ target_attribute = attribute
+
+ # search through all the layers and extract the ones
+ # which have the names of the base and cutting layer.
+ # it would seem to be better to use a dict() for
+ # layers, and this will give odd results if names are
+ # allowed to be duplicated.
+ base = _find_layer(feature_layers, base_layer)
+ cutting = _find_layer(feature_layers, cutting_layer)
+
+ # base or cutting layer not available. this could happen
+ # because of a config problem, in which case you'd want
+ # it to be reported. but also can happen when the client
+ # selects a subset of layers which don't include either
+ # the base or the cutting layer. then it's not an error.
+ # the interesting case is when they select the base but
+ # not the cutting layer...
+ if base is None or cutting is None:
+ return None
+
+ base_features = base['features']
+ cutting_features = cutting['features']
+
+ # make a cutter object to help out
+ cutter = _Cutter(cutting_features, cutting_attrs,
+ attribute, target_attribute,
+ keep_geom_type, intersect_func)
+
+ for base_feature in base_features:
+ # we use shape to track the current remainder of the
+ # shape after subtracting bits which are inside cuts.
+ shape, props, fid = base_feature
+
+ cutter.cut(shape, props, fid)
+
+ base['features'] = cutter.new_features
+
+ return base
+
+
+# intercut takes features from a base layer and cuts each
+# of them against a cutting layer, splitting any base
+# feature which intersects into separate inside and outside
+# parts.
+#
+# the parts of each base feature which are outside any
+# cutting feature are left unchanged. the parts which are
+# inside have their property with the key given by the
+# 'target_attribute' parameter set to the same value as the
+# property from the cutting feature with the key given by
+# the 'attribute' parameter.
+#
+# the intended use of this is to project attributes from one
+# layer to another so that they can be styled appropriately.
+#
+# - feature_layers: list of layers containing both the base
+# and cutting layer.
+# - base_layer: str name of the base layer.
+# - cutting_layer: str name of the cutting layer.
+# - attribute: optional str name of the property / attribute
+# to take from the cutting layer.
+# - target_attribute: optional str name of the property /
+# attribute to assign on the base layer. defaults to the
+# same as the 'attribute' parameter.
+# - cutting_attrs: list of str, the priority of the values
+# to be used in the cutting operation. this ensures that
+# items at the beginning of the list get cut first and
+# those values have priority (won't be overridden by any
+# other shape cutting).
+# - keep_geom_type: if truthy, then filter the output to be
+# the same type as the input. defaults to True, because
+# this seems like an eminently sensible behaviour.
+#
+# returns a feature layer which is the base layer cut by the
+# cutting layer.
+def intercut(ctx):
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ base_layer = ctx.params.get('base_layer')
+ assert base_layer, \
+ 'Parameter base_layer was missing from intercut config'
+ cutting_layer = ctx.params.get('cutting_layer')
+ assert cutting_layer, \
+ 'Parameter cutting_layer was missing from intercut ' \
+ 'config'
+ attribute = ctx.params.get('attribute')
+ # sanity check on the availability of the cutting
+ # attribute.
+ assert attribute is not None, \
+ 'Parameter attribute to intercut was None, but ' + \
+ 'should have been an attribute name. Perhaps check ' + \
+ 'your configuration file and queries.'
+
+
+ target_attribute = ctx.params.get('target_attribute')
+ cutting_attrs = ctx.params.get('cutting_attrs')
+ keep_geom_type = ctx.params.get('keep_geom_type', True)
+
+ return _intercut_impl(_intersect_cut, feature_layers,
+ base_layer, cutting_layer, attribute,
+ target_attribute, cutting_attrs, keep_geom_type)
+
+
+# overlap measures the area overlap between each feature in
+# the base layer and each in the cutting layer. if the
+# fraction of overlap is greater than the min_fraction
+# constant, then the feature in the base layer is assigned
+# a property with its value derived from the overlapping
+# feature from the cutting layer.
+#
+# the intended use of this is to project attributes from one
+# layer to another so that they can be styled appropriately.
+#
+# it has the same parameters as intercut, see above.
+#
+# returns a feature layer which is the base layer with
+# overlapping features having attributes projected from the
+# cutting layer.
+def overlap(ctx):
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ base_layer = ctx.params.get('base_layer')
+ assert base_layer, \
+ 'Parameter base_layer was missing from overlap config'
+ cutting_layer = ctx.params.get('cutting_layer')
+ assert cutting_layer, \
+ 'Parameter cutting_layer was missing from overlap ' \
+ 'config'
+ attribute = ctx.params.get('attribute')
+ # sanity check on the availability of the cutting
+ # attribute.
+ assert attribute is not None, \
+ 'Parameter attribute to overlap was None, but ' + \
+ 'should have been an attribute name. Perhaps check ' + \
+ 'your configuration file and queries.'
+
+ target_attribute = ctx.params.get('target_attribute')
+ cutting_attrs = ctx.params.get('cutting_attrs')
+ keep_geom_type = ctx.params.get('keep_geom_type', True)
+ min_fraction = ctx.params.get('min_fraction', 0.8)
+
+ return _intercut_impl(_intersect_overlap(min_fraction),
+ feature_layers, base_layer, cutting_layer, attribute,
+ target_attribute, cutting_attrs, keep_geom_type)
+
+
+# intracut cuts a layer with a set of features from that same
+# layer, which are then removed.
+#
+# for example, with water boundaries we get one set of linestrings
+# from the admin polygons and another set from the original ways
+# where the `maritime=yes` tag is set. we don't actually want
+# separate linestrings, we just want the `maritime=yes` attribute
+# on the first set of linestrings.
+def intracut(ctx):
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ base_layer = ctx.params.get('base_layer')
+ assert base_layer, \
+ 'Parameter base_layer was missing from intracut config'
+ attribute = ctx.params.get('attribute')
+ # sanity check on the availability of the cutting
+ # attribute.
+ assert attribute is not None, \
+ 'Parameter attribute to intracut was None, but ' + \
+ 'should have been an attribute name. Perhaps check ' + \
+ 'your configuration file and queries.'
+
+ base = _find_layer(feature_layers, base_layer)
+ if base is None:
+ return None
+
+ # unlike intercut & overlap, which work on separate layers,
+ # intracut separates features in the same layer into
+ # different sets to work on.
+ base_features = list()
+ cutting_features = list()
+ for shape, props, fid in base['features']:
+ if attribute in props:
+ cutting_features.append((shape, props, fid))
+ else:
+ base_features.append((shape, props, fid))
+
+ cutter = _Cutter(cutting_features, None, attribute,
+ attribute, True, _intersect_cut)
+
+ for shape, props, fid in base_features:
+ cutter.cut(shape, props, fid)
+
+ base['features'] = cutter.new_features
+
+ return base
+
+
+# place kinds, as used by OSM, mapped to their rough
+# scale_ranks so that we can provide a defaulted,
+# non-curated scale_rank / min_zoom value.
+_default_scalerank_for_place_kind = {
+ 'locality': 13,
+ 'isolated_dwelling': 13,
+ 'farm': 13,
+
+ 'hamlet': 12,
+
+ 'village': 11,
+
+ 'suburb': 10,
+ 'quarter': 10,
+ 'borough': 10,
+
+ 'town': 8,
+ 'city': 8,
+
+ 'province': 4,
+ 'state': 4,
+
+ 'sea': 3,
+
+ 'country': 0,
+ 'ocean': 0,
+ 'continent': 0
+}
+
+
+# if the feature does not have a scale_rank attribute already,
+# which would have come from a curated source, then calculate
+# a default one based on the kind of place it is.
+def calculate_default_place_scalerank(shape, properties, fid, zoom):
+ # don't override an existing attribute
+ scalerank = properties.get('scalerank')
+ if scalerank is not None:
+ return shape, properties, fid
+
+ # base calculation off kind
+ kind = properties.get('kind')
+ if kind is None:
+ return shape, properties, fid
+
+ scalerank = _default_scalerank_for_place_kind.get(kind)
+ if scalerank is None:
+ return shape, properties, fid
+
+ # adjust scalerank for state / country capitals
+ if kind in ('city', 'town'):
+ if properties.get('state_capital'):
+ scalerank -= 1
+ elif properties.get('capital'):
+ scalerank -= 2
+
+ properties['scalerank'] = scalerank
+
+ return shape, properties, fid
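A standalone sketch of the capital adjustment above, with the rank table trimmed to a few entries; the values are taken straight from the code's `_default_scalerank_for_place_kind`:

```python
# Default scale ranks, with state capitals promoted by one
# level and country capitals by two; a curated scalerank
# already on the feature always wins.
DEFAULT_SCALERANK = {'town': 8, 'city': 8, 'village': 11}

def place_scalerank(props):
    if props.get('scalerank') is not None:
        return props['scalerank']  # curated value wins
    rank = DEFAULT_SCALERANK.get(props.get('kind'))
    if rank is None:
        return None
    if props.get('kind') in ('city', 'town'):
        if props.get('state_capital'):
            rank -= 1
        elif props.get('capital'):
            rank -= 2
    return rank

print(place_scalerank({'kind': 'city'}))                         # 8
print(place_scalerank({'kind': 'city', 'capital': True}))        # 6
print(place_scalerank({'kind': 'city', 'state_capital': True}))  # 7
```

Note that `state_capital` is checked first, so a place tagged as both a state and country capital is only promoted by one.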
+
+
+def _make_new_properties(props, props_instructions):
+ """
+ make new properties from existing properties and a
+ dict of instructions.
+
+ the algorithm is:
+ - where a key appears with value True, it will be
+ copied from the existing properties.
+ - where it's a dict, the values will be looked up
+ in that dict.
+ - otherwise the value will be used directly.
+ """
+ new_props = dict()
+
+ for k, v in props_instructions.iteritems():
+ if v is True:
+ # this works even when props[k] = None
+ if k in props:
+ new_props[k] = props[k]
+ elif isinstance(v, dict):
+ # this will return None, which allows us to
+ # use the dict to set default values.
+ original_v = props.get(k)
+ if original_v in v:
+ new_props[k] = v[original_v]
+ elif isinstance(v, list) and len(v) == 1:
+ # this is a hack to implement escaping for when the output value
+ # should be a value, but that value (e.g: True, or a dict) is
+ # used for some other purpose above.
+ new_props[k] = v[0]
+ else:
+ new_props[k] = v
+
+ return new_props
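The three instruction forms (True = copy through, dict = value lookup, one-element list = escaped literal) can be sketched standalone; the property names here are invented for illustration:

```python
# Build new properties from instructions: True copies the
# existing value, a dict maps old value -> new value, a
# one-element list escapes a literal, anything else is
# used directly.
def make_new_properties(props, instructions):
    new_props = {}
    for k, v in instructions.items():
        if v is True:
            if k in props:
                new_props[k] = props[k]
        elif isinstance(v, dict):
            if props.get(k) in v:
                new_props[k] = v[props.get(k)]
        elif isinstance(v, list) and len(v) == 1:
            new_props[k] = v[0]
        else:
            new_props[k] = v
    return new_props

props = {'kind': 'river', 'name': 'Spree'}
out = make_new_properties(props, {
    'name': True,                # copy through
    'kind': {'river': 'water'},  # remap value
    'boundary': [True],          # escaped literal True
    'source': 'osm',             # direct value
})
print(out)
```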
+
+
+def _snap_to_grid(shape, grid_size):
+ """
+ Snap coordinates of a shape to a multiple of `grid_size`.
+
+ This can be useful when there's some error in point
+ positions, but we're using an algorithm which is very
+ sensitive to coordinate exactness. For example, when
+ calculating the boundary of several items, it makes a
+ big difference whether the shapes touch or there's a
+ very small gap between them.
+
+ This is implemented here because it doesn't exist in
+ GEOS or Shapely. It exists in PostGIS, but only because
+ it's implemented there as well. Seems like it would be a
+ useful thing to have in GEOS, though.
+
+ >>> _snap_to_grid(Point(0.5, 0.5), 1).wkt
+ 'POINT (1 1)'
+ >>> _snap_to_grid(Point(0.1, 0.1), 1).wkt
+ 'POINT (0 0)'
+ >>> _snap_to_grid(Point(-0.1, -0.1), 1).wkt
+ 'POINT (-0 -0)'
+ >>> _snap_to_grid(LineString([(1.1,1.1),(1.9,0.9)]), 1).wkt
+ 'LINESTRING (1 1, 2 1)'
+ >>> _snap_to_grid(Polygon([(0.1,0.1),(3.1,0.1),(3.1,3.1),(0.1,3.1),(0.1,0.1)],[[(1.1,0.9),(1.1,1.9),(2.1,1.9),(2.1,0.9),(1.1,0.9)]]), 1).wkt
+ 'POLYGON ((0 0, 3 0, 3 3, 0 3, 0 0), (1 1, 1 2, 2 2, 2 1, 1 1))'
+ >>> _snap_to_grid(MultiPoint([Point(0.1, 0.1), Point(0.9, 0.9)]), 1).wkt
+ 'MULTIPOINT (0 0, 1 1)'
+ >>> _snap_to_grid(MultiLineString([LineString([(0.1, 0.1), (0.9, 0.9)]), LineString([(0.9, 0.1),(0.1,0.9)])]), 1).wkt
+ 'MULTILINESTRING ((0 0, 1 1), (1 0, 0 1))'
+ """
+
+ # snap a single coordinate value
+ def _snap(c):
+ return grid_size * round(c / grid_size, 0)
+
+ # snap all coordinate pairs in something iterable
+ def _snap_coords(c):
+ return [(_snap(x), _snap(y)) for x, y in c]
+
+ # recursively snap all coordinates in an iterable over
+ # geometries.
+ def _snap_multi(geoms):
+ return [_snap_to_grid(g, grid_size) for g in geoms]
+
+ shape_type = shape.geom_type
+ if shape_type == 'Point':
+ return Point(_snap(shape.x), _snap(shape.y))
+
+ elif shape_type == 'LineString':
+ return LineString(_snap_coords(shape.coords))
+
+ elif shape_type == 'Polygon':
+ exterior = LinearRing(_snap_coords(shape.exterior.coords))
+ interiors = []
+ for interior in shape.interiors:
+ interiors.append(LinearRing(_snap_coords(interior.coords)))
+ return Polygon(exterior, interiors)
+
+ elif shape_type == 'MultiPoint':
+ return MultiPoint(_snap_multi(shape.geoms))
+
+ elif shape_type == 'MultiLineString':
+ return MultiLineString(_snap_multi(shape.geoms))
+
+ elif shape_type == 'MultiPolygon':
+ return MultiPolygon(_snap_multi(shape.geoms))
+
+ else:
+ raise ValueError("_snap_to_grid: unimplemented for shape type %s" % repr(shape_type))
+
+
+# returns a geometry which is the given bounds expanded by `factor`. that is,
+# if the original shape was a 1x1 box, the new one will be a `factor`x`factor`
+# box, with the same centroid as the original box.
+def _calculate_padded_bounds(factor, bounds):
+ min_x, min_y, max_x, max_y = bounds
+ dx = 0.5 * (max_x - min_x) * (factor - 1.0)
+ dy = 0.5 * (max_y - min_y) * (factor - 1.0)
+ return Box(min_x - dx, min_y - dy, max_x + dx, max_y + dy)
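The bounds expansion is plain arithmetic; a sketch returning a tuple instead of a shapely `Box`, using the same half-extent formula:

```python
# Expand (min_x, min_y, max_x, max_y) by `factor` about its
# centroid: a 1x1 box with factor=3 becomes a 3x3 box with
# the same centre.
def padded_bounds(factor, bounds):
    min_x, min_y, max_x, max_y = bounds
    dx = 0.5 * (max_x - min_x) * (factor - 1.0)
    dy = 0.5 * (max_y - min_y) * (factor - 1.0)
    return (min_x - dx, min_y - dy, max_x + dx, max_y + dy)

print(padded_bounds(3, (0.0, 0.0, 1.0, 1.0)))  # (-1.0, -1.0, 2.0, 2.0)
```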
+
+
+def exterior_boundaries(ctx):
+ """
+ create new features from the boundaries of polygons
+ in the base layer, subtracting any sections of the
+ boundary which intersect other polygons. this is
+ added as a new layer if new_layer_name is not None,
+ otherwise appended to the base layer.
+
+ the purpose of this is to provide us a shoreline /
+ river bank layer from the water layer without having
+ any of the shoreline / river bank draw over the top
+ of any of the base polygons.
+
+ properties on the lines returned are copied / adapted
+ from the existing layer using the new_props dict. see
+ _make_new_properties above for the rules.
+
+ buffer_size determines whether any buffering will be
+ done to the index polygons. a judiciously small
+ amount of buffering can help avoid "dashing" due to
+ tolerance in the intersection, but will also create
+ small overlaps between lines.
+
+ any features in feature_layers[layer] which aren't
+ polygons will be ignored.
+
+ note that the `bounds` kwarg should be filled out
+ automatically by tilequeue - it does not have to be
+ provided from the config.
+ """
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ base_layer = ctx.params.get('base_layer')
+ assert base_layer, 'Missing base_layer parameter'
+ new_layer_name = ctx.params.get('new_layer_name')
+ prop_transform = ctx.params.get('prop_transform')
+ buffer_size = ctx.params.get('buffer_size')
+ start_zoom = ctx.params.get('start_zoom', 0)
+ snap_tolerance = ctx.params.get('snap_tolerance')
+ bounds = ctx.unpadded_bounds
+
+ layer = None
+
+ # don't start processing until the start zoom
+ if zoom < start_zoom:
+ return layer
+
+ # check that the bounds parameter was, in fact, passed.
+ assert bounds is not None, \
+ "Automatic bounds parameter should have been passed."
+
+ # make a bounding box 3x larger than the original tile, but with the same
+ # centroid.
+ padded_bbox = _calculate_padded_bounds(3, bounds)
+
+ # search through all the layers and extract the one
+ # which has the name of the base layer we were given
+ # as a parameter.
+ layer = _find_layer(feature_layers, base_layer)
+
+ # if we failed to find the base layer then it's
+ # possible the user just didn't ask for it, so return
+ # an empty result.
+ if layer is None:
+ return None
+
+ if prop_transform is None:
+ prop_transform = {}
+
+ features = layer['features']
+
+ # this exists to enable a dirty hack to try and work
+ # around duplicate geometries in the database. this
+ # happens when a multipolygon relation can't
+ # supersede a member way because the way contains tags
+ # which aren't present on the relation. working around
+ # this by calling "union" on geometries proved to be
+ # too expensive (~3x current), so this hack looks at
+ # the way_area of each object, and uses that as a
+ # proxy for identity. it's not perfect, but the chance
+ # that there are two overlapping polygons of exactly
+ # the same size must be pretty small. however, the
+ # STRTree we're using as a spatial index doesn't
+ # directly support setting attributes on the indexed
+ # geometries, so this class exists to carry the area
+ # attribute through the index to the point where we
+ # want to use it.
+ class geom_with_area:
+ def __init__(self, geom, area):
+ self.geom = geom
+ self.area = area
+ self._geom = geom._geom
+ # STRtree started filtering out empty geoms at some version, so
+ # we need to proxy the is_empty property.
+ self.is_empty = geom.is_empty
+
+ # create an index so that we can efficiently find the
+ # polygons intersecting the 'current' one. Note that
+ # we're only interested in intersecting with other
+ # polygonal features, and that intersecting with lines
+ # can give some unexpected results.
+ indexable_features = list()
+ indexable_shapes = list()
+ for shape, props, fid in features:
+ if shape.geom_type in ('Polygon', 'MultiPolygon'):
+ # clip the feature to the padded bounds of the tile
+ clipped = shape.intersection(padded_bbox)
+
+ snapped = clipped
+ if snap_tolerance is not None:
+ snapped = _snap_to_grid(clipped, snap_tolerance)
+
+ # snapping coordinates and clipping shapes might make the shape
+ # invalid, so we need a way to clean them. one simple, but not
+ # foolproof, way is to buffer them by 0.
+ if not snapped.is_valid:
+ snapped = snapped.buffer(0)
+
+ # that still might not have done the trick, so drop any polygons
+ # which are still invalid so as not to cause errors later.
+ if not snapped.is_valid:
+ # TODO: log this as a warning!
+ continue
+
+ # skip any geometries that may have become empty
+ if snapped.is_empty:
+ continue
+
+ indexable_features.append((snapped, props, fid))
+ indexable_shapes.append(geom_with_area(snapped, props.get('area')))
+
+ index = STRtree(indexable_shapes)
+
+ new_features = list()
+ # loop through all the polygons, taking the boundary
+ # of each and subtracting any parts which are within
+ # other polygons. what remains (if anything) is the
+ # new feature.
+ for feature in indexable_features:
+ shape, props, fid = feature
+
+ boundary = shape.boundary
+ cutting_shapes = index.query(boundary)
+
+ for cutting_item in cutting_shapes:
+ cutting_shape = cutting_item.geom
+ cutting_area = cutting_item.area
+
+ # dirty hack: this object is probably a
+ # superseded way if the ID is positive and
+ # the area is the same as the cutting area.
+ # using the ID check here prevents the
+ # boundary from being duplicated.
+ is_superseded_way = \
+ cutting_area == props.get('area') and \
+ props.get('id') > 0
+
+ if cutting_shape is not shape and \
+ not is_superseded_way:
+ buf = cutting_shape
+
+ if buffer_size is not None:
+ buf = buf.buffer(buffer_size)
+
+ boundary = boundary.difference(buf)
+
+ # filter only linestring-like objects. we don't
+ # want any points which might have been created
+ # by the intersection.
+ boundary = _filter_geom_types(boundary, _LINE_DIMENSION)
+
+ if not boundary.is_empty:
+ new_props = _make_new_properties(props,
+ prop_transform)
+ new_features.append((boundary, new_props, fid))
+
+ if new_layer_name is None:
+ # no new layer requested, instead add new
+ # features into the same layer.
+ layer['features'].extend(new_features)
+
+ return layer
+
+ else:
+ # make a copy of the old layer's information - it
+ # shouldn't matter about most of the settings, as
+ # post-processing is one of the last operations.
+ # but we need to override the name to ensure we get
+ # some output.
+ new_layer_datum = layer['layer_datum'].copy()
+ new_layer_datum['name'] = new_layer_name
+ new_layer = layer.copy()
+ new_layer['layer_datum'] = new_layer_datum
+ new_layer['features'] = new_features
+ new_layer['name'] = new_layer_name
+
+ return new_layer
+
+
+def _inject_key(key, infix):
+ """
+ OSM keys often have several parts, separated by ':'s.
+ When we merge properties from the left and right of a
+ boundary, we want to preserve information like the
+ left and right names, but prefer the form "name:left"
+ rather than "left:name", so we have to insert an
+ infix string into these ':'-delimited arrays.
+
+ >>> _inject_key('a:b:c', 'x')
+ 'a:x:b:c'
+ >>> _inject_key('a', 'x')
+ 'a:x'
+
+ """
+ parts = key.split(':')
+ parts.insert(1, infix)
+ return ':'.join(parts)
+
+
+def _merge_left_right_props(lprops, rprops):
+ """
+ Given a set of properties to the left and right of a
+ boundary, we want to keep as many of these as possible,
+ but keeping them all might be a bit too much.
+
+ So we want to keep the key-value pairs which are the
+ same in both in the output, but merge the ones which
+ are different by infixing them with 'left' and 'right'.
+
+ >>> _merge_left_right_props({}, {})
+ {}
+ >>> _merge_left_right_props({'a':1}, {})
+ {'a:left': 1}
+ >>> _merge_left_right_props({}, {'b':2})
+ {'b:right': 2}
+ >>> _merge_left_right_props({'a':1, 'c':3}, {'b':2, 'c':3})
+ {'a:left': 1, 'c': 3, 'b:right': 2}
+ >>> _merge_left_right_props({'a':1},{'a':2})
+ {'a:left': 1, 'a:right': 2}
+ """
+ keys = set(lprops.keys()) | set(rprops.keys())
+ new_props = dict()
+
+ # props in both are copied directly if they're the same
+ # in both the left and right. they get left/right
+ # inserted after the first ':' if they're different.
+ for k in keys:
+ lv = lprops.get(k)
+ rv = rprops.get(k)
+
+ if lv == rv:
+ new_props[k] = lv
+ else:
+ if lv is not None:
+ new_props[_inject_key(k, 'left')] = lv
+ if rv is not None:
+ new_props[_inject_key(k, 'right')] = rv
+
+ return new_props
+
+
+def _make_joined_name(props):
+ """
+ Updates the argument to contain a 'name' element
+ generated from joining the left and right names.
+
+ Just to make it easier for people, we generate an
+ easy-to-display name of the form "LEFT - RIGHT".
+ The individual properties are available if the user
+ wants to generate a more complex name.
+
+ >>> x = {}
+ >>> _make_joined_name(x)
+ >>> x
+ {}
+
+ >>> x = {'name:left':'Left'}
+ >>> _make_joined_name(x)
+ >>> x
+ {'name': 'Left', 'name:left': 'Left'}
+
+ >>> x = {'name:right':'Right'}
+ >>> _make_joined_name(x)
+ >>> x
+ {'name': 'Right', 'name:right': 'Right'}
+
+ >>> x = {'name:left':'Left', 'name:right':'Right'}
+ >>> _make_joined_name(x)
+ >>> x
+ {'name:right': 'Right', 'name': 'Left - Right', 'name:left': 'Left'}
+
+ >>> x = {'name:left':'Left', 'name:right':'Right', 'name': 'Already Exists'}
+ >>> _make_joined_name(x)
+ >>> x
+ {'name:right': 'Right', 'name': 'Already Exists', 'name:left': 'Left'}
+ """
+
+ # don't overwrite an existing name
+ if 'name' in props:
+ return
+
+ lname = props.get('name:left')
+ rname = props.get('name:right')
+
+ if lname is not None:
+ if rname is not None:
+ props['name'] = "%s - %s" % (lname, rname)
+ else:
+ props['name'] = lname
+ elif rname is not None:
+ props['name'] = rname
+
+
+def _linemerge(geom):
+ """
+ Try to extract all the linear features from the geometry argument
+ and merge them all together into the smallest set of linestrings
+ possible.
+
+ This is almost identical to Shapely's linemerge, and uses it,
+ except that Shapely's throws exceptions when passed a single
+ linestring, or a geometry collection with lines and points in it.
+ So this can be thought of as a "safer" wrapper around Shapely's
+ function.
+ """
+ geom_type = geom.type
+ result_geom = None
+
+ if geom_type == 'GeometryCollection':
+ # collect together everything line-like from the geometry
+ # collection and filter out anything that's empty
+ lines = []
+ for line in geom.geoms:
+ line = _linemerge(line)
+ if not line.is_empty:
+ lines.append(line)
+
+ result_geom = linemerge(lines) if lines else None
+
+ elif geom_type == 'LineString':
+ result_geom = geom
+
+ elif geom_type == 'MultiLineString':
+ result_geom = linemerge(geom)
+
+ else:
+ result_geom = None
+
+ if result_geom is not None:
+ # simplify with very small tolerance to remove duplicate points.
+ # almost duplicate or nearly colinear points can occur due to
+ # numerical round-off or precision in the intersection algorithm, and
+ # this should help get rid of those. see also:
+ # http://lists.gispython.org/pipermail/community/2014-January/003236.html
+ #
+ # the tolerance here is hard-coded to a fraction of the coordinate
+ # magnitude. there isn't a perfect way to figure out what this tolerance
+ # should be, so this may require some tweaking.
+ epsilon = max(map(abs, result_geom.bounds)) * float_info.epsilon * 1000
+ result_geom = result_geom.simplify(epsilon, True)
+
+ result_geom_type = result_geom.type
+ # the geometry may still have invalid or repeated points if it has zero
+ # length segments, so remove anything where the length is less than
+ # epsilon.
+ if result_geom_type == 'LineString':
+ if result_geom.length < epsilon:
+ result_geom = None
+
+ elif result_geom_type == 'MultiLineString':
+ parts = []
+ for line in result_geom.geoms:
+ if line.length >= epsilon:
+ parts.append(line)
+ result_geom = MultiLineString(parts)
+
+ return result_geom if result_geom else MultiLineString([])
+
+
+def _orient(geom):
+ """
+ Given a shape, returns the counter-clockwise oriented
+ version. Does not affect points or lines.
+
+ This version is required because Shapely's version is
+ only defined for single polygons, and we want
+ something that works generically.
+
+ In the example below, note the change in order of the
+ coordinates in `p2`, which is initially not oriented
+ CCW.
+
+ >>> p1 = Polygon([[0, 0], [1, 0], [0, 1], [0, 0]])
+ >>> p2 = Polygon([[0, 1], [1, 1], [1, 0], [0, 1]])
+ >>> orient(p1).wkt
+ 'POLYGON ((0 0, 1 0, 0 1, 0 0))'
+ >>> orient(p2).wkt
+ 'POLYGON ((0 1, 1 0, 1 1, 0 1))'
+ >>> _orient(MultiPolygon([p1, p2])).wkt
+ 'MULTIPOLYGON (((0 0, 1 0, 0 1, 0 0)), ((0 1, 1 0, 1 1, 0 1)))'
+ """
+
+ def oriented_multi(kind, geom):
+ oriented_geoms = [_orient(g) for g in geom.geoms]
+ return kind(oriented_geoms)
+
+ geom_type = geom.type
+
+ if geom_type == 'Polygon':
+ geom = orient(geom)
+
+ elif geom_type == 'MultiPolygon':
+ geom = oriented_multi(MultiPolygon, geom)
+
+ elif geom_type == 'GeometryCollection':
+ geom = oriented_multi(GeometryCollection, geom)
+
+ return geom
+
+
+def admin_boundaries(ctx):
+ """
+ Given a layer with admin boundaries and inclusion polygons for
+ land-based boundaries, attempts to output a set of oriented
+ boundaries with properties from both the left and right admin
+ boundary, and also cut with the maritime information to provide
+ a `maritime_boundary: True` value where there's overlap between
+ the maritime lines and the admin boundaries.
+
+ Note that admin boundaries must already be correctly oriented.
+ In other words, each must have a positive area and run counter-
+ clockwise around the polygon for which it is an outer (or
+ clockwise if it is an inner).
+ """
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ base_layer = ctx.params.get('base_layer')
+ assert base_layer, 'Parameter base_layer missing.'
+ start_zoom = ctx.params.get('start_zoom', 0)
+
+ layer = None
+
+ # don't start processing until the start zoom
+ if zoom < start_zoom:
+ return layer
+
+ layer = _find_layer(feature_layers, base_layer)
+ if layer is None:
+ return None
+
+ # layer will have polygonal features for the admin
+ # polygons and also linear features for the maritime
+ # boundaries. further, we want to group the admin
+ # polygons by their kind, as this will reduce the
+ # working set.
+ admin_features = defaultdict(list)
+ maritime_features = list()
+ new_features = list()
+
+ for shape, props, fid in layer['features']:
+ dims = _geom_dimensions(shape)
+ kind = props.get('kind')
+ maritime_boundary = props.get('maritime_boundary')
+
+ # the reason to use this rather than compare the
+ # string of types is to catch the "multi-" types
+ # as well.
+ if dims == _LINE_DIMENSION and kind is not None:
+ admin_features[kind].append((shape, props, fid))
+
+ elif dims == _POLYGON_DIMENSION and maritime_boundary:
+ maritime_features.append((shape, {'maritime_boundary': False}, 0))
+
+ # there are separate polygons for each admin level, and
+ # we only want to intersect like with like because it
+ # makes more sense to have Country-Country and
+ # State-State boundaries (and labels) rather than the
+ # (combinatoric) set of all different levels.
+ for kind, features in admin_features.iteritems():
+ num_features = len(features)
+ envelopes = [g[0].envelope for g in features]
+
+ for i, feature in enumerate(features):
+ boundary, props, fid = feature
+ envelope = envelopes[i]
+
+ # intersect with *preceding* features to remove
+ # those boundary parts. this ensures that there
+ # are no duplicate parts.
+ for j in range(0, i):
+ cut_shape, cut_props, cut_fid = features[j]
+ cut_envelope = envelopes[j]
+ if envelope.intersects(cut_envelope):
+ boundary = boundary.difference(cut_shape)
+
+ if boundary.is_empty:
+ break
+
+ # intersect with every *later* feature. now each
+ # intersection represents a section of boundary
+ # that we want to keep.
+ for j in range(i+1, num_features):
+ cut_shape, cut_props, cut_fid = features[j]
+ cut_envelope = envelopes[j]
+
+ if envelope.intersects(cut_envelope):
+ inside, boundary = _intersect_cut(boundary, cut_shape)
+
+ inside = _linemerge(inside)
+ if not inside.is_empty:
+ new_props = _merge_left_right_props(props, cut_props)
+ new_props['id'] = props['id']
+ _make_joined_name(new_props)
+ new_features.append((inside, new_props, fid))
+
+ if boundary.is_empty:
+ break
+
+ # anything left over at the end is still a boundary,
+ # but a one-sided boundary to international waters.
+ boundary = _linemerge(boundary)
+ if not boundary.is_empty:
+ new_props = props.copy()
+ _make_joined_name(new_props)
+ new_features.append((boundary, new_props, fid))
+
+ # use intracut for maritime, but it intersects in a positive
+ # way - it sets the tag on anything which intersects, whereas
+ # we want to set maritime where it _doesn't_ intersect. so
+ # we have to flip the attribute afterwards.
+ cutter = _Cutter(maritime_features, None,
+ 'maritime_boundary', 'maritime_boundary',
+ _LINE_DIMENSION, _intersect_cut)
+
+ for shape, props, fid in new_features:
+ cutter.cut(shape, props, fid)
+
+ # flip the property, so define maritime_boundary=yes where
+ # it was previously unset and remove maritime_boundary=no.
+ for shape, props, fid in cutter.new_features:
+ maritime_boundary = props.pop('maritime_boundary', None)
+ if maritime_boundary is None:
+ props['maritime_boundary'] = True
+
+ layer['features'] = cutter.new_features
+ return layer
+
+
+def generate_label_features(ctx):
+ """
+ Generate label placement features for features in the source layer
+ whose geometry type is in `geom_types`. Polygons get a single
+ representative point; lines and points are labelled directly. Only
+ features with a name or a sport tag are considered.
+ """
+
+ feature_layers = ctx.feature_layers
+ source_layer = ctx.params.get('source_layer')
+ assert source_layer, 'generate_label_features: missing source_layer'
+ label_property_name = ctx.params.get('label_property_name')
+ label_property_value = ctx.params.get('label_property_value')
+ new_layer_name = ctx.params.get('new_layer_name')
+ geom_types = ctx.params.get('geom_types', ['Polygon', 'MultiPolygon'])
+
+ layer = _find_layer(feature_layers, source_layer)
+ if layer is None:
+ return None
+
+ new_features = []
+ for feature in layer['features']:
+ shape, properties, fid = feature
+
+ # only add the original features if updating an existing layer
+ if new_layer_name is None:
+ new_features.append(feature)
+
+ if shape.geom_type not in geom_types:
+ continue
+
+ # Additionally, the feature needs to have a name or a sport
+ # tag, otherwise it's not really useful for labelling purposes
+ name = properties.get('name')
+ if not name:
+ sport = properties.get('sport')
+ if not sport:
+ continue
+ # if we have a sport tag but no name, we only want it
+ # included if it's not a rock or stone
+ kind = properties.get('kind')
+ if kind in ('rock', 'stone'):
+ continue
+
+ # need to generate a single point for (multi)polygons, but lines and
+ # points can be labelled directly.
+ if shape.geom_type in ('Polygon', 'MultiPolygon'):
+ label_geom = shape.representative_point()
+ else:
+ label_geom = shape
+
+ label_properties = properties.copy()
+
+ if label_property_name:
+ label_properties[label_property_name] = label_property_value
+
+ label_feature = label_geom, label_properties, fid
+
+ new_features.append(label_feature)
+
+ if new_layer_name is None:
+ layer['features'] = new_features
+ return layer
+ else:
+ label_layer_datum = layer['layer_datum'].copy()
+ label_layer_datum['name'] = new_layer_name
+ label_feature_layer = dict(
+ name=new_layer_name,
+ features=new_features,
+ layer_datum=label_layer_datum,
+ )
+ return label_feature_layer
+
+
+def generate_address_points(ctx):
+ """
+ Generates address points from building polygons where there is an
+ addr:housenumber tag on the building. Removes those tags from the
+ building.
+ """
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ source_layer = ctx.params.get('source_layer')
+ assert source_layer, 'generate_address_points: missing source_layer'
+ start_zoom = ctx.params.get('start_zoom', 0)
+
+ if zoom < start_zoom:
+ return None
+
+ layer = _find_layer(feature_layers, source_layer)
+ if layer is None:
+ return None
+
+ new_features = []
+ for feature in layer['features']:
+ shape, properties, fid = feature
+
+ # We only want to create address points for polygonal
+ # buildings with address tags.
+ if shape.geom_type not in ('Polygon', 'MultiPolygon'):
+ continue
+
+ addr_housenumber = properties.get('addr_housenumber')
+
+ # consider it an address if the name of the building
+ # is just a number.
+ name = properties.get('name')
+ if name is not None and digits_pattern.match(name):
+ if addr_housenumber is None:
+ addr_housenumber = properties.pop('name')
+
+ # and also suppress the name if it's the same as
+ # the address.
+ elif name == addr_housenumber:
+ properties.pop('name')
+
+ # if there's no address, then keep the feature as-is,
+ # no modifications.
+ if addr_housenumber is None:
+ continue
+
+ label_point = shape.representative_point()
+
+ # we're only interested in a very few properties for
+ # address points.
+ label_properties = dict(
+ addr_housenumber=addr_housenumber,
+ kind='address')
+
+ source = properties.get('source')
+ if source is not None:
+ label_properties['source'] = source
+
+ addr_street = properties.get('addr_street')
+ if addr_street is not None:
+ label_properties['addr_street'] = addr_street
+
+ oid = properties.get('id')
+ if oid is not None:
+ label_properties['id'] = oid
+
+ label_feature = label_point, label_properties, fid
+
+ new_features.append(label_feature)
+
+ layer['features'].extend(new_features)
+ return layer
+
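The "name is just a number" rule above can be sketched standalone. Note that `digits_pattern` is defined elsewhere in the module; the regex below is a plausible stand-in, not the actual definition.

```python
import re

# Hypothetical stand-in for the module-level digits_pattern used above.
digits_pattern = re.compile(r'^[0-9]+$')

props = {'name': '1234'}
addr_housenumber = props.get('addr_housenumber')

name = props.get('name')
if name is not None and digits_pattern.match(name):
    if addr_housenumber is None:
        # a purely numeric name becomes the house number
        addr_housenumber = props.pop('name')
    elif name == addr_housenumber:
        # suppress a name that merely repeats the address
        props.pop('name')

print(addr_housenumber, props)  # 1234 {}
```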
+
+def parse_layer_as_float(shape, properties, fid, zoom):
+ """
+ If the 'layer' property is present on a feature, then
+ this attempts to parse it as a floating point number.
+ The old value is removed and, if it could be parsed
+ as a floating point number, the number replaces the
+ original property.
+ """
+
+ layer = properties.pop('layer', None)
+
+ if layer:
+ layer_float = to_float(layer)
+ if layer_float is not None:
+ properties['layer'] = layer_float
+
+ return shape, properties, fid
+
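The `layer` parsing above, sketched standalone. `to_float` is a helper defined elsewhere in the module; the minimal equivalent below is an assumption.

```python
# Minimal stand-in for the module's to_float helper (an assumption).
def to_float(value):
    try:
        return float(value)
    except (TypeError, ValueError):
        return None

properties = {'layer': '2', 'name': 'Bridge'}

# pop the raw value; only put it back if it parsed as a float
layer = properties.pop('layer', None)
if layer:
    layer_float = to_float(layer)
    if layer_float is not None:
        properties['layer'] = layer_float

print(properties)  # {'name': 'Bridge', 'layer': 2.0}
```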
+
+def drop_features_where(ctx):
+ """
+ Drop features entirely that match the particular "where"
+ condition. Any feature properties are available to use, as well as
+ the properties dict itself, called "properties" in the scope.
+ """
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ source_layer = ctx.params.get('source_layer')
+ assert source_layer, 'drop_features_where: missing source layer'
+ start_zoom = ctx.params.get('start_zoom', 0)
+ where = ctx.params.get('where')
+ assert where, 'drop_features_where: missing where'
+
+ if zoom < start_zoom:
+ return None
+
+ layer = _find_layer(feature_layers, source_layer)
+ if layer is None:
+ return None
+
+ where = compile(where, 'queries.yaml', 'eval')
+
+ new_features = []
+ for feature in layer['features']:
+ shape, properties, fid = feature
+
+ local = properties.copy()
+ local['properties'] = properties
+
+ if not eval(where, {}, local):
+ new_features.append(feature)
+
+ layer['features'] = new_features
+ return layer
+
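The compiled "where" evaluation above can be sketched standalone. The expression string here is hypothetical; in the real filter it comes from the `where` parameter in queries.yaml.

```python
# Hypothetical "where" expression: each property is in scope by name,
# and the whole dict is also available as "properties".
where = compile("properties.get('kind') == 'building' and area < 100",
                'queries.yaml', 'eval')

features = [
    ({'kind': 'building', 'area': 50}, 1),
    ({'kind': 'building', 'area': 500}, 2),
    ({'kind': 'park', 'area': 10}, 3),
]

kept = []
for properties, fid in features:
    local = properties.copy()
    local['properties'] = properties
    # features matching the "where" condition are dropped
    if not eval(where, {}, local):
        kept.append(fid)

print(kept)  # [2, 3]: only the small building matched and was dropped
```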
+
+def _project_properties(ctx, action):
+ """
+ Project properties down to a subset of the existing properties based on a
+ predicate `where` which returns true when the function `action` should be
+ performed. The value returned from `action` replaces the properties of the
+ feature.
+ """
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ where = ctx.params.get('where')
+ source_layer = ctx.params.get('source_layer')
+ assert source_layer, '_project_properties: missing source layer'
+ start_zoom = ctx.params.get('start_zoom', 0)
+ end_zoom = ctx.params.get('end_zoom')
+
+ if zoom < start_zoom:
+ return None
+
+ if end_zoom is not None and zoom > end_zoom:
+ return None
+
+ layer = _find_layer(feature_layers, source_layer)
+ if layer is None:
+ return None
+
+ if where is not None:
+ where = compile(where, 'queries.yaml', 'eval')
+
+ new_features = []
+ for feature in layer['features']:
+ shape, props, fid = feature
+
+ # copy params to add a 'zoom' one. would prefer '$zoom', but apparently
+ # that's not allowed in python syntax.
+ local = props.copy()
+ local['zoom'] = zoom
+
+ if where is None or eval(where, {}, local):
+ props = action(props)
+
+ new_features.append((shape, props, fid))
+
+ layer['features'] = new_features
+ return layer
+
+
+def drop_properties(ctx):
+ """
+ Drop all configured properties for features in source_layer
+ """
+
+ properties = ctx.params.get('properties')
+ assert properties, 'drop_properties: missing properties'
+
+ def action(p):
+ return _remove_properties(p, *properties)
+
+ return _project_properties(ctx, action)
+
+
+def keep_properties(ctx):
+ """
+ Keep only configured properties for features in source_layer
+ """
+
+ properties = ctx.params.get('properties')
+ assert properties, 'keep_properties: missing properties'
+
+ def action(props):
+ # keep only the configured subset of the properties. the
+ # optional 'where' predicate (and its 'zoom' local) is
+ # already evaluated by _project_properties, so it doesn't
+ # need to be handled here.
+ return dict((k, v) for k, v in props.items() if k in properties)
+
+ return _project_properties(ctx, action)
+
+
+def remove_zero_area(shape, properties, fid, zoom):
+ """
+ All features get a numeric area tag, but for points this
+ is zero. The area probably isn't exactly zero, so it's
+ probably less confusing to just remove the tag to show
+ that the value is probably closer to "unspecified".
+ """
+
+ # remove the property if it's present. we _only_ want
+ # to replace it if it matches the positive, float
+ # criteria.
+ area = properties.pop("area", None)
+
+ # try to parse a string if the area has been sent as a
+ # string. it should come through as a float, though,
+ # since postgres treats it as a real.
+ if isinstance(area, (str, unicode)):
+ area = to_float(area)
+
+ if area is not None:
+ # cast to integer to match what we do for polygons.
+ # also the fractional parts of a sq.m are just
+ # noise really.
+ area = int(area)
+ if area > 0:
+ properties['area'] = area
+
+ return shape, properties, fid
+
+
+# circumference of the extent of the world in mercator "meters"
+_MERCATOR_CIRCUMFERENCE = 40075016.68
+
+
+# _Deduplicator handles the logic for deduplication. a feature
+# is considered a duplicate if it has the same property tuple
+# as another and is within a certain distance of the other.
+#
+# the property tuple is calculated by taking a tuple or list
+# of keys and extracting the value of the matching property
+# or None. if none_means_unique is true, then if any tuple
+# entry is None the feature is considered unique and kept.
+#
+# note: distance here is measured in coordinate units; i.e:
+# mercator meters!
+class _Deduplicator:
+ def __init__(self, property_keys, min_distance,
+ none_means_unique):
+ self.property_keys = property_keys
+ self.min_distance = min_distance
+ self.none_means_unique = none_means_unique
+ self.seen_items = dict()
+
+ def keep_feature(self, feature):
+ """
+ Returns true if the feature isn't a duplicate, and should
+ be kept in the output. Otherwise, returns false, as
+ another feature had the same tuple of values.
+ """
+ shape, props, fid = feature
+
+ key = tuple([props.get(k) for k in self.property_keys])
+ if self.none_means_unique and any([v is None for v in key]):
+ return True
+
+ seen_geoms = self.seen_items.get(key)
+ if seen_geoms is None:
+ # first time we've seen this item, so keep it in
+ # the output.
+ self.seen_items[key] = [shape]
+ return True
+
+ else:
+ # if the distance is greater than the minimum set
+ # for this zoom, then we also keep it.
+ distance = min([shape.distance(s) for s in seen_geoms])
+
+ if distance > self.min_distance:
+ # this feature is far enough away to count as
+ # distinct, but keep this geom to suppress any
+ # other labels nearby.
+ seen_geoms.append(shape)
+ return True
+
+ else:
+ # feature is a duplicate
+ return False
+
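A trimmed, standalone copy of the deduplication rule above, using a tiny `Point` stand-in so it runs without Shapely. Distances are in coordinate units (mercator meters), as the note above says.

```python
import math

# Stand-in for a Shapely point: just enough to measure distances.
class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def distance(self, other):
        return math.hypot(self.x - other.x, self.y - other.y)

# Condensed version of the _Deduplicator class above.
class Deduplicator(object):
    def __init__(self, property_keys, min_distance, none_means_unique):
        self.property_keys = property_keys
        self.min_distance = min_distance
        self.none_means_unique = none_means_unique
        self.seen_items = {}

    def keep_feature(self, feature):
        shape, props, fid = feature
        key = tuple(props.get(k) for k in self.property_keys)
        # any None in the tuple means "unique": keep it
        if self.none_means_unique and any(v is None for v in key):
            return True
        seen_geoms = self.seen_items.get(key)
        if seen_geoms is None:
            self.seen_items[key] = [shape]
            return True
        # keep it only if it's far enough from every previous copy
        if min(shape.distance(s) for s in seen_geoms) > self.min_distance:
            seen_geoms.append(shape)
            return True
        return False

dedup = Deduplicator(['name', 'kind'], min_distance=100.0,
                     none_means_unique=True)
features = [
    (Point(0, 0), {'name': 'Foo', 'kind': 'station'}, 1),
    (Point(10, 0), {'name': 'Foo', 'kind': 'station'}, 2),   # too close
    (Point(500, 0), {'name': 'Foo', 'kind': 'station'}, 3),  # far enough
    (Point(1, 0), {'name': None, 'kind': 'station'}, 4),     # None: unique
]
results = [dedup.keep_feature(f) for f in features]
print(results)  # [True, False, True, True]
```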
+
+def remove_duplicate_features(ctx):
+ """
+ Removes duplicate features from a layer, or set of layers. The
+ definition of duplicate is anything which has the same values
+ for the tuple of values associated with the property_keys.
+
+ If `none_means_unique` is set, which it is by default, then a
+ value of None for *any* of the values in the tuple causes the
+ feature to be considered unique and bypassed entirely. This
+ is mainly to handle things like features missing their name,
+ where we don't want to remove all but one unnamed feature.
+
+ For example, if property_keys was ['name', 'kind'], then only
+ the first feature of those with the same value for the name
+ and kind properties would be kept in the output.
+ """
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ source_layer = ctx.params.get('source_layer')
+ source_layers = ctx.params.get('source_layers')
+ start_zoom = ctx.params.get('start_zoom', 0)
+ property_keys = ctx.params.get('property_keys')
+ geometry_types = ctx.params.get('geometry_types')
+ min_distance = ctx.params.get('min_distance', 0.0)
+ none_means_unique = ctx.params.get('none_means_unique', True)
+ end_zoom = ctx.params.get('end_zoom')
+
+ # can use either a single source layer, or multiple source
+ # layers, but not both.
+ assert bool(source_layer) ^ bool(source_layers), \
+ 'remove_duplicate_features: define either source layer or source layers, but not both'
+
+ # note that the property keys or geometry types could be empty,
+ # but then this post-process filter would do nothing. so we
+ # assume that the user didn't intend this, or they wouldn't have
+ # included the filter in the first place.
+ assert property_keys, 'remove_duplicate_features: missing or empty property keys'
+ assert geometry_types, 'remove_duplicate_features: missing or empty geometry types'
+
+ if zoom < start_zoom:
+ return None
+
+ if end_zoom is not None and zoom > end_zoom:
+ return None
+
+ # allow either a single or multiple layers to be used.
+ if source_layer:
+ source_layers = [source_layer]
+
+ # correct for zoom: min_distance is given in pixels, but we
+ # want to do the comparison in coordinate units to avoid
+ # repeated conversions.
+ min_distance = min_distance * _MERCATOR_CIRCUMFERENCE / float(1 << (zoom + 8))
+
+ # keep a set of the tuple of the property keys. this will tell
+ # us if the feature is unique while allowing us to maintain the
+ # sort order by only dropping later, presumably less important,
+ # features. we keep the geometry of the seen items too, so that
+ # we can tell if any new feature is significantly far enough
+ # away that it should be shown again.
+ deduplicator = _Deduplicator(property_keys, min_distance,
+ none_means_unique)
+
+ for source_layer in source_layers:
+ layer_index = -1
+ # because this post-processor can potentially modify
+ # multiple layers, and that wasn't how the return value
+ # system was designed, instead it modifies layers
+ # *in-place*. this is abnormal, and as such requires a
+ # nice big comment like this!
+ for index, feature_layer in enumerate(feature_layers):
+ layer_datum = feature_layer['layer_datum']
+ layer_name = layer_datum['name']
+ if layer_name == source_layer:
+ layer_index = index
+ break
+
+ if layer_index < 0:
+ # TODO: warn about missing layer when we get the
+ # ability to log.
+ continue
+
+ layer = feature_layers[layer_index]
+
+ new_features = []
+ for feature in layer['features']:
+ shape, props, fid = feature
+ keep_feature = True
+
+ if geometry_types is not None and \
+ shape.geom_type in geometry_types:
+ keep_feature = deduplicator.keep_feature(feature)
+
+ if keep_feature:
+ new_features.append(feature)
+
+ # NOTE! modifying the layer *in-place*.
+ layer['features'] = new_features
+ feature_layers[layer_index] = layer
+
+ # returning None here would normally indicate that the
+ # post-processor has done nothing. but because this
+ # modifies the layers *in-place*, the return value is
+ # superfluous.
+ return None
+
+
+def merge_duplicate_stations(ctx):
+ """
+ Normalise station names by removing any parenthetical line
+ lists at the end (e.g: "Foo St (A, C, E)"). Parse this and
+ use it to replace the `subway_routes` list if that is empty
+ or isn't present.
+
+ Use the root relation ID, calculated as part of the exploration of the
+ transit relations, plus the name, now appropriately trimmed, to merge
+ station POIs together, unioning their subway routes.
+
+ Finally, re-sort the features in case the merging has caused
+ the station POIs to be out-of-order.
+ """
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ source_layer = ctx.params.get('source_layer')
+ assert source_layer, 'merge_duplicate_stations: missing source layer'
+ start_zoom = ctx.params.get('start_zoom', 0)
+ end_zoom = ctx.params.get('end_zoom')
+
+ if zoom < start_zoom:
+ return None
+
+ # we probably don't want to do this at higher zooms (e.g: 17 &
+ # 18), even if there are a bunch of stations very close
+ # together.
+ if end_zoom is not None and zoom > end_zoom:
+ return None
+
+ layer = _find_layer(feature_layers, source_layer)
+ if layer is None:
+ return None
+
+ seen_stations = {}
+ new_features = []
+ for feature in layer['features']:
+ shape, props, fid = feature
+
+ kind = props.get('kind')
+ name = props.get('name')
+ if name is not None and kind == 'station':
+ # this should match station names where the name is
+ # followed by a ()-bracketed list of line names. this
+ # is common in NYC, and we want to normalise by
+ # stripping these off and using it to provide the
+ # list of lines if we haven't already got that info.
+ m = station_pattern.match(name)
+
+ subway_routes = props.get('subway_routes', [])
+ transit_route_relation_id = props.get('mz_transit_root_relation_id')
+
+ if m:
+ # if the lines aren't present or are empty
+ if not subway_routes:
+ lines = m.group(2).split(',')
+ subway_routes = [x.strip() for x in lines]
+ props['subway_routes'] = subway_routes
+
+ # update name so that it doesn't contain all the
+ # lines.
+ name = m.group(1).strip()
+ props['name'] = name
+
+ # if the root relation ID is available, then use that for
+ # identifying duplicates. otherwise, use the name.
+ key = transit_route_relation_id or name
+
+ seen_idx = seen_stations.get(key)
+ if seen_idx is None:
+ seen_stations[key] = len(new_features)
+
+ # ensure that transit routes is present and is of
+ # list type for when we append to it later if we
+ # find a duplicate.
+ props['subway_routes'] = subway_routes
+ new_features.append(feature)
+
+ else:
+ # get the properties and append this duplicate's
+ # transit routes to the list on the original
+ # feature.
+ seen_props = new_features[seen_idx][1]
+
+ # make sure routes are unique
+ unique_subway_routes = set(subway_routes) | \
+ set(seen_props['subway_routes'])
+ seen_props['subway_routes'] = list(unique_subway_routes)
+
+ else:
+ # not a station, or name is missing - we can't
+ # de-dup these.
+ new_features.append(feature)
+
+ # might need to re-sort, if we merged any stations:
+ # removing duplicates would have changed the number
+ # of routes for each station.
+ if seen_stations:
+ sort_pois(new_features, zoom)
+
+ layer['features'] = new_features
+ return layer
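The name-splitting step above relies on a `station_pattern` defined elsewhere in this module. A hypothetical stand-in (a name followed by a ()-bracketed, comma-separated list of line names, as is common in NYC) behaves like this:

```python
import re

# hypothetical stand-in for the module's station_pattern: a base name
# followed by a ()-bracketed, comma-separated list of line names.
station_pattern = re.compile(r'^(.*?)\s*\(([^)]*)\)\s*$')

def split_station_name(name):
    # returns (base_name, [line names]), or (name, []) when there is
    # no bracketed list to strip off.
    m = station_pattern.match(name)
    if not m:
        return name, []
    routes = [x.strip() for x in m.group(2).split(',')]
    return m.group(1).strip(), routes

print(split_station_name('Fulton St (A,C,J,Z)'))
```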
+
+
+def normalize_station_properties(ctx):
+ """
+ Normalise station properties by removing those which are only used during
+ importance calculation. Stations may also carry route information, which
+ can appear as empty lists; these are removed. Flags are also set on the
+ station to indicate what kind(s) of station it might be.
+ """
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ source_layer = ctx.params.get('source_layer')
+ assert source_layer, 'normalize_station_properties: missing source layer'
+ start_zoom = ctx.params.get('start_zoom', 0)
+ end_zoom = ctx.params.get('end_zoom')
+
+ if zoom < start_zoom:
+ return None
+
+ # we probably don't want to do this at higher zooms (e.g: 17 &
+ # 18), even if there are a bunch of stations very close
+ # together.
+ if end_zoom is not None and zoom > end_zoom:
+ return None
+
+ layer = _find_layer(feature_layers, source_layer)
+ if layer is None:
+ return None
+
+ for shape, props, fid in layer['features']:
+ kind = props.get('kind')
+
+ # get rid of temporaries
+ root_relation_id = props.pop('mz_transit_root_relation_id', None)
+ props.pop('mz_transit_score', None)
+
+ if kind == 'station':
+ # remove anything that has an empty *_routes
+ # list, as this most likely indicates that we were
+ # not able to _detect_ what lines it's part of, as
+ # it seems unlikely that a station would be part of
+ # _zero_ routes.
+ for typ in ['train', 'subway', 'light_rail', 'tram']:
+ prop_name = '%s_routes' % typ
+ routes = props.pop(prop_name, [])
+ if routes:
+ props[prop_name] = routes
+ props['is_%s' % typ] = True
+
+ # if the station has a root relation ID then include
+ # that as a way for the client to link together related
+ # features.
+ if root_relation_id:
+ props['root_relation_id'] = root_relation_id
+
+ return layer
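The per-type loop above, which drops empty `*_routes` lists and sets `is_*` flags, can be sketched standalone:

```python
def set_route_flags(props):
    # minimal sketch of the loop above: drop empty *_routes lists,
    # keep non-empty ones and set an is_<type> flag alongside them.
    for typ in ['train', 'subway', 'light_rail', 'tram']:
        prop_name = '%s_routes' % typ
        routes = props.pop(prop_name, [])
        if routes:
            props[prop_name] = routes
            props['is_%s' % typ] = True
    return props

print(set_route_flags({'subway_routes': ['A', 'C'], 'tram_routes': []}))
```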
+
+
+def _match_props(props, items_matching):
+ """
+ Checks if all the items in `items_matching` are also
+ present in `props`. If so, returns true. Otherwise
+ returns false.
+ Each value in `items_matching` can be a list, in which case the
+ value from `props` must be any one of those values.
+ """
+
+ for k, v in items_matching.iteritems():
+ prop_val = props.get(k)
+ if isinstance(v, list):
+ if prop_val not in v:
+ return False
+ elif prop_val != v:
+ return False
+
+ return True
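A standalone version of the matching rule (every key must be present with the given value; a list value means "any one of these") looks like this:

```python
def match_props(props, items_matching):
    # every key in items_matching must appear in props with the given
    # value; a list value means props[k] may be any one of the values.
    for k, v in items_matching.items():
        prop_val = props.get(k)
        if isinstance(v, list):
            if prop_val not in v:
                return False
        elif prop_val != v:
            return False
    return True

print(match_props({'kind': 'station'}, {'kind': ['station', 'halt']}))
```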
+
+
+def keep_n_features(ctx):
+ """
+ Keep only the first N features matching `items_matching`
+ in the layer. This is primarily useful for removing
+ features which are abundant in some places but scarce in
+ others. Rather than try to set some global threshold which
+ works well nowhere, instead sort appropriately and take a
+ number of features which is appropriate per-tile.
+
+ This is done by counting each feature which matches _all_
+ the key-value pairs in `items_matching` and dropping any
+ further matching features once the count exceeds `max_items`.
+ """
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ source_layer = ctx.params.get('source_layer')
+ assert source_layer, 'keep_n_features: missing source layer'
+ start_zoom = ctx.params.get('start_zoom', 0)
+ end_zoom = ctx.params.get('end_zoom')
+ items_matching = ctx.params.get('items_matching')
+ max_items = ctx.params.get('max_items')
+
+ # leaving items_matching or max_items as None (or zero)
+ # would mean that this filter would do nothing, so assume
+ # that this is really a configuration error.
+ assert items_matching, 'keep_n_features: missing or empty item match dict'
+ assert max_items, 'keep_n_features: missing or zero max number of items'
+
+ if zoom < start_zoom:
+ return None
+
+ # we probably don't want to do this at higher zooms (e.g: 17 &
+ # 18), even if there are a bunch of features in the tile, as
+ # we use the high-zoom tiles for overzooming to 20+, and we'd
+ # eventually expect to see _everything_.
+ if end_zoom is not None and zoom > end_zoom:
+ return None
+
+ layer = _find_layer(feature_layers, source_layer)
+ if layer is None:
+ return None
+
+ count = 0
+ new_features = []
+ for shape, props, fid in layer['features']:
+ keep_feature = True
+
+ if _match_props(props, items_matching):
+ count += 1
+ if count > max_items:
+ keep_feature = False
+
+ if keep_feature:
+ new_features.append((shape, props, fid))
+
+ layer['features'] = new_features
+ return layer
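The keep-N behaviour can be sketched with a plain list and a predicate in place of the property match:

```python
def keep_n(features, matches, max_items):
    # keep every non-matching feature, plus only the first max_items
    # features for which matches(feature) is true.
    count = 0
    kept = []
    for feat in features:
        if matches(feat):
            count += 1
            if count > max_items:
                continue
        kept.append(feat)
    return kept

print(keep_n(['peak', 'peak', 'peak', 'town'], lambda f: f == 'peak', 2))
```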
+
+
+def rank_features(ctx):
+ """
+ Enumerate the features matching `items_matching` and insert
+ the rank as a property with the key `rank_key`. This is
+ useful for the client, so that it can selectively display
+ only the top features, or de-emphasise the later features.
+ """
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ source_layer = ctx.params.get('source_layer')
+ assert source_layer, 'rank_features: missing source layer'
+ start_zoom = ctx.params.get('start_zoom', 0)
+ items_matching = ctx.params.get('items_matching')
+ rank_key = ctx.params.get('rank_key')
+
+ # leaving items_matching or rank_key as None would mean
+ # that this filter would do nothing, so assume that this
+ # is really a configuration error.
+ assert items_matching, 'rank_features: missing or empty item match dict'
+ assert rank_key, 'rank_features: missing or empty rank key'
+
+ if zoom < start_zoom:
+ return None
+
+ layer = _find_layer(feature_layers, source_layer)
+ if layer is None:
+ return None
+
+ count = 0
+ for shape, props, fid in layer['features']:
+ if _match_props(props, items_matching):
+ count += 1
+ props[rank_key] = count
+
+ return layer
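The ranking pass numbers matching features in their existing (already sorted) order; non-matching features get no rank. A minimal sketch:

```python
def rank(features, matches, rank_key):
    # number the matching features 1, 2, ... in order; features that
    # don't match are left untouched.
    count = 0
    for props in features:
        if matches(props):
            count += 1
            props[rank_key] = count
    return features

feats = [{'kind': 'peak'}, {'kind': 'town'}, {'kind': 'peak'}]
print(rank(feats, lambda p: p['kind'] == 'peak', 'rank'))
```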
+
+
+def normalize_aerialways(shape, props, fid, zoom):
+ aerialway = props.get('aerialway')
+
+ # normalise cableway, apparently a deprecated
+ # value.
+ if aerialway == 'cableway':
+ props['aerialway'] = 'zip_line'
+
+ # 'yes' is a pretty unhelpful value, so normalise
+ # to a slightly more meaningful 'unknown', which
+ # is also a commonly-used value.
+ if aerialway == 'yes':
+ props['aerialway'] = 'unknown'
+
+ return shape, props, fid
+
+
+def numeric_min_filter(ctx):
+ """
+ Keep only features whose properties are equal to or greater
+ than the configured minima. These are in a dict per zoom
+ like this:
+
+ { 15: { 'area': 1000 }, 16: { 'area': 2000 } }
+
+ This would mean that at zooms 15 and 16, the filter was
+ active. At other zooms it would do nothing.
+
+ Multiple filters can be given for a single zoom. The
+ `mode` parameter can be set to 'any' to require that only
+ one of the filters needs to match, or any other value to
+ use the default 'all', which requires all filters to
+ match.
+ """
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ source_layer = ctx.params.get('source_layer')
+ assert source_layer, 'numeric_min_filter: missing source layer'
+ filters = ctx.params.get('filters')
+ mode = ctx.params.get('mode')
+
+ # assume missing filter is a config error.
+ assert filters, 'numeric_min_filter: missing or empty filters dict'
+
+ # get the minimum filters for this zoom, and return if
+ # there are none to apply.
+ minima = filters.get(zoom)
+ if not minima:
+ return None
+
+ layer = _find_layer(feature_layers, source_layer)
+ if layer is None:
+ return None
+
+ # choose whether all minima have to be met, or just
+ # one of them.
+ aggregate_func = all
+ if mode == 'any':
+ aggregate_func = any
+
+ new_features = []
+ for shape, props, fid in layer['features']:
+ keep = []
+
+ for prop, min_val in minima.iteritems():
+ val = props.get(prop)
+ keep.append(val >= min_val)
+
+ if aggregate_func(keep):
+ new_features.append((shape, props, fid))
+
+ layer['features'] = new_features
+ return layer
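The all/any aggregation over the configured minima reduces to this sketch (here missing properties are treated as 0 for illustration; the transform above compares the raw property value):

```python
def passes(props, minima, mode='all'):
    # 'any' requires at least one minimum to be met; any other mode
    # falls back to the default 'all', requiring every minimum.
    # missing properties are treated as 0 here for illustration.
    agg = any if mode == 'any' else all
    return agg(props.get(k, 0) >= v for k, v in minima.items())

print(passes({'area': 1500}, {'area': 1000, 'height': 10}, mode='any'))
```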
+
+
+def copy_features(ctx):
+ """
+ Copy features matching _both_ the `where` selection and the
+ `geometry_types` list to another layer. If the target layer
+ doesn't exist, it is created.
+ """
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ source_layer = ctx.params.get('source_layer')
+ target_layer = ctx.params.get('target_layer')
+ where = ctx.params.get('where')
+ geometry_types = ctx.params.get('geometry_types')
+
+ assert source_layer, 'copy_features: source layer not configured'
+ assert target_layer, 'copy_features: target layer not configured'
+ assert where, 'copy_features: you must specify how to match features in the where parameter'
+ assert geometry_types, 'copy_features: you must specify at least one type of geometry in geometry_types'
+
+ src_layer = _find_layer(feature_layers, source_layer)
+ if src_layer is None:
+ return None
+
+ tgt_layer = _find_layer(feature_layers, target_layer)
+ if tgt_layer is None:
+ # create target layer if it doesn't already exist.
+ tgt_layer_datum = src_layer['layer_datum'].copy()
+ tgt_layer_datum['name'] = target_layer
+ tgt_layer = dict(
+ name=target_layer,
+ features=[],
+ layer_datum=tgt_layer_datum,
+ )
+
+ new_features = []
+ for feature in src_layer['features']:
+ shape, props, fid = feature
+
+ if _match_props(props, where):
+ # need to deep copy, otherwise we could have some
+ # unintended side effects if either layer is
+ # mutated later on.
+ shape_copy = shape.__class__(shape)
+ new_features.append((shape_copy, props.copy(), fid))
+
+ tgt_layer['features'].extend(new_features)
+ return tgt_layer
+
+
+def make_representative_point(shape, properties, fid, zoom):
+ """
+ Replaces the geometry of each feature with its
+ representative point. This is a point which should be
+ within the interior of the geometry, which can be
+ important for labelling concave or doughnut-shaped
+ polygons.
+ """
+
+ shape = shape.representative_point()
+
+ return shape, properties, fid
+
+
+def add_iata_code_to_airports(shape, properties, fid, zoom):
+ """
+ If the feature is an airport, and it has a 3-character
+ IATA code in its tags, then move that code to its
+ properties.
+ """
+
+ kind = properties.get('kind')
+ if kind not in ('aerodrome', 'airport'):
+ return shape, properties, fid
+
+ tags = properties.get('tags')
+ if not tags:
+ return shape, properties, fid
+
+ iata_code = tags.get('iata')
+ if not iata_code:
+ return shape, properties, fid
+
+ # IATA codes should be uppercase, and most are, but there
+ # might be some in lowercase, so just normalise to upper
+ # here.
+ iata_code = iata_code.upper()
+ if iata_short_code_pattern.match(iata_code):
+ properties['iata'] = iata_code
+
+ return shape, properties, fid
+
+
+def add_uic_ref(shape, properties, fid, zoom):
+ """
+ If the feature has a valid uic_ref tag (7 digits), then move it
+ to its properties.
+ """
+
+ tags = properties.get('tags')
+ if not tags:
+ return shape, properties, fid
+
+ uic_ref = tags.get('uic_ref')
+ if not uic_ref:
+ return shape, properties, fid
+
+ uic_ref = uic_ref.strip()
+ if len(uic_ref) != 7:
+ return shape, properties, fid
+
+ try:
+ uic_ref_int = int(uic_ref)
+ except ValueError:
+ return shape, properties, fid
+ else:
+ properties['uic_ref'] = uic_ref_int
+ return shape, properties, fid
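The validation step (strip, check for exactly 7 characters, then parse as an integer) can be isolated as:

```python
def valid_uic_ref(value):
    # mirrors add_uic_ref above: a UIC reference is exactly 7 digits;
    # anything else yields None.
    value = value.strip()
    if len(value) != 7:
        return None
    try:
        return int(value)
    except ValueError:
        return None

print(valid_uic_ref('8503000'))
```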
+
+
+def merge_features(ctx):
+ """
+ Merge (linear) features with the same properties together, attempting to
+ make the resulting geometry as large as possible. Note that this will
+ remove the IDs from any merged features.
+
+ At the moment, only merging for linear features is implemented, although
+ it would be possible to extend to other geometry types.
+ """
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ source_layer = ctx.params.get('source_layer')
+ start_zoom = ctx.params.get('start_zoom', 0)
+ end_zoom = ctx.params.get('end_zoom')
+
+ assert source_layer, 'merge_features: missing source layer'
+
+ if zoom < start_zoom:
+ return None
+
+ if end_zoom is not None and zoom > end_zoom:
+ return None
+
+ layer = _find_layer(feature_layers, source_layer)
+ if layer is None:
+ return None
+
+ # a dictionary mapping the properties of a feature to a tuple of the feature
+ # IDs and a list of shapes. When we merge the features, they will lose their
+ # individual IDs, so only keep the first.
+ features_by_property = {}
+
+ # a list of all the features that we can't currently merge (at this time;
+ # points and polygons) which will be skipped by this procedure.
+ skipped_features = []
+
+ for shape, props, fid in layer['features']:
+ dims = _geom_dimensions(shape)
+
+ # keep the 'id' property as well as the feature ID, as these are often
+ # distinct.
+ p_id = props.pop('id', None)
+
+ # because dicts are mutable and therefore not hashable, we have to
+ # transform their items into a frozenset instead.
+ frozen_props = frozenset(props.items())
+
+ if dims != _LINE_DIMENSION:
+ skipped_features.append((shape, props, fid))
+
+ elif frozen_props in features_by_property:
+ features_by_property[frozen_props][2].append(shape)
+
+ else:
+ features_by_property[frozen_props] = (fid, p_id, [shape])
+
+ new_features = []
+ for frozen_props, (fid, p_id, shapes) in features_by_property.iteritems():
+ # we only have lines, so _linemerge is the best we can attempt. however,
+ # the `shapes` we're operating on may be linestrings, multi-linestrings
+ # or even empty, so the first thing to do is to flatten them into a
+ # single geometry.
+ list_of_linestrings = []
+ for shape in shapes:
+ list_of_linestrings.extend(_flatten_geoms(shape))
+ multi = MultiLineString(list_of_linestrings)
+
+ # thaw the frozen properties to use in the new feature.
+ props = dict(frozen_props)
+
+ # restore any 'id' property.
+ if p_id is not None:
+ props['id'] = p_id
+
+ new_features.append((_linemerge(multi), props, fid))
+
+ new_features.extend(skipped_features)
+ layer['features'] = new_features
+
+ return layer
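The grouping trick above, freezing a mutable (and therefore unhashable) properties dict into a `frozenset` of its items so it can serve as a dictionary key, works like this:

```python
def group_key(props):
    # dicts are mutable and unhashable, so freeze the items into a
    # frozenset to use as a grouping key, as merge_features does.
    # key equality is independent of insertion order.
    return frozenset(props.items())

a = {'kind': 'path', 'surface': 'gravel'}
b = {'surface': 'gravel', 'kind': 'path'}
print(group_key(a) == group_key(b))
```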
+
+
+def normalize_tourism_kind(shape, properties, fid, zoom):
+ """
+ There are many tourism-related tags, including 'zoo=*' and 'attraction=*' in
+ addition to 'tourism=*'. This function promotes features with zoo or
+ attraction tags so that those values become their main kind.
+
+ See https://github.com/mapzen/vector-datasource/issues/440 for more details.
+ """
+
+ zoo = properties.pop('zoo', None)
+ if zoo is not None:
+ properties['kind'] = zoo
+ properties['tourism'] = 'attraction'
+ return (shape, properties, fid)
+
+ attraction = properties.pop('attraction', None)
+ if attraction is not None:
+ properties['kind'] = attraction
+ properties['tourism'] = 'attraction'
+ return (shape, properties, fid)
+
+ return (shape, properties, fid)
+
+
+def normalize_social_kind(shape, properties, fid, zoom):
+ """
+ Social facilities have an `amenity=social_facility` tag, but more
+ information is generally available in the `social_facility=*` tag, so it
+ is more informative to put that as the `kind`. We keep the old tag as
+ well, for disambiguation.
+
+ Additionally, we normalise the `social_facility:for` tag, which is a
+ semi-colon delimited list, to an actual list under the `for` property.
+ This should make it easier to consume.
+ """
+
+ kind = properties.get('kind')
+ if kind == 'social_facility':
+ tags = properties.get('tags', {})
+ if tags:
+ social_facility = tags.get('social_facility')
+ if social_facility:
+ properties['kind'] = social_facility
+ # leave the original tag on for disambiguation
+ properties['social_facility'] = social_facility
+
+ # normalise the 'for' list to an actual list
+ for_list = tags.get('social_facility:for')
+ if for_list:
+ properties['for'] = for_list.split(';')
+
+ return (shape, properties, fid)
+
+
+def normalize_medical_kind(shape, properties, fid, zoom):
+ """
+ Many medical practices, such as doctors and dentists, have a specialty,
+ which is indicated through the `healthcare:specialty` tag. This is a
+ semi-colon delimited list, so we expand it to an actual list.
+ """
+
+ kind = properties.get('kind')
+ if kind in ['clinic', 'doctors', 'dentist']:
+ tags = properties.get('tags', {})
+ if tags:
+ specialty = tags.get('healthcare:specialty')
+ if specialty:
+ properties['specialty'] = specialty.split(';')
+
+ return (shape, properties, fid)
+
+
+class _AnyMatcher(object):
+ def match(self, other):
+ return True
+
+ def __repr__(self):
+ return "*"
+
+
+class _NoneMatcher(object):
+ def match(self, other):
+ return other is None
+
+ def __repr__(self):
+ return "-"
+
+
+class _SomeMatcher(object):
+ def match(self, other):
+ return other is not None
+
+ def __repr__(self):
+ return "+"
+
+
+class _TrueMatcher(object):
+ def match(self, other):
+ return other is True
+
+ def __repr__(self):
+ return "true"
+
+
+class _ExactMatcher(object):
+ def __init__(self, value):
+ self.value = value
+
+ def match(self, other):
+ return other == self.value
+
+ def __repr__(self):
+ return repr(self.value)
+
+
+class _SetMatcher(object):
+ def __init__(self, values):
+ self.values = values
+
+ def match(self, other):
+ return other in self.values
+
+ def __repr__(self):
+ return repr(self.values)
+
+
+class _GreaterThanEqualMatcher(object):
+ def __init__(self, value):
+ self.value = value
+
+ def match(self, other):
+ return other >= self.value
+
+ def __repr__(self):
+ return '>=%r' % self.value
+
+
+class _GreaterThanMatcher(object):
+ def __init__(self, value):
+ self.value = value
+
+ def match(self, other):
+ return other > self.value
+
+ def __repr__(self):
+ return '>%r' % self.value
+
+
+class _LessThanEqualMatcher(object):
+ def __init__(self, value):
+ self.value = value
+
+ def match(self, other):
+ return other <= self.value
+
+ def __repr__(self):
+ return '<=%r' % self.value
+
+
+class _LessThanMatcher(object):
+ def __init__(self, value):
+ self.value = value
+
+ def match(self, other):
+ return other < self.value
+
+ def __repr__(self):
+ return '<%r' % self.value
+
+
+_KEY_TYPE_LOOKUP = {
+ 'int': int,
+ 'float': float,
+}
+
+
+def _parse_kt(key_type):
+ kt = key_type.split("::")
+
+ type_key = kt[1] if len(kt) > 1 else None
+ fn = _KEY_TYPE_LOOKUP.get(type_key, str)
+
+ return (kt[0], fn)
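The `key::type` header syntax parsed above splits a CSV column heading into a key name and a conversion function, defaulting to `str`:

```python
KEY_TYPE_LOOKUP = {
    'int': int,
    'float': float,
}

def parse_kt(key_type):
    # "population::int" -> ('population', int); a heading with no
    # "::type" suffix falls back to str.
    kt = key_type.split('::')
    type_key = kt[1] if len(kt) > 1 else None
    fn = KEY_TYPE_LOOKUP.get(type_key, str)
    return (kt[0], fn)

print(parse_kt('population::int'))
```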
+
+
+class CSVMatcher(object):
+ def __init__(self, fh):
+ keys = None
+ types = []
+ rows = []
+
+ self.static_any = _AnyMatcher()
+ self.static_none = _NoneMatcher()
+ self.static_some = _SomeMatcher()
+ self.static_true = _TrueMatcher()
+
+ # CSV - allow whitespace after the comma
+ reader = csv.reader(fh, skipinitialspace=True)
+ for row in reader:
+ if keys is None:
+ target_key = row.pop(-1)
+ keys = []
+ for key_type in row:
+ key, typ = _parse_kt(key_type)
+ keys.append(key)
+ types.append(typ)
+
+ else:
+ target_val = row.pop(-1)
+ for i in range(0, len(row)):
+ row[i] = self._match_val(row[i], types[i])
+ rows.append((row, target_val))
+
+ self.keys = keys
+ self.rows = rows
+ self.target_key = target_key
+
+ def _match_val(self, v, typ):
+ if v == '*':
+ return self.static_any
+ if v == '-':
+ return self.static_none
+ if v == '+':
+ return self.static_some
+ if v == 'true':
+ return self.static_true
+ if isinstance(v, str) and ';' in v:
+ return _SetMatcher(set(v.split(';')))
+ if v.startswith('>='):
+ assert len(v) > 2, 'Invalid >= matcher'
+ return _GreaterThanEqualMatcher(typ(v[2:]))
+ if v.startswith('<='):
+ assert len(v) > 2, 'Invalid <= matcher'
+ return _LessThanEqualMatcher(typ(v[2:]))
+ if v.startswith('>'):
+ assert len(v) > 1, 'Invalid > matcher'
+ return _GreaterThanMatcher(typ(v[1:]))
+ if v.startswith('<'):
+ assert len(v) > 1, 'Invalid < matcher'
+ return _LessThanMatcher(typ(v[1:]))
+ return _ExactMatcher(typ(v))
+
+ def __call__(self, properties, zoom):
+ vals = []
+ for key in self.keys:
+ # NOTE zoom is special cased
+ if key == 'zoom':
+ val = zoom
+ else:
+ val = properties.get(key)
+ vals.append(val)
+ for row, target_val in self.rows:
+ if all([a.match(b) for (a, b) in zip(row, vals)]):
+ return (self.target_key, target_val)
+
+ return None
+
+
+def csv_match_properties(ctx):
+ """
+ Add or update a property on all features which match properties which are
+ given as headings in a CSV file.
+ """
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ source_layer = ctx.params.get('source_layer')
+ start_zoom = ctx.params.get('start_zoom', 0)
+ end_zoom = ctx.params.get('end_zoom')
+ target_value_type = ctx.params.get('target_value_type')
+ matcher = ctx.resources.get('matcher')
+
+ assert source_layer, 'csv_match_properties: missing source layer'
+ assert matcher, 'csv_match_properties: missing matcher resource'
+
+ if zoom < start_zoom:
+ return None
+
+ if end_zoom is not None and zoom > end_zoom:
+ return None
+
+ layer = _find_layer(feature_layers, source_layer)
+ if layer is None:
+ return None
+
+ def _type_cast(v):
+ if target_value_type == 'int':
+ return int(v)
+ return v
+
+ for shape, props, fid in layer['features']:
+ m = matcher(props, zoom)
+ if m is not None:
+ k, v = m
+ props[k] = _type_cast(v)
+
+ return layer
+
+
+def update_parenthetical_properties(ctx):
+ """
+ If a feature's name ends with a set of values in parens, update
+ its kind and increase the min_zoom appropriately.
+ """
+
+ feature_layers = ctx.feature_layers
+ zoom = ctx.tile_coord.zoom
+ source_layer = ctx.params.get('source_layer')
+ start_zoom = ctx.params.get('start_zoom', 0)
+ end_zoom = ctx.params.get('end_zoom')
+ parenthetical_values = ctx.params.get('values')
+ target_min_zoom = ctx.params.get('target_min_zoom')
+ drop_below_zoom = ctx.params.get('drop_below_zoom')
+
+ assert parenthetical_values is not None, \
+ 'update_parenthetical_properties: missing values'
+ assert target_min_zoom is not None, \
+ 'update_parenthetical_properties: missing target_min_zoom'
+
+ if zoom < start_zoom:
+ return None
+
+ if end_zoom is not None and zoom > end_zoom:
+ return None
+
+ layer = _find_layer(feature_layers, source_layer)
+ if layer is None:
+ return None
+
+ new_features = []
+ for shape, props, fid in layer['features']:
+ name = props.get('name', '')
+ if not name:
+ new_features.append((shape, props, fid))
+ continue
+
+ keep = True
+ for value in parenthetical_values:
+ if name.endswith('(%s)' % value):
+ props['kind'] = value
+ props['min_zoom'] = target_min_zoom
+ if drop_below_zoom and zoom < drop_below_zoom:
+ keep = False
+ if keep:
+ new_features.append((shape, props, fid))
+
+ layer['features'] = new_features
+ return layer
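The suffix test above, which checks whether a name ends with one of the configured values wrapped in parens, reduces to:

```python
def parenthetical_kind(name, values):
    # return the matched value when the name ends with "(value)",
    # otherwise None.
    for value in values:
        if name.endswith('(%s)' % value):
            return value
    return None

print(parenthetical_kind('Royal Oak (closed)', ['closed', 'historical']))
```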
+
+
+def height_to_meters(shape, props, fid, zoom):
+ """
+ If the properties contain a "height" entry, then convert it to meters.
+ """
+
+ height = props.get('height')
+ if not height:
+ return shape, props, fid
+
+ props['height'] = _to_float_meters(height)
+ return shape, props, fid
+
+
+def elevation_to_meters(shape, props, fid, zoom):
+ """
+ If the properties contain an "elevation" entry, then convert it to meters.
+ """
+
+ elevation = props.get('elevation')
+ if not elevation:
+ return shape, props, fid
+
+ props['elevation'] = _to_float_meters(elevation)
+ return shape, props, fid
+
+
+def normalize_cycleway(shape, props, fid, zoom):
+ """
+ If the properties contain both a cycleway:left and cycleway:right
+ with the same values, those should be removed and replaced with a
+ single cycleway property. Additionally, if a cycleway_both tag is
+ present, normalize that to the cycleway tag.
+ """
+ cycleway = props.get('cycleway')
+ cycleway_left = props.get('cycleway_left')
+ cycleway_right = props.get('cycleway_right')
+
+ cycleway_both = props.pop('cycleway_both', None)
+ if cycleway_both and not cycleway:
+ props['cycleway'] = cycleway = cycleway_both
+
+ if (cycleway_left and cycleway_right and cycleway_left == cycleway_right
+ and (not cycleway or cycleway_left == cycleway)):
+ props['cycleway'] = cycleway_left
+ del props['cycleway_left']
+ del props['cycleway_right']
+ return shape, props, fid
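A dict-only sketch of the normalisation (collapse identical left/right values into a plain `cycleway`, and fall back to `cycleway_both` when `cycleway` is absent):

```python
def normalize(props):
    # collapse identical cycleway_left/cycleway_right into a single
    # cycleway property; project cycleway_both onto cycleway when
    # cycleway itself is absent.
    cycleway = props.get('cycleway')
    cycleway_both = props.pop('cycleway_both', None)
    if cycleway_both and not cycleway:
        props['cycleway'] = cycleway = cycleway_both

    left = props.get('cycleway_left')
    right = props.get('cycleway_right')
    if (left and right and left == right
            and (not cycleway or left == cycleway)):
        props['cycleway'] = left
        del props['cycleway_left']
        del props['cycleway_right']
    return props

print(normalize({'cycleway_left': 'lane', 'cycleway_right': 'lane'}))
```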
+
+
+def add_is_bicycle_related(shape, props, fid, zoom):
+ """
+ If the props contain a bicycle_network tag, cycleway, or
+ highway=cycleway, it should have an is_bicycle_related
+ boolean. Depends on the normalize_cycleway transform to have been
+ run first.
+ """
+ props.pop('is_bicycle_related', None)
+ if ('bicycle_network' in props or
+ 'cycleway' in props or
+ 'cycleway_left' in props or
+ 'cycleway_right' in props or
+ props.get('highway') == 'cycleway'):
+ props['is_bicycle_related'] = True
+ return shape, props, fid
+
+
+def drop_properties_with_prefix(ctx):
+ """
+ Iterate through all features, dropping all properties that start
+ with prefix.
+ """
+
+ prefix = ctx.params.get('prefix')
+ assert prefix, 'drop_properties_with_prefix: missing prefix param'
+
+ feature_layers = ctx.feature_layers
+ for feature_layer in feature_layers:
+ for shape, props, fid in feature_layer['features']:
+ for k in props.keys():
+ if k.startswith(prefix):
+ del props[k]
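The prefix-dropping loop can be sketched like this (note the `list()` copy, which makes deletion while iterating safe in both Python 2 and 3; the Python 2 code above relies on `keys()` returning a list):

```python
def drop_prefixed(props, prefix):
    # copy the keys with list() so we can delete entries from props
    # while iterating.
    for k in list(props):
        if k.startswith(prefix):
            del props[k]
    return props

print(drop_prefixed({'mz_score': 1, 'name': 'x'}, 'mz_'))
```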
diff --git a/TileStache/Goodies/VecTiles/util.py b/TileStache/Goodies/VecTiles/util.py
new file mode 100644
index 00000000..1a3dc560
--- /dev/null
+++ b/TileStache/Goodies/VecTiles/util.py
@@ -0,0 +1,12 @@
+# attempts to convert x to a floating point value,
+# first removing some common punctuation. returns
+# None if conversion failed.
+def to_float(x):
+ if x is None:
+ return None
+ # normalize punctuation
+ x = x.replace(';', '.').replace(',', '.')
+ try:
+ return float(x)
+ except ValueError:
+ return None
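The punctuation normalisation in `to_float` treats `;` and `,` as decimal points before parsing, returning None on failure:

```python
def to_float(x):
    # as above: normalise ';' and ',' to '.', then attempt a float
    # conversion; None means the value could not be parsed.
    if x is None:
        return None
    x = x.replace(';', '.').replace(',', '.')
    try:
        return float(x)
    except ValueError:
        return None

print(to_float('12,5'))
```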
diff --git a/TileStache/Mapnik.py b/TileStache/Mapnik.py
index 24120a85..de148f3b 100644
--- a/TileStache/Mapnik.py
+++ b/TileStache/Mapnik.py
@@ -4,6 +4,7 @@
known as "mapnik grid". Both require Mapnik to be installed; Grid requires
Mapnik 2.0.0 and above.
"""
+from __future__ import absolute_import
from time import time
from os.path import exists
from thread import allocate_lock
@@ -17,6 +18,11 @@
import logging
import json
+# We enabled absolute_import because case insensitive filesystems
+# cause this file to be loaded twice (the name of this file
+# conflicts with the name of the module we want to import).
+# Forcing absolute imports fixes the issue.
+
try:
import mapnik
except ImportError:
diff --git a/TileStache/Memcache.py b/TileStache/Memcache.py
index 1a1f3322..43f54942 100644
--- a/TileStache/Memcache.py
+++ b/TileStache/Memcache.py
@@ -31,8 +31,14 @@
"""
+from __future__ import absolute_import
from time import time as _time, sleep as _sleep
+# We enabled absolute_import because case insensitive filesystems
+# cause this file to be loaded twice (the name of this file
+# conflicts with the name of the module we want to import).
+# Forcing absolute imports fixes the issue.
+
try:
from memcache import Client
except ImportError:
diff --git a/TileStache/Providers.py b/TileStache/Providers.py
index 8cc7b6d8..0d309e74 100644
--- a/TileStache/Providers.py
+++ b/TileStache/Providers.py
@@ -200,8 +200,12 @@ class Proxy:
Provider name string from Modest Maps built-ins.
See ModestMaps.builtinProviders.keys() for a list.
Example: "OPENSTREETMAP".
+ - timeout (optional)
+ Defines a timeout in seconds for the request.
+ If not defined, the global default timeout setting will be used.
- One of the above is required. When both are present, url wins.
+
+ Either url or provider is required. When both are present, url wins.
Example configuration:
@@ -210,7 +214,7 @@ class Proxy:
"url": "http://tile.openstreetmap.org/{Z}/{X}/{Y}.png"
}
"""
- def __init__(self, layer, url=None, provider_name=None):
+ def __init__(self, layer, url=None, provider_name=None, timeout=None):
""" Initialize Proxy provider with layer and url.
"""
if url:
@@ -225,6 +229,8 @@ def __init__(self, layer, url=None, provider_name=None):
else:
raise Exception('Missing required url or provider parameter to Proxy provider')
+ self.timeout = timeout
+
@staticmethod
def prepareKeywordArgs(config_dict):
""" Convert configured parameters to keyword args for __init__().
@@ -237,6 +243,9 @@ def prepareKeywordArgs(config_dict):
if 'provider' in config_dict:
kwargs['provider_name'] = config_dict['provider']
+ if 'timeout' in config_dict:
+ kwargs['timeout'] = config_dict['timeout']
+
return kwargs
def renderTile(self, width, height, srs, coord):
@@ -245,8 +254,12 @@ def renderTile(self, width, height, srs, coord):
img = None
urls = self.provider.getTileUrls(coord)
+ # Explicitly tell urllib2 to get no proxies
+ proxy_support = urllib2.ProxyHandler({})
+ url_opener = urllib2.build_opener(proxy_support)
+
for url in urls:
- body = urllib.urlopen(url).read()
+ body = url_opener.open(url, timeout=self.timeout).read()
tile = Verbatim(body)
if len(urls) == 1:
@@ -278,11 +291,18 @@ class UrlTemplate:
- referer (optional)
String to use in the "Referer" header when making HTTP requests.
+ - source projection (optional)
+ Projection to transform coordinates into before making request
+ - timeout (optional)
+ Defines a timeout in seconds for the request.
+ If not defined, the global default timeout setting will be used.
+
More on string substitutions:
- http://docs.python.org/library/string.html#template-strings
"""
- def __init__(self, layer, template, referer=None):
+ def __init__(self, layer, template, referer=None, source_projection=None,
+ timeout=None):
""" Initialize a UrlTemplate provider with layer and template string.
http://docs.python.org/library/string.html#template-strings
@@ -290,6 +310,8 @@ def __init__(self, layer, template, referer=None):
self.layer = layer
self.template = Template(template)
self.referer = referer
+ self.source_projection = source_projection
+ self.timeout = timeout
@staticmethod
def prepareKeywordArgs(config_dict):
@@ -300,6 +322,12 @@ def prepareKeywordArgs(config_dict):
if 'referer' in config_dict:
kwargs['referer'] = config_dict['referer']
+ if 'source projection' in config_dict:
+ kwargs['source_projection'] = Geography.getProjectionByName(config_dict['source projection'])
+
+ if 'timeout' in config_dict:
+ kwargs['timeout'] = config_dict['timeout']
+
return kwargs
def renderArea(self, width, height, srs, xmin, ymin, xmax, ymax, zoom):
@@ -307,6 +335,17 @@ def renderArea(self, width, height, srs, xmin, ymin, xmax, ymax, zoom):
Each argument (width, height, etc.) is substituted into the template.
"""
+ if self.source_projection is not None:
+ ne_location = self.layer.projection.projLocation(Point(xmax, ymax))
+ ne_point = self.source_projection.locationProj(ne_location)
+ ymax = ne_point.y
+ xmax = ne_point.x
+ sw_location = self.layer.projection.projLocation(Point(xmin, ymin))
+ sw_point = self.source_projection.locationProj(sw_location)
+ ymin = sw_point.y
+ xmin = sw_point.x
+ srs = self.source_projection.srs
+
mapping = {'width': width, 'height': height, 'srs': srs, 'zoom': zoom}
mapping.update({'xmin': xmin, 'ymin': ymin, 'xmax': xmax, 'ymax': ymax})
@@ -316,7 +355,7 @@ def renderArea(self, width, height, srs, xmin, ymin, xmax, ymax, zoom):
if self.referer:
req.add_header('Referer', self.referer)
- body = urllib2.urlopen(req).read()
+ body = urllib2.urlopen(req, timeout=self.timeout).read()
tile = Verbatim(body)
return tile
diff --git a/TileStache/Redis.py b/TileStache/Redis.py
index e7c5baa2..a8a74836 100644
--- a/TileStache/Redis.py
+++ b/TileStache/Redis.py
@@ -39,8 +39,13 @@
"""
+from __future__ import absolute_import
from time import time as _time, sleep as _sleep
+# We enabled absolute_import because case insensitive filesystems
+# cause this file to be loaded twice (the name of this file
+# conflicts with the name of the module we want to import).
+# Forcing absolute imports fixes the issue.
try:
import redis
diff --git a/TileStache/Sandwich.py b/TileStache/Sandwich.py
index c0ac7201..ee77f4db 100644
--- a/TileStache/Sandwich.py
+++ b/TileStache/Sandwich.py
@@ -125,23 +125,35 @@
from . import Core
-import Image
-import Blit
-
-blend_modes = {
- 'screen': Blit.blends.screen,
- 'add': Blit.blends.add,
- 'multiply': Blit.blends.multiply,
- 'subtract': Blit.blends.subtract,
- 'linear light': Blit.blends.linear_light,
- 'hard light': Blit.blends.hard_light
- }
+try:
+ import Image
+except ImportError:
+ try:
+ from Pillow import Image
+ except ImportError:
+ from PIL import Image
+
+try:
+ import Blit
+
+ blend_modes = {
+ 'screen': Blit.blends.screen,
+ 'add': Blit.blends.add,
+ 'multiply': Blit.blends.multiply,
+ 'subtract': Blit.blends.subtract,
+ 'linear light': Blit.blends.linear_light,
+ 'hard light': Blit.blends.hard_light
+ }
-adjustment_names = {
- 'threshold': Blit.adjustments.threshold,
- 'curves': Blit.adjustments.curves,
- 'curves2': Blit.adjustments.curves2
- }
+ adjustment_names = {
+ 'threshold': Blit.adjustments.threshold,
+ 'curves': Blit.adjustments.curves,
+ 'curves2': Blit.adjustments.curves2
+ }
+
+except ImportError:
+    # Blit is unavailable; blend_modes and adjustment_names are left undefined.
+ pass
class Provider:
""" Sandwich Provider.
diff --git a/TileStache/VERSION b/TileStache/VERSION
new file mode 100644
index 00000000..300ca1e9
--- /dev/null
+++ b/TileStache/VERSION
@@ -0,0 +1 @@
+1.49.10
diff --git a/TileStache/Vector/__init__.py b/TileStache/Vector/__init__.py
index 9ed0d491..6ab0b080 100644
--- a/TileStache/Vector/__init__.py
+++ b/TileStache/Vector/__init__.py
@@ -450,7 +450,7 @@ def _open_layer(driver_name, parameters, dirpath):
layer = datasource.GetLayer(0)
if layer.GetSpatialRef() is None and driver_name != 'SQLite':
- raise KnownUnknown('Couldn\'t get a layer from data source %s' % source_name)
+ raise KnownUnknown('The layer has no spatial reference: %s' % source_name)
#
# Return the layer and the datasource.
diff --git a/TileStache/__init__.py b/TileStache/__init__.py
index ba117951..c92453db 100644
--- a/TileStache/__init__.py
+++ b/TileStache/__init__.py
@@ -8,7 +8,9 @@
Documentation available at http://tilestache.org/doc/
"""
-__version__ = 'N.N.N'
+import os.path
+
+__version__ = open(os.path.join(os.path.dirname(__file__), 'VERSION')).read().strip()
import re
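Moving the version string into a `VERSION` data file read at import time keeps `setup.py` and the installed package in sync from a single source. A Python 3 sketch of the read-and-strip pattern, using a temporary directory in place of the installed package directory:

```python
import os.path
import tempfile

# simulate a package directory that ships a VERSION data file
pkg_dir = tempfile.mkdtemp()
with open(os.path.join(pkg_dir, 'VERSION'), 'w') as f:
    f.write('1.49.10\n')

# the same read-and-strip pattern used for __version__ above
with open(os.path.join(pkg_dir, 'VERSION')) as f:
    version = f.read().strip()
```

The `.strip()` matters: the file ends with a newline, and a trailing newline in `__version__` would leak into download URLs and metadata.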
@@ -46,7 +48,10 @@
_pathinfo_pat = re.compile(r'^/?(?P<l>\w.+)/(?P<z>\d+)/(?P<x>-?\d+)/(?P<y>-?\d+)\.(?P<e>\w+)$')
_preview_pat = re.compile(r'^/?(?P<l>\w.+)/(preview\.html)?$')
-def getTile(layer, coord, extension, ignore_cached=False):
+# symbol used to separate layers when specifying more than one layer
+_delimiter = ','
+
+def getTile(layer, coord, extension, ignore_cached=False, suppress_cache_write=False):
''' Get a type string and tile binary for a given request layer tile.
This function is documented as part of TileStache's public API:
@@ -57,15 +62,21 @@ def getTile(layer, coord, extension, ignore_cached=False):
- coord: one ModestMaps.Core.Coordinate corresponding to a single tile.
- extension: filename extension to choose response type, e.g. "png" or "jpg".
- ignore_cached: always re-render the tile, whether it's in the cache or not.
+ - suppress_cache_write: don't save the tile to the cache
This is the main entry point, after site configuration has been loaded
and individual tiles need to be rendered.
'''
- status_code, headers, body = layer.getTileResponse(coord, extension, ignore_cached)
+ status_code, headers, body = layer.getTileResponse(coord, extension, ignore_cached, suppress_cache_write)
mime = headers.get('Content-Type')
return mime, body
+def unknownLayerMessage(config, unknown_layername):
+ """ A message that notifies that the given layer is unknown and lists out the known layers.
+ """
+ return '"%s" is not a layer I know about. \nHere are some that I do know about: \n %s.' % (unknown_layername, '\n '.join(sorted(config.layers.keys())))
+
def getPreview(layer):
""" Get a type string and dynamic map viewer HTML for a given layer.
"""
@@ -138,6 +149,19 @@ def mergePathInfo(layer, coord, extension):
return '/%(layer)s/%(z)d/%(x)d/%(y)d.%(extension)s' % locals()
+def isValidLayer(layer, config):
+ if not layer:
+ return False
+ if (layer not in config.layers):
+ if (layer.find(_delimiter) != -1):
+ multi_providers = list(ll for ll in config.layers if hasattr(config.layers[ll].provider, 'names'))
+ for l in layer.split(_delimiter):
+ if ((l not in config.layers) or (l in multi_providers)):
+ return False
+ return True
+ return False
+ return True
+
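`isValidLayer` accepts either a single known layer or a comma-delimited list in which every part is a known layer (and not itself a multi-layer provider). A simplified Python 3 sketch of that rule, omitting the multi-provider exclusion for brevity:

```python
_delimiter = ','  # symbol used to separate layers when specifying more than one

def is_valid_layer(name, known_layers):
    # A name is valid if it is a known layer outright, or a comma-delimited
    # list in which every part is itself a known layer.
    if not name:
        return False
    if name in known_layers:
        return True
    if _delimiter in name:
        return all(part in known_layers for part in name.split(_delimiter))
    return False
```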
def requestLayer(config, path_info):
""" Return a Layer.
@@ -175,11 +199,46 @@ def requestLayer(config, path_info):
layername = splitPathInfo(path_info)[0]
- if layername not in config.layers:
- raise Core.KnownUnknown('"%s" is not a layer I know about. Here are some that I do know about: %s.' % (layername, ', '.join(sorted(config.layers.keys()))))
+ if not isValidLayer(layername, config):
+ raise Core.KnownUnknown(unknownLayerMessage(config, layername))
+    custom_layer = layername.find(_delimiter) != -1
+
+ if custom_layer:
+ # we can't just assign references, because we get identity problems
+ # when tilestache tries to look up the layer's name, which won't match
+ # the list of names in the provider
+ provider_names = layername.split(_delimiter)
+ custom_layer_obj = config.layers[config.custom_layer_name]
+ config.layers[layername] = clone_layer(custom_layer_obj, provider_names)
+
return config.layers[layername]
+
+def clone_layer(layer, provider_names):
+ from TileStache.Core import Layer
+ copy = Layer(
+ layer.config,
+ layer.projection,
+ layer.metatile,
+ layer.stale_lock_timeout,
+ layer.cache_lifespan,
+ layer.write_cache,
+ layer.allowed_origin,
+ layer.max_cache_age,
+ layer.redirects,
+ layer.preview_lat,
+ layer.preview_lon,
+ layer.preview_zoom,
+ layer.preview_ext,
+ layer.bounds,
+ layer.dim,
+ )
+ copy.provider = layer.provider
+ copy.provider(copy, provider_names)
+ return copy
+
+
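As the comment in `requestLayer` notes, aliasing the configured layer under the combined name would create identity problems, so `clone_layer` constructs a fresh `Layer` and rebinds the provider with the requested names. A minimal Python 3 sketch of the copy-then-rebind idea, with a hypothetical stripped-down `Layer` standing in for `TileStache.Core.Layer`:

```python
class Layer:
    # hypothetical stand-in for TileStache.Core.Layer
    def __init__(self, config, projection):
        self.config = config
        self.projection = projection
        self.provider = None

def clone_layer(layer, provider_names):
    # Build a distinct object (not an alias), so lookups by identity see
    # a separate layer for the combined name; then attach the provider
    # names (the real code re-invokes the provider with them).
    copy = Layer(layer.config, layer.projection)
    copy.provider = provider_names  # simplified from provider(copy, names)
    return copy

base = Layer({'cache': 'disk'}, 'spherical mercator')
combined = clone_layer(base, ['roads', 'water'])
```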
def requestHandler(config_hint, path_info, query_string=None):
""" Generate a mime-type and response body for a given request.
@@ -222,9 +281,6 @@ def requestHandler2(config_hint, path_info, query_string=None, script_name=''):
except KeyError:
callback = None
- if layer.allowed_origin:
- headers.setdefault('Access-Control-Allow-Origin', layer.allowed_origin)
-
#
# Special case for index page.
#
@@ -253,7 +309,10 @@ def requestHandler2(config_hint, path_info, query_string=None, script_name=''):
else:
status_code, headers, content = layer.getTileResponse(coord, extension)
-
+
+ if layer.allowed_origin:
+ headers.setdefault('Access-Control-Allow-Origin', layer.allowed_origin)
+
if callback and 'json' in headers['Content-Type']:
headers['Content-Type'] = 'application/javascript; charset=utf-8'
content = '%s(%s)' % (callback, content)
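Moving the `Access-Control-Allow-Origin` logic to after the tile response is generated means the header is applied to every response path while still respecting a value the response already carries. A Python 3 sketch of that `setdefault` behavior:

```python
def apply_cors(headers, allowed_origin):
    # Set the CORS header only when one is configured, and never clobber
    # a value already present on the response (setdefault semantics).
    if allowed_origin:
        headers.setdefault('Access-Control-Allow-Origin', allowed_origin)
    return headers
```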
@@ -263,10 +322,11 @@ def requestHandler2(config_hint, path_info, query_string=None, script_name=''):
headers.setdefault('Expires', expires.strftime('%a %d %b %Y %H:%M:%S GMT'))
headers.setdefault('Cache-Control', 'public, max-age=%d' % layer.max_cache_age)
- except Core.KnownUnknown, e:
+ except (Core.KnownUnknown, Exception), e:
+ logging.exception(e)
out = StringIO()
- print >> out, 'Known unknown!'
+        print >> out, 'Known unknown!' if isinstance(e, Core.KnownUnknown) else 'Exception!'
print >> out, e
print >> out, ''
print >> out, '\n'.join(Core._rummy())
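The widened except clause now logs every failure with a traceback and labels the plain-text body by error type. A Python 3 sketch of that labeling (the original uses Python 2 print-chevron syntax), with `KnownUnknown` stubbed in place of `TileStache.Core.KnownUnknown`:

```python
import io
import logging

class KnownUnknown(Exception):
    # stand-in for TileStache.Core.KnownUnknown
    pass

def describe_error(e):
    # Log the full exception, then build the response body with a label
    # that distinguishes expected errors from everything else.
    logging.exception(e)
    out = io.StringIO()
    label = 'Known unknown!' if isinstance(e, KnownUnknown) else 'Exception!'
    print(label, file=out)
    print(e, file=out)
    return out.getvalue()
```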
@@ -369,8 +429,8 @@ def __call__(self, environ, start_response):
# WSGI behavior is different from CGI behavior, because we may not want
# to return a chatty rummy for likely-deployed WSGI vs. testing CGI.
#
- if layer and layer not in self.config.layers:
- return self._response(start_response, 404)
+ if not isValidLayer(layer, self.config):
+ return self._response(start_response, 404, str(unknownLayerMessage(self.config, layer)))
path_info = environ.get('PATH_INFO', None)
query_string = environ.get('QUERY_STRING', None)
diff --git a/VERSION b/VERSION
deleted file mode 100644
index 7e564d13..00000000
--- a/VERSION
+++ /dev/null
@@ -1 +0,0 @@
-1.49.8
diff --git a/Vagrant/setup.sh b/Vagrant/setup.sh
new file mode 100755
index 00000000..6601fccd
--- /dev/null
+++ b/Vagrant/setup.sh
@@ -0,0 +1,64 @@
+#!/bin/bash -e
+
+if [ -f ~/.bootstrap_complete ]; then
+ exit 0
+fi
+
+set -x
+
+whoami
+sudo apt-get -q update
+sudo apt-get -q install python-software-properties
+sudo add-apt-repository ppa:mapnik/nightly-2.3 -y
+sudo apt-get -q update
+sudo apt-get -q install libmapnik-dev mapnik-utils python-mapnik virtualenvwrapper python-dev -y
+sudo apt-get -q install gdal-bin libgdal-dev -y
+
+# needed to build gdal bindings separately
+sudo apt-get install build-essential -y
+
+# create a python virtualenv
+virtualenv -q ~/.virtualenvs/tilestache
+source ~/.virtualenvs/tilestache/bin/activate
+
+# make sure it gets activated the next time we log in
+echo "source ~/.virtualenvs/tilestache/bin/activate" >> ~/.bashrc
+
+# add system mapnik to virtualenv
+ln -s /usr/lib/pymodules/python2.7/mapnik ~/.virtualenvs/tilestache/lib/python2.7/site-packages/mapnik
+
+# for tests
+sudo apt-get -q install postgresql-9.3-postgis-2.1 memcached -y
+~/.virtualenvs/tilestache/bin/pip install nose coverage python-memcached psycopg2 werkzeug
+~/.virtualenvs/tilestache/bin/pip install pil --allow-external pil --allow-unverified pil
+
+# install basic TileStache requirements
+cd /srv/tilestache/
+~/.virtualenvs/tilestache/bin/pip install -r requirements.txt --allow-external ModestMaps --allow-unverified ModestMaps
+
+# workaround for gdal bindings
+~/.virtualenvs/tilestache/bin/pip install --no-install GDAL
+cd ~/.virtualenvs/tilestache/build/GDAL
+~/.virtualenvs/tilestache/bin/python setup.py build_ext --include-dirs=/usr/include/gdal/
+~/.virtualenvs/tilestache/bin/pip install --no-download GDAL
+
+# allow any user to connect as postgres to this test data. DO NOT USE IN PRODUCTION
+sudo sed -i '1i local test_tilestache postgres trust' /etc/postgresql/9.3/main/pg_hba.conf
+
+sudo /etc/init.d/postgresql restart
+
+# add some test data
+sudo -u postgres psql -c "drop database if exists test_tilestache"
+sudo -u postgres psql -c "create database test_tilestache"
+sudo -u postgres psql -c "create extension postgis" -d test_tilestache
+sudo -u postgres ogr2ogr -nlt MULTIPOLYGON -f "PostgreSQL" PG:"user=postgres dbname=test_tilestache" ./examples/sample_data/world_merc.shp
+
+set +x
+echo "
+****************************************************************
+* Warning: your postgres security settings (pg_hba.conf)
+* are not setup for production (i.e. have been set insecurely).
+****************************************************************"
+
+# we did it. let's mark the script as complete
+touch ~/.bootstrap_complete
diff --git a/Vagrantfile b/Vagrantfile
new file mode 100644
index 00000000..caca279c
--- /dev/null
+++ b/Vagrantfile
@@ -0,0 +1,17 @@
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+VAGRANTFILE_API_VERSION = "2"
+
+Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
+
+ config.vm.box = "Trusty64Daily"
+ config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/trusty/20140501/trusty-server-cloudimg-amd64-vagrant-disk1.box"
+
+ #config.vm.network :private_network, ip: "192.168.33.10"
+
+ config.vm.synced_folder ".", "/srv/tilestache"
+
+ config.vm.provision :shell, :privileged => false, :inline => "sh /srv/tilestache/Vagrant/setup.sh"
+
+end
diff --git a/setup.py b/setup.py
index 8004e5db..0deefdb7 100644
--- a/setup.py
+++ b/setup.py
@@ -5,7 +5,7 @@
import sys
-version = open('VERSION', 'r').read().strip()
+version = open('TileStache/VERSION', 'r').read().strip()
def is_installed(name):
@@ -16,13 +16,8 @@ def is_installed(name):
return False
-requires = ['ModestMaps >=1.3.0','simplejson', 'Werkzeug']
-
-# Soft dependency on PIL or Pillow
-if is_installed('Pillow') or sys.platform == 'win32':
- requires.append('Pillow')
-else:
- requires.append('PIL')
+requires = ['ModestMaps >=1.3.0', 'simplejson', 'Werkzeug',
+ 'mapbox-vector-tile', 'StreetNames', 'Pillow', 'pycountry']
setup(name='TileStache',
@@ -37,8 +32,12 @@ def is_installed(name):
'TileStache.Goodies',
'TileStache.Goodies.Caches',
'TileStache.Goodies.Providers',
- 'TileStache.Goodies.VecTiles'],
+ 'TileStache.Goodies.VecTiles',
+        'TileStache.Goodies.VecTiles.OSciMap4.StaticKeys',
+        'TileStache.Goodies.VecTiles.OSciMap4.StaticVals',
+        'TileStache.Goodies.VecTiles.OSciMap4.TagRewrite',
+        'TileStache.Goodies.VecTiles.OSciMap4'],
scripts=['scripts/tilestache-compose.py', 'scripts/tilestache-seed.py', 'scripts/tilestache-clean.py', 'scripts/tilestache-server.py', 'scripts/tilestache-render.py', 'scripts/tilestache-list.py'],
data_files=[('share/tilestache', ['TileStache/Goodies/Providers/DejaVuSansMono-alphanumeric.ttf'])],
- download_url='http://tilestache.org/download/TileStache-%(version)s.tar.gz' % locals(),
+ package_data={'TileStache': ['VERSION', '../doc/*.html']},
license='BSD')
diff --git a/tests/vectiles_tests.py b/tests/vectiles_tests.py
index f21a66b1..fdcf7b3d 100644
--- a/tests/vectiles_tests.py
+++ b/tests/vectiles_tests.py
@@ -88,7 +88,6 @@ def setUp(self):
"clip": false,
"dbinfo":
{
- "host": "localhost",
"user": "postgres",
"password": "",
"database": "test_tilestache"
@@ -109,7 +108,6 @@ def setUp(self):
{
"dbinfo":
{
- "host": "localhost",
"user": "postgres",
"password": "",
"database": "test_tilestache"