GeoPackage Export with One-Liners
Professional GeoPackage creation with comprehensive metadata, imagery support, and streamlined one-liner functions for PlanetScope satellite imagery analysis.
The Enhanced GeoPackage Export module provides professional capabilities for creating standardized GeoPackage files with:
- Scene Footprint Polygons: Vector layers with comprehensive Planet API metadata
- Multi-Layer Support: Vector polygons and raster imagery in single file
- Professional Schemas: Minimal, standard, and comprehensive attribute schemas
- GIS Software Integration: Direct compatibility with QGIS, ArcGIS, and other tools
- Automatic Styling: Generation of .qml style files for immediate visualization
- Cross-Platform Standards: Standardized schemas for maximum compatibility
- Enhanced One-Liner Functions: Simplified API with comprehensive Planet API support
from planetscope_py import create_scene_geopackage, quick_milan_geopackage
# Ultra-simple GeoPackage creation
gpkg_path = create_scene_geopackage(
    milan_polygon, "2025-01-01/2025-01-31"
)
print(f"Created: {gpkg_path}")

# Milan preset with quality filters
milan_gpkg = quick_milan_geopackage(
    "last_month", size="large",
    cloud_cover_max=0.2, sun_elevation_min=30
)
print(f"Milan GeoPackage: {milan_gpkg}")

from planetscope_py import GeoPackageManager, GeoPackageConfig
# Initialize GeoPackage manager
geopackage_manager = GeoPackageManager()
# Create basic GeoPackage with scene polygons
layer_info = geopackage_manager.create_scene_geopackage(
    scenes=results['features'],
    output_path="milan_scenes.gpkg",
    roi=roi
)
print("Created GeoPackage: milan_scenes.gpkg")
print(f"Vector layer: {layer_info.feature_count} scene polygons")

The primary function for creating GeoPackages with comprehensive Planet API parameter support.
from planetscope_py import create_scene_geopackage
# Basic usage
gpkg_path = create_scene_geopackage(milan_polygon, "2025-01-01/2025-01-31")
# Advanced usage with comprehensive Planet API parameters
gpkg_path = create_scene_geopackage(
    roi=milan_polygon,
    time_period="2025-01-01/2025-01-31",
    output_path="milan_quality.gpkg",
    clip_to_roi=True,
    schema="comprehensive",
    # Core quality filters
    cloud_cover_max=0.2,                 # 20% max cloud cover
    sun_elevation_min=30,                # Good illumination (degrees)
    ground_control=True,                 # Require ground control points
    quality_category="standard",         # Quality level requirement
    item_types=["PSScene"],              # Planet item types
    # Advanced quality metrics
    usable_data_min=0.8,                 # 80% usable data minimum
    visible_percent_min=0.9,             # 90% visible pixels
    clear_percent_min=0.9,               # 90% clear pixels
    clear_confidence_percent_min=0.8,    # Clear confidence threshold
    visible_confidence_percent_min=0.8,  # Visible confidence threshold
    # Atmospheric condition filters
    shadow_percent_max=0.1,              # Max 10% shadows
    snow_ice_percent_max=0.05,           # Max 5% snow/ice
    heavy_haze_percent_max=0.02,         # Max 2% heavy haze
    light_haze_percent_max=0.05,         # Max 5% light haze
    anomalous_pixels_max=0.01,           # Max 1% anomalous pixels
    # Geometric constraints
    view_angle_max=15,                   # Near-nadir viewing (degrees)
    off_nadir_max=10,                    # Low off-nadir angle (degrees)
    gsd_min=3.0,                         # Minimum ground sample distance (m)
    gsd_max=4.0,                         # Maximum ground sample distance (m)
    # Platform specifications
    satellite_ids=["PSB.SD"],            # Specific satellite IDs
    instrument="PSB_SD",                 # Instrument type filter
    provider="planet",                   # Data provider filter
    processing_level="3A"                # Processing level requirement
)

| Parameter | Type | Default | Description |
|---|---|---|---|
| `roi` | Polygon/list/dict | Required | Region of interest geometry |
| `time_period` | str | `"last_month"` | Time period specification |
| `output_path` | str | Auto-generated | Output GeoPackage path |
| `clip_to_roi` | bool | `True` | Clip scene footprints to ROI shape |
| `schema` | str | `"standard"` | Attribute schema level |
| Parameter | Type | Default | Description |
|---|---|---|---|
| `cloud_cover_max` | float | `0.3` | Maximum cloud coverage (0.0-1.0) |
| `sun_elevation_min` | float | None | Minimum sun elevation angle (degrees) |
| `ground_control` | bool | None | Require ground control points |
| `quality_category` | str | None | Quality level ("test", "standard") |
| `item_types` | list | `["PSScene"]` | Planet item types to search |
| `usable_data_min` | float | None | Minimum usable data fraction (0.0-1.0) |
| `visible_percent_min` | float | None | Minimum visible pixels percentage |
| `clear_percent_min` | float | None | Minimum clear pixels percentage |
| `shadow_percent_max` | float | None | Maximum shadow percentage |
| `snow_ice_percent_max` | float | None | Maximum snow/ice coverage |
| `heavy_haze_percent_max` | float | None | Maximum heavy haze percentage |
| `light_haze_percent_max` | float | None | Maximum light haze percentage |
| `anomalous_pixels_max` | float | None | Maximum anomalous pixels |
| `view_angle_max` | float | None | Maximum view angle (degrees) |
| `off_nadir_max` | float | None | Maximum off-nadir angle (degrees) |
| `gsd_min` | float | None | Minimum ground sample distance (meters) |
| `gsd_max` | float | None | Maximum ground sample distance (meters) |
| `satellite_ids` | list | None | Specific satellite IDs to include |
| `instrument` | str | None | Instrument type filter |
| `provider` | str | None | Data provider filter |
| `processing_level` | str | None | Processing level requirement |
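Many of the filters above are fractions that must fall in the 0.0-1.0 range, so it can help to sanity-check them before calling the library. A minimal sketch; the `build_quality_filters` helper is hypothetical and not part of planetscope_py:

```python
# Hypothetical helper (not library API): validate fraction-valued quality
# filters before forwarding them to create_scene_geopackage via **filters.
def build_quality_filters(**filters):
    """Raise ValueError if a fraction-valued filter is outside [0.0, 1.0]."""
    fraction_params = {
        "cloud_cover_max", "usable_data_min", "visible_percent_min",
        "clear_percent_min", "shadow_percent_max", "snow_ice_percent_max",
        "heavy_haze_percent_max", "light_haze_percent_max",
        "anomalous_pixels_max",
    }
    for name, value in filters.items():
        if name in fraction_params and not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be in [0.0, 1.0], got {value}")
    return filters

filters = build_quality_filters(cloud_cover_max=0.2, usable_data_min=0.8)
# gpkg = create_scene_geopackage(roi, "last_month", **filters)
```

Catching a `cloud_cover_max=20` (percent instead of fraction) locally is cheaper than discovering an empty search result after an API round trip.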
Predefined Milan area polygons with comprehensive Planet API parameters.
from planetscope_py import quick_milan_geopackage
# Basic Milan analysis
milan_gpkg = quick_milan_geopackage("2025-01-01/2025-01-31")
# High-quality data with size preset
milan_gpkg = quick_milan_geopackage(
    time_period="last_month",
    output_path="milan_premium.gpkg",
    size="large",                 # Area size preset
    cloud_cover_max=0.1,          # Premium quality (10% max clouds)
    sun_elevation_min=30,         # Good illumination
    ground_control=True,          # High geometric accuracy
    quality_category="standard",  # Standard quality only
    usable_data_min=0.9,          # 90% usable data minimum
    view_angle_max=15,            # Near-nadir viewing
    gsd_max=3.5                   # High resolution only
)

| Size | Area | Description | Coordinates |
|---|---|---|---|
| `"city_center"` | ~100 km² | Central Milan only | 9.15-9.25°E, 45.45-45.50°N |
| `"small"` | ~500 km² | Milan + close suburbs | 9.0-9.3°E, 45.4-45.6°N |
| `"medium"` | ~1200 km² | Greater Milan area | 8.9-9.4°E, 45.3-45.65°N |
| `"large"` | ~2000+ km² | Milan metropolitan region | 8.7-10.3°E, 45.1-46.2°N |
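The bounding boxes in the table can be compared with a quick back-of-the-envelope area estimate. This is illustration only: the boxes are reproduced from the table above, and neither `MILAN_PRESETS` nor `approx_area_km2` is part of planetscope_py:

```python
import math

# Preset bounding boxes from the table above: (lon_min, lat_min, lon_max, lat_max)
MILAN_PRESETS = {
    "city_center": (9.15, 45.45, 9.25, 45.50),
    "small": (9.0, 45.4, 9.3, 45.6),
    "medium": (8.9, 45.3, 9.4, 45.65),
    "large": (8.7, 45.1, 10.3, 46.2),
}

def approx_area_km2(bounds):
    """Rough equirectangular area of a lon/lat bounding box."""
    lon_min, lat_min, lon_max, lat_max = bounds
    mid_lat = math.radians((lat_min + lat_max) / 2)
    width_km = (lon_max - lon_min) * 111.32 * math.cos(mid_lat)
    height_km = (lat_max - lat_min) * 110.57
    return width_km * height_km

for name, bounds in MILAN_PRESETS.items():
    print(f"{name}: ~{approx_area_km2(bounds):.0f} km²")
```

The estimates preserve the relative ordering of the presets; treat the km² column in the table as indicative rather than exact (the `"large"` box in particular spans considerably more area than the quoted figure).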
Enhanced search and export with comprehensive Planet API statistics.
from planetscope_py import search_and_export_scenes
# Basic search and export
result = search_and_export_scenes(
    roi=milan_polygon,
    start_date="2025-01-01",
    end_date="2025-01-31"
)

# High-quality scenes with comprehensive filtering
result = search_and_export_scenes(
    roi=milan_polygon,
    start_date="2025-01-01",
    end_date="2025-01-31",
    output_path="comprehensive_analysis.gpkg",
    # Quality filters
    cloud_cover_max=0.15,                # 15% max cloud cover
    sun_elevation_min=25,                # Good lighting conditions
    ground_control=True,                 # Require ground control
    quality_category="standard",         # Standard quality level
    # Advanced filters
    usable_data_min=0.85,                # 85% usable data
    visible_percent_min=0.9,             # 90% visible pixels
    clear_percent_min=0.85,              # 85% clear pixels
    shadow_percent_max=0.15,             # Max 15% shadows
    view_angle_max=20,                   # Max 20° view angle
    # Platform specifications
    item_types=["PSScene"],              # PlanetScope scenes only
    satellite_ids=["PSB.SD", "PS2.SD"],  # Specific satellites
    gsd_min=3.0,                         # High resolution minimum
    gsd_max=4.0                          # High resolution maximum
)
# Access comprehensive results
if result['success']:
    print(f"Found {result['scenes_found']} high-quality scenes")
    print(f"ROI coverage: {result['coverage_ratio']:.1%}")
    print(f"GeoPackage created: {result['geopackage_path']}")

    # Access detailed statistics
    stats = result['statistics']
    if 'cloud_cover' in stats:
        cc = stats['cloud_cover']
        print(f"Cloud cover: {cc['mean']:.1%} avg, range {cc['min']:.1%}-{cc['max']:.1%}")
    if 'sun_elevation' in stats:
        se = stats['sun_elevation']
        print(f"Sun elevation: {se['mean']:.1f}° avg, range {se['min']:.1f}°-{se['max']:.1f}°")
    if 'quality_distribution' in stats:
        print(f"Quality distribution: {stats['quality_distribution']}")
    if 'satellite_distribution' in stats:
        print(f"Satellites used: {stats['unique_satellites']} unique")
        print(f"Distribution: {stats['satellite_distribution']}")

result = {
    'success': True,
    'scenes_found': 150,
    'roi_area_km2': 2000.5,
    'total_coverage_km2': 1850.2,
    'coverage_ratio': 0.925,
    'statistics': {
        'cloud_cover': {
            'mean': 0.12,
            'min': 0.01,
            'max': 0.20,
            'count': 148
        },
        'sun_elevation': {
            'mean': 35.2,
            'min': 25.1,
            'max': 52.3,
            'count': 150
        },
        'usable_data': {
            'mean': 0.91,
            'min': 0.85,
            'max': 0.98,
            'count': 145
        },
        'quality_distribution': {
            'standard': 135,
            'test': 15
        },
        'satellite_distribution': {
            'PSB.SD': 85,
            'PS2.SD': 65
        },
        'unique_satellites': 2
    },
    'date_range': {
        'start': '2025-01-01T10:15:23.123Z',
        'end': '2025-01-31T15:42:18.456Z',
        'unique_dates': 28
    },
    'geopackage_path': '/path/to/comprehensive_analysis.gpkg'
}

Comprehensive GeoPackage validation with Planet API metadata analysis.
from planetscope_py import validate_scene_geopackage
# Validate created GeoPackage
validation = validate_scene_geopackage("comprehensive_analysis.gpkg")
if validation['valid']:
    print(f"Valid GeoPackage with {validation['feature_count']} scenes")
    print(f"File size: {validation['file_size_mb']:.1f} MB")
    print(f"CRS: {validation['crs']}")

    # AOI analysis
    if 'aoi_statistics' in validation:
        aoi = validation['aoi_statistics']
        print(f"Total AOI coverage: {aoi['total_aoi_km2']:.0f} km²")
        print(f"Scenes overlapping ROI: {aoi['scenes_with_aoi']}")

    # Cloud cover statistics
    if 'cloud_statistics' in validation:
        cloud = validation['cloud_statistics']
        print(f"Cloud cover range: {cloud['min_cloud_cover']:.1%} - {cloud['max_cloud_cover']:.1%}")
        print(f"Average cloud cover: {cloud['mean_cloud_cover']:.1%}")

    # Temporal coverage
    if 'temporal_coverage' in validation:
        temporal = validation['temporal_coverage']
        print(f"Date range: {temporal['start_date']} to {temporal['end_date']}")
        print(f"Unique acquisition dates: {temporal['unique_dates']}")
else:
    print(f"Invalid GeoPackage: {validation.get('error', 'Unknown error')}")

All one-liner functions support flexible time period formats:
# Predefined periods
gpkg = create_scene_geopackage(roi, "last_month") # Previous 30 days
gpkg = create_scene_geopackage(roi, "last_3_months") # Previous 90 days
# Custom date ranges
gpkg = create_scene_geopackage(roi, "2025-01-01/2025-01-31") # January 2025
gpkg = create_scene_geopackage(roi, "2024-06-01/2024-08-31") # Summer 2024
gpkg = create_scene_geopackage(roi, "2024-01-01/2024-12-31") # Full year 2024
# Season-specific analysis
winter_gpkg = create_scene_geopackage(roi, "2024-12-01/2025-02-28")
spring_gpkg = create_scene_geopackage(roi, "2025-03-01/2025-05-31")
summer_gpkg = create_scene_geopackage(roi, "2025-06-01/2025-08-31")
autumn_gpkg = create_scene_geopackage(roi, "2025-09-01/2025-11-30")

| Schema | Attributes | Description | Use Case |
|---|---|---|---|
| `minimal` | 12 fields | Essential metadata only | Quick exports, small files |
| `standard` | 30 fields | Common Planet API fields | General analysis workflows |
| `comprehensive` | 50+ fields | All Planet API metadata | Detailed research, archival |
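A common pattern is to pick the schema level from the expected export size. The helper below is a hypothetical sketch of that decision, not library API; the 5000-scene cutoff mirrors the large-dataset guidance used elsewhere on this page:

```python
# Field counts from the schema table above (comprehensive is "50+").
SCHEMA_FIELD_COUNTS = {"minimal": 12, "standard": 30, "comprehensive": 50}

def choose_schema(expected_scenes, need_full_metadata=False):
    """Pick an attribute schema level (hypothetical helper, not library API)."""
    if need_full_metadata:
        return "comprehensive"   # archival / detailed research
    if expected_scenes > 5000:
        return "minimal"         # keep very large exports small and fast
    return "standard"            # sensible default for general workflows

print(choose_schema(200))                           # standard
print(choose_schema(10000))                         # minimal
print(choose_schema(50, need_full_metadata=True))   # comprehensive
```

The chosen string can then be passed straight to `create_scene_geopackage(..., schema=...)`.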
minimal_attributes = [
    # Core identification
    'scene_id', 'item_type', 'satellite_id', 'acquired',
    # Quality essentials
    'cloud_cover', 'cloud_percent', 'clear_percent',
    # Area calculations
    'area_km2', 'aoi_km2', 'coverage_percentage',
    # Basic location
    'centroid_lat', 'centroid_lon'
]

standard_attributes = [
    # All minimal attributes plus:
    # Technical specifications
    'provider', 'instrument', 'gsd', 'pixel_resolution',
    'ground_control', 'quality_category', 'strip_id',
    # Enhanced quality metrics
    'clear_confidence_percent', 'visible_percent', 'visible_confidence_percent',
    'shadow_percent', 'snow_ice_percent', 'heavy_haze_percent',
    'light_haze_percent', 'anomalous_pixels', 'usable_data',
    # Solar and viewing geometry
    'sun_elevation', 'sun_azimuth', 'satellite_azimuth', 'view_angle',
    # Temporal information
    'published', 'updated', 'publishing_stage', 'acquisition_date'
]

comprehensive_attributes = [
    # All standard attributes plus:
    # Advanced technical details
    'platform', 'spacecraft_id', 'epsg_code', 'processing_level',
    'radiometric_target', 'black_fill',
    # Advanced viewing geometry
    'off_nadir', 'azimuth_angle',
    # Enhanced temporal details
    'acquisition_time', 'day_of_year',
    # Calculated geometric properties
    'bounds_west', 'bounds_south', 'bounds_east', 'bounds_north',
    'perimeter_km', 'aspect_ratio',
    # Quality analysis
    'overall_quality', 'suitability', 'geometric_accuracy'
]

# Premium quality for scientific research
scientific_gpkg = create_scene_geopackage(
    roi=study_area,
    time_period="2024-01-01/2024-12-31",
    output_path="scientific_analysis.gpkg",
    # Premium quality filters
    cloud_cover_max=0.02,          # 2% max clouds
    sun_elevation_min=45,          # Optimal illumination
    ground_control=True,           # Geometric precision
    usable_data_min=0.98,          # 98% usable data
    view_angle_max=5,              # Near-nadir only
    # Strict atmospheric conditions
    shadow_percent_max=0.02,       # Max 2% shadows
    snow_ice_percent_max=0.01,     # Max 1% snow/ice
    heavy_haze_percent_max=0.005,  # Max 0.5% heavy haze
    anomalous_pixels_max=0.002,    # Max 0.2% anomalous
    # Platform precision
    satellite_ids=["PSB.SD"],      # Specific high-quality satellites
    gsd_min=3.0, gsd_max=3.2,      # Consistent high resolution
    processing_level="3A",         # Highest processing level
    schema="comprehensive"
)

# Operational monitoring
monitoring_gpkg = create_scene_geopackage(
    roi=monitoring_area,
    time_period="last_3_months",
    # Balanced quality filters
    cloud_cover_max=0.15,  # 15% max clouds
    sun_elevation_min=25,  # Good illumination
    usable_data_min=0.8,   # 80% usable data
    view_angle_max=20,     # Reasonable geometry
    schema="standard"
)

# Quick overview analysis
overview_gpkg = create_scene_geopackage(
    roi=assessment_area,
    time_period="last_month",
    # Relaxed filters for more scenes
    cloud_cover_max=0.3,   # 30% max clouds
    sun_elevation_min=15,  # Relaxed illumination
    schema="minimal"       # Basic attributes only
)

# Create seasonal GeoPackages
seasons = {
    'winter': "2024-12-01/2025-02-28",
    'spring': "2025-03-01/2025-05-31",
    'summer': "2025-06-01/2025-08-31",
    'autumn': "2025-09-01/2025-11-30"
}
seasonal_results = {}
for season, period in seasons.items():
    seasonal_gpkg = create_scene_geopackage(
        roi=study_area,
        time_period=period,
        output_path=f"seasonal_{season}.gpkg",
        # Season-appropriate filters
        cloud_cover_max=0.2,
        sun_elevation_min=20 if season == 'winter' else 25,
        schema="standard"
    )
    validation = validate_scene_geopackage(seasonal_gpkg)
    seasonal_results[season] = {
        'geopackage': seasonal_gpkg,
        'scene_count': validation.get('feature_count', 0)
    }
    print(f"{season}: {seasonal_results[season]['scene_count']} scenes")

from planetscope_py import batch_geopackage_export
# Define multiple study areas
study_areas = [milan_polygon, rome_polygon, florence_polygon]
# Batch process with consistent parameters
batch_results = batch_geopackage_export(
    roi_list=study_areas,
    time_period="2024-06-01/2024-08-31",
    output_dir="./batch_analysis",
    # Consistent quality filters
    cloud_cover_max=0.2,
    sun_elevation_min=25,
    schema="comprehensive",
    clip_to_roi=True
)

# Process results
for i, result in enumerate(batch_results.values()):
    if result['success']:
        validation = result['validation']
        print(f"ROI {i+1}: {validation['feature_count']} scenes")
    else:
        print(f"ROI {i+1}: Failed - {result['error']}")

def complete_enhanced_workflow(roi, time_period):
"""Complete analysis workflow using enhanced one-liners."""
print("Starting enhanced PlanetScope analysis...")
# Step 1: High-quality scene collection
result = search_and_export_scenes(
roi=roi,
start_date=time_period.split('/')[0],
end_date=time_period.split('/')[1],
output_path="workflow_analysis.gpkg",
# High-quality filters
cloud_cover_max=0.1,
sun_elevation_min=30,
ground_control=True,
usable_data_min=0.9,
schema="comprehensive"
)
if not result['success']:
print(f"Scene collection failed: {result.get('error')}")
return None
print(f"Collected {result['scenes_found']} high-quality scenes")
# Step 2: Validation and analysis
validation = validate_scene_geopackage(result['geopackage_path'])
if validation['valid']:
print(f"Validation successful: {validation['feature_count']} scenes")
# Print quality summary
if 'cloud_statistics' in validation:
cloud_stats = validation['cloud_statistics']
print(f"Average cloud cover: {cloud_stats['mean_cloud_cover']:.1%}")
if 'aoi_statistics' in validation:
aoi_stats = validation['aoi_statistics']
print(f"ROI coverage: {aoi_stats['total_aoi_km2']:.0f} km²")
return {
'geopackage_path': result['geopackage_path'],
'validation': validation,
'statistics': result.get('statistics', {}),
'success': True
}
else:
print(f"Validation failed: {validation.get('error')}")
return {'success': False, 'error': validation.get('error')}
# Run complete workflow
workflow_result = complete_enhanced_workflow(
roi=milan_polygon,
time_period="2024-06-01/2024-08-31"
)from planetscope_py.exceptions import ValidationError, PlanetScopeError, APIError
def robust_geopackage_creation(roi, time_period, **kwargs):
"""Create GeoPackage with comprehensive error handling."""
try:
# Input validation
if not hasattr(roi, 'bounds') and not isinstance(roi, (list, dict)):
raise ValidationError("Invalid ROI format")
# Test scene availability first
test_result = search_and_export_scenes(
roi=roi,
start_date=time_period.split('/')[0],
end_date=time_period.split('/')[1],
cloud_cover_max=kwargs.get('cloud_cover_max', 0.3)
)
if not test_result['success']:
raise PlanetScopeError(f"Scene search failed: {test_result.get('error')}")
scene_count = test_result['scenes_found']
print(f"Found {scene_count} scenes matching criteria")
if scene_count == 0:
print("No scenes found - suggestions:")
print(f" - Increase cloud_cover_max (current: {kwargs.get('cloud_cover_max', 0.3)})")
print(f" - Remove sun_elevation_min (current: {kwargs.get('sun_elevation_min', 'None')})")
print(f" - Expand time_period")
return None
# Adjust schema based on dataset size
if scene_count > 5000:
kwargs['schema'] = kwargs.get('schema', 'standard')
# Create GeoPackage
gpkg_path = create_scene_geopackage(
roi=roi,
time_period=time_period,
**kwargs
)
# Validate result
validation = validate_scene_geopackage(gpkg_path)
if not validation['valid']:
raise PlanetScopeError(f"Validation failed: {validation.get('error')}")
print(f"Success! Created: {gpkg_path}")
print(f" Features: {validation['feature_count']}")
print(f" File size: {validation['file_size_mb']:.1f} MB")
return {
'geopackage_path': gpkg_path,
'validation': validation,
'success': True
}
except ValidationError as e:
print(f"Validation error: {e}")
return {'success': False, 'error': str(e), 'error_type': 'validation'}
except APIError as e:
print(f"Planet API error: {e}")
print("Suggestions: Check API key, internet connection, or try again later")
return {'success': False, 'error': str(e), 'error_type': 'api'}
except PlanetScopeError as e:
print(f"GeoPackage creation error: {e}")
return {'success': False, 'error': str(e), 'error_type': 'geopackage'}
except Exception as e:
print(f"Unexpected error: {e}")
return {'success': False, 'error': str(e), 'error_type': 'unexpected'}
# Use robust creation
result = robust_geopackage_creation(
roi=milan_polygon,
time_period="2024-06-01/2024-08-31",
cloud_cover_max=0.2,
sun_elevation_min=25,
schema="comprehensive"
)def get_recommended_parameters(use_case, roi_size_km2):
"""Get recommended parameters based on use case and ROI size."""
if use_case == 'scientific_research':
return {
'cloud_cover_max': 0.05, # Very low clouds
'sun_elevation_min': 35, # Optimal illumination
'ground_control': True, # High accuracy
'usable_data_min': 0.95, # Minimal data loss
'view_angle_max': 10, # Near-nadir only
'schema': 'comprehensive'
}
elif use_case == 'operational_monitoring':
return {
'cloud_cover_max': 0.15, # Low clouds
'sun_elevation_min': 25, # Good illumination
'usable_data_min': 0.8, # Reasonable quality
'view_angle_max': 20,
'schema': 'standard'
}
elif use_case == 'change_detection':
return {
'cloud_cover_max': 0.1, # Low clouds for comparison
'sun_elevation_min': 30, # Consistent illumination
'view_angle_max': 15, # Consistent geometry
'ground_control': True, # Geometric accuracy
'schema': 'comprehensive'
}
elif use_case == 'quick_assessment':
schema = 'minimal' if roi_size_km2 > 10000 else 'standard'
return {
'cloud_cover_max': 0.3, # Accept more clouds
'sun_elevation_min': 20, # Relaxed illumination
'schema': schema
}
else: # general_purpose
return {
'cloud_cover_max': 0.2,
'sun_elevation_min': 25,
'usable_data_min': 0.8,
'schema': 'standard'
}
# Use recommended parameters
roi_area = 1500 # km²
params = get_recommended_parameters('operational_monitoring', roi_area)
gpkg_path = create_scene_geopackage(
roi=study_area,
time_period="2024-06-01/2024-08-31",
**params
)# The enhanced one-liners automatically create .qml style files
gpkg_path = create_scene_geopackage(milan_polygon, "2025-01-01/2025-01-31")
# Manual QGIS loading steps:
# 1. Open QGIS
# 2. Layer > Add Layer > Add Vector Layer
# 3. Select the .gpkg file
# 4. Choose the layer to load
# 5. The .qml style file will be automatically applied
print(f"Load {gpkg_path} in QGIS for visualization")import geopandas as gpd
# Load the enhanced GeoPackage
scenes_gdf = gpd.read_file("analysis.gpkg", layer="scene_footprints")
print(f"Loaded {len(scenes_gdf)} scene polygons")
# Explore enhanced attributes
print("Available columns:")
for col in scenes_gdf.columns:
    if col != 'geometry':
        print(f" {col}: {scenes_gdf[col].dtype}")
# Enhanced analysis with comprehensive metadata
print(f"\nEnhanced Scene Statistics:")
print(f" Date range: {scenes_gdf['acquired'].min()} to {scenes_gdf['acquired'].max()}")
print(f" Cloud cover: {scenes_gdf['cloud_cover'].mean():.1%} average")
if 'sun_elevation' in scenes_gdf.columns:
    print(f" Sun elevation: {scenes_gdf['sun_elevation'].mean():.1f}° average")
if 'usable_data' in scenes_gdf.columns:
    print(f" Usable data: {scenes_gdf['usable_data'].mean():.1%} average")
if 'quality_category' in scenes_gdf.columns:
    quality_dist = scenes_gdf['quality_category'].value_counts()
    print(f" Quality distribution: {quality_dist.to_dict()}")
if 'satellite_id' in scenes_gdf.columns:
    satellite_dist = scenes_gdf['satellite_id'].value_counts()
    print(f" Satellite distribution: {satellite_dist.to_dict()}")

# Enhanced spatial analysis
if 'aoi_km2' in scenes_gdf.columns:
    total_aoi = scenes_gdf['aoi_km2'].sum()
    scenes_with_aoi = (scenes_gdf['aoi_km2'] > 0).sum()
    print(f" Total AOI coverage: {total_aoi:.0f} km²")
    print(f" Scenes overlapping ROI: {scenes_with_aoi}/{len(scenes_gdf)}")

def export_geopackage_to_multiple_formats(geopackage_path, output_dir):
"""Export GeoPackage to multiple GIS formats."""
import os
from pathlib import Path
output_path = Path(output_dir)
output_path.mkdir(exist_ok=True)
# Load enhanced GeoPackage
gdf = gpd.read_file(geopackage_path)
# Export to Shapefile
shapefile_path = output_path / "scenes.shp"
gdf.to_file(shapefile_path, driver='ESRI Shapefile')
print(f"Exported to Shapefile: {shapefile_path}")
# Export to GeoJSON for web applications
geojson_path = output_path / "scenes.geojson"
gdf.to_file(geojson_path, driver='GeoJSON')
print(f"Exported to GeoJSON: {geojson_path}")
# Export metadata to CSV (without geometry)
csv_path = output_path / "scene_metadata.csv"
metadata_df = gdf.drop('geometry', axis=1)
metadata_df.to_csv(csv_path, index=False)
print(f"Exported metadata to CSV: {csv_path}")
# Export to KML for Google Earth
kml_path = output_path / "scenes.kml"
gdf.to_file(kml_path, driver='KML')
print(f"Exported to KML: {kml_path}")
return {
'shapefile': shapefile_path,
'geojson': geojson_path,
'csv': csv_path,
'kml': kml_path
}
# Export to multiple formats
exported_files = export_geopackage_to_multiple_formats(
"comprehensive_analysis.gpkg",
"exported_formats"
)def optimize_for_large_datasets(roi, time_period, expected_scenes=None):
"""Optimize GeoPackage creation for large datasets."""
if expected_scenes and expected_scenes > 5000:
print("Large dataset detected - using optimized settings")
# Use minimal schema for memory efficiency
gpkg_path = create_scene_geopackage(
roi=roi,
time_period=time_period,
output_path="large_dataset_optimized.gpkg",
schema="minimal", # Reduce memory usage
cloud_cover_max=0.25, # Slightly relaxed for more scenes
clip_to_roi=True
)
validation = validate_scene_geopackage(gpkg_path)
scene_count = validation.get('feature_count', 0)
print(f"Created optimized GeoPackage with {scene_count} scenes")
# If manageable size, create comprehensive version
if scene_count < 2000:
print("Creating comprehensive version...")
comprehensive_gpkg = create_scene_geopackage(
roi=roi,
time_period=time_period,
output_path="large_dataset_comprehensive.gpkg",
schema="comprehensive",
cloud_cover_max=0.25,
clip_to_roi=True
)
return comprehensive_gpkg
else:
print(f"Dataset too large ({scene_count} scenes) - keeping optimized version")
return gpkg_path
else:
# Standard processing for smaller datasets
return create_scene_geopackage(
roi=roi,
time_period=time_period,
schema="comprehensive",
clip_to_roi=True
)
# Handle large dataset
large_result = optimize_for_large_datasets(
roi=large_study_area,
time_period="2024-01-01/2024-12-31",
expected_scenes=8000
)import psutil
import gc
def memory_efficient_batch_processing(roi_list, time_period):
    """Process multiple ROIs with memory monitoring."""
    def monitor_memory():
        process = psutil.Process()
        return process.memory_info().rss / 1024 / 1024  # MB

    results = []
    initial_memory = monitor_memory()
    for i, roi in enumerate(roi_list):
        print(f"Processing ROI {i+1}/{len(roi_list)}")

        # Memory check
        current_memory = monitor_memory()
        if current_memory > initial_memory + 1000:  # Increased by more than 1 GB
            print("High memory usage - forcing garbage collection")
            gc.collect()

        try:
            gpkg_path = create_scene_geopackage(
                roi=roi,
                time_period=time_period,
                output_path=f"roi_{i+1:03d}_efficient.gpkg",
                schema="standard",     # Balance detail and memory
                cloud_cover_max=0.25,
                clip_to_roi=True
            )
            validation = validate_scene_geopackage(gpkg_path)
            results.append({
                'roi_index': i+1,
                'geopackage_path': gpkg_path,
                'scene_count': validation.get('feature_count', 0),
                'success': True
            })
        except Exception as e:
            print(f"ROI {i+1} failed: {e}")
            results.append({
                'roi_index': i+1,
                'error': str(e),
                'success': False
            })

        # Cleanup after each ROI
        gc.collect()
    return results

# Process multiple ROIs efficiently
roi_list = [milan_polygon, rome_polygon, florence_polygon]
batch_results = memory_efficient_batch_processing(roi_list, "2024-06-01/2024-08-31")

Old Traditional Approach (20+ lines):
# Old way - complex and verbose
from datetime import datetime
from planetscope_py import PlanetScopeQuery, GeoPackageManager, GeoPackageConfig
query = PlanetScopeQuery()
results = query.search_scenes(
    geometry=milan_polygon,
    start_date="2025-01-05",
    end_date="2025-01-25",
    cloud_cover_max=1.0,  # Accepts ANY cloud cover!
    item_types=["PSScene"]
)
scenes = results.get('features', [])
if scenes:
    config = GeoPackageConfig(
        clip_to_roi=True,
        attribute_schema="standard"
    )
    manager = GeoPackageManager(config=config)
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    output_path = f"milan_large_{timestamp}.gpkg"
    manager.create_scene_geopackage(
        scenes=scenes,
        output_path=output_path,
        roi=milan_polygon
    )

New Enhanced One-Liner (1-3 lines):
# New way - simple and with better quality control
from planetscope_py import create_scene_geopackage
# Basic version
gpkg_path = create_scene_geopackage(milan_polygon, "2025-01-05/2025-01-25")
# With quality control (much better than cloud_cover_max=1.0!)
gpkg_path = create_scene_geopackage(
    milan_polygon, "2025-01-05/2025-01-25",
    clip_to_roi=True, cloud_cover_max=0.2, sun_elevation_min=30
)

# Or even simpler with Milan preset
from planetscope_py import quick_milan_geopackage
milan_gpkg = quick_milan_geopackage("2025-01-05/2025-01-25", size="large")

| Aspect | Traditional | Enhanced One-Liner |
|---|---|---|
| Lines of code | 20+ lines | 1-3 lines |
| Quality control | `cloud_cover_max=1.0` (accepts 100% clouds!) | `cloud_cover_max=0.2` + `sun_elevation_min=30` |
| Parameter support | Basic Planet API | ALL Planet API parameters |
| Error handling | Manual | Built-in comprehensive |
| Validation | Manual | Automatic with detailed stats |
| Timestamp handling | Manual | Automatic |
| Metadata extraction | Basic | Comprehensive Planet API fields |
| Setup complexity | High | Minimal |
def scientific_research_workflow(study_area, time_period):
    """Complete workflow for scientific research with premium quality."""
    print("Scientific Research Workflow - Premium Quality")
    print("=" * 50)

    # Step 1: Premium quality data collection
    result = search_and_export_scenes(
        roi=study_area,
        start_date=time_period.split('/')[0],
        end_date=time_period.split('/')[1],
        output_path="scientific_premium.gpkg",
        # Premium quality filters
        cloud_cover_max=0.02,          # 2% max clouds
        sun_elevation_min=40,          # Optimal illumination
        ground_control=True,           # High geometric accuracy
        usable_data_min=0.98,          # 98% usable data
        view_angle_max=5,              # Very near-nadir
        # Strict atmospheric conditions
        shadow_percent_max=0.02,       # Max 2% shadows
        snow_ice_percent_max=0.005,    # Max 0.5% snow/ice
        heavy_haze_percent_max=0.002,  # Max 0.2% heavy haze
        anomalous_pixels_max=0.001,    # Max 0.1% anomalous
        # Platform precision
        satellite_ids=["PSB.SD"],      # High-quality satellites only
        processing_level="3A",         # Highest processing
        schema="comprehensive"
    )
    if not result['success']:
        print(f"Premium data collection failed: {result.get('error')}")
        # Fallback with slightly relaxed parameters
        print("Trying with slightly relaxed parameters...")
        result = search_and_export_scenes(
            roi=study_area,
            start_date=time_period.split('/')[0],
            end_date=time_period.split('/')[1],
            output_path="scientific_high_quality.gpkg",
            cloud_cover_max=0.05,  # 5% max clouds
            sun_elevation_min=35,  # Good illumination
            ground_control=True,
            usable_data_min=0.95,
            view_angle_max=10,
            schema="comprehensive"
        )

    if result['success']:
        print(f"Data collection successful: {result['scenes_found']} premium scenes")

        # Detailed validation
        validation = validate_scene_geopackage(result['geopackage_path'])

        # Quality assessment
        stats = result.get('statistics', {})
        print("\nQuality Assessment:")
        if 'cloud_cover' in stats:
            cc = stats['cloud_cover']
            print(f" Cloud cover: {cc['mean']:.1%} avg (range: {cc['min']:.1%}-{cc['max']:.1%})")
        if 'sun_elevation' in stats:
            se = stats['sun_elevation']
            print(f" Sun elevation: {se['mean']:.1f}° avg (range: {se['min']:.1f}°-{se['max']:.1f}°)")
        if 'usable_data' in stats:
            ud = stats['usable_data']
            print(f" Usable data: {ud['mean']:.1%} avg (range: {ud['min']:.1%}-{ud['max']:.1%})")

        print("\nSpatial Coverage:")
        print(f" ROI area: {result['roi_area_km2']:.0f} km²")
        print(f" Scene coverage: {result['coverage_ratio']:.1%}")

        # Recommendations
        print("\nRecommendations:")
        if result['coverage_ratio'] > 0.95:
            print(" Excellent spatial coverage - suitable for detailed analysis")
        elif result['coverage_ratio'] > 0.8:
            print(" Good spatial coverage - minor gaps may exist")
        else:
            print(" Limited spatial coverage - consider temporal expansion")
        if stats.get('cloud_cover', {}).get('mean', 1) < 0.05:
            print(" Excellent atmospheric conditions")
        elif stats.get('cloud_cover', {}).get('mean', 1) < 0.1:
            print(" Good atmospheric conditions")
        else:
            print(" Consider stricter cloud cover filters")

        return {
            'geopackage_path': result['geopackage_path'],
            'validation': validation,
            'statistics': stats,
            'quality_grade': 'premium' if stats.get('cloud_cover', {}).get('mean', 1) < 0.05 else 'high',
            'suitable_for_research': True,
            'success': True
        }
    else:
        print(f"Scientific workflow failed: {result.get('error')}")
        return {'success': False, 'error': result.get('error')}

# Run scientific workflow
research_result = scientific_research_workflow(
    study_area=milan_polygon,
    time_period="2024-06-01/2024-08-31"
)

def operational_monitoring_workflow(monitoring_areas, time_period="last_3_months"):
    """Operational monitoring workflow for multiple areas."""
    print("Operational Monitoring Workflow")
    print("=" * 35)

    monitoring_results = {}

    for i, (area_name, area_polygon) in enumerate(monitoring_areas.items()):
        print(f"\nProcessing {area_name} ({i+1}/{len(monitoring_areas)})...")

        # Operational quality parameters
        result = search_and_export_scenes(
            roi=area_polygon,
            start_date=time_period.split('/')[0] if '/' in time_period else None,
            end_date=time_period.split('/')[1] if '/' in time_period else None,
            output_path=f"monitoring_{area_name.lower()}.gpkg",
            # Balanced operational filters
            cloud_cover_max=0.15,   # 15% max clouds
            sun_elevation_min=25,   # Good illumination
            usable_data_min=0.8,    # 80% usable data
            view_angle_max=20,      # Reasonable geometry
            schema="standard"
        )

        if result['success']:
            validation = validate_scene_geopackage(result['geopackage_path'])
            monitoring_results[area_name] = {
                'geopackage_path': result['geopackage_path'],
                'scene_count': result['scenes_found'],
                'coverage_ratio': result.get('coverage_ratio', 0),
                'statistics': result.get('statistics', {}),
                'file_size_mb': validation.get('file_size_mb', 0),
                'success': True
            }
            print(f"  Success: {result['scenes_found']} scenes, {result.get('coverage_ratio', 0):.1%} coverage")
        else:
            monitoring_results[area_name] = {
                'success': False,
                'error': result.get('error', 'Unknown error')
            }
            print(f"  Failed: {result.get('error')}")

    # Generate monitoring summary
    successful_areas = [name for name, data in monitoring_results.items() if data.get('success')]
    total_scenes = sum(data.get('scene_count', 0) for data in monitoring_results.values() if data.get('success'))

    print(f"\nMonitoring Summary:")
    print(f"  Successful areas: {len(successful_areas)}/{len(monitoring_areas)}")
    print(f"  Total scenes collected: {total_scenes}")

    for area_name, data in monitoring_results.items():
        if data.get('success'):
            print(f"  {area_name}: {data['scene_count']} scenes ({data['coverage_ratio']:.1%} coverage)")
        else:
            print(f"  {area_name}: Failed - {data.get('error')}")

    return monitoring_results

# Define monitoring areas
monitoring_areas = {
    'Milan': milan_polygon,
    'Rome': rome_polygon,
    'Florence': florence_polygon
}

# Run operational monitoring
monitoring_results = operational_monitoring_workflow(
    monitoring_areas=monitoring_areas,
    time_period="2024-09-01/2024-11-30"
)

| Use Case | Recommended Function | Key Parameters |
|---|---|---|
| Quick analysis | `create_scene_geopackage()` | `schema="minimal"`, relaxed filters |
| Standard monitoring | `create_scene_geopackage()` | `schema="standard"`, balanced filters |
| Scientific research | `search_and_export_scenes()` | `schema="comprehensive"`, strict filters |
| Milan area analysis | `quick_milan_geopackage()` | Size presets, quality filters |
| Validation/QC | `validate_scene_geopackage()` | Post-creation analysis |
| Multiple areas | `batch_geopackage_export()` | Consistent parameters |
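Beyond `validate_scene_geopackage()`, a GeoPackage is by definition a SQLite database, so its layer inventory can be cross-checked with nothing but the Python standard library by reading the mandatory `gpkg_contents` table. The sketch below builds a minimal stand-in file so it runs without real data; the file and layer names are placeholders, not outputs of this library.

```python
import os
import sqlite3
import tempfile

def list_gpkg_layers(gpkg_path):
    """Return (table_name, data_type) pairs from the gpkg_contents table."""
    with sqlite3.connect(gpkg_path) as conn:
        return conn.execute(
            "SELECT table_name, data_type FROM gpkg_contents"
        ).fetchall()

# Minimal stand-in file so the snippet is self-contained; a real
# GeoPackage produced above already contains gpkg_contents.
demo = os.path.join(tempfile.mkdtemp(), "demo_scenes.gpkg")
with sqlite3.connect(demo) as conn:
    conn.execute(
        "CREATE TABLE gpkg_contents (table_name TEXT PRIMARY KEY, data_type TEXT)"
    )
    conn.execute(
        "INSERT INTO gpkg_contents VALUES ('scene_footprints', 'features')"
    )

print(list_gpkg_layers(demo))  # [('scene_footprints', 'features')]
```

Pointing `list_gpkg_layers()` at a file created by `create_scene_geopackage()` should list its vector (and any raster) layers the same way.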
For Maximum Quality:
- `cloud_cover_max=0.02-0.05`
- `sun_elevation_min=35-45`
- `ground_control=True`
- `usable_data_min=0.95`
- `view_angle_max=5-10`

For Balanced Performance:
- `cloud_cover_max=0.15-0.2`
- `sun_elevation_min=25-30`
- `usable_data_min=0.8`
- `view_angle_max=15-20`

For Maximum Coverage:
- `cloud_cover_max=0.3`
- `sun_elevation_min=15-20`
- `usable_data_min=0.7`
- No geometric constraints
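One convenient way to apply these presets is to keep each one as a plain dictionary and expand it into the export call with `**kwargs`. The values below are the recommendations from this section (using the stricter end of each range); the commented `search_and_export_scenes()` wiring is a sketch, not a verified call.

```python
# Filter presets from the recommendations above, kept as dicts so the
# same parameters can be reused and documented across runs.
QUALITY_PRESETS = {
    "maximum_quality": {
        "cloud_cover_max": 0.05,
        "sun_elevation_min": 35,
        "ground_control": True,
        "usable_data_min": 0.95,
        "view_angle_max": 10,
    },
    "balanced": {
        "cloud_cover_max": 0.15,
        "sun_elevation_min": 25,
        "usable_data_min": 0.8,
        "view_angle_max": 20,
    },
    "maximum_coverage": {
        "cloud_cover_max": 0.3,
        "sun_elevation_min": 15,
        "usable_data_min": 0.7,
    },
}

preset = QUALITY_PRESETS["balanced"]
print(preset["cloud_cover_max"])  # 0.15

# Hypothetical usage with the export function:
# result = search_and_export_scenes(roi=milan_polygon,
#                                   output_path="scenes.gpkg", **preset)
```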
- Start with relaxed parameters to assess data availability
- Use appropriate schemas based on analysis requirements
- Validate all outputs using validate_scene_geopackage()
- Monitor memory usage for large datasets
- Use error handling for production workflows
- Export to multiple formats for different applications
- Document parameters used for reproducibility
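Documenting parameters for reproducibility can be as simple as writing the exact keyword arguments next to each GeoPackage as a JSON sidecar. This is a small helper sketch, not part of the library; the file names are illustrative.

```python
import json

def save_run_parameters(gpkg_path, params):
    """Write the export parameters to <gpkg_path>.params.json."""
    sidecar = f"{gpkg_path}.params.json"
    with open(sidecar, "w") as f:
        json.dump(params, f, indent=2, sort_keys=True)
    return sidecar

# Record the same filters that were passed to the export call
params = {"cloud_cover_max": 0.15, "sun_elevation_min": 25, "schema": "standard"}
sidecar = save_run_parameters("monitoring_milan.gpkg", params)
print(sidecar)  # monitoring_milan.gpkg.params.json
```

Re-running an analysis later is then a matter of loading the sidecar and expanding it back into the export call.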
- Implement the enhanced one-liner functions in your PlanetScope-py library
- Test parameter combinations for your specific use cases
- Integrate with visualization modules for complete workflows
- Set up organized project structures for file management
- Use validation functions to ensure data quality
The Enhanced GeoPackage Export functionality transforms complex multi-step processes into simple one-line function calls while providing comprehensive Planet API parameter support and professional-grade outputs.