Maybe I can help out here.
- asyncio's run_in_executor does the same thing as using a thread pool directly; it's just a different API. Until both GDAL and rasterio explicitly support asynchronous I/O, you cannot do "real" asynchronous (non-blocking) I/O.
- I can second Sean's comment that multithreading should speed up tile retrieval, and I suspect that something is off with your code and/or your raster. Usually, reading a tile from S3 takes something like 10-100ms if you do it right.
- At the moment, GDAL reads are not thread-safe! This leads to seemingly random failing tile reads (we struggled a lot with this in Terracotta). Now we use a process pool that we spawn at server start, which seems to work OK both performance- and reliability-wise (https://github.com/DHI-GRAS/terracotta/blob/master/terracotta/drivers/raster_base.py).
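For illustration, here is a minimal sketch of the process-pool approach. The worker function `read_tile` is a hypothetical stand-in, not Terracotta's actual API; in a real setup it would open the raster with rasterio inside the worker process, so GDAL state is never shared across threads:

```python
import concurrent.futures


def read_tile(args):
    """Hypothetical worker standing in for a rasterio read.

    In practice this would do something like
    `with rasterio.open(path) as src: return src.read(window=window)`
    inside the worker process.
    """
    path, window = args
    return f"tile from {path} at {window}"  # placeholder for real pixel data


if __name__ == "__main__":
    # Spawn the pool once at server start and reuse it for every request,
    # so each process keeps its own GDAL state.
    with concurrent.futures.ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(read_tile, [("a.tif", (0, 0)), ("b.tif", (0, 1))]))
        print(results)
```

The key design point is that the pool is created once at startup, not per request, since spawning processes is expensive.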
On 24/03/2020 22.12, kylebarron2 via Groups.Io wrote:
I'm trying to improve performance of dynamic satellite imagery tiling, using
rio-tiler, which combines source Cloud-Optimized GeoTIFFs into a web mercator
tile on the fly. I'm using AWS Landsat and NAIP imagery stored in S3 buckets,
and running code on AWS Lambda in the same region.
Since NAIP imagery doesn't overlap cleanly with web mercator tiles, at zoom 12 I
have to load on average 6 assets to create one mercator tile.
While profiling the AWS Lambda instance using AWS X-Ray, I found that the
biggest bottleneck was the base call to `WarpedVRT.read()`. That call always
takes between 1.7 and 2.0 seconds for each tile, regardless of the amount of
overlap with the mercator tile.
When testing tile load times on an EC2 t2.nano in the same region, for the first
tile load, CPU time is 120 ms but wall time is 1.1 seconds. That leads me to
believe that the bottleneck is S3 latency.
If the code running on Lambda spends the same 90% proportion of time on latency
for each asset, that would imply that roughly 9 seconds in total are spent
waiting on S3.
Using multithreading with a `ThreadPoolExecutor` takes longer than running
single-threaded. Given the situation, it would seem ideal to use `asyncio` for
the COG network requests to improve performance.
Has this ever been attempted with Rasterio? I saw a Rasterio example of using
concurrency to improve performance on a CPU-bound function, and plan to try
that out, but I'm pessimistic about that approach because I'd think the `async`
calls would need to be applied to the core fetch calls directly.
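To make the limitation concrete: since the underlying reads block, the most asyncio can do today is hand them off to an executor. A minimal sketch of that pattern follows; `fetch_asset` is a hypothetical placeholder for a blocking rio-tiler/rasterio call:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


def fetch_asset(address):
    # Hypothetical blocking read; in practice this would call
    # rio_tiler / rasterio, which block on network I/O internally.
    return f"data for {address}"


async def fetch_all(addresses):
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=8) as pool:
        # run_in_executor only hands the blocking call to a thread pool;
        # it does not make the underlying I/O non-blocking.
        tasks = [loop.run_in_executor(pool, fetch_asset, a) for a in addresses]
        return await asyncio.gather(*tasks)


results = asyncio.run(fetch_all(["s3://bucket/a.tif", "s3://bucket/b.tif"]))
```

As the first reply in this thread notes, this is functionally equivalent to using a thread pool directly, so it would not fix the reported slowdown by itself.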
Reproduction for tile loading:
import os

from rio_tiler.main import tile

os.environ['CURL_CA_BUNDLE'] = '/etc/ssl/certs/ca-certificates.crt'
address = 's3://naip-visualization/ca/2018/60cm/rgb/34118/m_3411861_ne_11_060_20180723_20190208.tif'
x = 701
y = 1635
z = 12
tilesize = 512
%time data, mask = tile(address, x, y, z, tilesize)
CPU times: user 119 ms, sys: 20.3 ms, total: 140 ms
Wall time: 1.1 s
Niels Bohr Institute
Physics of Ice, Climate and Earth
University of Copenhagen
Tagensvej 16, DK-2200 Copenhagen, DENMARK