Any way to ensure cell resolutions stay square?


himat15@...
I am resampling a raster to a different resolution, but the resulting resolution doesn't come out entirely accurate.

Here is the code I have (it's fairly standard, adapted from the resampling example in the documentation).

import rasterio as rio
from rasterio.enums import Resampling

orig_res = ds.res[0]

# Calculate scale factor for this specific input layer
upscale_factor = orig_res / 15.0
print("upscale factor: ", upscale_factor)

# Scale while reading in array
scaled_arr = ds.read(
    out_shape=(
        ds.count,
        int(ds.height * upscale_factor),
        int(ds.width * upscale_factor)
    ),
    resampling=Resampling.bilinear
)

# Also have to update the transform to be scaled
scaled_transform = ds.transform * ds.transform.scale(
    (ds.height / scaled_arr.shape[1]),
    (ds.width / scaled_arr.shape[2])
)

scaled_profile = ds.profile
scaled_profile.update(transform=scaled_transform,
                      height=scaled_arr.shape[1],
                      width=scaled_arr.shape[2])

with rio.open(f_out_path, 'w', **scaled_profile) as scaled_ds:
    scaled_ds.write(scaled_arr)

    print(f"Orig arr shape: ({ds.count}, {ds.shape[0]}, {ds.shape[1]})")
    print(f"Scaled ds shape: ({scaled_ds.count}, {scaled_ds.shape[0]}, {scaled_ds.shape[1]})")
    print("Orig ds res: ", ds.res)
    print("Scaled ds res: ", scaled_ds.res)

Code output:
upscale factor:  0.3333333333333333
Orig arr shape: (1, 9240, 2671)
Scaled ds shape: (1, 3080, 890)
Orig ds res:  (5.0, 5.0)
Scaled ds res:  (15.0, 15.00561797752809)

But in the final scaled dataset's resolution, why is the vertical pixel size slightly different: (15.0, 15.00561797752809)?

I'm not sure whether this is actually a problem. I just want to make sure it's okay, or, if there's a way to correct it, that would be good too.
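For reference, the discrepancy can be reproduced with plain arithmetic, so it appears to come from the int() truncation of the output shape rather than from the resampling itself. A minimal sketch using the numbers from the output above:

```python
orig_res = 5.0
target_res = 15.0
height, width = 9240, 2671

upscale_factor = orig_res / target_res        # 0.3333...

new_height = int(height * upscale_factor)     # 3080 (9240 / 3 is exact)
new_width = int(width * upscale_factor)       # 890  (890.333... truncated)

# The transform is scaled by the old/new dimension ratios, so any
# truncation shows up directly in the resulting pixel size.
print(orig_res * height / new_height)         # exactly 15.0
print(orig_res * width / new_width)           # ~15.0056
```

(Incidentally, the truncation is in the width, 2671 columns, but it shows up on the vertical axis of res because the transform.scale() call above passes the height ratio first, which, if I'm reading Affine.scale correctly, is treated as the x factor.)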


Luke
 

Try going the other way: multiply the resolution by a scale factor (i.e. 3 in your case) and divide the dimensions. See the example gist.
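Something like this, sticking to plain arithmetic with the dimensions from your post (the names here are illustrative, not taken from the gist):

```python
orig_res = 5.0
scale = 3                             # 5 m -> 15 m
height, width = 9240, 2671

# Divide the dimensions instead of multiplying by 1/3 ...
new_height = round(height / scale)    # 3080
new_width = round(width / scale)      # 890

# ... and set the pixel size explicitly, rather than deriving it
# from the old/new dimension ratio.
new_res = orig_res * scale            # exactly 15.0 on both axes

print(new_height, new_width, new_res)
```

In rasterio terms that means building the scaled transform as ds.transform * ds.transform.scale(scale, scale), so both pixel dimensions are exactly 15.0; the trade-off is that the output extent may differ from the original by up to one output pixel instead of the pixel size drifting.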

On Fri, 13 Mar 2020 at 14:35, himat15 via Groups.Io <himat15=yahoo.com@groups.io> wrote: