train_pts = train_pts.drop(['Raster Value'], axis=1) # Remove Raster Value column
train_pts.head()
This should assign each coordinate the band values of the underlying pixel from every image in the stack, with their respective bands, as shown below.
If I apply this approach to my own dataset, however, no values are assigned to the coordinates; everything is zero.
Quick summary of my dataset:
Preprocessed Sentinel-1, co-registered a stack of ~100 images, and performed stack averaging (minimum) to use later as a mask, so that only pixels where all dates are available are extracted.
Preprocessed Sentinel-2 and Landsat 7 & 8.
Collocated Sentinel-1 & 2 and Landsat 7 & 8 together with the stack-minimum mask, then applied a Land/Sea mask to remove areas where Sentinel-1 data is unavailable.
Exported to GeoTIFF / BigTIFF file
Imported it into the Jupyter notebook.
Opened everything successfully and visualized it
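At this point, one quick sanity check (a hedged sketch; `stack` below is a stand-in for the array actually read from the exported file) is to look at how much of each band is zero after masking. If the Land/Sea or stack-minimum mask zeroed out the training-point locations, the extraction will return zeros even though the file opens and visualizes fine:

```python
import numpy as np

# Stand-in for the real (bands, rows, cols) array; replace with
# something like rasterio.open(path).read()
stack = np.zeros((3, 4, 4), dtype="float32")
stack[0, :2, :] = 1.0  # pretend band 1 has some valid data

# Fraction of zero pixels per band; bands near 1.0 are fully masked
zero_frac = (stack == 0).mean(axis=(1, 2))
print(zero_frac)
```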
All values assigned to the training points are 0 and I don't understand why. I have already used dataset.xy(dataset.height // 2, dataset.width // 2) to check the coordinate format in case lon/lat were swapped.
The dataset is big: around 100 Sentinel-1 images and 35 Sentinel-2 / Landsat 7 & 8 images, with all their bands plus additional NDVI and NDMI bands.