
[Help]: projection and filesize issue related to sbas.export_geotiff #160

Open · jie666-6 opened this issue Aug 12, 2024 · 13 comments

@jie666-6 commented Aug 12, 2024

I am trying to save all coherence and phase data into a GeoTIFF using sbas.export_geotiff.

decimator = sbas.decimator(resolution=15)
corr_sbas = decimator(ds_sbas.correlation)
sbas.export_geotiff(sbas.ra2ll(corr_sbas), f'{OUTDIR}/Coherence_stack')

The GeoTIFF should already be in latitude/longitude coordinates after sbas.ra2ll. However, it appears to be in 'pseudo' geographic Lat/Lon coordinates when I open it in ENVI. Additionally, the entire image seems to be in an incorrect projection when I check it using the large-icons view in Windows 10 File Explorer:

[screenshots: ENVI coordinate readout and File Explorer preview]

When I open the file in QGIS, the projection appears correct, but the file tends to crash QGIS.

I then used gdalwarp to reproject it to EPSG:4326 and saved it with LZW compression, as shown below:
[screenshot: gdalwarp command]
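The command was roughly of the following form (a reconstruction for reference, since the exact invocation is only visible in the screenshot; the file names are placeholders):

gdalwarp -t_srs EPSG:4326 -co COMPRESS=LZW Coherence_stack.tif Coherence_stack_wgs84.tif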

After re-saving with GDAL, QGIS is able to open the data safely, and the file size has been reduced from 1.22 GB to 881 MB.

I am wondering why this issue occurs when exporting data to a GeoTIFF using sbas.export_geotiff. Is there a way to fix the projection issue and also reduce the file size using pyGMTSAR?

Thank you.

@AlexeyPechnikov (Owner)

It's strange to use Windows File Explorer to check the files. Use specialized tools like gdalinfo instead. This command, added to the Imperial Valley 2015 notebook, generates valid GeoTIFF files:

sbas.export_geotiff(disp_subset, 'disp')
gdalinfo /Users/mbg/Work/tmp/disp.2015-05-21.tif 
Driver: GTiff/GeoTIFF
Files: /Users/mbg/Work/tmp/disp.2015-05-21.tif
Size is 1213, 1105
Coordinate System is:
GEOGCRS["WGS 84",
    ENSEMBLE["World Geodetic System 1984 ensemble",
        MEMBER["World Geodetic System 1984 (Transit)"],
        MEMBER["World Geodetic System 1984 (G730)"],
        MEMBER["World Geodetic System 1984 (G873)"],
        MEMBER["World Geodetic System 1984 (G1150)"],
        MEMBER["World Geodetic System 1984 (G1674)"],
        MEMBER["World Geodetic System 1984 (G1762)"],
        MEMBER["World Geodetic System 1984 (G2139)"],
        MEMBER["World Geodetic System 1984 (G2296)"],
        ELLIPSOID["WGS 84",6378137,298.257223563,
            LENGTHUNIT["metre",1]],
        ENSEMBLEACCURACY[2.0]],
    PRIMEM["Greenwich",0,
        ANGLEUNIT["degree",0.0174532925199433]],
    CS[ellipsoidal,2],
        AXIS["geodetic latitude (Lat)",north,
            ORDER[1],
            ANGLEUNIT["degree",0.0174532925199433]],
        AXIS["geodetic longitude (Lon)",east,
            ORDER[2],
            ANGLEUNIT["degree",0.0174532925199433]],
    USAGE[
        SCOPE["Horizontal component of 3D system."],
        AREA["World."],
        BBOX[-90,-180,90,180]],
    ID["EPSG",4326]]
Data axis to CRS axis mapping: 2,1
Origin = (-115.746011188898507,31.917392942038045)
Pixel Size = (0.000911257797028,0.000769675923912)
Metadata:
  AREA_OR_POINT=Area
Image Structure Metadata:
  INTERLEAVE=BAND
Corner Coordinates:
Upper Left  (-115.7460112,  31.9173929) (115d44'45.64"W, 31d55' 2.61"N)
Lower Left  (-115.7460112,  32.7678848) (115d44'45.64"W, 32d46' 4.39"N)
Upper Right (-114.6406555,  31.9173929) (114d38'26.36"W, 31d55' 2.61"N)
Lower Right (-114.6406555,  32.7678848) (114d38'26.36"W, 32d46' 4.39"N)
Center      (-115.1933333,  32.3426389) (115d11'36.00"W, 32d20'33.50"N)
Band 1 Block=1213x1 Type=Float32, ColorInterp=Gray
  Description = los

@AlexeyPechnikov (Owner)

I have added a compression option for GeoTIFF file export; see the commit: 492a3d3
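Usage would then look roughly like this (a hedged sketch: the compress keyword comes from that commit and is passed through to rio.to_raster, as the internal call quoted later in this thread shows; 'LZW' is one of the standard GDAL codecs):

# Hypothetical usage of the compress option from commit 492a3d3.
sbas.export_geotiff(sbas.ra2ll(corr_sbas), f'{OUTDIR}/Coherence_stack', compress='LZW')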

@jie666-6 (Author)

I checked the exported data and the data reprojected using GDAL. It seems the resolutions are not the same.

In the data exported from pyGMTSAR, the pixels are not square even though I set the resolution to 15 m:
[screenshot: gdalinfo output]

Once I use gdalwarp and check gdalinfo again:
[screenshot: gdalinfo output after gdalwarp]

Also, you can see that there is a minus sign in the resolution, which explains why the quick looks of the two images differ:

[screenshot]

@AlexeyPechnikov (Owner)

Of course, the pixels are not square because the resolution determines multilooking for azimuth and range in full (not fractional) pixels.

sbas.get_spacing()
(13.977137498606893, 4.167690596109662)

How are you going to make square pixels when 13.977… / 4.168… = 3.356…?
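A rough illustration of that arithmetic (a sketch, not PyGMTSAR's internal code):

# Hedged sketch: integer multilook factors for a 15 m target resolution,
# given the azimuth/range ground spacing reported by sbas.get_spacing().
az_spacing, rg_spacing = 13.977137498606893, 4.167690596109662
resolution = 15
az_looks = max(1, round(resolution / az_spacing))  # -> 1 look in azimuth
rg_looks = max(1, round(resolution / rg_spacing))  # -> 4 looks in range
# Resulting cell is ~13.98 m x ~16.67 m: close to 15 m in each direction,
# but necessarily rectangular because the looks must be whole pixels.
print(az_looks * az_spacing, rg_looks * rg_spacing)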

@jie666-6 (Author)

Another issue I would like to ask for help with: I get a memory problem when I run this line:

sbas.export_geotiff(sbas.ra2ll(unwrap_sbas.phase - trend_sbas), f'{OUTDIR}/SBAS_Phase_unwrap_detrend_sbas')

The data I would like to save is
<xarray.DataArray (pair: 45, lat: 17333, lon: 19035)> Size: 59GB
dask.array<concatenate, shape=(45, 17333, 19035), dtype=float32, chunksize=(1, 2048, 2048), chunktype=numpy.ndarray>
Coordinates:

  • lat (lat) float64 139kB 28.83 28.83 28.83 28.83 ... 31.05 31.05 31.05
  • lon (lon) float64 152kB -97.5 -97.5 -97.5 -97.5 ... -94.5 -94.5 -94.5
  • pair (pair) <U21 4kB '2017-01-02 2017-01-08' ... '2017-09-23 2017-09-29'
    ref (pair) datetime64[ns] 360B 2017-01-02 2017-01-08 ... 2017-09-23
    rep (pair) datetime64[ns] 360B 2017-01-08 2017-01-14 ... 2017-09-29

Since I am running the software on our server with a 1008 GB memory limit, the issue should not be related to memory itself. I also tried the following methods:
self.as_geo(self.ra2ll(grid) if not self.is_geo(grid) else grid).rio.to_raster(filename, compress=compress, tiled=True)
and
self.as_geo(self.ra2ll(grid) if not self.is_geo(grid) else grid).rio.to_raster(filename, compress=compress, tiled=True, lock=threading.Lock())

but neither approach helped in this case.

I even tried to save only one phase using

sbas.export_geotiff(sbas.ra2ll(unwrap_sbas.phase - trend_sbas)[0, :, :], f'{OUTDIR}/SBAS_Phase_unwrap_detrend_sbas')

but the problem still exists:
Exporting WGS84 GeoTIFF(s): 0%| | 0/1 [00:00<?, ?it/s]
2024-08-17 15:34:20,360 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 88.21 GiB -- Worker memory limit: 125.97 GiB
[... the same warning repeats every few seconds, with unmanaged memory fluctuating between 88.18 and 97.53 GiB ...]
2024-08-17 15:45:23,272 - distributed.worker.memory - WARNING - Worker is at 1% memory usage. Resuming worker. Process memory: 2.44 GiB -- Worker memory limit: 125.97 GiB
[... five similar "Resuming worker" lines ...]
2024-08-17 15:45:47,805 - distributed.worker.memory - WARNING - Unmanaged memory use is high. [...] -- Unmanaged memory: 88.30 GiB -- Worker memory limit: 125.97 GiB
/opt/conda/lib/python3.11/site-packages/distributed/client.py:3245: UserWarning: Sending large graph of size 9.10 GiB.
This may cause some slowdown.
Consider scattering data ahead of time and using futures.
  warnings.warn(
2024-08-17 15:46:09,506 - distributed.worker.memory - WARNING - Unmanaged memory use is high. [...] -- Unmanaged memory: 91.35 GiB -- Worker memory limit: 125.97 GiB

I am wondering what the problem is in my case. Thank you very much.

@AlexeyPechnikov (Owner)

You’re attempting to export a stack of 45 large rasters:

<xarray.DataArray (pair: 45, lat: 17333, lon: 19035)> Size: 59GB

However, your available worker memory is insufficient:

Unmanaged memory: 97.53 GiB -- Worker memory limit: 125.97 GiB

Do you really need to export the geocoded detrended phase? Typically, results are exported while processing is done internally in radar coordinates. If exporting this data is necessary, consider reducing the number of workers to allocate more RAM per worker. Alternatively, materialize the data (using sync or similar functions) for (unwrap_sbas.phase - trend_sbas) or even sbas.ra2ll(unwrap_sbas.phase - trend_sbas) before export.
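For the worker-count option, a minimal sketch with dask.distributed (the 125.97 GiB per-worker limit on a 1008 GB host suggests 8 workers; the figures below are illustrative, not a recommendation from this thread):

from dask.distributed import Client
# Hypothetical: 4 workers instead of 8 leaves roughly twice the RAM per worker.
client = Client(n_workers=4, threads_per_worker=8, memory_limit='240GB')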

@jie666-6 (Author)

But my problem is that even when I try to export only one raster instead of all 45 at once, it still shows the same memory issue.

xxx = sbas.ra2ll(unwrap_sbas.phase - trend_sbas)[0, :, :]
xxx
<xarray.DataArray (lat: 17333, lon: 19035)> Size: 1GB
dask.array<getitem, shape=(17333, 19035), dtype=float32, chunksize=(2048, 2048), chunktype=numpy.ndarray>
Coordinates:

  • lat (lat) float64 139kB 28.83 28.83 28.83 28.83 ... 31.05 31.05 31.05
  • lon (lon) float64 152kB -97.5 -97.5 -97.5 -97.5 ... -94.5 -94.5 -94.5
    pair <U21 84B '2017-01-02 2017-01-08'
    ref datetime64[ns] 8B 2017-01-02
    rep datetime64[ns] 8B 2017-01-08

sbas.export_geotiff(xxx, f'{OUTDIR}/xxx')

@AlexeyPechnikov (Owner)

But how exactly are you computing the interferograms and correlations?

@jie666-6 (Author)

Here is the code I used:

sbas.compute_ps()
# save PS data into tif
sbas.export_geotiff(sbas.ra2ll(sbas.multilooking(sbas.psfunction(), coarsen=(1,4), wavelength=100)), f'{OUTDIR}/PS')
sbas.plot_psfunction(quantile=[0.01, 0.90])
plt.savefig(f'{OUTDIR}/PSfunction.png', dpi=300, bbox_inches='tight')
baseline_pairs = sbas.sbas_pairs(days=6)
with mpl_settings({'figure.dpi': 300}):
    sbas.plot_baseline(baseline_pairs)

sbas.compute_interferogram_multilook(baseline_pairs, 'intf_mlook', wavelength=30, weight=sbas.psfunction(), resolution=15)
# use default 15m resolution
decimator = sbas.decimator(resolution=15)

ds_sbas = sbas.open_stack('intf_mlook')
intf_sbas = decimator(ds_sbas.phase)
corr_sbas = decimator(ds_sbas.correlation)

corr_sbas_stack = corr_sbas.mean('pair')


# 2D unwrapping
unwrap_sbas = sbas.unwrap_snaphu(
    intf_sbas.where(corr_sbas_stack>0.3),
    corr_sbas,
    conncomp=True
)

# Trend Correction
decimator_sbas = sbas.decimator(resolution=15, grid=(1,1))
topo = decimator_sbas(sbas.get_topo())
yy, xx = xr.broadcast(topo.y, topo.x)
trend_sbas = sbas.regression(unwrap_sbas.phase,
        [topo,    topo*yy,    topo*xx,    topo*yy*xx,
        topo**2, topo**2*yy, topo**2*xx, topo**2*yy*xx,
        yy, xx, yy*xx], corr_sbas)

sbas.export_geotiff(sbas.ra2ll(unwrap_sbas.phase - trend_sbas), f'{OUTDIR}/SBAS_Phase_unwrap_detrend_sbas')

@AlexeyPechnikov (Owner)

This code is redundant because you've already specified the parameter 'resolution=15' in the sbas.compute_interferogram_multilook call:

# use default 15m resolution
decimator = sbas.decimator(resolution=15)
intf_sbas = decimator(ds_sbas.phase)
corr_sbas = decimator(ds_sbas.correlation)

However, neither the unwrap_sbas nor trend_sbas variables have been materialized, meaning they're being recalculated on the fly multiple times. You should sync them to disk as shown in the SBAS+PSI examples at https://insar.dev. This will help you fit the processing within your memory limits.

Remember, PyGMTSAR follows a lazy computation paradigm where you write the full code, run it, inspect the result sizes, and adjust outputs as necessary before executing the entire computation. This concept is detailed in my book, available in the repository.
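A sketch of that workflow (hedged: it uses sync_stack, the stack-oriented sync recommended later in this thread, and the output names 'unwrap_phase' and 'trend' are illustrative):

# Materialize the lazy results on disk first, then export the materialized grids.
unwrap_phase = sbas.sync_stack(unwrap_sbas.phase, 'unwrap_phase')
trend = sbas.sync_stack(trend_sbas, 'trend')
sbas.export_geotiff(sbas.ra2ll(unwrap_phase - trend), f'{OUTDIR}/SBAS_Phase_unwrap_detrend_sbas')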

@jie666-6 (Author) commented Aug 23, 2024

I materialized the phase and correlation using the following code:

sbas.sync_cube(ds_sbas.phase, f'{OUTDIR}/SBAS_Phase_stack')
sbas.sync_cube(ds_sbas.correlation, f'{OUTDIR}/Coherence_stack')

Then, when I tried to do the same for the unwrapped phase,

unwrap_sbas = sbas.sync_cube(unwrap_sbas.phase, f'{OUTDIR}/unwrap_sbas')

it got stuck around 75%, and when I checked docker stats it was using too much memory:
[screenshot: docker stats]

I am not sure what the problem is here. Once I use sync_cube, will those variables be deleted from memory? The unwrap_sbas.phase itself is only a ~45 GB array, which should not use such a large amount of memory.

@AlexeyPechnikov (Owner)

The sync_cube() function is very inefficient for stack-based data. It is intended for pixel-wise computations like 1D unwrapping and least-squares processing. For your case, it requires calculating all the phases and unwrapped phases at once. Use sync_stack() as in the PyGMTSAR examples.
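For example (a sketch; as the traceback below shows, sync_stack writes the stack to disk and returns it re-opened via open_stack, so the returned object replaces the lazy one):

unwrap_phase = sbas.sync_stack(unwrap_sbas.phase, f'{OUTDIR}/unwrap_sbas')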

@jie666-6 (Author)

Thanks a lot. Yes, sync_stack() is more efficient. But I got this error at the last step of sync_stack:

Saving 2D Stack: 100%|██████████| 4601/4601 [00:11<00:00, 414.01it/s]
Traceback (most recent call last):
  File "/home/workdir/06_InSAR/code/example2.py", line 304, in <module>
    sbas.sync_stack(ds_sbas.phase, f'{OUTDIR}/SBAS_Phase_stack')
  File "/opt/conda/lib/python3.11/site-packages/pygmtsar/IO.py", line 521, in sync_stack
    return self.open_stack(name)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/pygmtsar/IO.py", line 552, in open_stack
    data = xr.open_mfdataset(
           ^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/xarray/backends/api.py", line 1019, in open_mfdataset
    raise OSError("no files to open")
OSError: no files to open
