
Commit dcc1699

Merge branch 'v4-dev' into v4-argo-float-tutorial
2 parents: 9a6abf5 + 2ce77c1

18 files changed: 472 additions and 337 deletions

README.md

Lines changed: 3 additions & 0 deletions
@@ -10,6 +10,9 @@
 [![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/5353/badge)](https://bestpractices.coreinfrastructure.org/projects/5353)
 [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/OceanParcels/parcels/main?labpath=docs%2Fexamples%2Fparcels_tutorial.ipynb)
 
+> [!WARNING]
+> This branch is `v4-dev` - version 4 of Parcels which is in active development. See `main` (or the tags) to browse stable versions of Parcels.
+
 **Parcels** (**P**robably **A** **R**eally **C**omputationally **E**fficient **L**agrangian **S**imulator) is a set of Python classes and methods to create customisable particle tracking simulations using output from Ocean Circulation models. Parcels can be used to track passive and active particulates such as water, plankton, [plastic](http://www.topios.org/) and [fish](https://github.com/Jacketless/IKAMOANA).
 
 ![Arctic-SO-medusaParticles](https://github.com/OceanParcels/oceanparcels_website/blob/main/images/homepage.gif)

docs/examples/tutorial_kernelloop.ipynb

Lines changed: 11 additions & 25 deletions
@@ -26,17 +26,7 @@
     "\n",
     "When you run a Parcels simulation (i.e. a call to `pset.execute()`), the Kernel loop is the main part of the code that is executed. This part of the code loops through all particles and executes the Kernels that are defined for each particle.\n",
     "\n",
-    "In order to make sure that the displacements of a particle in the different Kernels can be summed, all Kernels add to a _change_ in position (`particle_dlon`, `particle_dlat`, and `particle_ddepth`). This is important, because there are situations where movement kernels would otherwise not commute. Take the example of advecting particles by currents _and_ winds. If the particle would first be moved by the currents and then by the winds, the result could be different from first moving by the winds and then by the currents. Instead, by adding the changes in position, the ordering of the Kernels has no consequence on the particle displacement."
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "<div class=\"alert alert-info\">\n",
-    "\n",
-    "__Note__ that the variables `particle_dlon`, `particle_dlat`, and `particle_ddepth` are defined by the wrapper function that Parcels generates for each Kernel. This is why you don't have to define these variables yourself when writing a Kernel. See [here](https://github.com/OceanParcels/parcels/blob/daa4b062ed8ae0b2be3d87367d6b45599d6f95db/parcels/kernel.py#L277-L294) for the implementation of the wrapper functions.\n",
-    "</div>"
+    "In order to make sure that the displacements of a particle in the different Kernels can be summed, all Kernels add to a _change_ in position (`particles.dlon`, `particles.dlat`, and `particles.ddepth`). This is important, because there are situations where movement kernels would otherwise not commute. Take the example of advecting particles by currents _and_ winds. If the particle would first be moved by the currents and then by the winds, the result could be different from first moving by the winds and then by the currents. Instead, by adding the changes in position, the ordering of the Kernels has no consequence on the particle displacement."
    ]
   },
   {
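The commutativity argument in the rewritten paragraph can be sketched in a few lines. The `Particles` container and the two kernels below are illustrative stand-ins, not the Parcels v4 API:

```python
import numpy as np

# Toy illustration of why kernels that accumulate a *change* in position
# commute: each kernel only adds to dlon, and addition is order-independent.
# (Illustrative names and displacements; not the actual Parcels classes.)

class Particles:
    def __init__(self, lon):
        self.lon = np.asarray(lon, dtype=float)
        self.dlon = np.zeros_like(self.lon)

def advect_currents(p):
    p.dlon += 0.10  # pretend displacement due to ocean currents

def advect_wind(p):
    p.dlon += 0.02  # pretend displacement due to wind

a, b = Particles([0.0, 1.0]), Particles([0.0, 1.0])
advect_currents(a); advect_wind(a)   # currents first
advect_wind(b); advect_currents(b)   # wind first
assert np.allclose(a.dlon, b.dlon)   # same total displacement either way
```

Had each kernel assigned to `lon` directly instead of accumulating into `dlon`, the two orderings could disagree whenever one kernel's displacement depends on the position left behind by the other.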
@@ -52,29 +42,25 @@
    "source": [
     "Below is a structured overview of how the Kernel loop is implemented. Note that this is for longitude only, but the same process is applied for latitude and depth.\n",
     "\n",
-    "1. Define an extra variable `particle.lon_nextloop` for each particle, which is the longitude at the end of the Kernel loop. Initialise it to `particle.lon`.\n",
-    "\n",
-    "2. Also define an extra variable `particle.time_nextloop` for each particle, which is the time at the end of the Kernel loop. Initialise it to `particle.time`.\n",
+    "1. Initialise an extra Variable `particles.dlon = 0` and `particles.time_nextloop = particles.time`\n",
     "\n",
-    "3. Within the Kernel loop, for each particle:<br>\n",
+    "2. Within the Kernel loop, for each particle:<br>\n",
     "\n",
-    "    1. Update `particle.lon` with `particle.lon_nextloop`<br>\n",
+    "    1. Update `particles.lon += particles.dlon`<br>\n",
     "\n",
-    "    2. Update `particle.time` with `particle.time_nextloop`<br>\n",
+    "    2. Set variable `particles.dlon = 0`<br>\n",
     "\n",
-    "    3. Set local variable `particle_dlon = 0`<br>\n",
+    "    3. Update `particles.time = particles.time_nextloop`\n",
     "\n",
     "    4. For each Kernel in the list of Kernels:\n",
     "    \n",
     "        1. Execute the Kernel\n",
     "    \n",
-    "        2. Update `particle_dlon` by adding the change in longitude, if needed<br>\n",
+    "        2. Update `particles.dlon` by adding the change in longitude, if needed<br>\n",
     "\n",
-    "    5. Update `particle.lon_nextloop` with `particle.lon + particle_dlon`<br>\n",
-    "    \n",
-    "    6. Update `particle.time_nextloop` with `particle.time + particle.dt`<br>\n",
+    "    5. Update `particles.time_nextloop += particles.dt`<br>\n",
     "\n",
-    "    7. If `particle.time` is a multiple of `outputdt`, write `particle.lon` and `particle.time` to zarr output file<br>\n",
+    "    6. If `particle.time` is a multiple of `outputdt`, write `particle.lon` and `particle.time` to zarr output file<br>\n",
     "\n",
     "Besides having commutable Kernels, the main advantage of this implementation is that, when using Field Sampling with e.g. `particle.temp = fieldset.Temp[particle.time, particle.depth, particle.lat, particle.lon]`, the particle location stays the same throughout the entire Kernel loop. Additionally, this implementation ensures that the particle location is the same as the location of the sampled field in the output file."
    ]
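The numbered steps above can be sketched as a plain-Python loop. This is a schematic with assumed names and a dict-of-arrays particle container, not the actual Parcels implementation; the output-writing step is left as a comment:

```python
import numpy as np

# Schematic of the v4 Kernel loop, longitude only, following the numbered
# steps in the overview. Particles are a dict of arrays; kernels add to "dlon".
def kernel_loop(particles, kernels, dt, runtime):
    particles["dlon"] = np.zeros_like(particles["lon"])         # step 1
    particles["time_nextloop"] = particles["time"].copy()       # step 1
    for _ in range(int(runtime / dt)):
        particles["lon"] += particles["dlon"]                   # step 2.1
        particles["dlon"][:] = 0.0                              # step 2.2
        particles["time"] = particles["time_nextloop"].copy()   # step 2.3
        for kernel in kernels:                                  # step 2.4
            kernel(particles)     # each kernel adds its displacement to dlon
        particles["time_nextloop"] = particles["time"] + dt     # step 2.5
        # step 2.6: write lon/time to the zarr output when particles["time"]
        # is a multiple of outputdt (omitted here)
    return particles
```

Note how the position update (step 2.1) happens at the top of the *next* iteration, which is why the displacement accumulated in the final iteration is not yet reflected in the last written position.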
@@ -268,12 +254,12 @@
     "### 1. Avoid updating particle locations directly in Kernels\n",
     "It is better not to update `particle.lon` directly in a Kernel, as it can interfere with the loop above. Assigning a value to `particle.lon` in a Kernel will throw a warning. \n",
     "\n",
-    "Instead, update the local variable `particle_dlon`.\n",
+    "Instead, update the variable `particles.dlon`.\n",
     "\n",
     "### 2. Be careful with updating particle variables that do not depend on Fields.\n",
     "While assigning the interpolated value of a `Field` to a Particle goes well in the loop above, this is not necessarily so for assigning other attributes. For example, a line like `particle.age += particle.dt` is executed directly so may result in the age being `dt` at `time = 0` in the output file. \n",
     "\n",
-    "A workaround is to either initialise the age to `-dt`, or to increase the `age` only when `particle.time > 0` (using an `if` statement).\n",
+    "A workaround is to either initialise the age to `-dt`, or to increase the `age` only when `particle.time > 0` (using `np.where`).\n",
     "\n",
     "\n",
     "### 3. The last time is not written to file\n",
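The `np.where` workaround for pitfall 2 could look like this. This is a sketch assuming the vectorised v4 kernel style; the attribute names mirror the tutorial text and are not the verbatim Parcels API:

```python
import numpy as np
from types import SimpleNamespace

# Sketch of pitfall 2's workaround: only accumulate age where time > 0,
# so the age is 0 (not dt) at time = 0 in the output. Names are assumed
# from the tutorial text above, not taken from the Parcels source.
def age_kernel(particles, fieldset=None):
    particles.age = np.where(
        particles.time > 0,
        particles.age + particles.dt,
        particles.age,  # leave age untouched for particles still at time 0
    )

p = SimpleNamespace(age=np.array([0.0, 5.0]), time=np.array([0.0, 10.0]), dt=1.0)
age_kernel(p)
# the first particle (time 0) keeps age 0; the second particle ages by dt
```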

parcels/__init__.py

Lines changed: 8 additions & 0 deletions
@@ -2,6 +2,8 @@
 
 __version__ = version
 
+import warnings as _warnings
+
 from parcels.application_kernels import *
 from parcels.field import *
 from parcels.fieldset import *
@@ -11,3 +13,9 @@
 from parcels.particlefile import *
 from parcels.particleset import *
 from parcels.tools import *
+
+_warnings.warn(
+    "This is an alpha version of Parcels v4. The API is not stable and may change without deprecation warnings.",
+    UserWarning,
+    stacklevel=2,
+)

parcels/_index_search.py

Lines changed: 57 additions & 71 deletions
@@ -5,18 +5,17 @@
 
 import numpy as np
 
-from parcels._typing import Mesh
-from parcels.tools.statuscodes import (
-    _raise_grid_searching_error,
-    _raise_time_extrapolation_error,
-)
+from parcels.tools.statuscodes import _raise_time_extrapolation_error
 
 if TYPE_CHECKING:
     from parcels.xgrid import XGrid
 
     from .field import Field
 
 
+GRID_SEARCH_ERROR = -3
+
+
 def _search_time_index(field: Field, time: datetime):
     """Find and return the index and relative coordinate in the time array associated with a given time.
@@ -40,13 +39,7 @@ def _search_time_index(field: Field, time: datetime):
     return np.atleast_1d(tau), np.atleast_1d(ti)
 
 
-def _search_indices_curvilinear_2d(
-    grid: XGrid, y: float, x: float, yi_guess: int | None = None, xi_guess: int | None = None
-):  # TODO fix typing instructions to make clear that y, x etc need to be ndarrays
-    yi, xi = yi_guess, xi_guess
-    if yi is None or xi is None:
-        yi, xi = grid.get_spatial_hash().query(y, x)
-
+def curvilinear_point_in_cell(grid, y: np.ndarray, x: np.ndarray, yi: np.ndarray, xi: np.ndarray):
     xsi = eta = -1.0 * np.ones(len(x), dtype=float)
     invA = np.array(
         [
5649
[1, -1, 1, -1],
5750
]
5851
)
59-
maxIterSearch = 1e6
60-
it = 0
61-
tol = 1.0e-10
62-
63-
# # ! Error handling for out of bounds
64-
# TODO: Re-enable in some capacity
65-
# if x < field.lonlat_minmax[0] or x > field.lonlat_minmax[1]:
66-
# if grid.lon[0, 0] < grid.lon[0, -1]:
67-
# _raise_grid_searching_error(y, x)
68-
# elif x < grid.lon[0, 0] and x > grid.lon[0, -1]: # This prevents from crashing in [160, -160]
69-
# _raise_grid_searching_error(z, y, x)
70-
71-
# if y < field.lonlat_minmax[2] or y > field.lonlat_minmax[3]:
72-
# _raise_grid_searching_error(z, y, x)
73-
74-
while np.any(xsi < -tol) or np.any(xsi > 1 + tol) or np.any(eta < -tol) or np.any(eta > 1 + tol):
75-
px = np.array([grid.lon[yi, xi], grid.lon[yi, xi + 1], grid.lon[yi + 1, xi + 1], grid.lon[yi + 1, xi]])
76-
77-
py = np.array([grid.lat[yi, xi], grid.lat[yi, xi + 1], grid.lat[yi + 1, xi + 1], grid.lat[yi + 1, xi]])
78-
a = np.dot(invA, px)
79-
b = np.dot(invA, py)
80-
81-
aa = a[3] * b[2] - a[2] * b[3]
82-
bb = a[3] * b[0] - a[0] * b[3] + a[1] * b[2] - a[2] * b[1] + x * b[3] - y * a[3]
83-
cc = a[1] * b[0] - a[0] * b[1] + x * b[1] - y * a[1]
84-
85-
det2 = bb * bb - 4 * aa * cc
86-
with np.errstate(divide="ignore", invalid="ignore"):
87-
det = np.where(det2 > 0, np.sqrt(det2), eta)
88-
89-
eta = np.where(abs(aa) < 1e-12, -cc / bb, np.where(det2 > 0, (-bb + det) / (2 * aa), eta))
90-
91-
xsi = np.where(
92-
abs(a[1] + a[3] * eta) < 1e-12,
93-
((y - py[0]) / (py[1] - py[0]) + (y - py[3]) / (py[2] - py[3])) * 0.5,
94-
(x - a[0] - a[2] * eta) / (a[1] + a[3] * eta),
95-
)
96-
97-
xi = np.where(xsi < -tol, xi - 1, np.where(xsi > 1 + tol, xi + 1, xi))
98-
yi = np.where(eta < -tol, yi - 1, np.where(eta > 1 + tol, yi + 1, yi))
99-
100-
(yi, xi) = _reconnect_bnd_indices(yi, xi, grid.ydim, grid.xdim, grid._mesh)
101-
it += 1
102-
if it > maxIterSearch:
103-
print(f"Correct cell not found after {maxIterSearch} iterations")
104-
_raise_grid_searching_error(0, y, x)
105-
xsi = np.where(xsi < 0.0, 0.0, np.where(xsi > 1.0, 1.0, xsi))
106-
eta = np.where(eta < 0.0, 0.0, np.where(eta > 1.0, 1.0, eta))
107-
108-
if np.any((xsi < 0) | (xsi > 1) | (eta < 0) | (eta > 1)):
109-
_raise_grid_searching_error(y, x)
110-
return (yi, eta, xi, xsi)
11152

53+
px = np.array([grid.lon[yi, xi], grid.lon[yi, xi + 1], grid.lon[yi + 1, xi + 1], grid.lon[yi + 1, xi]])
54+
py = np.array([grid.lat[yi, xi], grid.lat[yi, xi + 1], grid.lat[yi + 1, xi + 1], grid.lat[yi + 1, xi]])
55+
56+
a, b = np.dot(invA, px), np.dot(invA, py)
57+
aa = a[3] * b[2] - a[2] * b[3]
58+
bb = a[3] * b[0] - a[0] * b[3] + a[1] * b[2] - a[2] * b[1] + x * b[3] - y * a[3]
59+
cc = a[1] * b[0] - a[0] * b[1] + x * b[1] - y * a[1]
60+
det2 = bb * bb - 4 * aa * cc
11261

113-
def _reconnect_bnd_indices(yi: int, xi: int, ydim: int, xdim: int, mesh: Mesh):
114-
xi = np.where(xi < 0, (xdim - 2) if mesh == "spherical" else 0, xi)
115-
xi = np.where(xi > xdim - 2, 0 if mesh == "spherical" else (xdim - 2), xi)
62+
with np.errstate(divide="ignore", invalid="ignore"):
63+
det = np.where(det2 > 0, np.sqrt(det2), eta)
64+
eta = np.where(abs(aa) < 1e-12, -cc / bb, np.where(det2 > 0, (-bb + det) / (2 * aa), eta))
11665

117-
xi = np.where(yi > ydim - 2, xdim - xi if mesh == "spherical" else xi, xi)
66+
xsi = np.where(
67+
abs(a[1] + a[3] * eta) < 1e-12,
68+
((y - py[0]) / (py[1] - py[0]) + (y - py[3]) / (py[2] - py[3])) * 0.5,
69+
(x - a[0] - a[2] * eta) / (a[1] + a[3] * eta),
70+
)
11871

119-
yi = np.where(yi < 0, 0, yi)
120-
yi = np.where(yi > ydim - 2, ydim - 2, yi)
72+
is_in_cell = np.where((xsi >= 0) & (xsi <= 1) & (eta >= 0) & (eta <= 1), 1, 0)
12173

122-
return yi, xi
74+
return is_in_cell, np.column_stack((xsi, eta))
75+
76+
77+
def _search_indices_curvilinear_2d(
78+
grid: XGrid, y: np.ndarray, x: np.ndarray, yi_guess: np.ndarray | None = None, xi_guess: np.ndarray | None = None
79+
):
80+
yi_guess = np.array(yi_guess)
81+
xi_guess = np.array(xi_guess)
82+
xi = np.full(len(x), GRID_SEARCH_ERROR, dtype=np.int32)
83+
yi = np.full(len(y), GRID_SEARCH_ERROR, dtype=np.int32)
84+
if np.any(xi_guess):
85+
# If an initial guess is provided, we first perform a point in cell check for all guessed indices
86+
is_in_cell, coords = curvilinear_point_in_cell(grid, y, x, yi_guess, xi_guess)
87+
y_check = y[is_in_cell == 0]
88+
x_check = x[is_in_cell == 0]
89+
zero_indices = np.where(is_in_cell == 0)[0]
90+
else:
91+
# Otherwise, we need to check all points
92+
y_check = y
93+
x_check = x
94+
coords = -1.0 * np.ones((len(y), 2), dtype=np.float32)
95+
zero_indices = np.arange(len(y))
96+
97+
# If there are any points that were not found in the first step, we query the spatial hash for those points
98+
if len(zero_indices) > 0:
99+
yi_q, xi_q, coords_q = grid.get_spatial_hash().query(y_check, x_check)
100+
# Only those points that were not found in the first step are updated
101+
coords[zero_indices, :] = coords_q
102+
yi[zero_indices] = yi_q
103+
xi[zero_indices] = xi_q
104+
105+
xsi = coords[:, 0]
106+
eta = coords[:, 1]
107+
108+
return (yi, eta, xi, xsi)
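The inverse-bilinear check at the heart of `curvilinear_point_in_cell` can be exercised standalone for a single cell. Note that the diff only shows the last row of `invA`; the first three rows below are the standard inverse-bilinear coefficient matrix and are an assumption, as are the function name and signature:

```python
import numpy as np

# Standalone sketch of the point-in-cell test: invert the bilinear map of one
# cell (corners ordered counter-clockwise from the lower-left) and accept
# points whose local coordinates (xsi, eta) land in [0, 1].
# ASSUMPTION: only the last row of invA appears in the diff; the first three
# rows here are the standard inverse-bilinear matrix for that corner ordering.
invA = np.array(
    [
        [1, 0, 0, 0],
        [-1, 1, 0, 0],
        [-1, 0, 0, 1],
        [1, -1, 1, -1],
    ]
)

def point_in_cell(px, py, x, y):
    """Return (in_cell, xsi, eta) for points (x, y) against one cell whose
    corner longitudes are px and latitudes are py."""
    px, py = np.asarray(px, dtype=float), np.asarray(py, dtype=float)
    x, y = np.atleast_1d(x).astype(float), np.atleast_1d(y).astype(float)
    eta = -1.0 * np.ones(len(x))
    a, b = invA @ px, invA @ py  # bilinear coefficients for x(xsi,eta), y(xsi,eta)
    aa = a[3] * b[2] - a[2] * b[3]
    bb = a[3] * b[0] - a[0] * b[3] + a[1] * b[2] - a[2] * b[1] + x * b[3] - y * a[3]
    cc = a[1] * b[0] - a[0] * b[1] + x * b[1] - y * a[1]
    det2 = bb * bb - 4 * aa * cc
    with np.errstate(divide="ignore", invalid="ignore"):
        det = np.where(det2 > 0, np.sqrt(det2), eta)
        # linear case (aa ~ 0) solved directly; quadratic case via the root
        eta = np.where(abs(aa) < 1e-12, -cc / bb, np.where(det2 > 0, (-bb + det) / (2 * aa), eta))
        xsi = np.where(
            abs(a[1] + a[3] * eta) < 1e-12,
            ((y - py[0]) / (py[1] - py[0]) + (y - py[3]) / (py[2] - py[3])) * 0.5,
            (x - a[0] - a[2] * eta) / (a[1] + a[3] * eta),
        )
    in_cell = (xsi >= 0) & (xsi <= 1) & (eta >= 0) & (eta <= 1)
    return in_cell, xsi, eta
```

On the unit cell with corners (0,0), (1,0), (1,1), (0,1), a point maps to its own coordinates, so (0.25, 0.5) yields `xsi = 0.25`, `eta = 0.5` and is accepted, while a point with `x = 1.5` is rejected.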

parcels/basegrid.py

Lines changed: 2 additions & 2 deletions
@@ -54,9 +54,9 @@ def search(self, z: float, y: float, x: float, ei=None) -> dict[str, tuple[int,
         - Unstructured grid: {"Z": (zi, zeta), "FACE": (fi, bcoords)}
 
         Where:
-        - index (int): The cell position of a particle along the given axis
+        - index (int): The cell position of the particles along the given axis
         - barycentric_coordinates (float or np.ndarray): The coordinates defining
-          a particle's position within the grid cell. For structured grids, this
+          the particles' positions within the grid cell. For structured grids, this
           is a single coordinate per axis; for unstructured grids, this can be
           an array of coordinates for the face polygon.
