
Commit 44711f0

Merge branch 'v4-dev' into reference-api-v4
2 parents 068f995 + 049d549 commit 44711f0


52 files changed (+1818 −1503 lines)

docs/conf.py

Lines changed: 1 addition & 1 deletion

@@ -522,7 +522,7 @@ def linkcode_resolve(domain, info):
 # -- Options for MyST parser ----------------------------------------------
 myst_heading_anchors = 3
 
-myst_enable_extensions = ["substitution"]
+myst_enable_extensions = ["substitution", "amsmath", "dollarmath"]
 
 # -- Options for MyST-nb --------------------------------------------------
 nb_execution_mode = "cache"
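The two added MyST extensions enable LaTeX math in the documentation's markdown sources: `dollarmath` parses `$...$` and `$$...$$` delimiters, and `amsmath` adds direct support for AMS environments. An illustrative snippet of what becomes valid in a MyST file after this change (the equations here are just examples, not taken from the docs):

```markdown
Inline math like $u = \partial \psi / \partial y$ now renders,
as do display equations:

$$
\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} = \mathbf{u}(\mathbf{x}, t)
$$

\begin{align}
\frac{\mathrm{d}x}{\mathrm{d}t} &= u, \\
\frac{\mathrm{d}y}{\mathrm{d}t} &= v
\end{align}
```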

docs/getting_started/tutorial_output.ipynb

Lines changed: 1 addition & 7 deletions

@@ -540,15 +540,9 @@
 "\n",
 "# Create animation\n",
 "anim = FuncAnimation(fig, animate, frames=nframes, interval=100)\n",
+"plt.close(fig)\n",
 "anim"
 ]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": []
 }
 ],
 "metadata": {
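The one-line addition here (`plt.close(fig)` before `anim`) follows a common notebook pattern: when the cell's last expression is the `FuncAnimation` object, closing the figure first prevents the notebook from also rendering a duplicate static image of the figure. A minimal standalone sketch (using the non-interactive `Agg` backend; the animation itself is illustrative):

```python
import matplotlib

matplotlib.use("Agg")  # non-interactive backend so this runs without a display
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
(line,) = ax.plot([], [])

def animate(frame):
    # Toy update function: move a horizontal line up one step per frame
    line.set_data([0, 1], [frame, frame])
    return (line,)

anim = FuncAnimation(fig, animate, frames=5, interval=100)
plt.close(fig)  # the figure is now closed, but `anim` still references it
```

In a notebook the cell would end with `anim`, so only the animation output is displayed.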

docs/getting_started/tutorial_quickstart.md

Lines changed: 1 addition & 1 deletion

@@ -99,7 +99,7 @@ TODO: link to a list of included kernels
 ```
 
 ```{code-cell}
-kernels = [parcels.kernels.AdvectionEE]
+kernels = [parcels.kernels.AdvectionRK2]
 ```
 
 ## Prepare output: `ParticleFile`
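The quickstart now uses `AdvectionRK2` instead of `AdvectionEE`. The difference is the time-integration scheme: explicit Euler takes one velocity evaluation per step, while second-order Runge-Kutta evaluates the velocity again at an intermediate position, which is markedly more accurate for the same `dt`. A standalone sketch of both schemes (not Parcels code; RK2 is shown here as the midpoint method, and the solid-body-rotation velocity field is purely illustrative):

```python
import math

def velocity(x, y):
    # Solid-body rotation: exact trajectories are circles around the origin
    return -y, x

def euler_step(x, y, dt):
    u, v = velocity(x, y)
    return x + u * dt, y + v * dt

def rk2_step(x, y, dt):
    # Midpoint method: re-evaluate the velocity at a half-step position
    u1, v1 = velocity(x, y)
    xm, ym = x + 0.5 * dt * u1, y + 0.5 * dt * v1
    u2, v2 = velocity(xm, ym)
    return x + u2 * dt, y + v2 * dt

def integrate(step, nsteps=1000, dt=0.01):
    x, y = 1.0, 0.0
    for _ in range(nsteps):
        x, y = step(x, y, dt)
    return x, y

# The exact trajectory stays on the unit circle (radius 1)
for name, step in [("Euler", euler_step), ("RK2", rk2_step)]:
    x, y = integrate(step)
    print(name, math.hypot(x, y))
```

Euler's radius drifts noticeably away from 1 while RK2 stays several orders of magnitude closer, which is why a higher-order scheme is the better default.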
Lines changed: 76 additions & 0 deletions

@@ -0,0 +1,76 @@
# Interpolator explanation

Interpolation is an important functionality of Parcels. On this page we discuss how it is
implemented in **Parcels** and how to write a custom interpolator function.

```{note}
TODO: expand explanation (similar to Kernel loop explanation)
```

When we want to know the state of particles in an environmental field, such as temperature or velocity,
we _evaluate_ the `parcels.Field` at the particles' real positions in time and space (`t`, `z`, `lat`, `lon`).
In Parcels we can do this using square brackets:

```
particles.temperature = fieldset.temperature[particles]
```

````{note}
The statement above is shorthand for
```python
particles.temperature = fieldset.temperature[particles.time, particles.z, particles.lat, particles.lon, particles]
```
where the `particles` argument at the end provides the grid search algorithm with a first guess for the element indices to interpolate on.

If you want to sample at a different location or time, not necessarily close to the particles' locations, you can use
```python
particles.temperature = fieldset.temperature[time, depth, lat, lon]
```
but this can be slower on curvilinear and unstructured grids, because the entire grid needs to be searched.
````

The values of the `temperature` field at the particles' positions are determined using an interpolation
method. This interpolation method defines how the discretized values of the `parcels.Field` relate
to the value at any point within a grid cell.

Each `parcels.Field` is defined on a (structured) `parcels.XGrid` or (unstructured) `parcels.UXGrid`.
The interpolation function takes information about the particles' positions relative to this grid (`grid_positions`),
as well as the values of the grid points of the `parcels.Field` in time and space, to calculate
the requested value at each particle's location. Note that all grid values are available, so higher-order interpolation is possible.

## Interpolator API

The interpolators included in Parcels are designed for common interpolation schemes in Parcels simulations.
If we want to add a custom interpolation method, we need to look at the interpolator API:

We can write an interpolator function that takes a `parcels.Field` (or `parcels.VectorField`), a dictionary with the `particle_positions`
in real space and time, and a dictionary with the `grid_positions`.

The `particle_positions` dictionary contains:

```
particle_positions = {"time": time, "z": z, "lat": lat, "lon": lon}
```

For structured (`X`) grids, the `grid_positions` dictionary contains:

```
grid_positions = {
    "T": {"index": ti, "bcoord": tau},
    "Z": {"index": zi, "bcoord": zeta},
    "Y": {"index": yi, "bcoord": eta},
    "X": {"index": xi, "bcoord": xsi},
}
```

where `index` is the grid index in the corresponding dimension, and `bcoord` is the barycentric coordinate within the grid cell.

For unstructured (`UX`) grids, the same dictionary is defined as:

```
grid_positions = {
    "T": {"index": ti, "bcoord": tau},
    "Z": {"index": zi, "bcoord": zeta},
    "FACE": {"index": fi, "bcoord": bcoord},
}
```
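To make the `grid_positions` layout above concrete, here is a sketch of a custom nearest-neighbour interpolator for a structured grid. Everything here is illustrative: the exact signature Parcels expects should be checked against the interpolators shipped with Parcels, and `field_data` is a stand-in for however the field values are actually stored (assumed here to be a 4-D array indexed as `[time, z, y, x]`).

```python
import numpy as np

def nearest_interpolator(field_data, particle_positions, grid_positions):
    """Nearest-neighbour interpolation on a structured (T, Z, Y, X) grid.

    `particle_positions` is accepted for signature parity with the API
    described above, but nearest-neighbour only needs `grid_positions`.
    """
    indices = []
    for dim in ("T", "Z", "Y", "X"):
        i = grid_positions[dim]["index"]
        bc = grid_positions[dim]["bcoord"]
        # bcoord < 0.5 means the particle is closer to grid point `i`;
        # otherwise snap to the next grid point `i + 1`
        indices.append(i + (bc >= 0.5))
    return field_data[tuple(indices)]

# Hypothetical usage with a small synthetic field
data = np.arange(2 * 2 * 3 * 3, dtype=float).reshape(2, 2, 3, 3)
grid_positions = {
    "T": {"index": 0, "bcoord": 0.2},
    "Z": {"index": 0, "bcoord": 0.9},
    "Y": {"index": 1, "bcoord": 0.4},
    "X": {"index": 0, "bcoord": 0.5},
}
particle_positions = {"time": 0.0, "z": 5.0, "lat": 1.0, "lon": 0.5}
value = nearest_interpolator(data, particle_positions, grid_positions)
```

A linear interpolator would instead weight the two bracketing grid points in each dimension by `1 - bcoord` and `bcoord`.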
Lines changed: 143 additions & 0 deletions

@@ -0,0 +1,143 @@
---
file_format: mystnb
kernelspec:
  name: python3
---

# The Parcels Kernel loop

On this page we discuss Parcels' execution loop, and what happens under the hood when you combine multiple Kernels.

This is not very relevant when you only use the built-in Advection kernels, but it can be important when you are writing and combining your own Kernels!

## Background

When you run a Parcels simulation (i.e. a call to `pset.execute()`), the Kernel loop is the main part of the code that is executed. This part of the code loops through time and executes the Kernels for all particles.

To make sure that the displacements of a particle in the different Kernels can be summed, all Kernels add to a _change_ in position (`particles.dlon`, `particles.dlat`, and `particles.dz`). This is important, because there are situations where movement kernels would otherwise not commute. Take the example of advecting particles by currents _and_ winds. If a particle were first moved by the currents and then by the winds, the result could differ from first moving it by the winds and then by the currents. By summing the _changes_ in position instead, the ordering of the Kernels has no effect on the particle displacement.

## Basic implementation

Below is a structured overview of how the Kernel loop is implemented. Note that this shows `time` and `lon` only; the process for `lon` is also applied to `lat` and `z`.

1. Initialise an extra Variable `particles.dlon = 0`

2. Within the Kernel loop, for each particle:
   1. Update `particles.lon += particles.dlon`

   2. Update `particles.time += particles.dt` (except on the first iteration of the Kernel loop)

   3. Set variable `particles.dlon = 0`

   4. For each Kernel in the list of Kernels:
      1. Execute the Kernel

      2. Update `particles.dlon` by adding the change in longitude, if needed

   5. If `particles.time` is a multiple of `outputdt`, write `particles.lon` and `particles.time` to the zarr output file

Besides making Kernels commutable, the main advantage of this implementation is that, when using Field Sampling with e.g. `particles.temp = fieldset.Temp[particles.time, particles.z, particles.lat, particles.lon]`, the particle location stays the same throughout the entire Kernel loop. Additionally, this implementation ensures that the particle location in the output file is the same as the location where the field was sampled.

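The numbered steps above can be sketched in plain Python. This is a schematic, not the actual Parcels implementation: particles are plain dicts, output handling is simplified, and the two toy kernels stand in for real advection and wind kernels.

```python
def kernel_loop(particles, kernels, runtime, outputdt):
    """Schematic of the Kernel loop, for `time` and `lon` only."""
    output = []
    nsteps = int(runtime / particles[0]["dt"])
    for step in range(nsteps):
        for p in particles:
            # 2.1 Apply the displacement accumulated in the previous iteration
            p["lon"] += p["dlon"]
            # 2.2 Advance time (except on the first iteration)
            if step > 0:
                p["time"] += p["dt"]
            # 2.3 Reset the accumulated change in position
            p["dlon"] = 0.0
            # 2.4 Each kernel only *adds to* dlon, so the order does not matter
            for kernel in kernels:
                kernel(p)
            # 2.5 Write output at multiples of outputdt
            if p["time"] % outputdt == 0:
                output.append((p["time"], p["lon"]))
    return output


# Toy kernels standing in for currents and winds
# (0.25 and 0.125 are exact in binary floating point)
def currents(p):
    p["dlon"] += 0.25 * p["dt"]


def winds(p):
    p["dlon"] += 0.125 * p["dt"]


particles = [{"lon": 0.0, "dlon": 0.0, "time": 0.0, "dt": 1.0}]
out = kernel_loop(particles, [currents, winds], runtime=4, outputdt=2)
```

Because both kernels accumulate into `dlon`, running them as `[winds, currents]` produces exactly the same trajectory.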
## Example with currents and winds

Below is a simple example of some particles at the surface of the ocean. We create an idealised zonal wind field that "pushes" particles that are already affected by the surface currents. The Kernel loop ensures that these two forcings act at the same time and location.

```{code-cell}
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr

import parcels

# Load the CopernicusMarine data in the Agulhas region from the example_datasets
example_dataset_folder = parcels.download_example_dataset(
    "CopernicusMarine_data_for_Argo_tutorial"
)

ds_fields = xr.open_mfdataset(f"{example_dataset_folder}/*.nc", combine="by_coords")
ds_fields.load()  # load the dataset into memory

# Create an idealised wind field and add it to the dataset
tdim, ydim, xdim = (len(ds_fields.time), len(ds_fields.latitude), len(ds_fields.longitude))
ds_fields["UWind"] = xr.DataArray(
    data=np.ones((tdim, ydim, xdim)) * np.sin(ds_fields.latitude.values)[None, :, None],
    coords=[ds_fields.time, ds_fields.latitude, ds_fields.longitude],
)
ds_fields["VWind"] = xr.DataArray(
    data=np.zeros((tdim, ydim, xdim)),
    coords=[ds_fields.time, ds_fields.latitude, ds_fields.longitude],
)

fieldset = parcels.FieldSet.from_copernicusmarine(ds_fields)

# Set unit converters for custom wind fields
fieldset.UWind.units = parcels.GeographicPolar()
fieldset.VWind.units = parcels.Geographic()
```

Now we define a wind kernel that uses a forward Euler method to apply the wind forcing. Note that we update the `particles.dlon` and `particles.dlat` variables, rather than `particles.lon` and `particles.lat` directly.

```{code-cell}
def wind_kernel(particles, fieldset):
    particles.dlon += fieldset.UWind[particles] * particles.dt
    particles.dlat += fieldset.VWind[particles] * particles.dt
```

First, run a simulation where we apply the kernels as `[AdvectionRK4, wind_kernel]`:

```{code-cell}
:tags: [hide-output]

npart = 10
z = np.repeat(ds_fields.depth[0].values, npart)
lons = np.repeat(31, npart)
lats = np.linspace(-32.5, -30.5, npart)

pset = parcels.ParticleSet(fieldset, pclass=parcels.Particle, z=z, lat=lats, lon=lons)
output_file = parcels.ParticleFile(
    store="advection_then_wind.zarr", outputdt=np.timedelta64(6, "h")
)
pset.execute(
    [parcels.kernels.AdvectionRK4, wind_kernel],
    runtime=np.timedelta64(5, "D"),
    dt=np.timedelta64(1, "h"),
    output_file=output_file,
)
```

Then also run a simulation where we apply the kernels in the reverse order, as `[wind_kernel, AdvectionRK4]`:

```{code-cell}
:tags: [hide-output]

pset_reverse = parcels.ParticleSet(
    fieldset, pclass=parcels.Particle, z=z, lat=lats, lon=lons
)
output_file_reverse = parcels.ParticleFile(
    store="wind_then_advection.zarr", outputdt=np.timedelta64(6, "h")
)
pset_reverse.execute(
    [wind_kernel, parcels.kernels.AdvectionRK4],
    runtime=np.timedelta64(5, "D"),
    dt=np.timedelta64(1, "h"),
    output_file=output_file_reverse,
)
```

Finally, plot the trajectories to show that they are identical in the two simulations.

```{code-cell}
# Plot the resulting particle trajectories, overlapped for both cases
advection_then_wind = xr.open_zarr("advection_then_wind.zarr")
wind_then_advection = xr.open_zarr("wind_then_advection.zarr")
plt.plot(wind_then_advection.lon.T, wind_then_advection.lat.T, "-")
plt.plot(advection_then_wind.lon.T, advection_then_wind.lat.T, "--", c="k", alpha=0.7)
plt.show()
```

```{warning}
It is better not to update `particles.lon` directly in a Kernel, as it can interfere with the loop above. Assigning a value to `particles.lon` in a Kernel will raise a warning.

Instead, update `particles.dlon`.
```

docs/user_guide/examples/tutorial_Argofloats.ipynb

Lines changed: 7 additions & 22 deletions

@@ -30,54 +30,44 @@
 "driftdepth = 1000  # maximum depth in m\n",
 "maxdepth = 2000  # maximum depth in m\n",
 "vertical_speed = 0.10  # sink and rise speed in m/s\n",
-"cycletime = (\n",
-"    10 * 86400\n",
-")  # total time of cycle in seconds  # TODO update to \"timedelta64[s]\"\n",
+"cycletime = 10 * 86400  # total time of cycle in seconds\n",
 "drifttime = 9 * 86400  # time of deep drift in seconds\n",
 "\n",
 "\n",
 "def ArgoPhase1(particles, fieldset):\n",
-"    dt = particles.dt / np.timedelta64(1, \"s\")  # convert dt to seconds\n",
-"\n",
 "    def SinkingPhase(p):\n",
 "        \"\"\"Phase 0: Sinking with vertical_speed until depth is driftdepth\"\"\"\n",
-"        p.dz += vertical_speed * dt\n",
+"        p.dz += vertical_speed * particles.dt\n",
 "        p.cycle_phase = np.where(p.z + p.dz >= driftdepth, 1, p.cycle_phase)\n",
 "        p.dz = np.where(p.z + p.dz >= driftdepth, driftdepth - p.z, p.dz)\n",
 "\n",
 "    SinkingPhase(particles[particles.cycle_phase == 0])\n",
 "\n",
 "\n",
 "def ArgoPhase2(particles, fieldset):\n",
-"    dt = particles.dt / np.timedelta64(1, \"s\")  # convert dt to seconds\n",
-"\n",
 "    def DriftingPhase(p):\n",
 "        \"\"\"Phase 1: Drifting at depth for drifttime seconds\"\"\"\n",
-"        p.drift_age += dt\n",
+"        p.drift_age += particles.dt\n",
 "        p.cycle_phase = np.where(p.drift_age >= drifttime, 2, p.cycle_phase)\n",
 "        p.drift_age = np.where(p.drift_age >= drifttime, 0, p.drift_age)\n",
 "\n",
 "    DriftingPhase(particles[particles.cycle_phase == 1])\n",
 "\n",
 "\n",
 "def ArgoPhase3(particles, fieldset):\n",
-"    dt = particles.dt / np.timedelta64(1, \"s\")  # convert dt to seconds\n",
-"\n",
 "    def SecondSinkingPhase(p):\n",
 "        \"\"\"Phase 2: Sinking further to maxdepth\"\"\"\n",
-"        p.dz += vertical_speed * dt\n",
+"        p.dz += vertical_speed * particles.dt\n",
 "        p.cycle_phase = np.where(p.z + p.dz >= maxdepth, 3, p.cycle_phase)\n",
 "        p.dz = np.where(p.z + p.dz >= maxdepth, maxdepth - p.z, p.dz)\n",
 "\n",
 "    SecondSinkingPhase(particles[particles.cycle_phase == 2])\n",
 "\n",
 "\n",
 "def ArgoPhase4(particles, fieldset):\n",
-"    dt = particles.dt / np.timedelta64(1, \"s\")  # convert dt to seconds\n",
-"\n",
 "    def RisingPhase(p):\n",
 "        \"\"\"Phase 3: Rising with vertical_speed until at surface\"\"\"\n",
-"        p.dz -= vertical_speed * dt\n",
+"        p.dz -= vertical_speed * particles.dt\n",
 "        p.temp = fieldset.thetao[p.time, p.z, p.lat, p.lon]\n",
 "        p.cycle_phase = np.where(p.z + p.dz <= fieldset.mindepth, 4, p.cycle_phase)\n",
 "        p.dz = np.where(\n",

@@ -100,8 +90,7 @@
 "\n",
 "\n",
 "def ArgoPhase6(particles, fieldset):\n",
-"    dt = particles.dt / np.timedelta64(1, \"s\")  # convert dt to seconds\n",
-"    particles.cycle_age += dt  # update cycle_age"
+"    particles.cycle_age += particles.dt  # update cycle_age"
 ]
 },
 {

@@ -211,11 +200,7 @@
 "x = ds_particles[\"lon\"][:].squeeze()\n",
 "y = ds_particles[\"lat\"][:].squeeze()\n",
 "z = ds_particles[\"z\"][:].squeeze()\n",
-"time = (\n",
-"    (ds_particles[\"time\"][:].squeeze() - fieldset.time_interval.left).astype(float)\n",
-"    / 1e9\n",
-"    / 86400\n",
-")  # convert time to days\n",
+"time = ds_particles[\"time\"][:].squeeze()\n",
 "temp = ds_particles[\"temp\"][:].squeeze()"
 ]
 },
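Most of these changes remove the manual `dt = particles.dt / np.timedelta64(1, "s")` conversion, since the kernels can now work with `particles.dt` directly. The division idiom itself is standard NumPy: dividing a `timedelta64` by a one-unit `timedelta64` yields a plain float in that unit. A small standalone illustration:

```python
import numpy as np

dt = np.timedelta64(3, "h")  # a 3-hour timestep

# Dividing by a one-second timedelta converts to float seconds
seconds = dt / np.timedelta64(1, "s")  # 10800.0

# The same idiom converts datetime differences, e.g. to days
elapsed = np.datetime64("2024-01-11") - np.datetime64("2024-01-01")
days = elapsed / np.timedelta64(1, "D")  # 10.0

print(seconds, days)
```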
