Error in heat PDE implementation of Part3 L2C #211
Replies: 2 comments 1 reply
-
Hi @AliaNajwaMY! Thank you for using Neuromancer.

Regarding the runtime error, I believe the issue is in the DictDataset. Try adding requires_grad=True there, i.e.,

train_data = DictDataset({'x': torch.tensor(x1_init, dtype=torch.float32, requires_grad=True),  # sampled initial conditions of states
                          'r': batched_ref}, name='train')
dev_data = DictDataset({'x': torch.rand(n_samples, 1, nx, dtype=torch.float32, requires_grad=True),  # sampled initial conditions of states
                        'r': batched_ref}, name='dev')

Regarding the loss not changing, I believe the cause is the lack of dependence on your control variable u in the pde_equations function. I.e., try adding this to pde_equations:

dydt[:, 0] = u.squeeze(-1)

to enforce your boundary condition. This will give u an effect on the predicted variables, so its gradient will be non-zero, backpropagation can proceed, and the loss should no longer be constant.

Please let me know if that helps!

Best wishes,
Bruno
-
Hello Bruno,

Thanks for your reply. From what I understand, doing that enforces dy/dt = u at the boundary, which does not actually satisfy the Dirichlet condition y = u at the boundary. Looking at the Neuromancer documentation, the constraints seem to be defined only as bounds (i.e., max/min values of u, x, or ref). How can I use the constraints to define my boundary conditions instead?

Alia (she/her)
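To make the two symptoms discussed in this thread concrete, here is a minimal plain-PyTorch sketch (illustrative names, not Neuromancer or the poster's code) showing (a) the RuntimeError when nothing in the graph requires grad, and (b) why a control that never enters the loss leaves the loss constant, plus how coupling it into the boundary row restores the gradient:

```python
import torch

# (a) If nothing in the graph requires grad, backward() raises the reported
#     "element 0 of tensors does not require grad and does not have a grad_fn".
x = torch.zeros(3)                      # requires_grad defaults to False
try:
    (x ** 2).sum().backward()
except RuntimeError as e:
    print("backward failed:", e)

# (b) If the control u never enters the loss computation, backward() never
#     reaches it: u.grad stays None, so optimization cannot change the loss.
u = torch.tensor([1.0], requires_grad=True)
y = torch.ones(3, requires_grad=True)
dydt = -y                               # dynamics that ignore u entirely
loss = ((y + 0.1 * dydt) ** 2).sum()
loss.backward()
print(u.grad)                           # None: u has no effect on the loss

# Coupling u into the boundary row (as suggested above) restores the link:
y2 = torch.ones(3, requires_grad=True)
dydt2 = torch.cat([u, -y2[1:]])         # dydt2[0] = u, interior unchanged
loss2 = ((y2 + 0.1 * dydt2) ** 2).sum()
loss2.backward()
print(u.grad)                           # now a nonzero gradient
```

The last two steps show why the suggested dydt[:, 0] = u.squeeze(-1) unblocks training: once u appears in the computation that produces the loss, autograd can propagate a nonzero gradient back to it.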
-
Hello! I am trying to train the control $u$ of my heat equation below:
$$y_t - ay_{xx} - by_x - cy = 0, \quad (x,t) \in [0,1] \times [0,1]$$
$$y(0,t) = 0, \quad y(1,t) = u(t)$$
$$y(x,0) = \sin(\pi x)$$
$$r(x,t) = 0.1 \sin(\pi x) \cos(\pi t)$$
using FDM, and then solving the resulting ODE at each x-direction grid point. However, I keep encountering the error (RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn), which happens because the output [loss] in my training process has requires_grad = False.
Setting requires_grad = True doesn't help, since it makes my loss constant for every output.
I have also checked the computation graph for disconnected parts, but it looks fine. What could be the issue here? My code is below; please let me know if anything is unclear: