ElementTriN1 on different length-scales #1016
Comments
Perhaps. Quick comment: Sometimes in solid mechanics you are able to use different length units to solve issues such as this.
Thanks! At the moment I scale all length units (i.e. the size of the mesh and the wavelength) to be on the order of 1; effectively I just leave out the μ in μm for all values. Might there be some minor error in the implementation of ElementTriN1? Is it expected that the values for u_n change a lot while the values for u_p stay constant? And do you maybe know of any literature which addresses such a problem?
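Concretely, that rescaling is a one-liner with Mesh.scaled; a small sketch with illustrative factors (a micrometer-sized geometry rescaled to O(1) coordinates):

```python
from skfem import MeshTri

# unit square shrunk to ~1 um across, i.e. coordinates kept in meters
mesh_um = MeshTri().scaled([1e-6, 1e-6])

# drop the "mu" in "um": rescale so that all lengths are O(1)
mesh_nd = mesh_um.scaled([1e6, 1e6])
print(mesh_nd.p.max())  # ~1.0 again
```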
Perhaps not an 'error', but there are usually several 'equivalent' options for implementing a finite element. What I mean is that there are different sets of basis functions that span the same polynomial space; but since we use floats and not reals, they are not equivalent in practice.

A simple example is the one-dimensional P2 element. We can use (1 - x, x, (1 - x)x), or we can use the ones listed here: https://github.com/kinnala/scikit-fem/blob/master/skfem/element/element_line/element_line_p2.py The former type of basis is used when the polynomial degree changes from element to element and a high degree is needed in some parts of the domain.

What I'm saying is that there are most likely different 'equivalent' (in the sense of reals but not floats) ways to write down the basis for N1 which could be more robust against such small elements. Literature is usually rather quiet on such topics, so it might be worthwhile to check other FEM codes and see how they write the basis there.
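As a quick illustration of 'equivalent over the reals but not over the floats', here is a standalone numpy check (not scikit-fem code; the sample points are arbitrary) that the hierarchical and the nodal P2 bases on [0, 1] are related by an invertible change of basis:

```python
import numpy as np

x = np.linspace(0., 1., 5)

# hierarchical basis: {1 - x, x, (1 - x) x}
hier = np.array([1. - x, x, (1. - x) * x])

# Lagrange basis with nodes at 0, 1 and 1/2, as in element_line_p2.py
lagr = np.array([2. * (x - .5) * (x - 1.),
                 2. * x * (x - .5),
                 4. * x * (1. - x)])

# same span: lagr = T @ hier for an invertible 3x3 matrix T
T = np.linalg.lstsq(hier.T, lagr.T, rcond=None)[0].T
print(np.allclose(T @ hier, lagr))  # True up to rounding; in floats the
                                    # two choices can round differently
```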
BTW did you test with
What I'd like to see here is some sort of minimal example of the issue.
I'll now try to study your first example and see if I can spot anything amiss.
All those elements

While looking for some hints on the subject I found this reference, which points out an interesting problem related to these elements that I hadn't heard about: https://tigerprints.clemson.edu/cgi/viewcontent.cgi?article=2867&context=all_theses

Nevertheless, that is not related to the 'large values' you are describing. I think those are simply caused by the 1/det(J) scaling in the Piola transformation: the Jacobian determinant gets smaller for smaller elements and consequently the basis function values get larger.
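For illustration, the effect can be reproduced with a few lines of numpy; this is a toy sketch of the covariant Piola map (the one used for H(curl) bases) under a uniform affine scaling, not scikit-fem internals:

```python
import numpy as np

Ehat = np.array([1., 1.])           # some reference-element basis value
for s in (1., 1e-3, 1e-6):
    J = s * np.eye(2)               # Jacobian of the map x = s * xhat
    E = np.linalg.inv(J).T @ Ehat   # covariant Piola: E = J^{-T} Ehat
    print(s, np.linalg.det(J), np.abs(E).max())
# det(J) shrinks like s**2 while the mapped basis values grow like 1/s
```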
Thanks! Sad to hear that the literature is so quiet on this stuff; it seems quite relevant for actually running the simulations (?). I've modified the waveguide cutoff example:

```python
"""Waveguide cutoff analysis."""
import numpy as np
from skfem import *
from skfem.helpers import *
# three different mesh and element types
mesh_elem = [
    (
        MeshTri.init_tensor(np.linspace(0, 1, 40),
                            np.linspace(0, .5, 20)),
        ElementTriN1() * ElementTriP1(),
    ),
    (
        MeshQuad.init_tensor(np.linspace(0, 1, 40) ** 0.9,
                             np.linspace(0, .5, 20)),
        ElementQuadN1() * ElementQuad1(),
    ),
    (
        MeshTri.init_tensor(np.linspace(0, 1, 20),
                            np.linspace(0, .5, 10)),
        ElementTriN2() * ElementTriP2(),
    ),
]
mesh, elem = mesh_elem[0]

failing = True
errors = []
scale_factors = np.linspace(-1, 8)

for scale_factor in scale_factors:
    scale = 10 ** (-scale_factor)
    mesh_scaled = mesh.scaled([scale] * 2)
    basis = Basis(mesh_scaled, elem)

    epsilon = lambda x: 1. + 0. * x[0]
    # epsilon = lambda x: 3 * (x[1] < 0.25) + 1
    one_over_u_r = 1

    @BilinearForm
    def aform(E, lam, v, mu, w):
        return one_over_u_r * curl(E) * curl(v) * (1 if failing else scale ** 2)

    @BilinearForm
    def gauge(E, lam, v, mu, w):
        # set div E = 0 using a Lagrange multiplier
        return dot(grad(lam), v) + dot(E, grad(mu))

    @BilinearForm
    def bform(E, lam, v, mu, w):
        return epsilon(w.x) * dot(E, v) / (scale ** 2 if failing else 1)

    A = aform.assemble(basis)
    B = bform.assemble(basis)
    C = gauge.assemble(basis)

    lams, xs = solve(*condense(A + C, B, D=basis.get_dofs()),
                     solver=solver_eigen_scipy_sym(k=6))

    # compare against analytical eigenvalues
    err1 = np.abs(lams[0] - np.pi ** 2)
    err2 = np.abs(lams[1] - 4. * np.pi ** 2)
    err3 = np.abs(lams[2] - 4. * np.pi ** 2)

    if __name__ == "__main__":
        print('TE10 error: {}'.format(err1))
        print('TE01 error: {}'.format(err2))
        print('TE20 error: {}'.format(err3))

    errors.append((err1, err2, err3))
if __name__ == "__main__":
    import matplotlib.pyplot as plt

    errors = np.array(errors)
    plt.xlabel('scale factor')
    plt.ylabel('error')
    plt.plot(scale_factors, errors)
    plt.show()

    fig, axs = plt.subplots(4, 1)
    for itr in range(4):
        (E, Ebasis), _ = basis.split(xs[:, itr])
        Ei = Ebasis.interpolate(E)
        plotbasis = Ebasis.with_element(ElementDG(ElementTriP2()))
        Emag = plotbasis.project(np.sqrt(dot(Ei, Ei)))
        plotbasis.plot(Emag,
                       colorbar=True,
                       ax=axs[itr],
                       shading='gouraud',
                       nrefs=2)
    plt.show()
```

It only fails for

For this case it's nice that we can scale both subsystems differently, since the Lagrange multipliers go to zero anyway; in the general case this wouldn't be possible, right?
Okay, I just realized that I can simply scale both subsystems independently: I just multiply every v_z and u_z by that scale and I'm done.
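If I read that correctly, in terms of the forms from the posted example the fix amounts to roughly the following sketch (`scale` and the form signature are the ones from the example above; the substitution is my reading of "multiply every v_z and u_z by that scale"):

```python
from skfem import BilinearForm
from skfem.helpers import dot, grad

@BilinearForm
def gauge(E, lam, v, mu, w):
    # substitute lam -> scale * lam and mu -> scale * mu: this brings the
    # multiplier block to a magnitude comparable to the other blocks, and
    # the physical eigenpairs are unchanged since their multipliers vanish
    return scale * (dot(grad(lam), v) + dot(E, grad(mu)))
```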
The ElementTriN1() * ElementTriP1() combination works fine for me as long as the length-scale of the system is ~1; if I change it to the order of ~1e-6, the results become imprecise. This seems to happen especially for solutions which have non-zero values in both subspaces.
I've had a look at the assembly:
The first assembly has values like
while the second one is like
Could this be the problem: that there are orders of magnitude between the values, so that eigs cannot solve the system precisely?
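As a toy illustration of this suspicion (scipy, not the actual assembled matrices), here is a 2x2 generalized eigenproblem whose exact eigenvalues are 1 and 2 for every magnitude gap s, while the computed accuracy degrades as s grows:

```python
import numpy as np
from scipy.linalg import eigh

theta = 0.3  # rotate so the differently scaled directions couple
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

for s in (1., 1e6, 1e12):
    A = Q.T @ np.diag([1., 2. * s]) @ Q  # "stiffness"-like block pair
    B = Q.T @ np.diag([1., s]) @ Q       # "mass"-like block pair
    lams = np.sort(eigh(A, B, eigvals_only=True))
    print(f"s = {s:.0e}  max error = {np.abs(lams - [1., 2.]).max():.2e}")
```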