Loss of accuracy for modest N? #52

Closed
AshtonSBradley opened this issue Jun 9, 2017 · 24 comments

Comments

@AshtonSBradley

I see a sudden loss of accuracy beyond N = 26 using gausslaguerre.jl. I use ApproxFun.jl to create a Laguerre polynomial and then evaluate it at the quadrature points to check the normalisation of the weighted Laguerre polynomials:

using FastGaussQuadrature, ApproxFun, Combinatorics
M = 27
α = 0.
x, w = gausslaguerre(M, α)
n = 0:M-1
Lnorm = exp.(lgamma.(n .+ α .+ 1) .- lgamma.(n .+ 1))
c = zeros(M)
c[end] = 1.
# check the norm via coefficients and via quadrature:
# coefficients
N1 = sum(abs.(c).^2)  # = 1.0

# quadrature
L = Fun(Laguerre(α), c ./ sqrt.(Lnorm))
N2 = sum(w .* abs.(L.(x)).^2)

If M<=26, accuracy is excellent (N2=1.0 to 14 figures). At M=27, N2=0.88929

I have taken the same approach with gausshermite.jl and the results are very well behaved (although accuracy appears to reach a global minimum around M=30). I have tried the different ways to call gausslaguerre.jl using the METHOD argument, but this hasn't improved the result. Any suggestions greatly appreciated!
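One way to separate the two possible culprits (the quadrature rule itself versus ApproxFun's polynomial evaluation) is to test the rule on monomials, whose integrals against e^(-x) on [0, ∞) are known in closed form: the j-th moment is j!. A minimal sketch, assuming α = 0:

```julia
using FastGaussQuadrature

# An n-point Gauss-Laguerre rule integrates polynomials of degree ≤ 2n-1
# exactly against e^(-x), so sum(w .* x.^j) should reproduce j!.
x, w = gausslaguerre(27)
relerr(j) = abs(sum(w .* x.^j) - factorial(j)) / factorial(j)
maximum(relerr(j) for j in 0:20)
```

If this maximum is near machine epsilon, the nodes and weights are fine and the problem lies in evaluating L.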

@dlfivefifty
Member

I believe this may be due to round-off error:

julia> L(x[end])^2
6.058613695335461e10

@dlfivefifty
Member

Hmm, sorry, that was silly:

julia> w[end]*L(x[end])^2
4.349928550211968e-29

Perhaps Golub-Welsch is losing accuracy for small weights? @popsomer

@dlfivefifty
Member

OK, there's a problem with ApproxFun's evaluation:

julia> L(x[end])
246142.51350255325

# LaguerreL[27, 92.80261352555698] returns 247580.44444444444 in mathematica

@dlfivefifty
Member

Hmm, there seems to be some fundamental stability issues:

LaguerreL[27, SetPrecision[92.80261352555698, 100]] // N # returns 254540.90751235274` 

@AshtonSBradley
Author

Is it possible to do the recursion in log-space?

@popsomer
Contributor

popsomer commented Jun 9, 2017

A recursion in log-space can be avoided by using a recurrence for the orthonormal Laguerre polynomials at low n, which reduces the stability issues. For normalised Hermite polynomials, this is H_0^{\text{norm}}(x) = \pi^{-1/4}, \quad H_{-1}^{\text{norm}}(x) = 0, \quad H_j^{\text{norm}}(x) = \frac{2x H_{j-1}^{\text{norm}}(x)}{\sqrt{2j}} - \frac{(j-1) H_{j-2}^{\text{norm}}(x)}{\sqrt{j(j-1)}}

@dlfivefifty Yes, I do think Golub-Welsch loses accuracy, also because the weights w[end-k:end-1] are exactly zero, where k increases from 1 at n = 30 to 63 at n = 127, which is long before they should underflow. One option to counter this would be to use the (forward) recurrence relation to compute the polynomials for low n. For high n, I have almost finished explicit asymptotic expansions with ninth-order convergence in n for the nodes and weights near the hard edge and in the bulk. These are quite short, extremely fast, and give about double precision from n = 100.
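The normalised Hermite recurrence above can be coded directly. A sketch (hermite_orthonormal is a hypothetical helper name, not part of either package):

```julia
# Orthonormal Hermite polynomials for the weight e^(-x^2):
# h_0 = π^(-1/4), h_{-1} = 0, h_j = x*sqrt(2/j)*h_{j-1} - sqrt((j-1)/j)*h_{j-2}
function hermite_orthonormal(n, x)
    hjm1 = zero(float(x))             # h_{-1}
    hj = pi^(-0.25) * one(float(x))   # h_0
    for j in 1:n
        hjm1, hj = hj, x*sqrt(2/j)*hj - sqrt((j - 1)/j)*hjm1
    end
    hj
end
```

Because each step divides by sqrt(2j), the iterates stay O(1) in the oscillatory region instead of growing like the unnormalised H_j, which is what defers overflow.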

@AshtonSBradley
Author

AshtonSBradley commented Jun 9, 2017

A related comment: I was led to the above approach because it worked well for Hermite to use the recursion in Fun(Hermite(), c./sqrt(Hnorm)), where I evaluate Hnorm using BigFloat to get past overflow for moderate N, and thus generate orthonormal eigenstates of the harmonic oscillator Schroedinger equation. The overflow can also be handled with your comment @popsomer, or by going into log-space. However, I thought it was quite nice to be able to just use BigFloat on the standard recursion and get back terms that can be used to build an accurate hermitetransform up to order ~450 without hitting overflow. As it is written, I think hermitetransform.jl of ApproxFun.jl will overflow for N ~ 100 for the same reason. Coming back to Laguerre(), the norm doesn't blow up, but the recursion itself seems unstable as currently implemented, if one wants to evaluate Fun(Laguerre(), c./sqrt(Lnorm)) for any moderate N (~30). BigFloat doesn't seem to help. Any scope for a stable recursion to large N for Laguerre() in ApproxFun?

@AshtonSBradley
Author

AshtonSBradley commented Jun 10, 2017

Ok, I have just checked my old log-space code for evaluating Laguerre polynomials, trying BigFloat a bit more carefully, and I get the same result (in agreement with @dlfivefifty's Mathematica check). I have to say I am a bit surprised that this becomes unstable at such low degree. Not sure what the way forward is here...
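For reference, a log-space evaluation can be sketched by carrying log|L_k| and the sign through the classical recurrence (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1}. As the comment above observes, though, this gives the same result: the cancellation happens inside the O(1) update t, not in the magnitudes, so rescaling cannot cure it. laguerre_logspace is an illustrative name only, not from the thread's code:

```julia
# Evaluate L_n(x) (α = 0) as sign * exp(logabs), rescaling each step by the
# largest magnitude so only the O(1) combination t is formed in linear space.
function laguerre_logspace(n, x)
    lkm1, skm1 = -Inf, 1.0   # log|L_{-1}|, sign placeholder
    lk, sk = 0.0, 1.0        # L_0 = 1
    for k in 0:n-1
        m = max(lk, lkm1)
        t = ((2k + 1 - x)*sk*exp(lk - m) - k*skm1*exp(lkm1 - m))/(k + 1)
        lkm1, skm1 = lk, sk
        lk, sk = m + log(abs(t)), sign(t)
    end
    lk, sk   # L_n(x) ≈ sk * exp(lk)
end
```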

@popsomer
Contributor

I made a pull request #53 where the recurrence relation for the orthonormal polynomials and their derivatives is implemented, also for BigFloats. Those functions laguerreRec and laguerreRecDer do not seem to be that unstable: maybe they can help you? For high n, I recommend using the (unnormalised) asymptotic expansions with T terms in polyAsyRHgen(n, x, alpha, T, 1.0, 1, getUQ(alpha, 1.0, 1, T))

@AshtonSBradley
Author

Thanks for this, I'll check it out.

@dlfivefifty
Member

It would be good if there was a paper explaining what's going on...

@popsomer
Contributor

popsomer commented Jun 25, 2017

The recurrence relation for the orthonormal polynomials and their derivatives was derived from the recurrence relation for standard associated Laguerre polynomials after accounting for normalisation. The routines used to compute asymptotic expansions of (possibly generalised) Laguerre polynomials are explained in the paper
"D. Huybrechs and P. Opsomer. Construction and implementation of asymptotic expansions for Laguerre-type orthogonal polynomials. IMA J. Numer. Anal., 2017, accepted.",
of which a pre-review version is available at https://arxiv.org/abs/1612.07578, http://www.cs.kuleuven.be/publicaties/rapporten/tw/TW676.abs.html or http://nines.cs.kuleuven.be/software/LAGUERRE/ . The combination of these expansions into a Newton method with heuristics, and the computation of asymptotic expansions of the nodes and weights in the function laguerreExp, will be explained in
"D. Huybrechs and P. Opsomer. Arbitrary-order asymptotic expansions of generalized Gaussian quadrature rules, 2017, in preparation."

@AshtonSBradley
Author

@popsomer any update on the article? I have implemented a stable recursion on the orthonormal eigenfunctions of the 2D quantum harmonic oscillator Hamiltonian. But if I try to use Gauss-Laguerre quadrature to compute the normalisation integral, I run into:

using FastGaussQuadrature
M=27
x,w=gausslaguerre(M)
w
27-element Array{Float64,1}:
 0.128046   
 0.238419   
 0.250493   
 0.190662   
 0.112311   
 0.0525394  
 0.0197409  
 0.00598411 
 0.00146415 
 0.000288498
 4.55765e-5 
 5.73576e-6 
 5.7021e-7
 ⋮
 1.20895e-10
 4.09626e-12
 1.00835e-13
 1.74905e-15
 2.05597e-17
 1.5563e-19 
 7.08127e-22
 1.75747e-24
 2.05704e-27
 8.94742e-31
 0.0        
 7.17974e-40

where I presume the zero is telling, and then major accuracy loss for any quadrature rule of this or higher order. I am really interested in your solution to this problem!

@popsomer
Contributor

popsomer commented Dec 24, 2017

I hope to submit the article on quadrature in February, after I finish the Gauss-Jacobi case as well.

The zero in your weights might originate in the Golub-Welsch algorithm; I have changed my pull request to use GW for n < 128, but to replace the low weights (< 1e-10) by the ones given by the recurrence relation. If your integrand for the 2D quantum harmonic oscillator increases quite rapidly, then contributions from high x may be significant. Those weights then have to be known accurately, and you could try to get more digits by doing the same modification but using BigFloats:

alpha = big(0.0); M = 27
x, w = gausslaguerre(M, Float64(alpha), "GW")
ixs = find(t -> t < 1e-10, w)
for k in ixs
    (~, pder) = lagpnRecDer(M, alpha, big(x[k]))
    (psh, ~) = lagpnRecDer(M-1, alpha, big(x[k]))
    w[k] = (M^2 + alpha*M)^(-1/2)/psh/pder
end

or even convert the GW algorithm to BigFloats.

I also added some higher order terms in the explicit expressions for higher accuracy at high n, like in chebfun #2200 @ajt60gaibb .

@AshtonSBradley
Author

Is lagpnRecDer available for testing?

@popsomer
Contributor

popsomer commented May 9, 2018

Yes, at popsomer:add-GL-nodeWeightExpansions, see pull request #53 and also chebfun pull 2200

@AshtonSBradley
Author

Is there any way I can help this merge?

@AshtonSBradley
Author

Thanks for the links, @popsomer . For Julia 1.0, I have updated your gausslaguerre code (just some small things to do with type handling), and it seems to work well aside from a couple of special cases at low n.

Below is the updated code, and the cases where it returns zero values for w at small n (I haven't tracked down what is failing there yet):

using FastGaussQuadrature, SpecialFunctions

"Compute Gauss-Laguerre rule based on the recurrence relation, using Newton iterations on an initial guess."
function laguerreRec(n, alpha=0.0; reduced = false)
    T = typeof(float(alpha))

    n_alloc = reduced ? estimate_reduced_n(n, alpha) : n
    w = zeros(T, n_alloc)
    x = zeros(T, n_alloc)

    # We compute up to 7 starting values for the Newton iterations
    n_pre = min(n, 7)

    nu = 4n + 2alpha + 2
    bes = besselroots(alpha, n_pre).^2 / nu # this is a lower bound by [DLMF 18.16.10]
    x[1:n_pre] = bes

    local pn_deriv
    noUnderflow = true      # this flag turns false once the weights start to underflow
    for k in 1:n
        if k > n_pre
            # Use sextic extrapolation for a new initial guess
            x[k] = 7*x[k-1] -21*x[k-2] +35*x[k-3] -35*x[k-4] +21*x[k-5] -7*x[k-6] +x[k-7]
        end

        step = x[k]
        ov = floatmax(T) # Previous/old value
        ox = x[k] # Old x

        l = 0 # Newton-Raphson iteration number
        max_iter = 20
        while (abs(step) > 40eps(T)*x[k]) && (l < max_iter)
            l = l + 1
            pn, pn_deriv = lagpnRecDer(n, alpha, x[k])
            if abs(pn) >= abs(ov)*(1-50eps(T))
                # The function values do not decrease enough any more due to roundoff errors.
                x[k] = ox # Set to the previous value and quit.
                break
            end
            step = pn / pn_deriv
            ox = x[k]
            x[k] = x[k] - step
            ov = pn
        end
        if ( x[k] < 0 ) || ( x[k] > 4*n + 2*alpha + 2 ) || ( l == max_iter ) || ( ( k != 1 ) && ( x[k - 1] >= x[k] ) )
            @warn "Newton method may not have converged in laguerreRec($n,$alpha)."
        end

        # Compute the weight
        if noUnderflow
            pn_prev, ~ = lagpnRecDer(n-1, alpha, x[k])
            w[k] = (n^2 +alpha*n)^(-1/2)/pn_prev/pn_deriv
        end
        if noUnderflow && (w[k] < underflow_threshold(T))
            # From here on, weights will no longer be computed
            noUnderflow = false
        end

        if reduced
            if (k > 1) && !noUnderflow
                x = x[1:k-1]
                w = w[1:k-1]
                return x, w
            end
            if k == n_alloc
                # We have to allocate a bigger array
                n_alloc *= 2
                x1 = x
                w1 = w
                x = zeros(T, n_alloc)
                w = zeros(T, n_alloc)
                x[1:k] = x1
                w[1:k] = w1
            end
        end
    end
    x, w
end

"""
Evaluate the orthonormal associated Laguerre polynomial with positive leading coefficient,
as well as its derivative, in the point x using the recurrence relation.
"""
function lagpnRecDer(n, alpha, x)
    T = promote_type(typeof(float(alpha)), typeof(float(x)))
    pnprev = zero(T)
    pn = 1/sqrt(gamma(alpha+1) )
    pndprev = zero(T)
    pnd = zero(T)
    for k in 1:n
        pnold = pn
        pn = (x -2*k -alpha+1)/sqrt(k*(alpha+k))*pnold-sqrt((k-1+alpha)*(k-1)/k/(k+alpha))*pnprev
        pnprev = pnold
        pndold = pnd
        pnd = (pnold+(x-2*k-alpha+1)*pndold)/sqrt(k*(alpha+k)) -sqrt((k-1+alpha)*(k-1)/k/(alpha+k))*pndprev
        pndprev = pndold
    end
    pn, pnd
end

# Estimate for the number of weights that don't underflow
# This is only an estimate, not an upper bound, so code has to be able to cope
# with the estimate being off
estimate_reduced_n(n, alpha) = round(typeof(n), min(17*sqrt(n), n))

# Our threshold for deciding on underflow
underflow_threshold(x) = underflow_threshold(typeof(x))
underflow_threshold(::Type{T}) where {T <: AbstractFloat} = 10floatmin(T)

# Tests
#works up to underflow:
x,w = laguerreRec(191)
x,w = laguerreRec(192)
x,w = laguerreRec(120,.1)

# currently returns incorrect wi = 0 values for small even numbers 8, 10, 12.
# If alpha ≠ 0, it can also give incorrect wi = 0 values for n = 9, 11
x,w = laguerreRec(8)
x,w = laguerreRec(9,2.)
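As a further sanity check on the recurrence in lagpnRecDer: for α = 0 the classical Laguerre polynomials already have unit norm, and the leading coefficient of L_n carries a sign (-1)^n, so the orthonormal polynomial with positive leading coefficient is p_n(x) = (-1)^n L_n(x). A self-contained sketch (both helper names below are illustrative, not part of the package):

```julia
# Classical recurrence: (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1}
function laguerre_classical(n, x)
    Lkm1, Lk = one(x), 1 - x
    n == 0 && return Lkm1
    for k in 1:n-1
        Lkm1, Lk = Lk, ((2k + 1 - x)*Lk - k*Lkm1)/(k + 1)
    end
    Lk
end

# lagpnRecDer's recurrence specialised to α = 0 (values only)
function laguerre_orthonormal(n, x)
    pkm1, pk = zero(x), one(x)
    for k in 1:n
        pkm1, pk = pk, (x - 2k + 1)/k*pk - (k - 1)/k*pkm1
    end
    pk
end

xtest = 3.7
for n in 0:10
    @assert isapprox(laguerre_orthonormal(n, xtest),
                     (-1)^n * laguerre_classical(n, xtest);
                     rtol = 1e-10, atol = 1e-10)
end
```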

@popsomer
Contributor

popsomer commented Apr 7, 2019

Thanks! @ajt60gaibb : Maybe this can be merged with the other changes?

An update on the article: @daanhb submitted it recently. We can put it on arXiv if you would want to.

@dlfivefifty
Member

Do you mean merge #69 ? Can we get the coverage back up? If you click on the coverage links it should show which lines are missing.

@AshtonSBradley
Author

I'm not sure how best to improve this, but it looks like it went down quite a bit due to adding one line

const floatmin = realmin

to FastGaussQuadrature.jl

because line 17 of that file

repeat(A...; kwds...) = Base.repeat(A...; kwds...)

doesn't have good coverage, and by adding a line, the proportional coverage went down further.

So the only way to fix this would seem to be to add a test for line 17, or to accept that the loss of coverage isn't important?

@dlfivefifty
Member

OK it's fine, I'll merge.

@hyrodium
Collaborator

Is this issue still active?

@AshtonSBradley
Author

seems to be fixed, yay!

julia> x, w = gausslaguerre(40);
julia> w
40-element Vector{Float64}:
 0.08841210619030318
 0.1768147390957118
 0.21136311701596833
 0.19408119531861268
 0.14643428242412265
 0.09332679843577069
 0.05093220436104459
 0.023976193015684367
 0.009774625246714422
 0.003457939993018473
 ⋮
 1.5217354931814558e-31
 4.585291614502648e-34
 8.762158657486055e-37
 9.82741572514546e-40
 5.801152019169776e-43
 1.5309086846066978e-46
 1.381986305649325e-50
 2.566633605012361e-55
 2.700360940217009e-61
