
Multivariate functions #185

Merged
merged 31 commits into from
Nov 28, 2022

Conversation

@benjione (Contributor) commented Sep 1, 2022

Multivariate functions for Polynomial spaces (and maybe others).

Work in progress

@benjione (Author) commented Sep 1, 2022

Current implementation, compared to the 2D case, which already existed and still uses the old code unchanged:

julia> f2 = Fun(Chebyshev()^2, rand(100));

julia> f3 = Fun(Chebyshev()^3, rand(100));

julia> f4 = Fun(Chebyshev()^4, rand(100));

julia> @btime f2(0.2,0.4);
  262.102 μs (3692 allocations: 178.70 KiB)

julia> @btime f3(0.2,0.4,0.5);
  399.120 μs (6054 allocations: 244.28 KiB)

julia> @btime f4(0.2,0.4,0.5, 0.6);
  584.518 μs (8784 allocations: 378.05 KiB)

Most of the time is spent converting the coefficients from the #86 format to tensor format. Maybe it would be better to change the vectorization after all.
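To make the conversion cost concrete, here is a hedged Python sketch (not the package's actual `totensor` implementation) of unpacking a flat coefficient vector into a dense 2D array, assuming the anti-diagonal total-degree block ordering (0,0), (0,1), (1,0), (0,2), (1,1), (2,0), …; the function name `totensor_2d` is mine, for illustration only:

```python
def totensor_2d(coeffs):
    """Illustrative sketch: unpack a flat, block-ordered coefficient vector
    into a dense 2D array, filling anti-diagonal (total-degree) blocks
    (0,0), (0,1), (1,0), (0,2), (1,1), (2,0), ...  Padding entries are 0."""
    # smallest n such that the first n blocks hold all coefficients
    n = 1
    while n * (n + 1) // 2 < len(coeffs):
        n += 1
    T = [[0.0] * n for _ in range(n)]
    k = 0
    for block in range(n):           # block = total degree i + j
        for i in range(block + 1):   # walk down the anti-diagonal
            if k == len(coeffs):
                return T
            T[i][block - i] = coeffs[k]
            k += 1
    return T
```

Doing this unpacking (and its d-dimensional analogue) on every evaluation is exactly the overhead the timings above reflect.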

@jishnub (Member) commented Sep 1, 2022

Unfortunately, CI is probably broken now, as JuliaArrays/FillArrays.jl#182 needs to be merged first

@benjione (Author) commented Sep 1, 2022

The PR is not finished; I haven't added tests yet and will probably have to optimize some parts.

@jishnub (Member) commented Sep 1, 2022

Ok, I've cancelled the tests for now to avoid clogging CI resources

@benjione (Author) commented Sep 5, 2022

Reduced the number of allocations and improved performance:

julia> f2 = Fun(Chebyshev()^2, rand(100));

julia> f3 = Fun(Chebyshev()^3, rand(100));

julia> f4 = Fun(Chebyshev()^4, rand(100));

julia> @btime f2(0.2, 0.4);
  261.099 μs (3725 allocations: 177.47 KiB)

julia> @btime f3(0.2, 0.4, 0.5);
  379.646 μs (4781 allocations: 206.83 KiB)

julia> @btime f4(0.2, 0.4, 0.5, 0.6);
  474.031 μs (5180 allocations: 247.55 KiB)

@benjione (Author) commented Sep 5, 2022

The basic functionality of multivariate functions of the same type is now working.

julia> f3 = Fun(Chebyshev()^3, rand(100));

julia> f3(0.2, 0.4, 0.5); # working

julia> f3 = Fun(Chebyshev() ⊗ Legendre() ⊗ Chebyshev(), rand(100));

julia> f3(0.2, 0.4, 0.5); # not working

Mixed types of (polynomial) spaces should be easy to support, but that is beyond the scope of this PR in my opinion. The same goes for arithmetic, which also does not fully work in the 2D case, see #187 (I explicitly did not touch the 2D case yet).

I think the PR is ready for a review or merge.

@codecov bot commented Sep 6, 2022

Codecov Report

Base: 71.30% // Head: 70.92% // Decreases project coverage by 0.37% ⚠️

Coverage data is based on head (36d9411) compared to base (67978db).
Patch coverage: 23.94% of modified lines in pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #185      +/-   ##
==========================================
- Coverage   71.30%   70.92%   -0.38%     
==========================================
  Files          78       79       +1     
  Lines        8105     8178      +73     
==========================================
+ Hits         5779     5800      +21     
- Misses       2326     2378      +52     
Impacted Files Coverage Δ
src/ApproxFunBase.jl 88.88% <ø> (-11.12%) ⬇️
src/Multivariate/Multivariate.jl 56.60% <ø> (ø)
src/Multivariate/ProductFun.jl 45.78% <ø> (+0.54%) ⬆️
src/Multivariate/TrivialTensorFun.jl 0.00% <0.00%> (ø)
src/Multivariate/TensorSpace.jl 70.05% <36.17%> (-5.80%) ⬇️
src/Operators/general/InterlaceOperator.jl 86.45% <0.00%> (ø)
src/Operators/general/algebra.jl 92.12% <0.00%> (+0.09%) ⬆️
src/Operators/Operator.jl 66.85% <0.00%> (+0.09%) ⬆️
src/Operators/banded/ToeplitzOperator.jl 75.93% <0.00%> (+0.18%) ⬆️


@benjione (Author) commented Sep 9, 2022

From my side the PR is ready to be merged @jishnub

Review thread on test/MultivariateTest.jl (outdated, resolved)
@jishnub (Member) commented Sep 9, 2022

How much does including Combinatorics add to the package import time?

@@ -35,6 +45,15 @@ function ProductFun(cfs::AbstractMatrix{T},sp::AbstractProductSpace{Tuple{S,V},D
end
end

## TODO: This constructor actually does not return a ProductFun; dirty, but it was the easiest to implement.
# An abstract ProductFun type is probably needed in the future.
function ProductFun(iter::TrivialTensorizer{d},cfs::Vector{T},blk::Block, sp::AbstractProductSpace{NTuple{d, S},DD}) where {S<:UnivariateSpace,T<:Number,d,DD}
Review comment (Member):

Is this necessary? Or could we define a function productfun that can choose to return an appropriate type? It'll be good if the constructor of a struct retains its type.

benjione (Author) replied:

Currently, multivariate functions are designed to first build a tensor out of the coefficients and then solve along the first dimension, followed by the second. ProductFun is designed for that, storing a vector of functions.
I first implemented it that way, but this needs too much memory in high-dimensional cases and threw errors for me for dimensions larger than about 80 with 10000 coefficients (~1 million coefficients are possible for me in the 1D case). Therefore, I never build this tensor and don't use the vector of functions that ProductFun uses. I think the approach is not compatible with the current ProductFun.

I think the best way would actually be to change the order in which the coefficients are vectorized, see #86, because this change of coefficient format is computationally costly (the functions fromtensor and totensor build the "real" tensor out of the current coefficient ordering; I introduced totensoriterator for evaluating d>2 dimensional functions to avoid building the full tensor).

However, this change could be breaking for current code using ApproxFun.
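The idea of iterating the tensor lazily can be sketched as follows. This is a hedged Python illustration of the approach (not the actual Julia `totensoriterator` code): pair each stored coefficient with its multi-index by walking total-degree blocks, so the full d-dimensional tensor is never materialized. The helper names are mine:

```python
from itertools import combinations

def multiexponents(d, total):
    """All d-tuples of nonnegative ints summing to `total` (stars and bars):
    bar positions chosen among total + d - 1 slots define the gaps."""
    for bars in combinations(range(total + d - 1), d - 1):
        prev, exps = -1, []
        for b in bars:
            exps.append(b - prev - 1)
            prev = b
        exps.append(total + d - 2 - prev)
        yield tuple(exps)

def tensor_iterator(coeffs, d):
    """Pair each stored coefficient with its multi-index, walking
    total-degree blocks 0, 1, 2, ... without building the full tensor."""
    k = 0
    block = 0
    while k < len(coeffs):
        for idx in multiexponents(d, block):
            if k == len(coeffs):
                return
            yield idx, coeffs[k]
            k += 1
        block += 1
```

Because the generator yields one (multi-index, coefficient) pair at a time, memory stays O(d) regardless of how many coefficients or dimensions are involved.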

@benjione (Author)

How do I measure the import time exactly? With compilation included, I got an import time of

julia> @time import Combinatorics
  0.001712 seconds (1.08 k allocations: 72.438 KiB, 84.56% compilation time)

and precompiled in the package

julia> @time using ApproxFunBase
  0.000143 seconds (127 allocations: 11.344 KiB)

@jishnub (Member) commented Sep 12, 2022

You can try @time_imports using ApproxFunBase on Julia v1.8+

@benjione (Author)

You can try @time_imports using ApproxFunBase on Julia v1.8+

I get
5.8 ms Combinatorics

@jishnub (Member) commented Sep 15, 2022

Could you post the full report? In particular, I want to see if including Combinatorics increases invalidations and impacts other packages. By itself, 5 ms is hardly anything, so this shouldn't be an issue.

@benjione (Author)

julia> @time_imports using ApproxFunBase
    162.9 ms  FillArrays
    936.6 ms  ArrayLayouts
    181.6 ms  BlockArrays
    241.1 ms  BandedMatrices
    162.2 ms  MatrixFactorizations
     61.7 ms  BlockBandedMatrices
      2.9 ms  StaticArraysCore
    860.8 ms  StaticArrays
     34.0 ms  IntervalSets
      2.3 ms  CompositeTypes
    143.8 ms  DomainSets
     11.0 ms  AbstractFFTs
      0.8 ms  FFTW_jll
    305.1 ms  FFTW 6.88% compilation time (100% recompilation)
     45.9 ms  IterTools
    243.7 ms  Polynomials
     53.2 ms  DSP
      4.5 ms  Calculus
    139.6 ms  DualNumbers
     19.2 ms  Nullables
   1388.7 ms  LowRankApprox
     38.9 ms  Infinities
    163.0 ms  MacroTools
    133.3 ms  LazyArrays
    151.7 ms  InfiniteArrays
    214.7 ms  LazyBandedMatrices
     14.4 ms  SemiseparableMatrices
    952.0 ms  InfiniteLinearAlgebra
      6.7 ms  Combinatorics
    370.4 ms  ApproxFunBase

@jishnub (Member) commented Sep 15, 2022

Ok, seems reasonable as it's fairly low in the chain. We can include this then

@jishnub (Member) commented Sep 21, 2022

Tests seem to fail because of unbound type parameters in

ApproxFunBase.TensorIteratorFun(space::SS, coefficients::Vector{T}, iterator::ApproxFunBase.Tensorizer{Tuple{Vararg{FillArrays.Ones{Int64, 1, Tuple{InfiniteArrays.OneToInf{Int64}}}, d}}}, orders::BlockArrays.Block) where {S<:(Space{D} where D<:(Domain{<:Number})), d, SS<:(TensorSpace{Tuple{Vararg{S, d}}}), T<:Number} in ApproxFunBase at /Users/runner/work/ApproxFunBase.jl/ApproxFunBase.jl/src/Multivariate/ProductFun.jl:12, totensor(SS::TensorSpace{Tuple{Vararg{S, d}}}, M::AbstractVector) where {d, S<:(Space{D} where D<:(Domain{<:Number}))} in ApproxFunBase at /Users/runner/work/ApproxFunBase.jl/ApproxFunBase.jl/src/Multivariate/TensorSpace.jl:493

ProductFun(iter::ApproxFunBase.Tensorizer{Tuple{Vararg{FillArrays.Ones{Int64, 1, Tuple{InfiniteArrays.OneToInf{Int64}}}, d}}}, cfs::Vector{T}, blk::BlockArrays.Block, sp::ApproxFunBase.AbstractProductSpace{Tuple{Vararg{S, d}}, DD}) where {S<:(Space{D} where D<:(Domain{<:Number})), T<:Number, d, DD} in ApproxFunBase at /Users/runner/work/ApproxFunBase.jl/ApproxFunBase.jl/src/Multivariate/ProductFun.jl:50

Could you fix these?

@benjione (Author) commented Oct 18, 2022

Ok, I think I understand. The multivariate methods I added only work for tensor products of univariate spaces of the same type.

If I understand it correctly, I will delete the tests from here and move them to Orthogonal Polynomials.

The problem is probably that if there is a breaking change in this package, the tests in the orthogonal polynomials package will break, not the ones here. Is it possible to also include those tests in the GitHub CI of this package?

@jishnub (Member) commented Oct 18, 2022

The problem is probably that if there is a breaking change in this package, the tests in the orthogonal polynomials package will break, not the ones here. Is it possible to also include those tests in the GitHub CI of this package?

Since the compat bound of ApproxFunOrthogonalPolynomials will prevent automatically fetching breaking changes, tests there won't break if there is a breaking change to ApproxFunBase. In that case, the tests that break will need to be fixed manually before bumping the compat bound.

The problem is that if we want to use ApproxFunOrthogonalPolynomials for tests after bumping the version of this package to v0.8, there won't be any version of ApproxFunOrthogonalPolynomials that's compatible with that version. So tests for ApproxFunBase will fail because it'll simply not be able to resolve dependency versions.

If I understand it correctly, I will delete the tests from here and move them to Orthogonal Polynomials.

Yes, this is correct. There are already some multivariate tests in ApproxFunOrthogonalPolynomials, so you may add your tests there.

@jishnub (Member) commented Oct 19, 2022

Could you re-run the 2D and higher-dimensional benchmarks, but this time interpolating the function? This is what I obtain, for example, on the master branches of ApproxFunBase and ApproxFunOrthogonalPolynomials:

julia> f2 = Fun(Chebyshev()^2, rand(100));

julia> @btime $f2(0.2, 0.4);
  1.502 μs (17 allocations: 4.34 KiB)

@benjione (Author)

Thanks for catching that!

My changes should only apply to higher-dimensional functions, d > 2, but I made a mistake here. Currently the wrong iterator is called, which uses multiexponents from Combinatorics.jl, and that is unfortunately slow. I will fix this so that the original code is called for the 2D case, and rerun the benchmark.

@benjione (Author) commented Nov 17, 2022

I fixed the 2D case; it is now the same as before. Unfortunately, the higher-dimensional cases are not very fast because of the slow stars-and-bars implementation I am using. I have already implemented my own version of the algorithm, which is faster than the Julia implementation; maybe I will add it later on.

# all functions with 100 coefficients
  917.500 ns (17 allocations: 4.34 KiB)     # 2D function evaluation
  382.679 μs (4675 allocations: 189.69 KiB) # 3D function evaluation
  3.547 ms (16627 allocations: 1.02 MiB)    # 20D function evaluation

I will open a PR in the ApproxFunOrthogonalPolynomials for the multivariate test cases soon.

@benjione (Author)

I have rewritten the iterator for better performance:

# all functions with 100 coefficients
  1.007 μs (17 allocations: 4.34 KiB)       # 2D evaluation
  83.821 μs (1869 allocations: 98.86 KiB)   # 3D evaluation
  164.725 μs (6414 allocations: 201.72 KiB) # 20D evaluation
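The rewritten iterator itself is not shown in the thread. As a hedged illustration of why an incremental scheme beats recomputing each multi-index from scratch, here is a Python sketch of a stars-and-bars enumeration that advances a composition by moving a single unit per step, in the spirit of the classic Nijenhuis–Wilf NEXCOM algorithm; the names are mine, not ApproxFunBase's:

```python
def compositions(n, d):
    """Yield all compositions of n into d nonnegative parts, advancing the
    current composition in place by moving a single unit per step instead
    of rebuilding each multi-index from scratch."""
    c = [0] * d
    c[0] = n
    while True:
        yield tuple(c)
        if c[-1] == n:          # all weight has drifted to the last slot
            return
        # find the first nonzero entry, push one unit to its right,
        # and dump the remainder back into the first slot
        i = next(j for j, v in enumerate(c) if v > 0)
        v = c[i]
        c[i] = 0
        c[i + 1] += 1
        c[0] = v - 1
```

Each step touches at most three entries of a single reused buffer, which is the kind of constant-factor saving that explains the drop from milliseconds to microseconds in the 20D benchmark above.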

Review threads on Project.toml (outdated, resolved)
@benjione (Author)

@jishnub Are you going to accept this PR?

@jishnub (Member) commented Nov 28, 2022

Yes, sorry, I've been busy. This seems good now.

@jishnub jishnub merged commit 865496d into JuliaApproximation:master Nov 28, 2022