
Commit d031439

Author: Sébastien Loisel

solveLP: bypass uncmin and embed Newton iteration

1 parent: 8638eb6

12 files changed, +416 -291 lines
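The commit message says solveLP now embeds its own Newton iteration rather than calling <tt>uncmin</tt>. As background only — this is a generic illustration of a Newton iteration, not the commit's actual solveLP code, and <tt>newtonMin</tt> is a hypothetical name — a bare Newton loop for minimizing a smooth one-variable function looks like:

```javascript
// Generic Newton iteration for minimizing a smooth 1-D function:
// repeatedly solve f''(x) * step = -f'(x) and update x.
// Illustrative sketch only; NOT the solveLP implementation from this commit.
function newtonMin(fp, fpp, x0, tol, maxit) {
    var x = x0, i;
    for (i = 0; i < maxit; i++) {
        var g = fp(x), h = fpp(x);
        var step = -g / h;          // Newton step from the local quadratic model
        x += step;
        if (Math.abs(step) < tol) break;
    }
    return { solution: x, iterations: i + 1 };
}

// Minimize f(x) = x^4 - 8x^2 (so f'(x) = 4x^3 - 16x, f''(x) = 12x^2 - 16);
// starting near x = 1.5, the iteration converges to the minimizer x = 2.
var r = newtonMin(function (x) { return 4*x*x*x - 16*x; },
                  function (x) { return 12*x*x - 16; },
                  1.5, 1e-12, 50);
```

The `iterations` field in the result mirrors the new `iterations` field that this commit adds to solveLP's output (visible in the diffs below).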

benchmark.html (+5 -33)
@@ -1,51 +1,23 @@
 <html>
 <head>
 <link rel="SHORTCUT ICON" href="favicon.ico">
+<link href='http://fonts.googleapis.com/css?family=Lato' rel='stylesheet' type='text/css'>
 <link rel="stylesheet" type="text/css" href="resources/style.css">
 <title>Numeric Javascript: Benchmarks</title>
 </head>
 <body>
 <!--#include file="resources/header.html" -->

-We are now running a linear algebra performance benchmark in your browser! Please ensure that your seatbelt
-is fastened and your tray table is upright while we invert 100x100 matrices.<br><br>
+We are now running a linear algebra performance benchmark; the results are plotted below. As we move right within each
+test, the matrix size increases.<br><br>

-<b>Performance (<a href="http://en.wikipedia.org/wiki/FLOPS">MFLOPS</a>)</b>
-<div style="width:1000px;overflow:hidden;font-size:14px;">
+<b>Performance (<a href="http://en.wikipedia.org/wiki/FLOPS">MFLOPS</a>; higher is better).</b>
+<div style="width:1000px;overflow:hidden;font-size:14px;line-height:100%;">
 <div id="placeholder" style="width:700px;height:500px;float:left;"></div>
 <div id="legend" style="width:250px;height:100px;overflow:hidden;"></div>
 </div>
 <div id="meanscore">Geometric mean of scores: </div><br>

-<b>Higher is better:</b> For each benchmark and library, a function is called repeatedly for a certain amount of time.
-The number of function calls per seconds is converted into a FLOP rate. As we move right within each test, the matrix size increases.<br><br>
-
-<b>What tricks are used to increase performance in Numeric?</b>
-<ul>
-<li>Replace inner loop <tt>for(j=0;j&lt;n;j++) A[i][j]</tt> by the equivalent <tt>Ai = A[i]; for(j=0;j&lt;n;j++) Ai[j]</tt> ("hoisting" the <tt>[i]</tt> out of the loop</tt>).
-<li>Preallocate Arrays: <tt>A = new Array(n)</tt> instead of <tt>A = []</tt>.
-<li>Use <tt>Array</tt> objects directly (abstractions slow you down). Getters and setters are bad.
-<li>Use <tt>for(j=n-1;j&gt;=0;j--)</tt> if it is faster.
-<li>Do not put anything in <tt>Array.prototype</tt>. If you modify <tt>Array.prototype</tt>, it slows down everything significantly.
-<li>Big Matrix*Matrix product: transpose the second matrix and rearrange the loops to exploit locality.
-<li>Unroll loops.
-<li>Don't nest functions.
-<li>Don't call functions, inline them manually. Except...
-<li>...big functions can confuse the JIT. If a new code path is run in a function, the function can be deoptimized by the JIT.
-<li>Avoid polymorphism.
-<li>Localize references. For example: replace <tt>for(i=n-1;i>=0;i--) x[i] = Math.random();</tt> by <tt>rnd = Math.random; for(i=n-1;i>=0;i--) x[i] = rnd();</tt>. (Make sure <tt>rnd</tt> and <tt>x</tt> really are local variables!)
-<li>Deep lexical scopes are bad. You can create a function without much of a lexical scope by using
-<tt>new Function('...');</tt>.
-</ul>
-<br>
-
-<b>GC pauses?</b>
-If you reload the page, the benchmark will run again and will give slightly different results.
-This could be due to GC pauses or other background tasks, DOM updates, etc...
-To get an idea of the impact of this, load this page in two or more different tabs (not at the same time,
-one after the other) and switch between the tabs and see the differences in the performance chart.
-<br><br><br>
-
 <table id="bench"></table>

 <!--[if lte IE 9]><script language="javascript" type="text/javascript" src="tools/excanvas.min.js"></script><![endif]-->
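The performance tips removed in the hunk above still describe real micro-optimizations. A self-contained sketch of the first two tips (hoisting <tt>A[i]</tt> out of the inner loop and preallocating Arrays), combined with the timing-loop methodology the removed text describes (calls per second converted to a FLOP rate) — note that <tt>rowSums</tt> and <tt>mflops</tt> are illustrative names, not numeric.js functions:

```javascript
// Illustrative sketch (not numeric.js code): row sums of an n x n matrix,
// with the row reference A[i] "hoisted" out of the inner loop and the
// result Array preallocated, as the removed tips recommend.
function rowSums(A) {
    var n = A.length;
    var s = new Array(n);          // preallocate instead of s = []
    for (var i = 0; i < n; i++) {
        var Ai = A[i];             // hoist A[i] out of the inner loop
        var acc = 0;
        for (var j = 0; j < n; j++) acc += Ai[j];
        s[i] = acc;
    }
    return s;
}

// A timing loop in the spirit of the benchmark page: call the function
// repeatedly for a fixed wall-clock budget, then convert calls/second
// into a rough FLOP rate (n*n additions per call).
function mflops(f, A, ms) {
    var n = A.length, calls = 0, t0 = Date.now();
    while (Date.now() - t0 < ms) { f(A); calls++; }
    var secs = (Date.now() - t0) / 1000;
    return (calls * n * n) / secs / 1e6;
}
```

For example, `rowSums([[1,2],[3,4]])` yields `[3,7]`, and `mflops(rowSums, A, 100)` gives a rough throughput estimate for a given matrix.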

documentation.html (+53 -23)
@@ -1,5 +1,6 @@
 <head>
 <link rel="SHORTCUT ICON" href="favicon.ico">
+<link href='http://fonts.googleapis.com/css?family=Lato' rel='stylesheet' type='text/css'>
 <link rel="stylesheet" type="text/css" href="resources/style.css">
 <style>
 table { font-size:14px; }
@@ -171,7 +172,7 @@
 <tr><td><tt>xoreq</tt><td>Pointwise x^=y
 </table></table>

-
+<br>
 <h1>Numerical analysis in Javascript</h1>

 <a href="http://www.numericjs.com/">Numeric Javascript</a> is
@@ -258,6 +259,7 @@ <h1>Numerical analysis in Javascript</h1>
 In particular, if you want to print all Arrays regardless of size, set <tt>numeric.largeArray = Infinity</tt>.
 <br><br>

+
 <h1>Math Object functions</h1>

 The <tt>Math</tt> object functions have also been adapted to work on Arrays as follows:
@@ -295,6 +297,7 @@ <h1>Math Object functions</h1>
 OUT> [1.557,-2.185]
 </pre>

+
 <h1>Utility functions</h1>

 The function <tt>numeric.dim()</tt> allows you to compute the dimensions of an Array.
@@ -410,6 +413,7 @@ <h1>Utility functions</h1>
 </pre>
 -->

+
 <h1>Arithmetic operations</h1>

 The standard arithmetic operations have been vectorized:
@@ -474,6 +478,7 @@ <h1>Arithmetic operations</h1>
 </pre>
 -->

+
 <h1>Linear algebra</h1>

 Matrix products are implemented in the functions
@@ -620,6 +625,7 @@ <h1>Data manipulation</h1>
 OUT> "Hello, world!"
 </pre>

+
 <h1>Complex linear algebra</h1>
 You can also manipulate complex numbers:
 <pre>
@@ -695,6 +701,7 @@ <h1>Complex linear algebra</h1>
 [3,3]] }
 </pre>

+
 <h1>Eigenvalues</h1>
 Eigenvalues:
 <pre>
@@ -717,21 +724,21 @@ <h1>Eigenvalues</h1>
 Note that eigenvalues and eigenvectors are returned as complex numbers (type <tt>numeric.T</tt>). This is because
 eigenvalues are often complex even when the matrix is real.<br><br>

+
 <h1>Singular value decomposition (Shanti Rao)</h1>

 Shanti Rao kindly emailed me an implementation of the Golub and Reisch algorithm:

 <pre>
-IN> A = [[22,10,2,3,7],[14,7,10,0,8],[-1,13,-1,-11,3],[-3,-2,13,-2,4],[9,8,1,-2,4],[9,1,-7,5,-1],[2,-6,6,5,1],[4,5,0,-2,2]]
-OUT> [[ 22, 10, 2, 3, 7],
-     [ 14, 7, 10, 0, 8],
-     [ -1, 13, -1,-11, 3],
-     [ -3, -2, 13, -2, 4],
-     [ 9, 8, 1, -2, 4],
-     [ 9, 1, -7, 5, -1],
-     [ 2, -6, 6, 5, 1],
-     [ 4, 5, 0, -2, 2]]
-IN> numeric.svd(A)
+IN> A=[[ 22, 10, 2, 3, 7],
+      [ 14, 7, 10, 0, 8],
+      [ -1, 13, -1,-11, 3],
+      [ -3, -2, 13, -2, 4],
+      [ 9, 8, 1, -2, 4],
+      [ 9, 1, -7, 5, -1],
+      [ 2, -6, 6, 5, 1],
+      [ 4, 5, 0, -2, 2]];
+    numeric.svd(A);
 OUT> {U:
      [[ -0.7071, -0.1581, 0.1768, 0.2494, 0.4625],
      [ -0.5303, -0.1581, -0.3536, 0.1556, -0.4984],
@@ -790,6 +797,7 @@ <h1>Singular value decomposition (Shanti Rao)</h1>
 </pre>
 -->

+
 <h1>Sparse linear algebra</h1>

 Sparse matrices are matrices that have a lot of zeroes. In numeric, sparse matrices are stored in the
@@ -913,6 +921,7 @@ <h1>Sparse linear algebra</h1>
 OUT> [[0,0,1,1,2,2],[0,1,1,2,2,3],[1,2,3,4,5,6]]
 </pre>

+
 <h1>Coordinate matrices</h1>

 We also provide a banded matrix implementation using the coordinate encoding.<br><br>
@@ -931,6 +940,7 @@ <h1>Coordinate matrices</h1>
 </pre>
 Note that <tt>numeric.cLU()</tt> does not have any pivoting.

+
 <h1>Solving PDEs</h1>

 The functions <tt>numeric.cgrid()</tt> and <tt>numeric.cdelsq()</tt> can be used to obtain a
@@ -995,22 +1005,23 @@ <h1>Solving PDEs</h1>
 [-1,-1,-1,-1,-1]]
 </pre>

+
 <h1>Cubic splines</h1>

 You can do some (natural) cubic spline interpolation:
 <pre>
 IN> numeric.spline([1,2,3,4,5],[1,2,1,3,2]).at(numeric.linspace(1,5,10))
-OUT> [ 1, 1.731, 2.039, 1.604, 1.019, 1.294, 2.364, 3.085, 2.82, 2]
+OUT> [ 1, 1.731, 2.039, 1.604, 1.019, 1.294, 2.364, 3.085, 2.82, 2]
 </pre>
 For clamped splines:
 <pre>
 IN> numeric.spline([1,2,3,4,5],[1,2,1,3,2],0,0).at(numeric.linspace(1,5,10))
-OUT> [ 1, 1.435, 1.98, 1.669, 1.034, 1.316, 2.442, 3.017, 2.482, 2]
+OUT> [ 1, 1.435, 1.98, 1.669, 1.034, 1.316, 2.442, 3.017, 2.482, 2]
 </pre>
 For periodic splines:
 <pre>
 IN> numeric.spline([1,2,3,4],[0.8415,0.04718,-0.8887,0.8415],'periodic').at(numeric.linspace(1,4,10))
-OUT> [ 0.8415, 0.9024, 0.5788, 0.04718, -0.5106, -0.8919, -0.8887, -0.3918, 0.3131, 0.8415]
+OUT> [ 0.8415, 0.9024, 0.5788, 0.04718, -0.5106, -0.8919, -0.8887, -0.3918, 0.3131, 0.8415]
 </pre>
 Vector splines:
 <pre>
@@ -1028,6 +1039,7 @@ <h1>Cubic splines</h1>
 OUT> [0, 3.142, 6.284, 9.425, 12.57, 15.71, 18.85, 21.99, 25.13, 28.27]
 </pre>

+
 <h1>Fast Fourier Transforms</h1>
 FFT on numeric.T objects:
 <pre>
@@ -1042,6 +1054,7 @@ <h1>Fast Fourier Transforms</h1>
 y:[6,7,8,9,10]}
 </pre>

+
 <h1>Quadratic Programming (Alberto Santini)</h1>

 The Quadratic Programming function <tt>numeric.solveQP()</tt> is based on <a href="https://github.com/albertosantini/node-quadprog">Alberto Santini's
@@ -1058,9 +1071,12 @@ <h1>Quadratic Programming (Alberto Santini)</h1>
 message: "" }
 </pre>

+
 <h1>Unconstrained optimization</h1>

-We also include a simple unconstrained optimization routine. Here are some demos from from Mor&eacute; et al., 1981:
+To minimize a function f(x) we provide the function <tt>numeric.uncmin(f,x0)</tt> where x0
+is a starting point for the optimization.
+Here are some demos from Mor&eacute; et al., 1981:

 <pre>
 IN> sqr = function(x) { return x*x; };
@@ -1075,7 +1091,9 @@ <h1>Unconstrained optimization</h1>
 IN> f = function(x) { return sqr(x[0]-1e6)+sqr(x[1]-2e-6)+sqr(x[0]*x[1]-2)};
 x0 = numeric.uncmin(f,[0,1]).solution
 OUT> [1e6,2e-6]
-IN> f = function(x) { return sqr(1.5-x[0]*(1-x[1]))+sqr(2.25-x[0]*(1-x[1]*x[1]))+sqr(2.625-x[0]*(1-x[1]*x[1]*x[1])); };
+IN> f = function(x) {
+    return sqr(1.5-x[0]*(1-x[1]))+sqr(2.25-x[0]*(1-x[1]*x[1]))+sqr(2.625-x[0]*(1-x[1]*x[1]*x[1]));
+    };
 x0 = numeric.uncmin(f,[1,1]).solution
 OUT> [3,0.5]
 IN> f = function(x) {
@@ -1148,7 +1166,12 @@ <h1>Unconstrained optimization</h1>
 gradient of <tt>f()</tt>. If it is not provided, a numerical gradient is used. The iteration stops when
 <tt>maxit</tt> iterations have been performed. The optional <tt>callback()</tt> parameter, if provided, is called at each step:
 <pre>
-IN> z = []; cb = function(i,x,f,g,H) { z.push({i:i, x:x, f:f, g:g, H:H }); }; x0 = numeric.uncmin(function(x) { return Math.cos(2*x[0]); },[1],1e-10,function(x) { return [-2*Math.sin(2*x[0])]; },100,cb);
+IN> z = [];
+    cb = function(i,x,f,g,H) { z.push({i:i, x:x, f:f, g:g, H:H }); };
+    x0 = numeric.uncmin(function(x) { return Math.cos(2*x[0]); },
+                        [1],1e-10,
+                        function(x) { return [-2*Math.sin(2*x[0])]; },
+                        100,cb);
 OUT> {solution: [1.571],
 f: -1,
 gradient: [2.242e-11],
@@ -1164,6 +1187,7 @@ <h1>Unconstrained optimization</h1>
 {i:6, x:[1.571], f:-1 , g: [ 2.242e-11] , H:[[0.25 ]] }]
 </pre>

+
 <h1>Linear programming</h1>

 Linear programming is available:
@@ -1185,13 +1209,13 @@ <h1>Linear programming</h1>
 We also handle infeasible problems:
 <pre>
 IN> numeric.solveLP([1,1],[[1,0],[0,1],[-1,-1]],[-1,-1,-1])
-OUT> { solution: NaN, message: "Infeasible" }
+OUT> { solution: NaN, message: "Infeasible", iterations: 6 }
 </pre>

 Unbounded problems:
 <pre>
-IN> numeric.solveLP([1,1],[[1,0],[0,1]],[0,0]);
-OUT> { solution:[-1.009,-1.009], message:"Unbounded" }
+IN> numeric.solveLP([1,1],[[1,0],[0,1]],[0,0]).message;
+OUT> "Unbounded"
 </pre>

 With an equality constraint:
@@ -1202,7 +1226,7 @@ <h1>Linear programming</h1>
 [[1,1,1]], /* matrix Aeq of equality constraint */
 [3] /* vector beq of equality constraint */
 );
-OUT> { solution: [3,2.355e-17,9.699e-17], message:"" }
+OUT> { solution: [3,4.167e-19,1.086e-18], message:"", iterations:11 }
 </pre>

 <!--
@@ -1228,6 +1252,7 @@ <h1>Linear programming</h1>
 </pre>
 -->

+
 <h1>Solving ODEs</h1>

 The function <tt>numeric.dopri()</tt> is an implementation of the Dormand-Prince-Runge-Kutta integrator with
@@ -1252,7 +1277,8 @@ <h1>Solving ODEs</h1>
 </pre>
 The integrator is also able to handle vector equations. This is a harmonic oscillator:
 <pre>
-IN> numeric.dopri(0,10,[3,0],function (x,y) { return [y[1],-y[0]]; }).at([0,0.5*Math.PI,Math.PI,1.5*Math.PI,2*Math.PI])
+IN> sol = numeric.dopri(0,10,[3,0],function (x,y) { return [y[1],-y[0]]; });
+    sol.at([0,0.5*Math.PI,Math.PI,1.5*Math.PI,2*Math.PI])
 OUT> [[ 3, 0],
 [ -9.534e-8, -3],
 [ -3, 2.768e-7],
@@ -1284,7 +1310,10 @@ <h1>Solving ODEs</h1>
 
 Events can also be vector-valued:
 <pre>
-IN> sol = numeric.dopri(0,2,1,function(x,y) { return y; },undefined,50,function(x,y) { return [y-1.5,Math.sin(y-1.5)]; });
+IN> sol = numeric.dopri(0,2,1,
+                        function(x,y) { return y; },
+                        undefined,50,
+                        function(x,y) { return [y-1.5,Math.sin(y-1.5)]; });
 OUT> { x: [ 0, 0.2, 0.4055],
 y: [ 1, 1.221, 1.5],
 f: [ 1, 1.221, 1.5],
@@ -1310,6 +1339,7 @@ <h1>Solving ODEs</h1>
 </pre>
 -->

+
 <h1>Seedrandom (David Bau)</h1>

 The object <tt>numeric.seedrandom</tt> is based on
