I used to solve differential algebraic equations using Lagrange polynomials.
Essentially you convert the differential equations into an algebraic system by discretizing the solution. The method is called Orthogonal Collocation on Finite Elements (OCFE), and it was developed by chemical engineers.
The Lagrange polynomials were calculated at special knots that corresponded to Radau interior points, which work great for stiff systems.
It’s great for solving differential algebraic equations through purely sparse matrix operations, with no explicit integration step like Runge-Kutta.
(Well, it’s implicit Runge-Kutta.)
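A minimal sketch of the collocation idea (not OCFE proper: one element, Chebyshev-Lobatto nodes instead of Radau points, and all names are mine): build the differentiation matrix for the Lagrange basis at the nodes, then solving y' = -y, y(0) = 1 becomes a single linear solve.

```python
import numpy as np

n = 12
k = np.arange(n + 1)
x = 0.5 * (1 - np.cos(np.pi * k / n))  # Chebyshev-Lobatto nodes mapped to [0, 1]

# Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)
diff = x[:, None] - x[None, :]
np.fill_diagonal(diff, 1.0)
w = 1.0 / diff.prod(axis=1)

# Differentiation matrix for the Lagrange basis at these nodes:
# D[i, j] = (w_j / w_i) / (x_i - x_j) off-diagonal, negative row sum on it.
D = (w[None, :] / w[:, None]) / diff
np.fill_diagonal(D, 0.0)
np.fill_diagonal(D, -D.sum(axis=1))

# Collocation: enforce y' + y = 0 at the nodes, replace the first row
# with the boundary condition y(0) = 1, and solve the algebraic system.
A = D + np.eye(n + 1)
A[0, :] = 0.0
A[0, 0] = 1.0
b = np.zeros(n + 1)
b[0] = 1.0
y = np.linalg.solve(A, b)

err = np.max(np.abs(y - np.exp(-x)))  # compare against the exact solution e^(-x)
```

With a dozen nodes the error is already near machine precision, which is the appeal: the whole ODE solve is one sparse-friendly linear system.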
Every polynomial interpolates itself -- meaning that you can often apply this interpolation procedure to your favorite/nemesis polynomial, or equivalently rewrite your polynomial of interest in this Lagrange basis, and see if the coefficients lead you anywhere. This is especially helpful in proving polynomial inequalities. For instance, Chebyshev polynomials T_n enjoy an alternation property over their extremal points -- so in the Lagrange basis, in many problems (e.g. Markov-type inequalities) they emerge as the extremal case in the triangle inequality.
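A quick numerical check of that alternation claim (a sketch; the degree n = 7 is arbitrary): T_n evaluated at its extremal points cos(kπ/n) gives exactly (-1)^k, and since Lagrange coefficients are just nodal values, those alternating signs *are* its coefficients in that basis.

```python
import numpy as np

n = 7
k = np.arange(n + 1)
x = np.cos(np.pi * k / n)       # extremal points of T_n on [-1, 1]
Tn = np.cos(n * np.arccos(x))   # T_n(x) = cos(n arccos x)

# Nodal values alternate between +1 and -1, so in the Lagrange basis
# at these nodes the coefficients of T_n are exactly (-1)^k.
signs = (-1.0) ** k

# "Every polynomial interpolates itself": the degree-n interpolant
# through these n+1 values is T_n again.
p = np.polyfit(x, Tn, n)
```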
My beef with this approach is that it is a little unsatisfying, in the sense that it sort of feels like we "got lucky". That is, it highlights this special feature (alternation) while burying the interesting structure that leads to such polynomials being extremal in these problems, as can be seen if you attempt certain seemingly trivial extensions of classical inequalities. Nevertheless, it's an important trick in extremal polynomial theory and in approximation more broadly.
(proof-reading through HN is a mildly embarrassing process, sorry about that! I do go over these posts and proof-read them several times myself before publishing)
The Lagrange polynomials form the standard basis of most Finite Element Method (FEM) software. There are other polynomials which are used as well, but these are the workhorse of most solvers.
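For the 1D linear case this is easy to see (a toy sketch, my notation): the two "hat" shape functions on a reference element are exactly the degree-1 Lagrange polynomials at its endpoints.

```python
import numpy as np

# Reference element [-1, 1] with nodes at the endpoints.
xi = np.linspace(-1.0, 1.0, 101)
N1 = (1.0 - xi) / 2.0   # Lagrange l_0: equals 1 at xi = -1, 0 at xi = +1
N2 = (1.0 + xi) / 2.0   # Lagrange l_1: equals 0 at xi = -1, 1 at xi = +1

# Any linear field on the element is the nodal combination u = u1*N1 + u2*N2,
# and the shape functions form a partition of unity.
pou = N1 + N2
```

Higher-order elements just swap in higher-degree Lagrange polynomials at more nodes per element.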
Maybe there's a gap in my knowledge of notation, but I was confused by:
> Let’s define the Lagrange basis functions
It looks like you defined `l_i(x)` as a piecewise function or a step function.
But then you show its actual definition later in that section. (That's what that section is building to and explaining.)
If we already have access to the machinery of the fundamental theorem of algebra, then the invertibility of the Vandermonde matrix follows as a corollary.
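A sketch of that corollary (my notation): the only ingredient needed is that a nonzero polynomial of degree at most n has at most n roots.

```latex
% Vandermonde invertibility from the factor-theorem corollary of the
% fundamental theorem of algebra:
Suppose $Vc = 0$, where $V_{ij} = x_i^{\,j}$ for distinct nodes
$x_0, \dots, x_n$ and $c = (c_0, \dots, c_n)^T$. Then the polynomial
\[
  p(x) = \sum_{j=0}^{n} c_j x^j
\]
vanishes at all $n+1$ distinct nodes. But $\deg p \le n$, so $p$ has at
most $n$ roots unless $p \equiv 0$. Hence $c = 0$, the kernel of $V$ is
trivial, and $V$ is invertible.
```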
In the Polynomial Interpolation Theorem, you have r(x) = p(x) - r(x), but I think it should be q(x) = p(x) - r(x).
Fixed, thank you! (it's actually r(x)=p(x)-q(x))
If you’re interested in numerical stability issues associated with Lagrange interpolation, Berrut and Trefethen's barycentric paper is still relevant:
https://people.maths.ox.ac.uk/trefethen/barycentric.pdf
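For reference, the stable "second form" barycentric formula from that paper is only a few lines of code (a sketch; the function names and the node-hit handling are mine):

```python
import numpy as np

def baryweights(x):
    """Barycentric weights w_j = 1 / prod_{k != j}(x_j - x_k); O(n^2) setup."""
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    return 1.0 / diff.prod(axis=1)

def baryeval(x, y, w, t):
    """Second barycentric form: evaluate the interpolant through (x, y) at t."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    num = np.zeros_like(t)
    den = np.zeros_like(t)
    exact = np.full(t.shape, -1)   # indices where t coincides with a node
    for j in range(len(x)):
        d = t - x[j]
        hit = d == 0.0
        exact[hit] = j
        d[hit] = 1.0               # dummy divisor; these entries are fixed below
        num += w[j] * y[j] / d
        den += w[j] / d
    out = num / den
    out[exact >= 0] = y[exact[exact >= 0]]
    return out

# Usage: interpolate exp on 21 Chebyshev points and measure the error.
n = 20
x = np.cos(np.pi * np.arange(n + 1) / n)
y = np.exp(x)
w = baryweights(x)
t = np.linspace(-1.0, 1.0, 1000)
err = np.max(np.abs(baryeval(x, y, w, t) - np.exp(t)))
```

The point of the second form is that the weights appear in both numerator and denominator, so common scaling factors cancel and the evaluation stays stable even where the naive Lagrange product form would over/underflow.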
The last matrix before the appendix is not the identity matrix; right now the matrix is: \begin{bmatrix} 1 & 0 & 0 & \dots & 0\\ 1 & 0 & 0 & \dots & 0\\ 1 & 0 & 0 & \dots & 0\\ \vdots & \vdots & \vdots & \ddots &\vdots \\ 1 & 0 & 0 & \dots & 1 \end{bmatrix}
Thanks for noticing, I'll fix it shortly