Algorithms
Obtain bath fitting from pole fitting
In bath fitting, given \(\Delta(\mathrm i\nu_k)\) evaluated on \(\{\mathrm i\nu_k\}_{k=1}^{N_{w}}\), we wish to find \(V_j, E_j\) such that
\[\Delta(\mathrm i\nu_k) \approx \sum_{j} \frac{V_j V_j^{\dagger}}{\mathrm i\nu_k - E_j}.\]
This is achieved by the following strategy:
Find a pole fitting with semidefinite constraints:
\[\begin{equation} \Delta(\mathrm i\nu_k) \approx \sum_{p=1}^{N_p} \frac{M_p}{\mathrm i\nu_k - \lambda_p}, \quad M_p \succeq 0. \tag{1} \label{polefit} \end{equation}\]Here \(M_p\) are \(N_{\text{orb}}\times N_{\text{orb}}\) positive semidefinite matrices.
Compute the eigenvalue decomposition of each \(M_p\):
\[M_p = \sum_{j=1}^{N_{\text{orb}}} V_{j}^{(p)} (V_{j}^{(p)})^{\dagger}. \tag{2} \label{eigdecomp}\]Combining \(\eqref{polefit}\) and \(\eqref{eigdecomp}\), we obtain the desired bath fitting:
\[\Delta(\mathrm i\nu_k) \approx \sum_{p=1}^{N_p} \sum_{j=1}^{N_{\text{orb}}} \frac{V_{j}^{(p)} (V_{j}^{(p)})^{\dagger}}{\mathrm i\nu_k - \lambda_p},\]
i.e., the bath energies are the poles \(\lambda_p\) (each repeated for every retained eigenvector of \(M_p\)) and the bath couplings are the vectors \(V_j^{(p)}\).
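As a concrete illustration, here is a minimal NumPy sketch of this conversion step (the function name `bath_from_weights` and the truncation tolerance are our own choices, not from the paper):

```python
import numpy as np

def bath_from_weights(M, lam, tol=1e-10):
    """Convert PSD weight matrices M_p and poles lam_p into bath
    parameters: couplings V_j and energies E_j. (Hypothetical helper.)"""
    V, E = [], []
    for Mp, lp in zip(M, lam):
        w, U = np.linalg.eigh(Mp)            # M_p is Hermitian PSD
        for wj, uj in zip(w, U.T):
            if wj > tol:                     # keep numerically nonzero eigenmodes
                V.append(np.sqrt(wj) * uj)   # V_j^{(p)} = sqrt(w_j) * u_j
                E.append(lp)                 # every mode of M_p shares the pole lam_p
    return np.array(V), np.array(E)
```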
Rational approximation via (modified) AAA algorithm
To find the poles \(\lambda_p\) in \(\eqref{polefit}\), we use the AAA algorithm, which is a rational approximation algorithm based on the barycentric interpolant:
\[\begin{equation} \widetilde f(z) = \frac{p(z)}{q(z)} = \sum_{j=1}^{k} \frac{c_j f_j}{z - z_j} \Bigg/ \sum_{j=1}^{k} \frac{c_j}{z - z_j}, \tag{3} \label{bary} \end{equation}\]where \(z_j\) are the support points, \(f_j = f(z_j)\), and \(c_j\) are the barycentric weights.
The AAA algorithm is an iterative procedure that selects support points in a greedy fashion. Suppose we have obtained an approximant \(\widetilde f\) from the \((k-1)\)-th iteration, using support points \(z_1,\cdots, z_{k-1}\). At the \(k\)-th iteration, we do the following:
Select the next support point \(z_k\) at which the previous approximant \(\widetilde f\) has the largest error.
Find the weights \(c_1,\cdots, c_k\) in \(\eqref{bary}\) by solving the following linear least squares problem:
\[\begin{equation} \min_{\{c_j\}} \sum_{z\neq z_1,\cdots, z_k} \left| f(z) q(z) - p(z) \right|^2 \quad \text{s.t.} \quad \|c\|_2 = 1. \end{equation}\]This problem is linear in \(c\) and amounts to computing an SVD (see the paper for details, and the sketch after this list).
If the new approximant has reached the desired accuracy, stop the iteration. Otherwise, repeat the above steps.
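As a rough illustration of the least squares step, here is a NumPy sketch for the scalar-valued case (array names are ours; a production implementation would follow the AAA paper):

```python
import numpy as np

def aaa_weights(Z, F, support_idx):
    """Solve min |f q - p|^2 over the non-support points, ||c||_2 = 1,
    for the barycentric weights c (scalar-valued AAA)."""
    zs, fs = Z[support_idx], F[support_idx]
    mask = np.ones(len(Z), dtype=bool)
    mask[support_idx] = False
    Zr, Fr = Z[mask], F[mask]                # sample points z != z_1..z_k
    C = 1.0 / (Zr[:, None] - zs[None, :])    # Cauchy matrix
    A = (Fr[:, None] - fs[None, :]) * C      # Loewner matrix: f(z)q(z) - p(z) = A c
    _, _, Vh = np.linalg.svd(A)
    return Vh[-1].conj()                     # right singular vector of smallest sigma
```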
The poles of \(\widetilde f(z)\) are the zeros of \(q(z)\), which can be found by solving the following generalized eigenvalue problem:
\[\begin{pmatrix} 0 & c_1 & c_2 & \cdots & c_k \\ 1 & z_1 & & & \\ 1 & & z_2 & & \\ \vdots & & & \ddots & \\ 1 & & & & z_k \end{pmatrix} v = \lambda \begin{pmatrix} 0 & & & & \\ & 1 & & & \\ & & 1 & & \\ & & & \ddots & \\ & & & & 1 \end{pmatrix} v.\]
This pencil has two eigenvalues at infinity; the remaining \(k-1\) finite eigenvalues are the zeros of \(q(z)\).
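In code, this pencil can be assembled and solved directly, e.g. with SciPy (a sketch under the same scalar-case assumptions):

```python
import numpy as np
import scipy.linalg

def aaa_poles(zs, c):
    """Zeros of q(z): finite eigenvalues of the (k+1)x(k+1) arrowhead pencil."""
    k = len(zs)
    A = np.zeros((k + 1, k + 1), dtype=complex)
    A[0, 1:] = c                  # first row carries the weights c_j
    A[1:, 0] = 1.0                # first column of ones
    A[1:, 1:] = np.diag(zs)       # support points on the diagonal
    B = np.eye(k + 1)
    B[0, 0] = 0.0                 # singular B => two spurious infinite eigenvalues
    eigvals = scipy.linalg.eigvals(A, B)
    return eigvals[np.isfinite(eigvals)]
```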
For our application, we modify the AAA algorithm to deal with matrix-valued functions by replacing \(f_j\) with matrices \(F_j\).
Semidefinite programming
After obtaining \(\lambda_p\), we need to find the weight matrices \(M_p\) in \(\eqref{polefit}\). We solve the following problem:
\[\begin{equation} \min_{M_p \succeq 0} \sum_{k=1}^{N_w} \left\| \Delta(\mathrm i\nu_k) - \sum_{p=1}^{N_p} \frac{M_p}{\mathrm i\nu_k - \lambda_p} \right\|_F^2. \end{equation}\]
The residual is linear in \(M_p\) and the constraints are semidefinite, so the problem can be solved efficiently via standard semidefinite programming (SDP) solvers.
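For concreteness, here is a minimal sketch using CVXPY as an off-the-shelf modeling layer (the paper does not prescribe a specific solver, and the function name `fit_weights` is hypothetical):

```python
import numpy as np
import cvxpy as cp

def fit_weights(Delta, nu, lam):
    """Fit PSD weight matrices M_p to samples Delta[k] = Delta(i nu_k),
    with the poles lam_p held fixed."""
    Nw, norb, _ = Delta.shape
    Np = len(lam)
    M = [cp.Variable((norb, norb), hermitian=True) for _ in range(Np)]
    resid = 0
    for k in range(Nw):
        model = sum(M[p] / (1j * nu[k] - lam[p]) for p in range(Np))
        resid = resid + cp.sum_squares(Delta[k] - model)
    prob = cp.Problem(cp.Minimize(resid), [m >> 0 for m in M])
    prob.solve()
    return [m.value for m in M]
```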
Bi-level optimization
With \(\lambda_p\) and \(M_p\) obtained, we can further refine the poles and weights by solving the following bi-level optimization. Let us define the error function as
\[\text{Err}(\{\lambda_p\}, \{M_p\}) = \sum_{k=1}^{N_w} \left\| \Delta(\mathrm i\nu_k) - \sum_{p=1}^{N_p} \frac{M_p}{\mathrm i\nu_k - \lambda_p} \right\|_F^2.\]
Note that the residual in \(\text{Err}\) is linear in \(M_p\) but nonlinear in \(\lambda_p\). As mentioned above, optimization in \(M_p\) is an SDP problem and is therefore robust, while optimization in \(\lambda_p\) is a nonlinear problem and can be very challenging. This motivates us to define \(\text{Err}(\lambda_1,\cdots, \lambda_{N_p})\) as a function of \(\{\lambda_p\}\) only:
\[\text{Err}(\lambda_1,\cdots, \lambda_{N_p}) = \min_{M_p \succeq 0} \text{Err}(\{\lambda_p\}, \{M_p\}).\]
The value of \(\text{Err}(\lambda_1,\cdots, \lambda_{N_p})\) is obtained by solving an SDP problem. The gradient of \(\text{Err}(\lambda_1,\cdots, \lambda_{N_p})\) can also be obtained analytically (for details, see eq. 28 here). Thus we can use a gradient-based optimization algorithm (L-BFGS) to minimize \(\text{Err}(\lambda_1,\cdots, \lambda_{N_p})\) with respect to \(\{\lambda_p\}\).
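Schematically, the outer loop could look like the following sketch, reusing the hypothetical `fit_weights` from above; for simplicity it lets SciPy approximate the gradient by finite differences, whereas the paper uses the analytic gradient of eq. 28:

```python
import numpy as np
from scipy.optimize import minimize

def make_err(Delta, nu):
    """Err(lambda_1,...,lambda_Np): inner SDP in M_p, outer variable lambda."""
    def err(lam):
        M = fit_weights(Delta, nu, lam)      # inner SDP solve (sketch above)
        r = 0.0
        for k in range(len(nu)):
            model = sum(M[p] / (1j * nu[k] - lam[p]) for p in range(len(lam)))
            r += np.linalg.norm(Delta[k] - model, 'fro') ** 2
        return r
    return err

# Outer L-BFGS iteration over the (real) pole locations only, e.g.:
# res = minimize(make_err(Delta, nu), lam0, method='L-BFGS-B')
```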
For the performance, robustness, and other details of this bi-level optimization framework, see again our original paper.