Algorithms

Obtaining the bath fit from a pole fit

In bath fitting, given $\Delta(i\nu_k)$ evaluated on $\{i\nu_k\}_{k=1}^{N_w}$, we wish to find $V_j, E_j$ such that

$$\Delta(i\nu_k) = \sum_{j=1}^{N_b} \frac{V_j V_j^\dagger}{i\nu_k - E_j}.$$

This is achieved by the following strategy:

  • Find a pole fit with semidefinite constraints:

    $$\Delta(i\nu_k) = \sum_{p=1}^{N_p} \frac{M_p}{i\nu_k - \lambda_p}, \qquad M_p \succeq 0, \tag{1}$$

    where the $M_p$ are $N_{\mathrm{orb}} \times N_{\mathrm{orb}}$ positive semidefinite matrices.

  • Compute the eigenvalue decomposition of each $M_p$:

    $$M_p = \sum_{j=1}^{N_{\mathrm{orb}}} V_j^{(p)} \bigl(V_j^{(p)}\bigr)^\dagger. \tag{2}$$
  • Combining (1) and (2), we obtain the desired bath fit:

$$\Delta(i\nu_k) = \sum_{p}\sum_{j} \frac{V_j^{(p)} \bigl(V_j^{(p)}\bigr)^\dagger}{i\nu_k - \lambda_p},$$

i.e., the bath energies $E_j$ are the poles $\lambda_p$ (each appearing up to $N_{\mathrm{orb}}$ times), and the couplings $V_j$ are the vectors $V_j^{(p)}$.
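For concreteness, here is a minimal NumPy sketch of this conversion. It is an illustration rather than the library's implementation; the names `lam`, `M`, and `poles_to_bath` are ours:

```python
import numpy as np

def poles_to_bath(lam, M, tol=1e-10):
    """Convert a pole fit {lambda_p, M_p} into bath parameters {E_j, V_j}.

    lam : (Np,) pole locations lambda_p (assumed real)
    M   : (Np, Norb, Norb) PSD weight matrices M_p
    Returns E (Nb,) bath energies and V (Norb, Nb) couplings.
    """
    E, V = [], []
    for lam_p, M_p in zip(lam, M):
        # Eigendecomposition M_p = sum_j w_j u_j u_j^dag, cf. Eq. (2)
        w, U = np.linalg.eigh(M_p)
        for w_j, u_j in zip(w, U.T):
            if w_j > tol:                     # keep numerically nonzero modes
                V.append(np.sqrt(w_j) * u_j)  # V_j^(p) = sqrt(w_j) u_j
                E.append(lam_p)               # bath energy = pole location
    return np.array(E), np.array(V).T
```

Each pole contributes at most $N_{\mathrm{orb}}$ bath orbitals, so $N_b \le N_p N_{\mathrm{orb}}$; dropping eigenvalues below `tol` discards negligible bath couplings.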

Rational approximation via (modified) AAA algorithm

To find the poles $\lambda_p$ in (1), we use the AAA algorithm, which is a rational approximation algorithm based on the barycentric interpolant:

$$f(z) = \frac{p(z)}{q(z)} = \frac{\sum_{j=1}^{m} \frac{c_j f_j}{z - z_j}}{\sum_{j=1}^{m} \frac{c_j}{z - z_j}}. \tag{3}$$
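Given support points $z_j$, values $f_j$, and weights $c_j$, evaluating (3) is straightforward; a minimal sketch (the name `bary_eval` is ours):

```python
import numpy as np

def bary_eval(z, zj, fj, cj):
    """Evaluate the barycentric rational (3) at a point z not among the zj."""
    w = cj / (z - zj)          # c_j / (z - z_j)
    return (w @ fj) / w.sum()  # p(z) / q(z)
```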

The AAA algorithm is an iterative procedure. It selects the next support point in a greedy fashion. Suppose we have obtained an approximant $\tilde{f}$ from the $(k-1)$-th iteration, using support points $z_1, \dots, z_{k-1}$. At the $k$-th iteration, we do the following:

  1. Select the next support point $z_k$ as the sample point at which the previous approximant $\tilde{f}$ has the largest error.

  2. Find the weights $\{c_j\}$ in (3) by solving the following linear least-squares problem:

    $$\min_{\{c_j\}} \sum_{z \neq z_1, \dots, z_k} |f(z)\,q(z) - p(z)|^2, \qquad \text{s.t. } \|c\|_2 = 1.$$

    This is a linear problem and amounts to solving an SVD problem (see the paper for details; a minimal sketch of the full iteration follows this list).

  3. If the new approximant has reached the desired accuracy, stop the iteration. Otherwise, repeat the above steps.
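Below is a minimal sketch of this iteration for scalar-valued $f$, assuming samples `F` of $f$ on points `Z`. It illustrates the greedy selection and the SVD-based least-squares step, not the library's matrix-valued implementation:

```python
import numpy as np

def aaa(Z, F, mmax=50, tol=1e-13):
    """Minimal scalar AAA sketch: greedy support points + SVD weights."""
    Z = np.asarray(Z, dtype=complex)
    F = np.asarray(F, dtype=complex)
    mask = np.ones(len(Z), dtype=bool)   # True = not yet a support point
    R = np.full(len(Z), F.mean())        # current approximant at the samples
    zj = fj = np.empty(0, dtype=complex)
    for _ in range(mmax):
        # Step 1: greedy selection of the sample with the largest error.
        k = np.argmax(np.where(mask, np.abs(F - R), 0.0))
        zj, fj = np.append(zj, Z[k]), np.append(fj, F[k])
        mask[k] = False
        # Step 2: Loewner least squares over the remaining samples.  The
        # weights minimizing ||f q - p||_2 with ||c||_2 = 1 form the right
        # singular vector of the smallest singular value.
        C = 1.0 / (Z[mask, None] - zj[None, :])
        A = F[mask, None] * C - C * fj[None, :]
        cj = np.linalg.svd(A)[2].conj()[-1]
        # Step 3: update the approximant and test convergence.
        R = F.copy()                     # exact at the support points
        R[mask] = (C @ (cj * fj)) / (C @ cj)
        if np.max(np.abs(F - R)) <= tol * np.max(np.abs(F)):
            break
    return zj, fj, cj
```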

The poles of $f(z)$ are the zeros of $q(z)$, which can be found by solving the following generalized eigenvalue problem:

$$\begin{pmatrix} 0 & c_1 & c_2 & \cdots & c_m \\ 1 & z_1 & & & \\ 1 & & z_2 & & \\ \vdots & & & \ddots & \\ 1 & & & & z_m \end{pmatrix} v = \lambda \begin{pmatrix} 0 & & & & \\ & 1 & & & \\ & & 1 & & \\ & & & \ddots & \\ & & & & 1 \end{pmatrix} v.$$
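In code, this pencil can be passed directly to `scipy.linalg.eig`; the singular right-hand matrix produces spurious infinite eigenvalues, which are discarded (a sketch, with the name `bary_poles` ours):

```python
import numpy as np
from scipy.linalg import eig

def bary_poles(zj, cj):
    """Poles of the barycentric form (3): zeros of q(z)."""
    m = len(zj)
    A = np.zeros((m + 1, m + 1), dtype=complex)
    A[0, 1:] = cj               # first row:    (0, c_1, ..., c_m)
    A[1:, 0] = 1.0              # first column: ones
    A[1:, 1:] = np.diag(zj)     # diagonal:     support points z_j
    B = np.eye(m + 1, dtype=complex)
    B[0, 0] = 0.0               # singular B => spurious infinite eigenvalues
    lam = eig(A, B, right=False)
    return lam[np.isfinite(lam)]  # keep only the finite generalized eigenvalues
```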

For our application, we modify the AAA algorithm to deal with matrix-valued functions by replacing the scalar values $f_j$ with matrices $F_j$.

Semidefinite programming

After obtaining the poles $\lambda_p$, we need to find the weight matrices $M_p$ in (1). We solve the following problem:

$$\min_{\{M_p\}} \sum_{k=1}^{N_w} \left\| \Delta(i\nu_k) - \sum_{p=1}^{N_p} \frac{M_p}{i\nu_k - \lambda_p} \right\|_F^2, \qquad \text{s.t. } M_p \succeq 0.$$

The residual is linear in the $M_p$ and the constraints are semidefinite, so the problem is convex and can be solved efficiently via standard semidefinite programming (SDP) solvers.
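As an illustration, this problem can be written almost verbatim in CVXPY (a generic convex-optimization front end; this is a sketch with our own names `fit_weights`, `Delta`, `nu`, `lam`, not necessarily the solver the library uses):

```python
import cvxpy as cp
import numpy as np

def fit_weights(Delta, nu, lam):
    """min sum_k ||Delta(i nu_k) - sum_p M_p / (i nu_k - lam_p)||_F^2
    subject to M_p >= 0 (positive semidefinite).

    Delta : (Nw, Norb, Norb) samples Delta(i nu_k)
    nu    : (Nw,) Matsubara frequencies,  lam : (Np,) fitted poles
    """
    Nw, Norb, _ = Delta.shape
    Np = len(lam)
    M = [cp.Variable((Norb, Norb), hermitian=True) for _ in range(Np)]
    residual = 0
    for k in range(Nw):
        model = sum(M[p] / (1j * nu[k] - lam[p]) for p in range(Np))
        residual = residual + cp.sum_squares(Delta[k] - model)
    prob = cp.Problem(cp.Minimize(residual), [Mp >> 0 for Mp in M])
    prob.solve()
    return np.array([Mp.value for Mp in M])
```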

Bi-level optimization

With $\lambda_p$ and $M_p$ obtained, we can further refine the poles and weights by solving the following bi-level optimization problem. Let us define the error function as

$$\mathrm{Err}(\lambda_p, M_p) = \sum_{k=1}^{N_w} \left\| \Delta(i\nu_k) - \sum_{p=1}^{N_p} \frac{M_p}{i\nu_k - \lambda_p} \right\|_F^2.$$

Note that Err is quadratic (and convex) in $M_p$ but nonlinear in $\lambda_p$. As we have mentioned, optimization in $M_p$ is an SDP problem and therefore robust, while optimization in $\lambda_p$ is a nonlinear problem and can be very challenging. This motivates us to define $\mathrm{Err}(\lambda_1, \dots, \lambda_{N_p})$ as a function of $\{\lambda_p\}$ only:

$$\mathrm{Err}(\lambda_1, \dots, \lambda_{N_p}) = \min_{\{M_p\}} \mathrm{Err}(\lambda_p, M_p).$$

The value of $\mathrm{Err}(\lambda_1, \dots, \lambda_{N_p})$ is obtained by solving an SDP problem, and its gradient can also be obtained analytically (for details, see Eq. (28) of our paper). Thus we can use a gradient-based optimization algorithm (L-BFGS) to minimize $\mathrm{Err}(\lambda_1, \dots, \lambda_{N_p})$ with respect to $\{\lambda_p\}$.
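A sketch of the outer loop, reusing the `fit_weights` SDP routine from the SDP section above; for brevity it relies on SciPy's default finite-difference gradient for L-BFGS-B rather than the analytic gradient of Eq. (28):

```python
import numpy as np
from scipy.optimize import minimize

def bilevel_fit(Delta, nu, lam0):
    """Refine the poles by minimizing Err(lam) = min_M Err(lam, M)."""
    def err(lam):
        M = fit_weights(Delta, nu, lam)  # inner SDP (see previous section)
        kernel = 1.0 / (1j * nu[:, None] - lam[None, :])   # (Nw, Np)
        fit = np.einsum('kp,pij->kij', kernel, M)
        return np.sum(np.abs(Delta - fit) ** 2)
    # Outer nonlinear problem in the poles only, solved with L-BFGS-B.
    res = minimize(err, lam0, method="L-BFGS-B")
    return res.x, fit_weights(Delta, nu, res.x)
```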

For the performance, robustness, and other details of this bi-level optimization framework, see again our original paper.