𝖧𝖫𝖨𝖡𝗉𝗋𝗈 implements several solvers for linear equation systems
\[ Ax = b \]
- TLinearIteration: standard linear iteration \(x_{i+1} \leftarrow x_i + N ( b - A x_i )\) with iteration matrix \(N\),
- TCG: conjugate gradient method [1],
- TCGS: conjugate gradient squared method [2],
- TBiCGStab: biconjugate gradient stabilized method [3],
- TTFQMR: transpose-free quasi-minimal residual method [4],
- TMINRES: minimal residual method for symmetric matrices [5],
- TGMRES: generalized minimal residual method [6].

All solvers have the same interface function for solving the given system:
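(A sketch of this interface; argument order and default values are assumed here, the exact signature is given in the TSolver class reference.)

```cpp
virtual void
solve ( const TLinearOperator *  A,                   // system matrix/operator
        TVector *                x,                   // solution vector (also start value)
        const TVector *          b,                   // right-hand side
        const TLinearOperator *  W    = nullptr,      // optional preconditioner
        TSolverInfo *            info = nullptr ) const;  // optional iteration info
```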
Here, W is an optional operator used as a (left) preconditioner for solving the equation system. Furthermore, info is an optional object for storing information about the iteration process, e.g., whether it converged, the residual norm or the convergence rate.
Assuming a given matrix object A and vectors x and b, the minimal solving code looks as follows:
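(A minimal sketch, assuming A, x and b are pointers to the matrix and the vectors.)

```cpp
TGMRES  solver( 20 );         // GMRES with restart value 20

solver.solve( A, x, b );      // iterate on A·x = b, updating x
```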
The value 20 for GMRES defines the restart of the method, i.e., the dimension of the local Krylov subspace.

By default, the initial guess \(x\) is initialized to some solver-dependent value before the iteration. To disable this, you may use the function initialise_start_value:
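(A sketch, assuming initialise_start_value is a member function of the solver classes taking a boolean flag.)

```cpp
TGMRES  solver( 20 );

solver.initialise_start_value( false );   // keep the user supplied content of x as start value

solver.solve( A, x, b );
```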
Since not all solvers are suitable for all matrices, e.g., some are not applicable to unsymmetric matrices, TAutoSolver decides which solver to use for a given combination of matrix and preconditioner.
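A sketch of its use (the default constructor is assumed here):

```cpp
TAutoSolver  solver;          // picks a suitable iteration for the given matrix and preconditioner

solver.solve( A, x, b );
```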
The iteration process is controlled using objects of type TStopCriterion. The following parameters may be used to define when the iteration stops:

- max_iter: maximal number of iteration steps,
- abs_res_reduct: absolute residual norm (often also called tolerance), i.e., stop if \(|r_i| \le \varepsilon\),
- rel_res_reduct: relative residual norm with respect to the initial residual, i.e., stop if \(|r_i| \le \varepsilon |r_0|\),
- rel_res_growth: relative residual growth, i.e., stop if \(|r_i| \ge \varepsilon |r_0|\) (in case of divergence).

All parameters may be specified during construction of TStopCriterion objects:
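(A sketch; the constructor is assumed to take the parameters in the order listed above, and solvers are assumed to accept a stop criterion at construction.)

```cpp
// stop after at most 100 steps, or if |r_i| <= 1e-14 or |r_i| <= 1e-8 * |r_0|
TStopCriterion  stop_crit( 100, 1e-14, 1e-8 );

TGMRES          solver( 20, stop_crit );
```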
However, reasonable default values are set.
Alternatively, the functions max_steps, relative_reduction and absolute_reduction can be combined to construct TStopCriterion objects, e.g.:
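(The combination via operator+ below is an assumption.)

```cpp
TStopCriterion  stop_crit = max_steps( 100 ) + relative_reduction( 1e-8 );

TCG             solver( stop_crit );
```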
All parameters not explicitly defined are set to their default values (see HLIB::CFG::Solver).
To obtain more information about the solution process, an object of type TSolverInfo has to be provided to the solve function. Afterwards, the following functions may be used to gather the corresponding data:

- has_converged(): did the iteration converge,
- has_diverged(): did the iteration diverge (based on TStopCriterion),
- has_failed(): did the iteration fail, e.g., due to breakdown,
- n_iter(): return the number of iteration steps,
- conv_rate(): return the convergence rate,
- res_norm(): return the norm of the residual after the iteration stopped.

During construction of TSolverInfo, one may also specify to store the complete iteration history and to print this data during the iteration:
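(A sketch; TSolverInfo is assumed to take two boolean flags, store history and print per step, at construction.)

```cpp
TSolverInfo  solve_info( true, true );    // store history and print data per step
TGMRES       solver( 20 );

solver.solve( A, x, b, nullptr, & solve_info );

if      ( solve_info.has_converged() )
    std::cout << "converged in " << solve_info.n_iter() << " steps"
              << " with rate " << solve_info.conv_rate()
              << " and residual " << solve_info.res_norm() << std::endl;
else if ( solve_info.has_diverged() )
    std::cout << "diverged" << std::endl;
else
    std::cout << "failed" << std::endl;
```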
Besides various solvers, 𝖧𝖫𝖨𝖡𝗉𝗋𝗈 also implements different preconditioning techniques for the iteration methods.
Of course, the most important of these preconditioners is based on some form of matrix inversion using H-arithmetic, e.g., H-LU factorization, H-WAZ factorization or H-inversion. Simply provide the corresponding linear operator object to the solve function:
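(A sketch following the pattern of the 𝖧𝖫𝖨𝖡𝗉𝗋𝗈 LU examples; the exact signatures and return types of fixed_prec and factorise_inv are assumptions here.)

```cpp
auto  acc    = fixed_prec( 1e-4 );                   // block-wise accuracy of the factorization
auto  A_copy = A->copy();                            // factorization works on a copy of A
auto  A_inv  = factorise_inv( A_copy.get(), acc );   // operator evaluating (LU)^{-1}

TAutoSolver  solver;
TSolverInfo  solve_info;

solver.solve( A, x, b, A_inv.get(), & solve_info );  // use (LU)^{-1} as preconditioner
```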
Besides H-based methods, 𝖧𝖫𝖨𝖡𝗉𝗋𝗈 also implements some classical preconditioners:

- TJacobi: point-wise and block-wise Jacobi method, i.e., \(D^{-1}\) with \(D\) being the (block) diagonal of \(A\),
- TGaussSeidel: point-wise and block-wise forward/backward/symmetric Gauss-Seidel method, i.e., using the lower or upper triangular part of \(A\),
- TSOR: point-wise forward/backward/symmetric SOR method.

However, the point-wise Gauss-Seidel and SOR methods may only be used (and only make sense) with a sparse matrix (TSparseMatrix). For an H-matrix, only the block-wise Gauss-Seidel method is available.
In case of Jacobi, both the point-wise and the block-wise variant may be used for all matrix types. Furthermore, TJacobi also allows defining the size of the diagonal blocks.
The example above, now using a block-wise Gauss-Seidel preconditioner, would look like:
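(A sketch; the constructor arguments of TGaussSeidel are an assumption.)

```cpp
TGaussSeidel  GS( A );        // block-wise Gauss-Seidel based on the H-matrix A

TAutoSolver   solver;
TSolverInfo   solve_info;

solver.solve( A, x, b, & GS, & solve_info );
```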
When these preconditioners are used together with TLinearIteration, the resulting iteration method corresponds to the Jacobi, Gauss-Seidel or SOR iteration, respectively. Without any preconditioner, TLinearIteration is identical to the Richardson iteration.