Add COCG method for complex symmetric linear systems #289

Open · wants to merge 16 commits into master

Changes from 1 commit:
Use Conjugate Gradient (no plural) as full name of CG
wsshin committed Jan 5, 2022
commit f3710c35c8436dcc5838aaf74405268a5509ea23
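For context (this commit is only a naming cleanup; the COCG changes land in the PR's other commits): COCG targets complex symmetric matrices, i.e. $A^T = A$ but generally $A \ne A^H$, by running the CG recurrence with every Hermitian inner product $x^H y$ replaced by the unconjugated bilinear form $x^T y$. A minimal sketch of that idea follows; the function name `cocg_sketch` and its keyword defaults are illustrative, not this PR's implementation or API.

```julia
using LinearAlgebra

# Bare-bones COCG for complex symmetric A (Aᵀ = A). Illustrative sketch only:
# CG with every Hermitian inner product x'y replaced by transpose(x)*y.
function cocg_sketch(A, b; x = zero(b), tol = sqrt(eps(real(eltype(b)))), maxiter = length(b))
    r = b - A * x
    p = copy(r)
    ρ = transpose(r) * r              # unconjugated: Σᵢ rᵢ², may be complex
    for _ in 1:maxiter
        q = A * p
        α = ρ / (transpose(p) * q)    # breakdown (zero denominator) not handled here
        x .+= α .* p
        r .-= α .* q
        norm(r) ≤ tol * norm(b) && break
        ρ_new = transpose(r) * r
        β = ρ_new / ρ
        p .= r .+ β .* p
        ρ = ρ_new
    end
    return x
end
```

As a sanity check, `A = (B + transpose(B)) / 2` for a random complex `B` gives a complex symmetric test matrix, and `cocg_sketch(A, b)` should approach `A \ b`.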
docs/make.jl (2 changes: 1 addition & 1 deletion)
@@ -16,7 +16,7 @@ makedocs(
"Getting started" => "getting_started.md",
"Preconditioning" => "preconditioning.md",
"Linear systems" => [
"Conjugate Gradients" => "linear_systems/cg.md",
"Conjugate Gradient" => "linear_systems/cg.md",
"Chebyshev iteration" => "linear_systems/chebyshev.md",
"MINRES" => "linear_systems/minres.md",
"BiCGStab(l)" => "linear_systems/bicgstabl.md",
docs/src/index.md (4 changes: 2 additions & 2 deletions)
@@ -12,13 +12,13 @@ When solving linear systems $Ax = b$ for a square matrix $A$ there are quite som

| Method | When to use it |
|---------------------|--------------------------------------------------------------------------|
-| [Conjugate Gradients](@ref CG) | Best choice for **symmetric**, **positive-definite** matrices |
+| [Conjugate Gradient](@ref CG) | Best choice for **symmetric**, **positive-definite** matrices |
| [MINRES](@ref MINRES) | For **symmetric**, **indefinite** matrices |
| [GMRES](@ref GMRES) | For **nonsymmetric** matrices when a good [preconditioner](@ref Preconditioning) is available |
| [IDR(s)](@ref IDRs) | For **nonsymmetric**, **strongly indefinite** problems without a good preconditioner |
| [BiCGStab(l)](@ref BiCGStabl) | Otherwise for **nonsymmetric** problems |

-We also offer [Chebyshev iteration](@ref Chebyshev) as an alternative to Conjugate Gradients when bounds on the spectrum are known.
+We also offer [Chebyshev iteration](@ref Chebyshev) as an alternative to Conjugate Gradient when bounds on the spectrum are known.

Stationary methods like [Jacobi](@ref), [Gauss-Seidel](@ref), [SOR](@ref) and [SSOR](@ref) can be used as smoothers to reduce high-frequency components in the error in just a few iterations.
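Read as code, the table above amounts to dispatching on matrix structure. A sketch using the package's exported `cg`/`minres`/`gmres` entry points; the test matrices are made up for illustration:

```julia
using LinearAlgebra, SparseArrays, IterativeSolvers

n = 100
B = sprandn(n, n, 0.05)
A_spd = B' * B + n * I              # symmetric positive definite   -> cg
A_ind = (B + transpose(B)) / 2      # symmetric, eigenvalues of both signs -> minres
A_gen = B + n * I                   # nonsymmetric                  -> gmres
b = randn(n)

x1 = cg(A_spd, b)
x2 = minres(A_ind, b)
x3 = gmres(A_gen, b)
```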

docs/src/linear_systems/cg.md (4 changes: 2 additions & 2 deletions)
@@ -1,6 +1,6 @@
-# [Conjugate Gradients (CG)](@id CG)
+# [Conjugate Gradient (CG)](@id CG)

-Conjugate Gradients solves $Ax = b$ approximately for $x$ where $A$ is a symmetric, positive-definite linear operator and $b$ the right-hand side vector. The method uses short recurrences and therefore has fixed memory costs and fixed computational costs per iteration.
+Conjugate Gradient solves $Ax = b$ approximately for $x$ where $A$ is a symmetric, positive-definite linear operator and $b$ the right-hand side vector. The method uses short recurrences and therefore has fixed memory costs and fixed computational costs per iteration.

## Usage
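The diff truncates the Usage section here; in practice the call is one line. A hedged sketch using the exported `cg`/`cg!` (tolerances are set via keywords whose names have varied across package versions, so they are omitted):

```julia
using LinearAlgebra, IterativeSolvers

A = rand(50, 50); A = A' * A + 50I   # make A symmetric positive definite
b = rand(50)

x = cg(A, b)                          # allocating solve
x0 = zeros(50)
cg!(x0, A, b)                         # in place, starting from initial guess x0
x, hist = cg(A, b; log = true)        # also return the convergence history
```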

docs/src/linear_systems/chebyshev.md (2 changes: 1 addition & 1 deletion)
@@ -2,7 +2,7 @@

Chebyshev iteration solves the problem $Ax=b$ approximately for $x$ where $A$ is a symmetric, definite linear operator and $b$ the right-hand side vector. The method assumes the interval $[\lambda_{min}, \lambda_{max}]$ containing all eigenvalues of $A$ is known, so that $x$ can be iteratively constructed via a Chebyshev polynomial with zeros in this interval. This polynomial ultimately acts as a filter that removes components in the direction of the eigenvectors from the initial residual.

-The main advantage with respect to Conjugate Gradients is that BLAS1 operations such as inner products are avoided.
+The main advantage with respect to Conjugate Gradient is that BLAS1 operations such as inner products are avoided.

## Usage
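Unlike CG, Chebyshev iteration takes the spectral bounds as arguments. A sketch assuming the exported `chebyshev(A, b, λmin, λmax)` signature; computing exact eigenvalues as below is for illustration only, since real code would estimate the bounds cheaply:

```julia
using LinearAlgebra, IterativeSolvers

A = rand(50, 50); A = A' * A + 50I   # SPD, so the spectrum is real and positive
b = rand(50)

λ = eigvals(Symmetric(A))            # exact bounds, illustration only
x = chebyshev(A, b, minimum(λ), maximum(λ))
```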

test/cg.jl (2 changes: 1 addition & 1 deletion)
@@ -17,7 +17,7 @@ end

ldiv!(y, P::JacobiPrec, x) = y .= x ./ P.diagonal

-@testset "Conjugate Gradients" begin
+@testset "Conjugate Gradient" begin

Random.seed!(1234321)

test/runtests.jl (2 changes: 1 addition & 1 deletion)
@@ -9,7 +9,7 @@ include("hessenberg.jl")
#Stationary solvers
include("stationary.jl")

-#Conjugate gradients
+#Conjugate gradient
include("cg.jl")

#BiCGStab(l)