Merge branch 'master' into cocg
wsshin committed Jan 4, 2022
2 parents 4800129 + b46b587 commit 9a7fb26
Showing 15 changed files with 78 additions and 40 deletions.
2 changes: 1 addition & 1 deletion Project.toml
@@ -1,6 +1,6 @@
name = "IterativeSolvers"
uuid = "42fd0dbc-a981-5370-80f2-aaf504508153"
version = "0.9.0"
version = "0.9.2"

[deps]
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
2 changes: 1 addition & 1 deletion README.md
@@ -9,7 +9,7 @@

## Resources

- [Manual](https://julialinearalgebra.github.io/IterativeSolvers.jl/dev/)
- [Manual](https://IterativeSolvers.julialinearalgebra.org/dev/)
- [Contributing](https://julialinearalgebra.github.io/IterativeSolvers.jl/dev/about/CONTRIBUTING/)

## Installing
16 changes: 8 additions & 8 deletions benchmark/setup-florida.jl
@@ -2,17 +2,19 @@

#Benchmark iterative methods against matrices from the University of Florida
#sparse collection
using Pkg

#Root URL to the matrix collection
UFL_URL_ROOT = "http://www.cise.ufl.edu/research/sparse/matrices"
const UFL_URL_ROOT = "http://www.cise.ufl.edu/research/sparse/matrices"
#Plain text file containing list of matrices
MASTERLIST = "matrices.txt"
const MASTERLIST = "matrices.txt"
#Download UFL collection to this directory
BASEDIR = "florida"
const BASEDIR = "florida"

# 1. Read in master list of matrices from BASEDIR/MASTERLIST
# If absent, generate this file. Requires Gumbo.jl
Pkg.installed("Gumbo")==nothing || using Gumbo
Pkg.installed("Gumbo") === nothing || using Gumbo

isdir(BASEDIR) || mkdir(BASEDIR)
if !isfile(joinpath(BASEDIR, MASTERLIST))
Pkg.installed("Gumbo")===nothing && error("Parsing list from UFL website requires Gumbo.jl")
@@ -61,11 +63,9 @@ for (group, matrix) in matrices
isdir(groupdir) || mkdir(groupdir)

if !isfile(joinpath(groupdir, matrix*".mat"))
info("Downloading $group/$matrix.mat")
@info("Downloading $group/$matrix.mat")
download(joinpath(UFL_URL_ROOT, "..", "mat", group, matrix*".mat"),
joinpath(BASEDIR, group, matrix*".mat"))
end
end
info("Downloaded $(length(matrices)) matrices")


@info("Downloaded $(length(matrices)) matrices")
4 changes: 2 additions & 2 deletions docs/src/linear_systems/gmres.md
@@ -13,9 +13,9 @@ gmres!

The implementation pre-allocates a matrix $V$ of size `n` by `restart` whose columns form an orthonormal basis for the Krylov subspace. This allows BLAS2 operations when updating the solution vector $x$. The Hessenberg matrix is also pre-allocated.

Modified Gram-Schmidt is used to orthogonalize the columns of $V$.
By default, modified Gram-Schmidt is used to orthogonalize the columns of $V$, since it is numerically more stable than classical Gram-Schmidt. Modified Gram-Schmidt is, however, inherently sequential; if stability is not a concern, classical Gram-Schmidt can be used instead, which is implemented using BLAS2 operations. As a compromise, the "DGKS criterion" can be used, which conditionally applies classical Gram-Schmidt repeatedly to stabilize it and is typically one to two times slower than classical Gram-Schmidt.

The computation of the residual norm is implemented in a non-standard way, namely keeping track of a vector $\gamma$ in the null-space of $H_k^*$, which is the adjoint of the $(k + 1) \times k$ Hessenberg matrix $H_k$ at the $k$th iteration. Only when $x$ needs to be updated is the Hessenberg matrix mutated with Givens rotations.

!!! tip
GMRES can be used as an [iterator](@ref Iterators). This makes it possible to access the Hessenberg matrix and Krylov basis vectors during the iterations.
GMRES can be used as an [iterator](@ref Iterators). This makes it possible to access the Hessenberg matrix and Krylov basis vectors during the iterations.
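
The `orth_meth` keyword added to `src/gmres.jl` further down in this commit selects between these strategies. A minimal usage sketch, with the method structs qualified explicitly in case they are not exported from the module (the system below is illustrative, not from the repository):

```julia
using LinearAlgebra, SparseArrays
using IterativeSolvers: gmres, ModifiedGramSchmidt, ClassicalGramSchmidt, DGKS

# Illustrative sparse system with a strong diagonal.
A = sprand(100, 100, 0.05) + 10I
b = rand(100)

x_mgs  = gmres(A, b; orth_meth = ModifiedGramSchmidt())   # default, most stable
x_cgs  = gmres(A, b; orth_meth = ClassicalGramSchmidt())  # BLAS2, least stable
x_dgks = gmres(A, b; orth_meth = DGKS())                  # conditional re-orthogonalization
```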
4 changes: 2 additions & 2 deletions src/bicgstabl.jl
@@ -159,8 +159,8 @@ For BiCGStab(l) this is a less dubious term than "number of iterations";
- `abstol::Real = zero(real(eltype(b)))`,
`reltol::Real = sqrt(eps(real(eltype(b))))`: absolute and relative
tolerance for the stopping condition
`|r_k| / |r_0| ≤ max(reltol * resnorm, abstol)`, where `r_k = A * x_k - b`
is the residual in the `k`th iteration;
`|r_k| ≤ max(reltol * |r_0|, abstol)`, where `r_k ≈ A * x_k - b`
is the approximate residual in the `k`th iteration;
!!! note
1. The true residual norm is never computed during the iterations,
only an approximation;
7 changes: 5 additions & 2 deletions src/cg.jl
@@ -183,8 +183,11 @@ cg(A, b; kwargs...) = cg!(zerox(A, b), A, b; initially_zero = true, kwargs...)
- `abstol::Real = zero(real(eltype(b)))`,
`reltol::Real = sqrt(eps(real(eltype(b))))`: absolute and relative
tolerance for the stopping condition
`|r_k| / |r_0| ≤ max(reltol * resnorm, abstol)`, where `r_k = A * x_k - b`
is the residual in the `k`th iteration;
`|r_k| ≤ max(reltol * |r_0|, abstol)`, where `r_k ≈ A * x_k - b`
is approximately the residual in the `k`th iteration.
!!! note
The true residual norm is never explicitly computed during the iterations
for performance reasons; it may accumulate rounding errors.
- `maxiter::Int = size(A,2)`: maximum number of iterations;
- `verbose::Bool = false`: print method information;
- `log::Bool = false`: keep track of the residual norm in each iteration.
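
The same stopping-condition rewording appears throughout this commit's docstring changes (BiCGStab(l), CG, Chebyshev, GMRES, IDR(s), MINRES, QMR). Read literally, it amounts to the following check; this is a sketch for illustration, not the package's internal code:

```julia
# |r_k| ≤ max(reltol * |r_0|, abstol), with resnorm the (possibly approximate)
# residual norm tracked by the solver and resnorm0 the initial residual norm.
function has_converged(resnorm::Real, resnorm0::Real;
                       reltol::Real = sqrt(eps(Float64)),
                       abstol::Real = 0.0)
    return resnorm <= max(reltol * resnorm0, abstol)
end

has_converged(1e-9, 1.0)                   # true: 1e-9 ≤ √eps ≈ 1.5e-8
has_converged(1e-9, 1.0; reltol = 1e-12)   # false: falls back to abstol = 0
```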
2 changes: 1 addition & 1 deletion src/chebyshev.jl
@@ -120,7 +120,7 @@ Solve Ax = b for symmetric, definite matrices A using Chebyshev iteration.
- `abstol::Real = zero(real(eltype(b)))`,
`reltol::Real = sqrt(eps(real(eltype(b))))`: absolute and relative
tolerance for the stopping condition
`|r_k| / |r_0| ≤ max(reltol * resnorm, abstol)`, where `r_k = A * x_k - b`
`|r_k| ≤ max(reltol * |r_0|, abstol)`, where `r_k = A * x_k - b`
is the residual in the `k`th iteration;
- `maxiter::Int = size(A, 2)`: maximum number of iterations;
- `Pl = Identity()`: left preconditioner;
22 changes: 15 additions & 7 deletions src/gmres.jl
@@ -28,7 +28,7 @@ Residual(order, T::Type) = Residual{T, real(T)}(
one(real(T))
)

mutable struct GMRESIterable{preclT, precrT, solT, rhsT, vecT, arnoldiT <: ArnoldiDecomp, residualT <: Residual, resT <: Real}
mutable struct GMRESIterable{preclT, precrT, solT, rhsT, vecT, arnoldiT <: ArnoldiDecomp, residualT <: Residual, resT <: Real, orthmethT}
Pl::preclT
Pr::precrT
x::solT
@@ -44,6 +44,8 @@ mutable struct GMRESIterable{preclT, precrT, solT, rhsT, vecT, arnoldiT <: Arnol
maxiter::Int
tol::resT
β::resT

orth_meth::orthmethT
end

converged(g::GMRESIterable) = g.residual.current ≤ g.tol
@@ -66,7 +68,8 @@ function iterate(g::GMRESIterable, iteration::Int=start(g))
g.arnoldi.H[g.k + 1, g.k] = orthogonalize_and_normalize!(
view(g.arnoldi.V, :, 1 : g.k),
view(g.arnoldi.V, :, g.k + 1),
view(g.arnoldi.H, 1 : g.k, g.k)
view(g.arnoldi.H, 1 : g.k, g.k),
g.orth_meth
)

# Implicitly computes the residual
@@ -109,7 +112,8 @@ function gmres_iterable!(x, A, b;
reltol::Real = sqrt(eps(real(eltype(b)))),
restart::Int = min(20, size(A, 2)),
maxiter::Int = size(A, 2),
initially_zero::Bool = false)
initially_zero::Bool = false,
orth_meth::OrthogonalizationMethod = ModifiedGramSchmidt())
T = eltype(x)

# Approximate solution
@@ -126,7 +130,8 @@ function gmres_iterable!(x, A, b;

GMRESIterable(Pl, Pr, x, b, Ax,
arnoldi, residual,
mv_products, restart, 1, maxiter, tolerance, residual.current
mv_products, restart, 1, maxiter, tolerance, residual.current,
orth_meth
)
end

@@ -156,13 +161,14 @@ Solves the problem ``Ax = b`` with restarted GMRES.
- `abstol::Real = zero(real(eltype(b)))`,
`reltol::Real = sqrt(eps(real(eltype(b))))`: absolute and relative
tolerance for the stopping condition
`|r_k| / |r_0| ≤ max(reltol * resnorm, abstol)`, where `r_k = A * x_k - b`
`|r_k| ≤ max(reltol * |r_0|, abstol)`, where `r_k = A * x_k - b`
- `restart::Int = min(20, size(A, 2))`: restarts GMRES after specified number of iterations;
- `maxiter::Int = size(A, 2)`: maximum number of inner iterations of GMRES;
- `Pl`: left preconditioner;
- `Pr`: right preconditioner;
- `log::Bool`: keep track of the residual norm in each iteration;
- `verbose::Bool`: print convergence information during the iterations.
- `orth_meth::OrthogonalizationMethod = ModifiedGramSchmidt()`: orthogonalization method (`ModifiedGramSchmidt()`, `ClassicalGramSchmidt()`, or `DGKS()`);
# Return values
@@ -184,15 +190,17 @@ function gmres!(x, A, b;
maxiter::Int = size(A, 2),
log::Bool = false,
initially_zero::Bool = false,
verbose::Bool = false)
verbose::Bool = false,
orth_meth::OrthogonalizationMethod = ModifiedGramSchmidt())
history = ConvergenceHistory(partial = !log, restart = restart)
history[:abstol] = abstol
history[:reltol] = reltol
log && reserve!(history, :resnorm, maxiter)

iterable = gmres_iterable!(x, A, b; Pl = Pl, Pr = Pr,
abstol = abstol, reltol = reltol, maxiter = maxiter,
restart = restart, initially_zero = initially_zero)
restart = restart, initially_zero = initially_zero,
orth_meth = orth_meth)

verbose && @printf("=== gmres ===\n%4s\t%4s\t%7s\n","rest","iter","resnorm")

17 changes: 13 additions & 4 deletions src/idrs.jl
@@ -24,10 +24,11 @@ shadow space.
## Keywords
- `s::Integer = 8`: dimension of the shadow space;
- `Pl::precT`: left preconditioner;
- `abstol::Real = zero(real(eltype(b)))`,
`reltol::Real = sqrt(eps(real(eltype(b))))`: absolute and relative
tolerance for the stopping condition
`|r_k| / |r_0| ≤ max(reltol * resnorm, abstol)`, where `r_k = A * x_k - b`
`|r_k| ≤ max(reltol * |r_0|, abstol)`, where `r_k = A * x_k - b`
is the residual in the `k`th iteration;
- `maxiter::Int = size(A, 2)`: maximum number of iterations;
- `log::Bool`: keep track of the residual norm in each iteration;
@@ -46,6 +47,7 @@ shadow space.
"""
function idrs!(x, A, b;
s = 8,
Pl = Identity(),
abstol::Real = zero(real(eltype(b))),
reltol::Real = sqrt(eps(real(eltype(b)))),
maxiter=size(A, 2),
@@ -55,7 +57,7 @@ function idrs!(x, A, b;
history[:abstol] = abstol
history[:reltol] = reltol
log && reserve!(history, :resnorm, maxiter)
idrs_method!(history, x, A, b, s, abstol, reltol, maxiter; kwargs...)
idrs_method!(history, x, A, b, s, Pl, abstol, reltol, maxiter; kwargs...)
log && shrink!(history)
log ? (x, history) : x
end
@@ -78,8 +80,8 @@ end
end

function idrs_method!(log::ConvergenceHistory, X, A, C::T,
s::Number, abstol::Real, reltol::Real, maxiter::Number; smoothing::Bool=false, verbose::Bool=false
) where {T}
s::Number, Pl::precT, abstol::Real, reltol::Real, maxiter::Number; smoothing::Bool=false, verbose::Bool=false
) where {T, precT}

verbose && @printf("=== idrs ===\n%4s\t%7s\n","iter","resnorm")
R = C - A*X
@@ -132,6 +134,9 @@ function idrs_method!(log::ConvergenceHistory, X, A, C::T,
# Compute new U[:,k] and G[:,k], G[:,k] is in space G_j
V .= R .- V

# Preconditioning
ldiv!(Pl, V)

U[k] .= Q .+ om .* V
mul!(G[k], A, U[k])

@@ -181,6 +186,10 @@ function idrs_method!(log::ConvergenceHistory, X, A, C::T,
# Now we have sufficient vectors in G_j to compute residual in G_j+1
# Note: r is already perpendicular to P so v = r
copyto!(V, R)

# Preconditioning
ldiv!(Pl, V)

mul!(Q, A, V)
om = omega(Q, R)
R .-= om .* Q
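
With the left preconditioner now threaded through `idrs!`, a hedged usage sketch (mirroring the new test further down, which builds an inexact preconditioner from a sparsified factorization):

```julia
using LinearAlgebra, SparseArrays
using IterativeSolvers: idrs

A = sprand(1000, 1000, 0.1) + 30I
b = rand(1000)

# Any object supporting `ldiv!` can act as Pl; here an LU factorization of a
# sparsified copy of A serves as an inexact left preconditioner.
Pl = lu(droptol!(copy(A), 0.1))

x = idrs(A, b; Pl = Pl)
```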
4 changes: 1 addition & 3 deletions src/minres.jl
@@ -178,13 +178,11 @@ Solve Ax = b for (skew-)Hermitian matrices A using MINRES.
- `abstol::Real = zero(real(eltype(b)))`,
`reltol::Real = sqrt(eps(real(eltype(b))))`: absolute and relative
tolerance for the stopping condition
`|r_k| / |r_0| ≤ max(reltol * resnorm, abstol)`, where `r_k = A * x_k - b`
`|r_k| ≤ max(reltol * |r_0|, abstol)`, where `r_k = A * x_k - b`
is the residual in the `k`th iteration
!!! note
The residual is computed only approximately.
- `maxiter::Int = size(A, 2)`: maximum number of iterations;
- `Pl`: left preconditioner;
- `Pr`: right preconditioner;
- `log::Bool = false`: keep track of the residual norm in each iteration;
- `verbose::Bool = false`: print convergence information during the iterations.
Expand Down
11 changes: 6 additions & 5 deletions src/orthogonalize.jl
@@ -7,9 +7,10 @@ struct ClassicalGramSchmidt <: OrthogonalizationMethod end
struct ModifiedGramSchmidt <: OrthogonalizationMethod end

# Default to MGS, good enough for solving linear systems.
@inline orthogonalize_and_normalize!(V::StridedMatrix{T}, w::StridedVector{T}, h::StridedVector{T}) where {T} = orthogonalize_and_normalize!(V, w, h, ModifiedGramSchmidt)
@inline orthogonalize_and_normalize!(V::StridedMatrix{T}, w::StridedVector{T}, h::StridedVector{T}) where {T} =
orthogonalize_and_normalize!(V, w, h, ModifiedGramSchmidt())

function orthogonalize_and_normalize!(V::StridedMatrix{T}, w::StridedVector{T}, h::StridedVector{T}, ::Type{DGKS}) where {T}
function orthogonalize_and_normalize!(V::StridedMatrix{T}, w::StridedVector{T}, h::StridedVector{T}, ::DGKS) where {T}
# Orthogonalize using BLAS-2 ops
mul!(h, adjoint(V), w)
mul!(w, V, h, -one(T), one(T))
@@ -37,7 +38,7 @@ function orthogonalize_and_normalize!(V::StridedMatrix{T}, w::StridedVector{T},
nrm
end

function orthogonalize_and_normalize!(V::StridedMatrix{T}, w::StridedVector{T}, h::StridedVector{T}, ::Type{ClassicalGramSchmidt}) where {T}
function orthogonalize_and_normalize!(V::StridedMatrix{T}, w::StridedVector{T}, h::StridedVector{T}, ::ClassicalGramSchmidt) where {T}
# Orthogonalize using BLAS-2 ops
mul!(h, adjoint(V), w)
mul!(w, V, h, -one(T), one(T))
@@ -49,7 +50,7 @@ function orthogonalize_and_normalize!(V::StridedMatrix{T}, w::StridedVector{T},
nrm
end

function orthogonalize_and_normalize!(V::StridedVector{Vector{T}}, w::StridedVector{T}, h::StridedVector{T}, ::Type{ModifiedGramSchmidt}) where {T}
function orthogonalize_and_normalize!(V::StridedVector{Vector{T}}, w::StridedVector{T}, h::StridedVector{T}, ::ModifiedGramSchmidt) where {T}
# Orthogonalize using BLAS-1 ops
for i = 1 : length(V)
h[i] = dot(V[i], w)
@@ -63,7 +64,7 @@ function orthogonalize_and_normalize!(V::StridedVector{Vector{T}}, w::StridedVec
nrm
end

function orthogonalize_and_normalize!(V::StridedMatrix{T}, w::StridedVector{T}, h::StridedVector{T}, ::Type{ModifiedGramSchmidt}) where {T}
function orthogonalize_and_normalize!(V::StridedMatrix{T}, w::StridedVector{T}, h::StridedVector{T}, ::ModifiedGramSchmidt) where {T}
# Orthogonalize using BLAS-1 ops and column views.
for i = 1 : size(V, 2)
column = view(V, :, i)
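
The signatures above change from dispatching on the method type (e.g. `::Type{ModifiedGramSchmidt}`) to dispatching on an instance (`::ModifiedGramSchmidt`). A short sketch of a call against the new API; `orthogonalize_and_normalize!` is internal, so it is qualified explicitly here:

```julia
using LinearAlgebra
using IterativeSolvers: orthogonalize_and_normalize!, ModifiedGramSchmidt

n, m = 10, 3
V = Matrix(qr(rand(n, m)).Q)   # n × m orthonormal basis to project against
w = rand(n)                    # vector to orthogonalize in place
h = zeros(m)                   # receives the projection coefficients

# Pass an instance, ModifiedGramSchmidt(), rather than the type.
nrm = orthogonalize_and_normalize!(V, w, h, ModifiedGramSchmidt())
# w now holds the normalized remainder; nrm is its norm before normalization.
```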
2 changes: 1 addition & 1 deletion src/qmr.jl
@@ -240,7 +240,7 @@ Solves the problem ``Ax = b`` with the Quasi-Minimal Residual (QMR) method.
- `abstol::Real = zero(real(eltype(b)))`,
`reltol::Real = sqrt(eps(real(eltype(b))))`: absolute and relative
tolerance for the stopping condition
`|r_k| / |r_0| ≤ max(reltol * resnorm, abstol)`, where `r_k = A * x_k - b`
`|r_k| ≤ max(reltol * |r_0|, abstol)`, where `r_k = A * x_k - b`
- `log::Bool`: keep track of the residual norm in each iteration;
- `verbose::Bool`: print convergence information during the iteration.
2 changes: 1 addition & 1 deletion src/svdl.jl
@@ -201,7 +201,7 @@ function svdl_method!(log::ConvergenceHistory, A, l::Int=min(6, size(A,1)); k::I
extend!(log, A, L, k)
if verbose
elapsedtime = round((time_ns()-T0)*1e-9, digits=3)
info("Iteration $iter: $elapsedtime seconds")
@info("Iteration $iter: $elapsedtime seconds")
end

conv = isconverged(L, F, l, tol, reltol, log, verbose)
19 changes: 19 additions & 0 deletions test/idrs.jl
@@ -42,6 +42,25 @@ end
@test norm(A * x - b) / norm(b) ≤ reltol
end

@testset "SparseMatrixCSC{$T, $Ti} with preconditioner" for T in (Float64, ComplexF64), Ti in (Int64, Int32)
A = sprand(T, 1000, 1000, 0.1) + 30 * I
b = rand(T, 1000)
reltol = √eps(real(T))

x, history = idrs(A, b, log=true)
@test history.isconverged
@test norm(A * x - b) / norm(b) ≤ reltol

Apre = lu(droptol!(copy(A), 0.1)) # inexact preconditioner
xpre, historypre = idrs(A, b, Pl = Apre, log=true)
@test historypre.isconverged
@test norm(A * xpre - b) / norm(b) ≤ reltol

@test isapprox(x, xpre, rtol = 1e-3)
@test historypre.iters < 0.5history.iters

end

@testset "Maximum number of iterations" begin
x, history = idrs(rand(5, 5), rand(5), log=true, maxiter=2)
@test history.iters == 2
4 changes: 2 additions & 2 deletions test/orthogonalize.jl
@@ -34,7 +34,7 @@ m = 3
end

# Assuming V is a matrix
@testset "Using $method" for method = (DGKS, ClassicalGramSchmidt, ModifiedGramSchmidt)
@testset "Using $method" for method = (DGKS(), ClassicalGramSchmidt(), ModifiedGramSchmidt())

# Projection size
h = zeros(T, m)
@@ -55,7 +55,7 @@

# Orthogonalize w in-place
w = copy(w_original)
nrm = orthogonalize_and_normalize!(V_vec, w, h, ModifiedGramSchmidt)
nrm = orthogonalize_and_normalize!(V_vec, w, h, ModifiedGramSchmidt())

is_orthonormalized(w, h, nrm)
end
