
Commit 2b99da3

Update FAQ.md
1 parent 3e3c32d commit 2b99da3

1 file changed: docs/src/basics/FAQ.md (+31 −0 lines changed)
@@ -2,6 +2,37 @@
Ask more questions.

## How does LinearSolve.jl compare to just using normal `\`, i.e. `A\b`?

Check out [this video from JuliaCon 2022](https://www.youtube.com/watch?v=JWI34_w-yYw), which goes
into detail on how and why LinearSolve.jl is able to be a more general and efficient interface.

Note that if `\` is good enough for you, great! We still tend to use `\` in the REPL all of the time!
However, if you're building a package, you may want to consider using LinearSolve.jl for the improved
efficiency and the ability to choose solvers.
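
If you're curious what that looks like in practice, here is a minimal sketch of the two styles side by side
(the small random system is just a stand-in):

```julia
using LinearAlgebra, LinearSolve

A = rand(4, 4)
b = rand(4)

# Base Julia: factorize and solve in one shot
x_backslash = A \ b

# LinearSolve.jl: build a problem, then solve it; the default algorithm
# is chosen heuristically from the matrix type and size.
prob = LinearProblem(A, b)
sol  = solve(prob)
x_linsolve = sol.u

x_backslash ≈ x_linsolve  # true up to floating-point roundoff
```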

## Python's NumPy/SciPy just calls fast Fortran/C code, so why would LinearSolve.jl be any better?

This is addressed in the [JuliaCon 2022 video](https://youtu.be/JWI34_w-yYw?t=182). The difference comes
down to a few things:

1. The Fortran/C code that NumPy/SciPy uses is actually slow. It's [OpenBLAS](https://github.com/xianyi/OpenBLAS),
   a library developed in part by the Julia Lab back in 2012 as a fast open source BLAS implementation. Many
   open source environments now use this build, including many R distributions. However, the Julia Lab has since
   greatly improved its ability to generate optimized, platform-specific SIMD code. This, along with improved
   multithreading support (OpenBLAS's multithreading is rather slow), has led to the pure-Julia BLAS
   implementations the lab now works on. This includes
   [RecursiveFactorization.jl](https://github.com/JuliaLinearAlgebra/RecursiveFactorization.jl), which generally
   outperforms OpenBLAS by 2x-10x depending on the platform, and even outperforms MKL for small matrices (< 100).
   LinearSolve.jl uses RecursiveFactorization.jl by default in some cases, and switches to BLAS when that would be
   faster (chosen in a platform- and matrix-specific way).
2. Standard approaches to handling linear solves re-allocate the pivoting vector on every solve. This leads to GC
   pauses that can slow down calculations. LinearSolve.jl has proper caches for fully preallocated, no-GC workflows
   (see the sketch after this list).
3. LinearSolve.jl makes a lot of other optimizations, like factorization reuse and symbolic factorization reuse,
   automatic. Many of these optimizations are not even possible from the high-level APIs of things like Python's
   major libraries and MATLAB.
4. LinearSolve.jl has a much more extensive set of sparse matrix solvers, which is why you see a major difference
   (2x-10x) for sparse matrices. Which sparse solver (KLU, UMFPACK, Pardiso, etc.) is optimal depends a lot on
   matrix size, sparsity pattern, and threading overhead. LinearSolve.jl's heuristics handle these kinds of issues.
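
To make the caching and algorithm-choice points above concrete, here is a minimal sketch of the workflow for a
dense problem. The `init`/`solve!` caching interface and algorithm types like `RFLUFactorization` and
`LUFactorization` follow LinearSolve.jl's documented API, but the exact default heuristics (and whether
RecursiveFactorization.jl needs to be loaded explicitly) can vary across versions:

```julia
using LinearAlgebra, LinearSolve

A  = rand(100, 100)
b1 = rand(100)
b2 = rand(100)

prob = LinearProblem(A, b1)

# Explicitly choose an LU: RecursiveFactorization.jl's pure-Julia LU, or the BLAS/LAPACK one.
# Omitting the algorithm lets LinearSolve.jl's heuristics pick per platform and matrix size.
sol_rf   = solve(prob, RFLUFactorization())  # may need `using RecursiveFactorization` on newer versions
sol_blas = solve(prob, LUFactorization())

# Caching interface: buffers (including the pivot vector) are allocated once and reused,
# so repeated solves avoid re-allocation and can reuse the factorization itself.
cache = init(prob, LUFactorization())
sol1  = solve!(cache)    # factorizes A and solves A * x = b1
cache.b = b2             # swap in a new right-hand side
sol2  = solve!(cache)    # reuses the existing factorization of A
```

Changing only `cache.b` lets the second `solve!` skip refactorization; assigning a new matrix to `cache.A`
would mark the cache so that the next `solve!` refactorizes.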

## How do I use IterativeSolvers solvers with a weighted tolerance vector?

IterativeSolvers.jl computes the norm after the application of the left preconditioner
