
Commit bc74f3e

New interpretation/Visualization API (#176)
* Switch to InlineTest for new test cases
* Add abstract types and interface
* Add text backend
* Document `showblocks`
* Change `plotimage` defaults
* Add `ShowMakie` backend
* Add missing method for `showblocks!(::ShowText, ...)`
* Add detection for default `ShowBackend`
* Add `showblockinterpretable`
* Fix `setup` docstring being overwritten
* Fix column detection for Tabular data
* Fix `ShowText` `io` argument
* Clean up `Label*` blocks
* Add learning method show helpers
* Fix Makie nested block alignment and keypoints
* Add `WrapperBlock` default visualization
* Add error message to `showinterpretable`
* Add `Bounded` wrapper block
* Add Makie backend tests
* Fill in training interface for `Wrapped`
* Remove old plotting code
* Add `showoutputs` for learner
* Move wrapper `showblock!` implementation
* Make Makie dep optional
* Add text-based visualization for LR finder
* Fix method registry search
* Update docs
* Update Changelog
1 parent a907ea0 commit bc74f3e


44 files changed: +1941 −1089 lines

CHANGELOG.md (+16)

@@ -5,6 +5,22 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## Unreleased
+
+### Added
+
+- A new API for visualizing data. See [this issue](https://github.com/FluxML/FastAI.jl/issues/154) for motivation. This includes:
+
+  - High-level functions for visualizing data related to a learning method: `showsample`, `showsamples`, `showencodedsample`, `showencodedsamples`, `showbatch`, `showprediction`, `showpredictions`, `showoutput`, `showoutputs`, `showoutputbatch`
+  - Support for multiple backends, including a new text-based show backend that you can use to visualize data in a non-graphical environment. This is also the default unless `Makie` is imported.
+  - Functions for showing blocks directly: `showblock`, `showblocks`
+  - Interfaces for extension: `ShowBackend`, `showblock!`, `showblocks!`
+
+### Removed
+
+- The old visualization API incl. all its `plot*` methods: `plotbatch`, `plotsample`, `plotsamples`, `plotpredictions`
+
 ## 0.2.0
 
 ### Added
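As a quick illustration of the high-level functions named in the changelog entry above, here is a minimal sketch of how they might be called. This example is not taken verbatim from the repository: the use of `getobs` for fetching a sample and the index range passed to `makebatch` are assumptions for illustration; the text-based backend is picked automatically when Makie is not loaded.

```julia
using FastAI

# Load a dataset and build a learning method from its blocks
data, blocks = loaddataset("imagenette2-160", (Image, Label))
method = ImageClassificationSingle(blocks)

# Show a single raw sample (image plus label) with the default backend
sample = getobs(data, 1)
showsample(method, sample)

# Show a whole encoded batch; `makebatch` assembles the given observations
xs, ys = FastAI.makebatch(method, data, 1:4)
showbatch(method, (xs, ys))
```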

Project.toml (+5 −3)

@@ -19,22 +19,25 @@ FixedPointNumbers = "53c48c17-4a7d-5ca2-90c5-79b7896eea93"
 Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
 FluxTraining = "7bf95e4d-ca32-48da-9824-f0dc5310474f"
 Glob = "c27321d9-0574-5035-807b-f59d2c89b15c"
+ImageInTerminal = "d8c32880-2388-543b-8c61-d9f865259254"
 IndirectArrays = "9b13fd28-a010-5f03-acff-a1bbcff69959"
+InlineTest = "bd334432-b1e7-49c7-a2dc-dd9149e4ebd6"
 JLD2 = "033835bb-8acc-5ee8-8aae-3f567f8a3819"
 LearnBase = "7f8f8fb0-2700-5f03-b4bd-41f8cfc144b6"
 MLDataPattern = "9920b226-0b2a-5f5f-9153-9aa70a013f8b"
-Makie = "ee78f7c6-11fb-53f2-987a-cfe4a2b5a57a"
 Markdown = "d6f4376e-aef5-505a-96c1-9c027394607a"
 MosaicViews = "e94cdb99-869f-56ef-bcf0-1ae2bcbe0389"
 Parameters = "d96e819e-fc66-5662-9728-84c9c7592b0a"
 PrettyTables = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d"
 Reexport = "189a3867-3050-52da-a836-e630ba90ab69"
+Requires = "ae029012-a4dd-5104-9daa-d747884805df"
 Setfield = "efcf1570-3423-57d1-acb7-fd33fddbac46"
 ShowCases = "605ecd9f-84a6-4c9e-81e2-4798472b76a3"
 StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
 Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
 Tables = "bd369af6-aec1-5ad0-b16a-f7cc5008161c"
 Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
+UnicodePlots = "b8865327-cd53-5732-bb35-84acbb429228"
 Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"
 
 [compat]
@@ -57,10 +60,9 @@ IndirectArrays = "0.5, 1"
 JLD2 = "0.4"
 LearnBase = "0.3, 0.4"
 MLDataPattern = "0.5"
-Makie = "0.15"
 MosaicViews = "0.2, 0.3"
 Parameters = "0.12"
-PrettyTables = "1"
+PrettyTables = "1.2"
 Reexport = "1.0"
 Setfield = "0.7, 0.8"
 ShowCases = "0.1"

README.md (+1 −1)

@@ -22,7 +22,7 @@ data, blocks = loaddataset("imagenette2-160", (Image, Label))
 method = ImageClassificationSingle(blocks)
 learner = methodlearner(method, data, callbacks=[ToGPU()])
 fitonecycle!(learner, 10)
-plotpredictions(method, learner)
+showoutputs(method, learner)
 ```
 
 Please read [the documentation](https://fluxml.github.io/FastAI.jl/dev) for more information and see the [setup instructions](docs/setup.md).

docs/howto/augmentvision.md (+11 −16)

@@ -4,27 +4,22 @@ Data augmentation is important to train models with good generalization ability,
 
 By default, the only augmentation that will be used in computer vision tasks is a random crop, meaning that after images, keypoints and masks are resized to a similar size a random portion will be cropped during training. We can demonstrate this on the image classification task.
 
-{cell=main result=false output=false}
+{cell=main output=false}
 ```julia
 using FastAI
 import CairoMakie; CairoMakie.activate!(type="png")
 
-path = datasetpath("imagenette2-160")
-data = Datasets.loadfolderdata(
-    path,
-    filterfn=isimagefile,
-    loadfn=(loadfile, parentname))
-classes = unique(eachobs(data[2]))
+data, blocks = loaddataset("imagenette2-160", (Image, Label))
 method = BlockMethod(
-    (Image{2}(), Label(classes)),
+    blocks,
     (
         ProjectiveTransforms((128, 128)),
         ImagePreprocessing(),
         OneHot()
     )
 )
-xs, ys = FastAI.makebatch(method, data, fill(4, 9))
-FastAI.plotbatch(method, xs, ys)
+xs, ys = FastAI.makebatch(method, data, fill(4, 3))
+showbatch(method, (xs, ys))
 ```

@@ -33,15 +28,15 @@ Most learning methods let you pass additional augmentations as keyword arguments
 {cell=main}
 ```julia
 method2 = BlockMethod(
-    (Image{2}(), Label(classes)),
+    blocks,
     (
         ProjectiveTransforms((128, 128), augmentations=augs_projection()),
         ImagePreprocessing(),
         OneHot()
     )
 )
-xs2, ys2 = FastAI.makebatch(method2, data, fill(4, 9))
-f = FastAI.plotbatch(method2, xs2, ys2)
+xs2, ys2 = FastAI.makebatch(method2, data, fill(4, 3))
+showbatch(method2, (xs2, ys2))
 ```

@@ -50,13 +45,13 @@ Likewise, there is an [`augs_lighting`](#) helper that adds contrast and brightness
 {cell=main}
 ```julia
 method3 = BlockMethod(
-    (Image{2}(), Label(classes)),
+    blocks,
     (
         ProjectiveTransforms((128, 128), augmentations=augs_projection()),
         ImagePreprocessing(augmentations=augs_lighting()),
         OneHot()
     )
 )
-xs3, ys3 = FastAI.makebatch(method3, data, fill(4, 9))
-FastAI.plotbatch(method3, xs3, ys3)
+xs3, ys3 = FastAI.makebatch(method3, data, fill(4, 3))
+showbatch(method3, (xs3, ys3))
 ```

docs/introduction.md (+2 −2)

@@ -15,7 +15,7 @@ data, blocks = loaddataset("imagenette2-160", (Image, Label))
 method = ImageClassificationSingle(blocks)
 learner = methodlearner(method, data, callbacks=[ToGPU()])
 fitonecycle!(learner, 10)
-plotpredictions(method, learner)
+showoutputs(method, learner)
 ```
 
 Each of the five lines encapsulates one part of the deep learning pipeline to give a high-level API while still allowing customization. Let's have a closer look.
@@ -111,7 +111,7 @@ Training now is quite simple. You have several options for high-level training s
 ## Visualization
 
 ```julia
-plotpredictions(method, learner)
+showoutputs(method, learner)
 ```
 
 Finally, the last line visualizes the predictions of the trained model. It takes some samples from the training data loader, runs them through the model and decodes the outputs. How each piece of data is visualized is also inferred through the blocks in the learning method.
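The paragraph above describes what `showoutputs` does in words: take samples from the training data loader, run them through the model, and decode the outputs. A rough sketch of those steps, not the commit's actual implementation; `learner.data.training` and the exact `showoutputbatch` signature are assumptions for illustration:

```julia
# Roughly what `showoutputs(method, learner)` does internally (a sketch):
xs, ys = first(learner.data.training)   # grab one batch from the training data loader
ŷs = learner.model(xs)                  # run it through the model
showoutputbatch(method, (xs, ys), ŷs)   # decode and show inputs, targets and model outputs
```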

docs/make.jl (+1 −1)

@@ -1,4 +1,4 @@
-using CairoMakie
+import CairoMakie
 using Pollen
 using FastAI
 using FluxTraining

docs/serve.jl (+1 −1)

@@ -1,4 +1,4 @@
-using CairoMakie
+import CairoMakie
 using Pollen
 using FastAI
 using FluxTraining

docs/setup.md (+6 −7)

@@ -7,20 +7,19 @@ using Pkg
 Pkg.add("FastAI")
 ```
 
-**Plotting** FastAI.jl also defines [Makie.jl](https://github.com/JuliaPlots/Makie.jl) plotting recipes to visualize data. If you want to use them, you'll have to install one of the Makie.jl backends [CairoMakie.jl](https://github.com/JuliaPlots/CairoMakie.jl), [GLMakie.jl](https://github.com/JuliaPlots/GLMakie.jl) or [WGLMakie.jl](https://github.com/JuliaPlots/WGLMakie.jl). For example:
+**Visualization** Aside from text-based visualizations, FastAI.jl also defines [Makie.jl](https://github.com/JuliaPlots/Makie.jl) plotting recipes to visualize data. If you want to use them, you'll have to install one of the Makie.jl backends [CairoMakie.jl](https://github.com/JuliaPlots/CairoMakie.jl), [GLMakie.jl](https://github.com/JuliaPlots/GLMakie.jl) or [WGLMakie.jl](https://github.com/JuliaPlots/WGLMakie.jl) and load the package.
 
 ```julia
+# Install the backend package once
 using Pkg
 Pkg.add("CairoMakie")
+
+# Then load it thereafter
+import CairoMakie
+using FastAI
 ```
 
 **Colab** If you don't have access to a GPU or want to try out FastAI.jl without installing Julia, try out [this FastAI.jl Colab notebook](https://colab.research.google.com/gist/lorenzoh/2fdc91f9e42a15e633861c640c68e5e8). We're working on adding a "Launch Colab" button to every documentation page based off a notebook file, but for now you can copy the code over manually.
 
-**Pretrained models** To use pretrained vision models, you currently have to install a WIP branch of Metalhead.jl:
-
-```julia
-using Pkg
-Pkg.add(Pkg.PackageSpec(url="https://github.com/darsnack/Metalhead.jl", rev="darsnack/vision-refactor")
-```
 
 **Threaded data loading** To make use of multi-threaded data loading, you need to start Julia with multiple threads, either with the `-t auto` commandline flag or by setting the environment variable `JULIA_NUM_THREADS`. See the [IJulia.jl documentation](https://julialang.github.io/IJulia.jl/dev/manual/installation/#Installing-additional-Julia-kernels) for instructions on setting these for Jupyter notebook kernels.
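The threaded data loading note above can be made concrete with a short shell sketch; the `-t auto` flag and `JULIA_NUM_THREADS` variable come straight from that paragraph, while the thread count of 4 is an arbitrary example:

```shell
# Option 1: let Julia pick a thread count matching the CPU cores
julia -t auto

# Option 2: set the thread count explicitly before starting Julia
export JULIA_NUM_THREADS=4
julia -e 'println(Threads.nthreads())'
```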

notebooks/imagesegmentation.ipynb (+166 −213)

Large diff not rendered by default.

notebooks/keypointregression.ipynb (+84 −172)

Large diff not rendered by default.

0 commit comments