Filters in MathMap 1.3 and above
================================

MathMap 1.3 features a new addition to the language: The ability of
filters to call other filters, and to apply filters to an image to
create a new "virtual" image which can in turn be passed to other
filters. This text is an introduction to this new feature.

Calling filters for pixels
--------------------------

Let's define a filter which flips an image horizontally:

  filter flip_x (image in)
    in(xy:[-x,y])
  end

We might now want to define a filter which blends an image with its
mirror image. Previously, we would have to do something like this:

  filter blend_flip_x (image in, float alpha: 0 - 1 (0.5))
    pixel = in(xy);
    flipped_pixel = in(xy:[-x,y]);
    pixel * alpha + flipped_pixel * (1 - alpha)
  end

In other words, it was not possible to abstract the flipping
operation. It had to be inlined manually instead. With MathMap 1.3
we can express the same filter using flip_x as a building block like
this:

  filter blend_flip_x (image in, float alpha: 0 - 1 (0.5))
    pixel = in(xy);
    flipped_pixel = flip_x(in, xy);
    pixel * alpha + flipped_pixel * (1 - alpha)
  end

As you can see, we can call the filter "flip_x" by giving it the
arguments it expects (in this case a single input image) plus the
coordinates of the pixel we are asking it to compute.
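
As another small sketch of this calling convention (the filter names
and the offset of 100 pixels are made up for illustration), here is a
script whose last filter mixes each pixel with the pixel 100 pixels to
its right, obtained by calling a helper filter at the current
coordinates:

  filter shift (image in)
    in(xy:[x+100,y])
  end

  filter echo (image in)
    in(xy) * 0.7 + shift(in, xy) * 0.3
  end

Again, "shift" receives the arguments it expects (the image) plus the
coordinates at which we want its result.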

Applying filters to images
--------------------------

We can even go one step further and also abstract the blending
operation, by first defining a blending filter:

  filter blend (image i1, image i2, float alpha: 0 - 1)
    p1 = i1(xy);
    p2 = i2(xy);
    p1 * alpha + p2 * (1 - alpha)
  end

Now we can apply the "flip_x" filter to the whole input image to
create a flipped image which we can then pass to "blend":

  filter blend_flip_x (image in, float alpha: 0 - 1 (0.5))
    blend(in, flip_x(in), alpha, xy)
  end

Note that we do not pass the coordinates to "flip_x" this time,
because we don't just want a color - we want a whole image instead.
We do, on the other hand, have to pass xy to "blend", because our
filter is expected to return a color.
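
To see how this abstraction pays off, here is a sketch of a vertical
variant (the filters "flip_y" and "blend_flip_y" are made up for
illustration, and "blend" is the filter defined above, declared in the
same script):

  filter flip_y (image in)
    in(xy:[x,-y])
  end

  filter blend_flip_y (image in, float alpha: 0 - 1 (0.5))
    blend(in, flip_y(in), alpha, xy)
  end

The blending logic is written only once and reused for both variants.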

When you declare multiple filters in a MathMap script, the last one is
the one that is actually run. A filter can only call filters declared
above it, or itself, i.e. filters can be recursive.
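
For example, in the following script (the filter names are made up for
illustration), "rotate_180" is the filter that actually runs, and it
may call "flip_x" because "flip_x" is declared above it; the result is
the input image rotated by 180 degrees:

  filter flip_x (image in)
    in(xy:[-x,y])
  end

  filter rotate_180 (image in)
    flip_x(in, xy:[x,-y])
  end
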
That's everything as far as syntax and basic functionality goes. What
follows is a discussion of the finer points.

Applying filters doesn't really generate images
-----------------------------------------------

I've said above that applying "flip_x" to an image creates a new
image containing a flipped copy of the original. That does not mean a
new bitmap is generated; the new image is instead "virtual". A
virtual image is not represented by a bitmap but by a piece of code
(the filter) and some data (the arguments to the filter) that specify
exactly how to compute the color value for any point of the image.

Such a virtual image has two properties which bitmap images do not
have: it extends infinitely in both dimensions, i.e. it is infinitely
large, and it provides arbitrarily fine levels of detail.

As a simple illustration of these two properties, let's play with
these two filters:

  filter zoom_in (image in)
    in(xy / 2)
  end

  filter zoom_out (image in)
    in(xy * 2)
  end

The filter "zoom_in" enlarges the image by a factor of 2, while
"zoom_out" scales the image down. If you were to apply first
"zoom_in" to an image, and then "zoom_out" to the result, you would
not get the image you started with, but instead only the middle part
of it with black borders around, because applying "zoom_in" discards
those parts. If you applied the filters the other way around you
would get the whole image, but its quality would be reduced, because
scaling down an image inevitably destroys information.
Let's try to combine the filters in MathMap now:

  filter ident (image in)
    zoom_out(zoom_in(in), xy)
  end

This corresponds to our first case: zooming in, then zooming out.
Because the virtual image "zoom_in(in)" is not limited in size, the
borders do not get lost and "zoom_out" recovers everything.

This is the second case:

  filter ident (image in)
    zoom_in(zoom_out(in), xy)
  end

We do not lose information here, because virtual images have unlimited
resolution.

Nit-picking note: while virtual images do have very high resolution
and extend very far, their resolution and size are actually limited
by floating point precision. In practice this should never be an
issue, though.

Colors
------

Another nice property of virtual images is that they also have a much
larger range of colors than ordinary images. Color values are not
clamped for virtual images, so they can extend beyond the range of 0
to 1.
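
As a small sketch of why this matters (the filter names and the
factor of 4 are chosen just for illustration), consider brightening
an image far beyond the displayable range and then darkening it
again:

  filter brighten (image in)
    in(xy) * 4
  end

  filter darken (image in)
    in(xy) * 0.25
  end

  filter ident (image in)
    darken(brighten(in), xy)
  end

Because "brighten(in)" is a virtual image, its color values are not
clipped to 1, so "darken" recovers the original image. With an
ordinary bitmap in between, every value above a quarter of full
brightness would come back flattened to the same level.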

Performance
-----------

The MathMap compiler implements filters and virtual images in such a
way that their overhead should in most cases be zero, i.e. there
should be no difference in performance between using a filter and
doing the inlining manually. That's because MathMap does in fact
always inline calls to filters and virtual images, with the exception
of recursive filters.
--
Karl Stevens <[email protected]>