<!DOCTYPE html>
<html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="">
<title>Visual Physics Tutorial @ CVPR 2020</title>
<!-- Bootstrap Core CSS -->
<link href="./Files/bootstrap.min.css" rel="stylesheet">
<!-- Custom CSS -->
<!--<link href="css/round-about.css" rel="stylesheet">-->
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
<![endif]-->
</head>
<body>
<!-- Navigation -->
<nav class="navbar navbar-inverse" role="navigation">
<div class="container-fluid">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="#">Visual Physics @ CVPR 2020</a>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav">
<li>
<a href="#schedule">Schedule</a>
</li>
<li>
<a href="#outline">Materials</a>
</li>
<li>
<a href="#organizers">Organizers</a>
</li>
</ul>
</div>
<!-- /.navbar-collapse -->
</div>
<!-- /.container -->
</nav>
<!-- Page Content -->
<div class="container">
<div class="row">
<div class="col-lg-12">
<h1>Visual Physics <small>@ CVPR 2020</small></h1>
<img src="./Files/pano_seattle.jpg" class="img-responsive img-rounded center-block" alt="Panorama of Seattle">
<p class="text-right small"><a href="https://www.flickr.com/photos/thgeorge/">Credit: Still Vision</a></p>
<p><b>Abstract:</b> The early days of computer vision relied on “physics-inspired algorithms” to detect
contours, edges, faces, and other features. Over the past decade, these structured algorithms
have been superseded by deep learning methods with superior performance. This tutorial
covers an increasingly popular class of hybrid methods that blend physics and learning. We
present three technical modules of visual physics, covering the blending of physics and learning,
the discovery of physical laws from imagery and video, and the design of imaging systems. Our
fourth and final module is a didactic hands-on session on implementing physics-based learning.
</p>
<p><b>Takeaways:</b> Attendees will obtain an overview of the field of physics-based learning. Topics
include blending (of physical priors into blackbox neural networks), discovery (of physical laws
from images and video), and design (of imaging systems using physics-guided machine
learning). Instructors will present recent papers in physics-based learning that achieve
state-of-the-art performance and/or open up new pathways. Finally, attendees will receive a
primer on designing a physics-aware neural network.
</p>
</div>
</div>
<a name="schedule"></a>
<div class="row">
<div class="col-lg-12">
<h2 class="page-header">Tutorial Schedule</h2>
<dl class="dl-horizontal">
<dt>00:00-00:25</dt>
<dd><a href="">Introduction: Past and Present of Physics in Vision [Slides]</a> <small> [Bill Freeman, MIT] </small></dd>
<dt>00:25-01:20</dt>
<dd><a href="">Module I: Blending Physics and Learning [Slides]</a> <small> [Katerina Fragkiadaki, CMU and
Laura Waller, UC Berkeley] </small></dd>
<dt>01:20-01:30</dt>
<dd><em>Q&amp;A and Break</em></dd>
<dt>01:30-02:00</dt>
<dd><a href="">Module II: Discovering Physics from Video [Slides]</a> <small> [Achuta Kadambi, UCLA] </small></dd>
<dt>02:00-03:00</dt>
<dd><a href="">Module III: Designing Imaging Systems using Physics-based Machine Learning [Slides]</a> <small> [Ayan Chakrabarti, WUSTL and Laura Waller, UC Berkeley] </small></dd>
<dt>03:00-03:10</dt>
<dd><em>Q&amp;A and Break</em></dd>
<dt>03:10-03:35</dt>
<dd><a href="">Module IV: Hands-on Session: Implementing a Physics-aware Neural Network [Slides]</a> <small> [Achuta Kadambi, UCLA] </small></dd>
<dt>03:35-04:00</dt>
<dd>Panel Discussion of Open Problems<small> [All Organizers]</small></dd>
</dl></div>
</div>
<!-- Course Outline -->
<a name="outline"></a>
<div class="row">
<div class="col-lg-12">
<h2 class="page-header">Materials</h2>
<div class="alert alert-warning" role="alert">
<span class="sr-only">Warning:</span>
Dear tutorial attendees, <br><br>
A preliminary schedule is now available. Slides will be posted shortly before the tutorial.
<br><br>
Thank you, and we look forward to seeing you at CVPR 2020!
<br><br>
--- the organizers
</div>
</div>
<!-- Team Members Row -->
<a name="organizers"></a>
<div class="row">
<div class="col-lg-12">
<h2 class="page-header">Organizers</h2>
</div>
<div class="col-lg-6 col-sm-6 text-left">
<img class="img-circle img-responsive center-block" src="./Files/kadambi_headshot.jpg" width="200" alt="Achuta Kadambi">
<h3><a href="https://www.ee.ucla.edu/achuta-kadambi/">Achuta Kadambi</a><br>
<small>Assistant Professor, UCLA</small>
</h3>
<p><b>Achuta Kadambi</b> received his PhD from MIT and joined UCLA as an Assistant Professor of Electrical and Computer Engineering. His team studies how artificial intelligence (AI) can discover physical laws and use these insights to design new camera systems. At the intersection of AI, computer vision, and optics, Achuta’s research has wide applications to autonomous systems and digital health. His research has been recognized with the best paper award at ICCP 2018, inclusion in the best papers of ICCV 2015 special issue, and the NSF Research Initiation Award. Relevant to the proposed tutorial, he has published several papers in physics-based vision and learning.</p>
</div>
<div class="col-lg-6 col-sm-6 text-left">
<img class="img-circle img-responsive center-block" src="./Files/image6.png" width="200" height="200" alt="Bill Freeman">
<h3><a href="https://billf.mit.edu/">Bill Freeman</a><br>
<small>Thomas and Gerd Perkins Professor, MIT</small>
</h3>
<p><b>Bill Freeman</b> is the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science at MIT. His current research interests include machine learning applied to computer vision, Bayesian models of visual perception, and computational photography. He received outstanding paper awards at computer vision and machine learning conferences in 1997, 2006, 2009, and 2012, and test-of-time awards for papers from 1990, 1995, and 2005. He is active in the program and organizing committees of computer vision, graphics, and machine learning conferences. He was program co-chair for ICCV 2005 and for CVPR 2013.
</p>
</div>
</div>
<br>
<div class="row">
<div class="col-lg-6 col-sm-6 text-left">
<img class="img-circle img-responsive center-block" src="./Files/Katerina.png" width="200" alt="Katerina Fragkiadaki">
<h3><a href="https://www.cs.cmu.edu/~katef/">Katerina Fragkiadaki</a><br>
<small>Assistant Professor, CMU</small>
</h3>
<p><b>Katerina Fragkiadaki</b> is an Assistant Professor in the Machine Learning Department at Carnegie Mellon. Prior to joining MLD's faculty Katerina spent three years as a post-doctoral researcher, first at UC Berkeley working with Jitendra Malik, and then at Google Research in Mountain View working with the video group. Katerina completed the Ph.D. degree at GRASP, UPenn with Jianbo Shi and undergraduate studies at the National Technical University of Athens. Relevant to the proposed tutorial, she has published several vision papers that implicitly or explicitly incorporate physics. </p>
</div>
<div class="col-lg-6 col-sm-6 text-left">
<img class="img-circle img-responsive center-block" src="./Files/Laura.png" width="200" alt="Laura Waller">
<h3><a href="https://www2.eecs.berkeley.edu/Faculty/Homepages/waller.html">Laura Waller</a><br>
<small>Professor, UC Berkeley</small>
</h3>
<p><b>Laura Waller</b> leads the Computational Imaging Lab, which develops new methods for optical imaging, with optics and computational algorithms designed jointly. She holds the Ted Van Duzer Endowed Professorship and is a Senior Fellow at the Berkeley Institute of Data Science (BIDS), with affiliations in Bioengineering and Applied Sciences &amp; Technology. Laura received her BS, MEng, and PhD degrees from MIT in 2004, 2005, and 2010, respectively. She is a Moore Foundation Data-Driven Investigator, Bakar Fellow, Distinguished Graduate Student Mentoring Awardee, NSF CAREER Awardee, Chan-Zuckerberg Biohub Investigator, SPIE Early Career Achievement Awardee, and Packard Fellow. Relevant to the proposed tutorial, she has published papers on using physics-guided machine learning to enable smart microscopy.</p>
</div>
</div>
<br>
<div class="row">
<div class="col-lg-6 col-sm-6 text-left">
<img class="img-circle img-responsive center-block" src="./Files/Ayan.png" width="200" alt="Ayan Chakrabarti">
<h3><a href="https://projects.ayanc.org/">Ayan Chakrabarti</a><br>
<small>Assistant Professor, WUSTL</small>
</h3>
<p><b>Ayan Chakrabarti</b> is an Assistant Professor in Computer Science and Engineering at Washington University in St. Louis, where he directs the Vision and Learning Group. He received a PhD in Engineering Sciences from Harvard University and, prior to starting at WashU, was a Research Assistant Professor at the Toyota Technological Institute at Chicago. His research interests are in the fields of computer vision, computational photography, and machine learning. Relevant to the proposed tutorial, Ayan’s research focuses on learning and exploiting the structure of natural images and scenes to design efficient and accurate inference algorithms, as well as new kinds of imaging systems.</p>
</div>
</div>
<br>
<hr>
<!-- Footer -->
<footer>
<div class="row">
<div class="col-lg-12">
<p>Webpage design courtesy of J-F Lalonde and M Gupta</p>
</div>
<!-- /.col-lg-12 -->
</div>
<!-- /.row -->
</footer>
</div>
<!-- /.container -->
<!-- jQuery -->
<script src="./Files/jquery.js"></script>
<!-- Bootstrap Core JavaScript -->
<script src="./Files/bootstrap.min.js"></script>
</div></body></html>