Commit 9742828
CLEVR dataset generation code
Justin Johnson authored and rbgirshick committed Sep 20, 2017
Showing 42 changed files with 2,612 additions and 0 deletions.
5 changes: 5 additions & 0 deletions .gitignore
.DS_Store
__pycache__/
*.swp
*.pyc
output/
35 changes: 35 additions & 0 deletions CONTRIBUTING.md
# Contributing to clevr-dataset-gen
We want to make contributing to this project as easy and transparent as
possible.

## Pull Requests
We actively welcome your pull requests.

1. Fork the repo and create your branch from `master`.
2. If you've added code that should be tested, add tests.
3. If you've changed APIs, update the documentation.
4. Ensure the test suite passes.
5. Make sure your code lints.
6. If you haven't already, complete the Contributor License Agreement ("CLA").

## Contributor License Agreement ("CLA")
In order to accept your pull request, we need you to submit a CLA. You only need
to do this once to work on any of Facebook's open source projects.

Complete your CLA here: <https://code.facebook.com/cla>

## Issues
We use GitHub issues to track public bugs. Please ensure your description is
clear and has sufficient instructions to be able to reproduce the issue.

Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe
disclosure of security bugs. In those cases, please go through the process
outlined on that page and do not file a public issue.

## Coding Style
* 2 spaces for indentation rather than tabs
* 80 character line length

## License
By contributing to clevr-dataset-gen, you agree that your contributions will be licensed
under the LICENSE file in the root directory of this source tree.
30 changes: 30 additions & 0 deletions LICENSE
BSD License

For clevr-dataset-gen software

Copyright (c) 2017-present, Facebook, Inc. All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

* Neither the name Facebook nor the names of its contributors may be used to
endorse or promote products derived from this software without specific
prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
33 changes: 33 additions & 0 deletions PATENTS
Additional Grant of Patent Rights Version 2

"Software" means the clevr-dataset-gen software contributed by Facebook, Inc.

Facebook, Inc. ("Facebook") hereby grants to each recipient of the Software
("you") a perpetual, worldwide, royalty-free, non-exclusive, irrevocable
(subject to the termination provision below) license under any Necessary
Claims, to make, have made, use, sell, offer to sell, import, and otherwise
transfer the Software. For avoidance of doubt, no license is granted under
Facebook’s rights in any patent claims that are infringed by (i) modifications
to the Software made by you or any third party or (ii) the Software in
combination with any software or other technology.

The license granted hereunder will terminate, automatically and without notice,
if you (or any of your subsidiaries, corporate affiliates or agents) initiate
directly or indirectly, or take a direct financial interest in, any Patent
Assertion: (i) against Facebook or any of its subsidiaries or corporate
affiliates, (ii) against any party if such Patent Assertion arises in whole or
in part from any software, technology, product or service of Facebook or any of
its subsidiaries or corporate affiliates, or (iii) against any party relating
to the Software. Notwithstanding the foregoing, if Facebook or any of its
subsidiaries or corporate affiliates files a lawsuit alleging patent
infringement against you in the first instance, and you respond by filing a
patent infringement counterclaim in that lawsuit against that party that is
unrelated to the Software, the license granted hereunder will not terminate
under section (i) of this paragraph due to such counterclaim.

A "Necessary Claim" is a claim of a patent owned by Facebook that is
necessarily infringed by the Software standing alone.

A "Patent Assertion" is any lawsuit or other action alleging direct, indirect,
or contributory infringement or inducement to infringe any patent, including a
cross-claim or counterclaim.
118 changes: 118 additions & 0 deletions README.md
# CLEVR Dataset Generation

This is the code used to generate the [CLEVR dataset](http://cs.stanford.edu/people/jcjohns/clevr/) as described in the paper:

**[CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning](http://cs.stanford.edu/people/jcjohns/clevr/)**
<br>
<a href='http://cs.stanford.edu/people/jcjohns/'>Justin Johnson</a>,
<a href='http://home.bharathh.info/'>Bharath Hariharan</a>,
<a href='https://lvdmaaten.github.io/'>Laurens van der Maaten</a>,
<a href='http://vision.stanford.edu/feifeili/'>Fei-Fei Li</a>,
<a href='http://larryzitnick.org/'>Larry Zitnick</a>,
<a href='http://www.rossgirshick.info/'>Ross Girshick</a>
<br>
Presented at [CVPR 2017](http://cvpr2017.thecvf.com/)

Code and pretrained models for the baselines used in the paper [can be found here](https://github.com/facebookresearch/clevr-iep).

You can use this code to render synthetic images and compositional questions for those images, like this:

<div align="center">
<img src="images/example1080.png" width="800px">
</div>

**Q:** How many small spheres are there? <br>
**A:** 2

**Q:** What number of cubes are small things or red metal objects? <br>
**A:** 2

**Q:** Does the metal sphere have the same color as the metal cylinder? <br>
**A:** Yes

**Q:** Are there more small cylinders than metal things? <br>
**A:** No

**Q:** There is a cylinder that is on the right side of the large yellow object behind the blue ball; is there a shiny cube in front of it? <br>
**A:** Yes

If you find this code useful in your research then please cite

```
@inproceedings{johnson2017clevr,
  title={CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning},
  author={Johnson, Justin and Hariharan, Bharath and van der Maaten, Laurens
          and Fei-Fei, Li and Zitnick, C Lawrence and Girshick, Ross},
  booktitle={CVPR},
  year={2017}
}
```

All code was developed and tested on OSX and Ubuntu 16.04.

## Step 1: Generating Images
First we render synthetic images using [Blender](https://www.blender.org/), outputting both rendered images as well as a JSON file containing ground-truth scene information for each image.

Blender ships with its own installation of Python, which it uses to execute scripts that interact with Blender; you'll need to add the `image_generation` directory to the Python path of Blender's bundled Python. The easiest way to do this is by adding a `.pth` file to the `site-packages` directory of Blender's Python, like this:

```bash
echo $PWD/image_generation >> $BLENDER/$VERSION/python/lib/python3.5/site-packages/clevr.pth
```

where `$BLENDER` is the directory where Blender is installed and `$VERSION` is your Blender version; for example on OSX you might run:

```bash
echo $PWD/image_generation >> /Applications/blender/blender.app/Contents/Resources/2.78/python/lib/python3.5/site-packages/clevr.pth
```

You can then render some images like this:

```bash
cd image_generation
blender --background --python render_images.py -- --num_images 10
```

On OSX the `blender` binary is located inside the blender.app directory; for convenience you may want to
add the following alias to your `~/.bash_profile` file:

```bash
alias blender='/Applications/blender/blender.app/Contents/MacOS/blender'
```

If you have an NVIDIA GPU with CUDA installed then you can use the GPU to accelerate rendering like this:

```bash
blender --background --python render_images.py -- --num_images 10 --use_gpu 1
```

After this command terminates you should have ten freshly rendered images stored in `output/images` like these:

<div align="center">
<img src="images/img1.png" width="260px">
<img src="images/img2.png" width="260px">
<img src="images/img3.png" width="260px">
<br>
<img src="images/img4.png" width="260px">
<img src="images/img5.png" width="260px">
<img src="images/img6.png" width="260px">
</div>

The file `output/CLEVR_scenes.json` will contain ground-truth scene information for all newly rendered images.

You can find [more details about image rendering here](image_generation/README.md).

## Step 2: Generating Questions
Next we generate questions, functional programs, and answers for the rendered images generated in the previous step.
This step takes as input the single JSON file containing all ground-truth scene information from Step 1,
and outputs a single JSON file containing questions, functional programs, and answers.

You can generate questions like this:

```bash
cd question_generation
python generate_questions.py
```

The file `output/CLEVR_questions.json` will then contain questions for the generated images.
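Each entry in this file pairs a question with its answer and the functional program that computes the answer from the scene. A rough sketch of one entry's shape (field names modeled on the released CLEVR data; the program below is abbreviated and illustrative, not the exact output format):

```python
import json

# Approximate shape of one entry in CLEVR_questions.json; field names are
# modeled on the released CLEVR data and the program is abbreviated.
question = {
    "image_index": 0,
    "image_filename": "CLEVR_new_000000.png",
    "question": "How many small spheres are there?",
    "answer": "2",
    "program": [
        {"function": "scene", "inputs": [], "value_inputs": []},
        {"function": "filter_size", "inputs": [0], "value_inputs": ["small"]},
        {"function": "filter_shape", "inputs": [1], "value_inputs": ["sphere"]},
        {"function": "count", "inputs": [2], "value_inputs": []},
    ],
}
print(json.dumps(question, indent=2))
```

The `program` field chains filter functions from the raw scene down to a final answer-producing function, which is what makes each question machine-checkable against the ground-truth scene graph.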

You can [find more details about question generation here](question_generation/README.md).
1 change: 1 addition & 0 deletions image_generation/.gitignore
output/
89 changes: 89 additions & 0 deletions image_generation/README.md
# CLEVR Image Generation

Images are generated by using Blender to invoke the script `render_images.py` like this:

```
blender --background --python render_images.py -- [args]
```

Any arguments following the `--` will be captured by `render_images.py`.

This command should be run from the `image_generation` directory, since by default the script will load resources from the `data` directory.

When rendering on cluster machines without audio drivers installed you may need to add the `-noaudio` flag to the Blender invocation like this:

```
blender --background -noaudio --python render_images.py -- [args]
```

You can also run `render_images.py` as a standalone script to view help on all command line flags like this:

```
python render_images.py --help
```

## Setup
You will need to download and install [Blender](https://www.blender.org/); code has been developed and tested using Blender version 2.78c but other versions may work as well.

Blender ships with its own version of Python 3.5, and it uses its bundled Python to execute scripts. You'll need to add this directory to the Python path of Blender's bundled Python with a command like this:

```
echo $PWD >> $BLENDER/$VERSION/python/lib/python3.5/site-packages/clevr.pth
```

where `$BLENDER` is the directory where Blender is installed and `$VERSION` is your Blender version; for example on OSX you might run:

```
echo $PWD >> /Applications/blender/blender.app/Contents/Resources/2.78/python/lib/python3.5/site-packages/clevr.pth
```

## Rendering Overview
The file `data/base_scene.blend` contains a Blender scene used for the basis of all CLEVR images. This scene contains a ground plane, a camera, and several light sources. After loading the base scene, the positions of the camera and lights are randomly jittered (controlled with the `--key_light_jitter`, `--fill_light_jitter`, `--back_light_jitter`, and `--camera_jitter` flags).

After the base scene has been loaded, objects are placed one by one into the scene. The number of objects for each scene is a random integer between `--min_objects` (default 3) and `--max_objects` (default 10), and each object has a random shape, size, color, and material.

After placing all objects, we ensure that no objects are fully occluded; in particular each object must occupy at least 100 pixels in the rendered image (customizable using `--min_pixels_per_object`). To accomplish this, we assign each object a unique color and render a version of the scene with lighting and shading disabled, writing it to a temporary file; we can then count the number of pixels of each color in this pre-render to check the number of visible pixels for each object.
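The pixel-counting part of this check can be sketched in plain Python, assuming the flat-shaded pre-render is available as a list of RGB tuples with one unique color per object. `visible_object_counts` is a hypothetical helper for illustration, not a function from `render_images.py`:

```python
from collections import Counter

def visible_object_counts(pixels, min_pixels_per_object=100):
    """Given a flat-shaded pre-render as a list of (r, g, b) tuples,
    where each object was assigned a unique color, count pixels per
    color and keep only the colors that meet the visibility threshold.
    Any object whose color is missing from the result is too occluded."""
    counts = Counter(pixels)
    return {color: n for color, n in counts.items()
            if n >= min_pixels_per_object}
```

In the real pipeline the pre-render is written to a temporary image file by Blender and then read back; here the image is simplified to an in-memory pixel list.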

Each invocation of `render_images.py` will render `--num_images` images, and they will be numbered starting at `--start_idx` (default 0). Using non-default values for `--start_idx` allows you to distribute rendering across many workers and recombine their results later without filename conflicts.
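For example, a job of N images can be split across workers by giving each worker a disjoint index range. `worker_commands` below is an illustrative helper, not part of the repository:

```python
def worker_commands(total_images, num_workers):
    """Split a rendering job across workers by assigning each one a
    disjoint --start_idx / --num_images range; the last worker absorbs
    any remainder. The command string is illustrative."""
    per_worker = total_images // num_workers
    cmds = []
    for w in range(num_workers):
        start = w * per_worker
        count = per_worker if w < num_workers - 1 else total_images - start
        cmds.append(
            'blender --background --python render_images.py -- '
            '--num_images %d --start_idx %d' % (count, start))
    return cmds
```

Because filenames are derived from the image index, the per-worker outputs can simply be copied into one directory and combined with `collect_scenes.py`.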

### Object Placement
Each object is positioned randomly, but before actually adding the object to the scene we ensure that its center is at least `--min_dist` units away from the centers of all other objects. We also ensure that between each pair of objects, the left/right and front/back distance along the ground plane is at least `--margin` units; this helps to minimize ambiguous spatial relationships. If after `--max_retries` attempts we are unable to find a suitable position for an object, then all objects are deleted and placed again from scratch.
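A simplified 2D sketch of this rejection-sampling loop is below. It is a hypothetical helper: the parameter defaults, the scene extent, and the exact form of the margin test are illustrative readings of the description above, not the script's actual logic:

```python
import math
import random

def place_objects(num_objects, min_dist=0.25, margin=0.4, max_retries=50,
                  extent=3.0):
    """Place objects on a 2D ground plane by rejection sampling.
    Returns a list of (x, y) positions, or None if retries are
    exhausted, mirroring the "delete everything and start over"
    behavior described above. Defaults are illustrative."""
    positions = []
    for _ in range(num_objects):
        for _attempt in range(max_retries):
            x = random.uniform(-extent, extent)
            y = random.uniform(-extent, extent)
            ok = True
            for (px, py) in positions:
                # Centers must be at least min_dist apart.
                if math.hypot(x - px, y - py) < min_dist:
                    ok = False
                    break
                # Simplified margin test: reject when both the
                # left/right and front/back separations are small,
                # which would make spatial relationships ambiguous.
                if abs(x - px) < margin and abs(y - py) < margin:
                    ok = False
                    break
            if ok:
                positions.append((x, y))
                break
        else:
            return None  # retries exhausted: caller restarts from scratch
    return positions
```

Restarting from scratch rather than forcing a placement keeps the positions unbiased: a partially filled scene that has painted itself into a corner is simply discarded.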

### Image Resolution
By default images are rendered at `320x240`, but the resolution can be customized using the `--height` and `--width` flags.

### GPU Acceleration
Rendering uses CPU by default, but if you have an NVIDIA GPU with CUDA installed then you can use the GPU to accelerate rendering by adding the flag `--use_gpu 1`. Blender also supports acceleration using OpenCL which allows the use of non-NVIDIA GPUs; however this is not currently supported by `render_images.py`.

### Rendering Quality
You can control the quality of rendering with the `--render_num_samples` flag; using fewer samples will run more quickly but will result in grainy images. I've found that 64 samples is a good number to use for development; all released CLEVR images were rendered using 512 samples. The `--render_min_bounces` and `--render_max_bounces` flags control the number of bounces for transparent objects; I've found the default of 8 to work well for both.

When rendering, Blender breaks up the output image into tiles and renders tiles sequentially; the `--render_tile_size` flag controls the size of these tiles. This should not affect the output image, but may affect the speed at which it is rendered. For CPU rendering smaller tile sizes may be optimal, while for GPU rendering larger tiles may be faster.

With default settings, rendering a 320x240 image takes about 4 seconds on a Pascal Titan X. It's very likely that these rendering times could be drastically reduced by someone more familiar with Blender, but this rendering speed was acceptable for our purposes.

### Saving Blender Scene Files
You can save a Blender `.blend` file for each rendered image by adding the flag `--save_blendfiles 1`. These files can be more than 5 MB each, so they are not saved by default.

### Output Files
Rendered images are stored in the `--output_image_dir` directory, which is created if it does not exist. The filename of each rendered image is constructed from the `--filename_prefix`, the `--split`, and the image index.

A JSON file for each scene containing ground-truth object positions and attributes is saved in the `--output_scene_dir` directory, which is created if it does not exist. After all images are rendered the JSON files for each individual scene are combined into a single JSON file and written to `--output_scene_file`. This single file will also store the `--split`, `--version` (default 1.0), `--license` (default CC-BY 4.0), and `--date` (default today).
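The combined file therefore has roughly the shape below, mirroring the `output` dict assembled in `collect_scenes.py`; the per-scene entries are abbreviated here and carry more ground-truth fields in practice:

```python
import json

# Rough shape of the combined scene file; the info fields mirror the
# output dict built in collect_scenes.py, and each scene is abbreviated.
combined = {
    "info": {
        "date": "7/8/2017",
        "version": "1.0",
        "split": "new",
        "license": "Creative Commons Attribution (CC-BY 4.0)",
    },
    "scenes": [
        {"image_index": 0,
         "image_filename": "CLEVR_new_000000.png",
         "split": "new",
         "objects": []},  # per-object positions/attributes abbreviated
    ],
}
print(json.dumps(combined, indent=2))
```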

When rendering large numbers of images, I have sometimes experienced random Blender crashes; saving JSON files for each scene as they are rendered ensures that you do not lose information for scenes already rendered in the event of a crash.

If saving Blender scene files for each image (`--save_blendfiles 1`) then they are stored in the `--output_blend_dir` directory, which is created if it does not exist.

### Object Properties
The `--properties_json` file (default `data/properties.json`) defines the allowed shapes, sizes, colors, and materials used for objects, making it easy to extend CLEVR with new object properties.

Each shape (cube, sphere, cylinder) is stored in its own `.blend` file in the `--shape_dir` (default `data/shapes`); the file `X.blend` contains a single object named `X` centered at the origin with unit size. The `shapes` field of the JSON properties file maps human-readable shape names to `.blend` files in the `--shape_dir`.

The `colors` field of the JSON properties file maps human-readable color names to RGB values between 0 and 255; most of our colors are adapted from [Wad's Optimum 16 Color Palette](http://alumni.media.mit.edu/~wad/color/palette.html).

The `sizes` field of the JSON properties file maps human-readable size names to scaling factors used to scale the object models from the `--shape_dir`.

Each material is stored in its own `.blend` file in the `--material_dir` (default `data/materials`). The file `X.blend` should contain a single NodeTree item named `X`, and this NodeTree item must have a single `Color` input that accepts an RGBA value so that each material can be used with any color. The `materials` field of the JSON properties file maps human-readable material names to `.blend` files in the `--material_dir`.
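Putting the four fields together, a minimal `properties.json` might look like the dict below. All names and values are hypothetical examples for illustration, not the contents of the released file:

```python
import json

# Illustrative properties file with the four fields described above.
# Shape and material values name .blend files (without extension) in
# --shape_dir / --material_dir; colors are RGB in 0-255; sizes are
# scale factors. All names/values here are hypothetical examples.
example_properties = {
    "shapes":    {"cube": "Cube", "sphere": "Sphere"},
    "colors":    {"red": [173, 35, 35], "blue": [42, 75, 215]},
    "sizes":     {"large": 0.7, "small": 0.35},
    "materials": {"rubber": "Rubber", "metal": "Metal"},
}
print(json.dumps(example_properties, indent=2))
```

Adding a new property value is then just a matter of adding an entry here (plus, for shapes and materials, the corresponding `.blend` file).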

### Restricting Shape / Color Combinations
The optional `--shape_color_combos_json` flag can be used to restrict the colors of each shape. If provided, this should give a path to a JSON file mapping shape names to lists of allowed color names. This option can be used to render CLEVR-CoGenT images using the files `data/CoGenT_A.json` and `data/CoGenT_B.json`.
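The combos file is just a JSON object mapping shape names to lists of allowed color names. A small hypothetical sketch of how such a restriction could be applied when sampling a color (this helper is illustrative, not code from the repository):

```python
import random

def sample_color(shape, combos, all_colors):
    """Pick a color name for `shape`. If the shape appears in a
    CoGenT-style combos mapping, restrict the choice to its allowed
    colors; otherwise any color is allowed. Hypothetical helper."""
    allowed = combos.get(shape, all_colors)
    return random.choice(allowed)

# Illustrative combos mapping in the spirit of CoGenT splits.
combos = {"cube": ["gray", "blue"], "cylinder": ["red", "green"]}
```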
62 changes: 62 additions & 0 deletions image_generation/collect_scenes.py
# Copyright 2017-present, Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree. An additional grant
# of patent rights can be found in the PATENTS file in the same directory.

import argparse, json, os

"""
During rendering, each CLEVR scene file is dumped to disk as a separate JSON
file; this is convenient for distributing rendering across multiple machines.
This script collects all CLEVR scene files stored in a directory and combines
them into a single JSON file. This script also adds the version number, date,
and license to the output file.
"""

parser = argparse.ArgumentParser()
parser.add_argument('--input_dir', default='output/scenes')
parser.add_argument('--output_file', default='output/CLEVR_misc_scenes.json')
parser.add_argument('--version', default='1.0')
parser.add_argument('--date', default='7/8/2017')
parser.add_argument('--license',
                    default='Creative Commons Attribution (CC-BY 4.0)')


def main(args):
  scenes = []
  split = None
  for filename in os.listdir(args.input_dir):
    if not filename.endswith('.json'):
      continue
    path = os.path.join(args.input_dir, filename)
    with open(path, 'r') as f:
      scene = json.load(f)
    scenes.append(scene)
    if split is not None:
      msg = 'Input directory contains scenes from multiple splits'
      assert scene['split'] == split, msg
    else:
      split = scene['split']
  scenes.sort(key=lambda s: s['image_index'])
  for s in scenes:
    print(s['image_filename'])
  output = {
    'info': {
      'date': args.date,
      'version': args.version,
      'split': split,
      'license': args.license,
    },
    'scenes': scenes
  }
  with open(args.output_file, 'w') as f:
    json.dump(output, f)


if __name__ == '__main__':
  args = parser.parse_args()
  main(args)
