diff --git a/README.md b/README.md
index 388c810..9abbf87 100644
--- a/README.md
+++ b/README.md
@@ -1,234 +1,19 @@
--------------------------------------------------------------------------------
-CIS565: Project 4: Image Processing/Vertex Shading
--------------------------------------------------------------------------------
-Fall 2012
--------------------------------------------------------------------------------
-Due Friday 11/09/2012
--------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------
-NOTE:
--------------------------------------------------------------------------------
-This project requires any graphics card with support for a modern OpenGL pipeline. Any AMD, NVIDIA, or Intel card from the past few years should work fine, and every machine in the SIG Lab and Moore 100 is capable of running this project.
-
-The first part of this project requires Visual Studio 2010 or newer, and the second part of this project requires a WebGL capable browser, such as the latest versions of Chrome, Firefox, or Safari.
-
--------------------------------------------------------------------------------
-INTRODUCTION:
--------------------------------------------------------------------------------
-In this project, you will get introduced to the world of GLSL in two parts: fragment shading, and vertex shading. The first part of this project is the Image Processor, and the second part of this project is a Wave Vertex Shader.
-
-In the first part of this project, you will implement various image processing filters using GLSL fragment shaders by rendering a viewport-aligned fullscreen quad where each fragment corresponds to one texel in a texture that stores an image. As you can guess, these filters are embarassingly parallel, as each fragment can be processed in parallel. Although we apply filters to static images in this project, the same technique can be and is widely used in post-processing dynamic scenes.
-
-In the second part of this project, you will implement a GLSL vertex shader as part of a WebGL demo. You will create a dynamic wave animation using code that runs entirely on the GPU; before vertex shading, dynamic vertex buffers could only be implemented on the CPU, and each frame would then be uploaded to the GPU.
-
--------------------------------------------------------------------------------
-CONTENTS:
--------------------------------------------------------------------------------
-The Project4 root directory contains the following subdirectories:
-
-* part1/ contains the base code for the Image Processing half of the assignment.
-* part1/ImageProcessing contains a Visual Studio 2010 project for the Image Processor
-* part1/shared32 contains libraries that are required to build and run the Image Processor
-* part2/ contains the base code for the Wave Vertex Shader in the form of a .html file and several .js files
-
-The Image Processor builds and runs like any other standard Visual Studio project. The Wave Vertex Shader does not require building and can be run by opening the .html file in the web browser of your choice, such as Chrome.
-
--------------------------------------------------------------------------------
-PART 1 REQUIREMENTS:
--------------------------------------------------------------------------------
-In Part 1, you are given code for:
-
-* Reading and loading an image
-* Code for passing a quad to OpenGL
-* Passthrough fragment and vertex shaders that take in an input and output the exact original input
-* An example box blur filter fragment shader
-
-You are required to implement the following filters:
-
-* Image negative
-* Gaussian blur
-* Grayscale
-* Edge Detection
-* Toon shading
-
-You are also required to implement at least three of the following filters:
-
-* Pixelate (http://wtomandev.blogspot.com/2010/01/pixelize-effect.html)
-* CMYK conversion (http://en.wikipedia.org/wiki/CMYK_color_model)
-* Gamma correction (http://en.wikipedia.org/wiki/Gamma_correction)
-* Brightness (http://en.wikipedia.org/wiki/Brightness)
-* Contrast (http://en.wikipedia.org/wiki/Contrast_(vision))
-* Night vision (http://wtomandev.blogspot.com/2009/09/night-vision-effect.html)
-* Open-ended: make up your own filter!
-
-You will also have to bind each filter you write to a keyboard key, such that hitting that key will trigger the associated filter. In the base code, hitting "1" will turn off filtering and use the passthrough filter, and hitting "2" will switch to the example box blur filter.
-
-For this project, the console window will show what shaders are loaded, and will report shader compile/link warnings and errors. The base code does not have any build or shader warnings or errors; neither should your code submission!
-
-
-
-IMPORTANT: You MAY NOT copy/paste code from other sources for any of your filters! If you choose to make your own filter(s), please document what they are, how they work, and how your implementation works.
-
--------------------------------------------------------------------------------
-PART 1 WALKTHROUGH:
--------------------------------------------------------------------------------
-**Image Negative**
-
-Create a new fragment shader starting with `passthroughFS.glsl`, that performans an image negative when the user presses `3`. Compute the negative color as:
-
-`vec3(1.0) - rgb`
-
-
-
-**Gaussian Blur**
-
-Create a new fragment shader starting with `boxBlurFS.glsl`, that performs a 3x3 Gaussian blur when the user presses `4`. This is similar to the box blur, except we are using a smaller kernel (3x3 instead of 7x7), and instead of weighting each texel evenly, they are weighted using a 2D Gaussian function, making the blur more subtle:
-
-`1/16 * [[1 2 1][2 4 2][1 2 1]]`
-
-
-
-**Grayscale**
-
-Create a new fragment shader starting with `passthroughFS.glsl`, that displays the image in grayscale when the user presses `5`. To create a grayscale filter, determine the luminance
-
-`const vec3 W = vec3(0.2125, 0.7154, 0.0721);`
-
-`float luminance = dot(rgb, W);`
-
-Then set the output `r`, `g`, and `b` components to the lumiance.
-
-
-
-**Edge Detection**
-
-Build on both our Gaussian blur and Grayscale filters to create an edge detection filter when the user presses `6` using a horizontal and vertical sobel filter. Use two 3x3 kernels:
-
-`Sobel-horizontal = [[-1 -2 -1][0 0 0][1 2 1]]`
-
-`Sobel-vertical = [[-1 0 1][-2 0 2][-1 0 1]]`
-
-Run the kernels on the luminance of each texel, instead of the `rgb` components used in the Gaussian blur. The result is two floating-point values, one from each kernel. Create a `vec2` from these values, and set the output `rgb` components to the vector’s length.
-
-
-
-**Toon Shading**
-
-Toon shading is part of non-photorealistic rendering (NPR). Instead of trying to produce a photorealistic image, the goal is to create an image with a certain artistic style. In this case, we will build on our edge detection filter to create a cartoon filter when the user presses `7`.
-
-First, perform edge detection. If the length of the vector is above a threshold (you determine), then output black. Otherwise, quantize the texel’s color and output it. Use the following code to quantize:
-
-`float quantize = // determine it`
-
-`rgb *= quantize;`
-
-`rgb += vec3(0.5);`
-
-`ivec3 irgb = ivec3(rgb);`
-
-`rgb = vec3(irgb) / quantize;`
-
-
-
--------------------------------------------------------------------------------
-PART 2 REQUIREMENTS:
--------------------------------------------------------------------------------
-In Part 2, you are given code for:
-
-* Drawing a VBO through WebGL
-* Javascript code for interfacing with WebGL
-* Functions for generating simplex noise
-
-You are required to implement the following:
-
-* A sin-wave based vertex shader:
-
-
-
-* A simplex noise based vertex shader:
-
-
-
-* One interesting vertex shader of your choice
-
--------------------------------------------------------------------------------
-PART 2 WALKTHROUGH:
--------------------------------------------------------------------------------
-**Sin Wave**
-
-* For this assignment, you will need the latest version of either Chrome, Firefox, or Safari.
-* Begin by opening index.html. You should see a flat grid of black and white lines on the xy plane:
-
-
-
-* In this assignment, you will animate the grid in a wave-like pattern using a vertex shader, and determine each vertex’s color based on its height, as seen in the example in the requirements.
-* The vertex and fragment shader are located in script tags in `index.html`.
-* The JavaScript code that needs to be modified is located in `index.js`.
-* Required shader code modifications:
- * Add a float uniform named u_time.
- * Modify the vertex’s height using the following code:
-
- `float s_contrib = sin(position.x*2.0*3.14159 + u_time);`
-
- `float t_contrib = cos(position.y*2.0*3.14159 + u_time);`
-
- `float height = s_contrib*t_contrib;`
-
- * Use the GLSL mix function to blend together two colors of your choice based on the vertex’s height. The lowest possible height should be assigned one color (for example, `vec3(1.0, 0.2, 0.0)`) and the maximum height should be another (`vec3(0.0, 0.8, 1.0)`). Use a varying variable to pass the color to the fragment shader, where you will assign it `gl_FragColor`.
-
-* Required JavaScript code modifications:
- * A floating-point time value should be increased every animation step. Hint: the delta should be less than one.
- * To pass the time to the vertex shader as a uniform, first query the location of `u_time` using `context.getUniformLocation` in `initializeShader()`. Then, the uniform’s value can be set by calling `context.uniform1f` in `animate()`.
-
-**Simplex Wave**
-
-* Now that you have the sin wave working, create a new copy of `index.html`. Call it `index_simplex.html`, or something similar.
-* Open up `simplex.vert`, which contains a compact GLSL simplex noise implementation, in a text editor. Copy and paste the functions included inside into your `index_simplex.html`'s vertex shader.
-* Try changing s_contrib and t_contrib to use simplex noise instead of sin/cos functions with the following code:
-
- `vec2 simplexVec = vec2(u_time, position);`
-
- `float s_contrib = snoise(simplexVec);`
-
- `float t_contrib = snoise(vec2(s_contrib,u_time));`
-
-**Wave Of Your Choice**
-
-* Create another copy of `index.html`. Call it `index_custom.html`, or something similar.
-* Implement your own interesting vertex shader! In your README.md with your submission, describe your custom vertex shader, what it does, and how it works.
-
--------------------------------------------------------------------------------
-BLOG
--------------------------------------------------------------------------------
-As mentioned in class, all students should have student blogs detailing progress on projects. If you already have a blog, you can use it; otherwise, please create a blog using www.blogger.com or any other tool, such as www.wordpress.org. Blog posts on your project are due on the SAME DAY as the project, and should include:
-
-* A brief description of the project and the specific features you implemented.
-* A link to your github repo if the code is open source.
-* At least one screenshot of your project running.
-* A 30 second or longer video of your project running. To create the video, use http://www.microsoft.com/expression/products/Encoder4_Overview.aspx
-
--------------------------------------------------------------------------------
-THIRD PARTY CODE POLICY
--------------------------------------------------------------------------------
-* Use of any third-party code must be approved by asking on Piazza. If it is approved, all students are welcome to use it. Generally, we approve use of third-party code that is not a core part of the project. For example, for the ray tracer, we would approve using a third-party library for loading models, but would not approve copying and pasting a CUDA function for doing refraction.
-* Third-party code must be credited in README.md.
-* Using third-party code without its approval, including using another student's code, is an academic integrity violation, and will result in you receiving an F for the semester.
-
--------------------------------------------------------------------------------
-SELF-GRADING
--------------------------------------------------------------------------------
-* On the submission date, email your grade, on a scale of 0 to 100, to Karl, yiningli@seas.upenn.edu, with a one paragraph explanation. Be concise and realistic. Recall that we reserve 30 points as a sanity check to adjust your grade. Your actual grade will be (0.7 * your grade) + (0.3 * our grade). We hope to only use this in extreme cases when your grade does not realistically reflect your work - it is either too high or too low. In most cases, we plan to give you the exact grade you suggest.
-* Projects are not weighted evenly, e.g., Project 0 doesn't count as much as the path tracer. We will determine the weighting at the end of the semester based on the size of each project.
-
--------------------------------------------------------------------------------
-SUBMISSION
--------------------------------------------------------------------------------
-As with the previous project, you should fork this project and work inside of your fork. Upon completion, commit your finished project back to your fork, and make a pull request to the master repository.
-You should include a README.md file in the root directory detailing the following
-
-* A brief description of the project and specific features you implemented
-* At least one screenshot of your project running, and at least one screenshot of the final rendered output of your raytracer
-* Instructions for building and running your project if they differ from the base code
-* A link to your blog post detailing the project
-* A list of all third-party code used
\ No newline at end of file
+-------------------------------------------------------------------------------
+CIS565: Project 4: Image Processing/Vertex Shading
+-------------------------------------------------------------------------------
+Fall 2012
+-------------------------------------------------------------------------------
+Due Friday 11/09/2012
+-------------------------------------------------------------------------------
+
+Extra Features Implemented:
+* Pixelate (key '0')
+* Old faulty-TV appearance (key '9')
+* Brightness (key '8')
+
+In WebGL:
+* Ellipsoid wave
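+
+The custom wave shader's markup (in `index _custom.html`) does not survive in
+this diff, so as a hedged sketch only, an ellipsoid-style wave height term has
+roughly the following shape. All names and constants below are illustrative
+reconstructions, not the submitted code:
+
+```glsl
+// Sketch: elliptical radial ripple driven by u_time.
+attribute vec2 position;              // grid vertex in [0,1]^2
+uniform mat4 u_modelViewPerspective;
+uniform float u_time;
+varying vec4 fs_color;
+
+void main(void)
+{
+    // Scaling x and y differently turns a circular ripple into an ellipse.
+    vec2 centered = position - vec2(0.5);
+    float r = length(vec2(centered.x * 2.0, centered.y));
+    float height = 0.5 * sin(r * 4.0 * 3.14159 - u_time);
+    gl_Position = u_modelViewPerspective * vec4(vec3(position, height), 1.0);
+    // Blend a low color to a high color as height runs over [-0.5, 0.5].
+    fs_color = mix(vec4(1.0, 0.2, 0.0, 1.0), vec4(0.0, 0.8, 1.0, 1.0), height + 0.5);
+}
+```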
+
+
+Blog:
+http://aparajithsairam.blogspot.com/
diff --git a/part1/ImageProcessing/ImageProcessing.sdf b/part1/ImageProcessing/ImageProcessing.sdf
new file mode 100644
index 0000000..a0e3414
Binary files /dev/null and b/part1/ImageProcessing/ImageProcessing.sdf differ
diff --git a/part1/ImageProcessing/ImageProcessing/ImageProcessing.vcxproj b/part1/ImageProcessing/ImageProcessing/ImageProcessing.vcxproj
index 554291c..d363625 100644
--- a/part1/ImageProcessing/ImageProcessing/ImageProcessing.vcxproj
+++ b/part1/ImageProcessing/ImageProcessing/ImageProcessing.vcxproj
@@ -12,8 +12,16 @@
+
+
+
+
+
+
+
+
diff --git a/part1/ImageProcessing/ImageProcessing/ImageProcessing.vcxproj.user b/part1/ImageProcessing/ImageProcessing/ImageProcessing.vcxproj.user
new file mode 100644
index 0000000..ace9a86
--- /dev/null
+++ b/part1/ImageProcessing/ImageProcessing/ImageProcessing.vcxproj.user
@@ -0,0 +1,3 @@
+
+
+
\ No newline at end of file
diff --git a/part1/ImageProcessing/ImageProcessing/brightnessFS.glsl b/part1/ImageProcessing/ImageProcessing/brightnessFS.glsl
new file mode 100644
index 0000000..e06163a
--- /dev/null
+++ b/part1/ImageProcessing/ImageProcessing/brightnessFS.glsl
@@ -0,0 +1,9 @@
+varying vec2 v_Texcoords;
+
+uniform sampler2D u_image;
+
+void main(void)
+{
+	gl_FragColor = texture2D(u_image, v_Texcoords) * 1.2;	// boost brightness 20%; values above 1.0 clamp on write
+ gl_FragColor.a = 1.0;
+}
diff --git a/part1/ImageProcessing/ImageProcessing/edgeDetectionFS.glsl b/part1/ImageProcessing/ImageProcessing/edgeDetectionFS.glsl
new file mode 100644
index 0000000..4189ced
--- /dev/null
+++ b/part1/ImageProcessing/ImageProcessing/edgeDetectionFS.glsl
@@ -0,0 +1,35 @@
+#version 120
+
+varying vec2 v_Texcoords;
+
+uniform sampler2D u_image;
+uniform vec2 u_step;
+
+const int KERNEL_WIDTH = 3; // Odd
+const float offset = 1.0;
+float sobelHorizontal[9] = float[9](-1.0, -2.0, -1.0, 0.0, 0.0, 0.0, 1.0, 2.0, 1.0);
+float sobelVertical[9] = float[9](-1.0, 0.0, 1.0, -2.0, 0.0, 2.0, -1.0, 0.0, 1.0);
+
+void main(void)
+{
+ vec2 accum = vec2(0.0);
+
+ for (int i = 0; i < KERNEL_WIDTH; ++i)
+ {
+ for (int j = 0; j < KERNEL_WIDTH; ++j)
+ {
+ vec2 coord = vec2(v_Texcoords.s + ((float(i) - offset) * u_step.s), v_Texcoords.t + ((float(j) - offset) * u_step.t));
+ vec4 original = texture2D(u_image, coord).xyzw;
+ float luminance = (0.2125 * original.r + 0.7154 * original.g + 0.0721 * original.b);
+ accum.x += luminance * sobelHorizontal[j + i * KERNEL_WIDTH];
+ accum.y += luminance * sobelVertical[j + i * KERNEL_WIDTH];
+ }
+ }
+
+ float accumLength = length(accum);
+ if(accumLength > 1.0)
+ {
+ accumLength = 1.0;
+ }
+ gl_FragColor = vec4(accumLength, accumLength, accumLength, 1.0);
+}
diff --git a/part1/ImageProcessing/ImageProcessing/gaussianBlurFS.glsl b/part1/ImageProcessing/ImageProcessing/gaussianBlurFS.glsl
new file mode 100644
index 0000000..33dd128
--- /dev/null
+++ b/part1/ImageProcessing/ImageProcessing/gaussianBlurFS.glsl
@@ -0,0 +1,26 @@
+#version 120
+
+varying vec2 v_Texcoords;
+
+uniform sampler2D u_image;
+uniform vec2 u_step;
+
+const int KERNEL_WIDTH = 3; // Odd
+const float offset = 1.0;
+float mask[9] = float[9](1.0 / 16.0, 2.0 / 16.0, 1.0 / 16.0, 2.0 / 16.0, 4.0 / 16.0, 2.0 / 16.0, 1.0 / 16.0, 2.0 / 16.0, 1.0 / 16.0);
+
+void main(void)
+{
+ vec3 accum = vec3(0.0);
+
+ for (int i = 0; i < KERNEL_WIDTH; ++i)
+ {
+ for (int j = 0; j < KERNEL_WIDTH; ++j)
+ {
+ vec2 coord = vec2(v_Texcoords.s + ((float(i) - offset) * u_step.s), v_Texcoords.t + ((float(j) - offset) * u_step.t));
+ accum += texture2D(u_image, coord).rgb * mask[j + i * KERNEL_WIDTH];
+ }
+ }
+
+ gl_FragColor = vec4(accum, 1.0);
+}
diff --git a/part1/ImageProcessing/ImageProcessing/grayscaleFS.glsl b/part1/ImageProcessing/ImageProcessing/grayscaleFS.glsl
new file mode 100644
index 0000000..da83b5b
--- /dev/null
+++ b/part1/ImageProcessing/ImageProcessing/grayscaleFS.glsl
@@ -0,0 +1,10 @@
+varying vec2 v_Texcoords;
+
+uniform sampler2D u_image;
+
+void main(void)
+{
+	vec4 original = texture2D(u_image, v_Texcoords);
+	// Rec. 709 luma weights
+	float luminance = 0.2125 * original.r + 0.7154 * original.g + 0.0721 * original.b;
+	gl_FragColor = vec4(vec3(luminance), 1.0);
+}
diff --git a/part1/ImageProcessing/ImageProcessing/imageNegativeFS.glsl b/part1/ImageProcessing/ImageProcessing/imageNegativeFS.glsl
new file mode 100644
index 0000000..fd9aa76
--- /dev/null
+++ b/part1/ImageProcessing/ImageProcessing/imageNegativeFS.glsl
@@ -0,0 +1,10 @@
+varying vec2 v_Texcoords;
+
+uniform sampler2D u_image;
+
+void main(void)
+{
+ vec4 original = texture2D(u_image, v_Texcoords);
+ gl_FragColor = vec4(1, 1, 1, 1) - original;
+ gl_FragColor.a = original.a;
+}
diff --git a/part1/ImageProcessing/ImageProcessing/main.cpp b/part1/ImageProcessing/ImageProcessing/main.cpp
index 3b1444d..865db05 100644
--- a/part1/ImageProcessing/ImageProcessing/main.cpp
+++ b/part1/ImageProcessing/ImageProcessing/main.cpp
@@ -6,19 +6,30 @@
int width = 640;
int height = 480;
+float tim = 0.0f;
GLuint positionLocation = 0;
GLuint texcoordsLocation = 1;
+
const char *attributeLocations[] = { "Position", "Tex" };
GLuint passthroughProgram;
GLuint boxBlurProgram;
+GLuint imageNegativeProgram;
+GLuint grayscaleProgram;
+GLuint gaussianBlurProgram;
+GLuint edgeDetectionProgram;
+GLuint toonProgram;
+GLuint brightnessProgram;
+GLuint nightVisionProgram;
+GLuint pixelateProgram;
+GLuint program;
GLuint initShader(const char *vertexShaderPath, const char *fragmentShaderPath)
{
- GLuint program = Utility::createProgram(vertexShaderPath, fragmentShaderPath, attributeLocations, 2);
+ program = Utility::createProgram(vertexShaderPath, fragmentShaderPath, attributeLocations, 2);
GLint location;
-
+
glUseProgram(program);
if ((location = glGetUniformLocation(program, "u_image")) != -1)
@@ -31,6 +42,11 @@ GLuint initShader(const char *vertexShaderPath, const char *fragmentShaderPath)
glUniform2f(location, 1.0f / (float)width, 1.0f / (float)height);
}
+ if ((location = glGetUniformLocation(program, "u_time")) != -1)
+ {
+ glUniform1f(location, 0.0);
+ }
+
return program;
}
@@ -85,6 +101,13 @@ void initVAO(void)
void display(void)
{
+	tim += 0.01f;
+	// Query u_time from the currently bound program so the uniform is
+	// updated for whichever filter is active, not only the last-linked one.
+	GLint currentProgram;
+	glGetIntegerv(GL_CURRENT_PROGRAM, &currentProgram);
+	GLint location;
+	if ((location = glGetUniformLocation((GLuint)currentProgram, "u_time")) != -1)
+	{
+		glUniform1f(location, tim);
+	}
+
glClear(GL_COLOR_BUFFER_BIT);
// VAO, shader program, and texture already bound
@@ -104,6 +127,30 @@ void keyboard(unsigned char key, int x, int y)
case '2':
glUseProgram(boxBlurProgram);
break;
+ case '3':
+ glUseProgram(imageNegativeProgram);
+ break;
+ case '4':
+ glUseProgram(gaussianBlurProgram);
+ break;
+ case '5':
+ glUseProgram(grayscaleProgram);
+ break;
+ case '6':
+ glUseProgram(edgeDetectionProgram);
+ break;
+ case '7':
+ glUseProgram(toonProgram);
+ break;
+ case '8':
+ glUseProgram(brightnessProgram);
+ break;
+ case '9':
+ glUseProgram(nightVisionProgram);
+ break;
+ case '0':
+ glUseProgram(pixelateProgram);
+ break;
}
}
@@ -133,6 +180,14 @@ int main(int argc, char* argv[])
initTextures();
passthroughProgram = initShader("passthroughVS.glsl", "passthroughFS.glsl");
boxBlurProgram = initShader("passthroughVS.glsl", "boxBlurFS.glsl");
+ imageNegativeProgram = initShader("passthroughVS.glsl", "imageNegativeFS.glsl");
+ grayscaleProgram = initShader("passthroughVS.glsl", "grayscaleFS.glsl");
+ gaussianBlurProgram = initShader("passthroughVS.glsl", "gaussianBlurFS.glsl");
+ edgeDetectionProgram = initShader("passthroughVS.glsl", "edgeDetectionFS.glsl");
+ toonProgram = initShader("passthroughVS.glsl", "toonFS.glsl");
+ brightnessProgram = initShader("passthroughVS.glsl", "brightnessFS.glsl");
+ nightVisionProgram = initShader("passthroughVS.glsl", "nightVisionFS.glsl");
+ pixelateProgram = initShader("passthroughVS.glsl", "pixelateFS.glsl");
glutDisplayFunc(display);
glutReshapeFunc(reshape);
diff --git a/part1/ImageProcessing/ImageProcessing/nightVisionFS.glsl b/part1/ImageProcessing/ImageProcessing/nightVisionFS.glsl
new file mode 100644
index 0000000..a4db4b5
--- /dev/null
+++ b/part1/ImageProcessing/ImageProcessing/nightVisionFS.glsl
@@ -0,0 +1,14 @@
+#version 150
+
+varying vec2 v_Texcoords;
+
+uniform sampler2D u_image;
+uniform float u_time;
+
+void main(void)
+{
+ gl_FragColor = texture2D(u_image, v_Texcoords);
+ float yDistortion = sin( float(int((v_Texcoords.y + u_time) * 10000) % 2129) / 2129.0) / 4.0;
+ gl_FragColor += vec4(0, yDistortion, 0, 1.0);
+ gl_FragColor.a = 1.0;
+}
\ No newline at end of file
diff --git a/part1/ImageProcessing/ImageProcessing/pixelateFS.glsl b/part1/ImageProcessing/ImageProcessing/pixelateFS.glsl
new file mode 100644
index 0000000..8368c90
--- /dev/null
+++ b/part1/ImageProcessing/ImageProcessing/pixelateFS.glsl
@@ -0,0 +1,27 @@
+#version 150
+
+varying vec2 v_Texcoords;
+
+uniform sampler2D u_image;
+uniform vec2 u_step;
+
+
+void main(void)
+{
+ float x = v_Texcoords.s / u_step.s;
+ float y = v_Texcoords.t / u_step.t;
+ int xDiff = int(x) % 4;
+ int yDiff = int(y) % 4;
+ if(xDiff != 0)
+ {
+ x -= (float(xDiff));
+ }
+ if(yDiff != 0)
+ {
+ y -= (float(yDiff));
+ }
+ gl_FragColor = texture2D(u_image, vec2(x * u_step.s, y * u_step.t));
+}
diff --git a/part1/ImageProcessing/ImageProcessing/toonFS.glsl b/part1/ImageProcessing/ImageProcessing/toonFS.glsl
new file mode 100644
index 0000000..a24b8de
--- /dev/null
+++ b/part1/ImageProcessing/ImageProcessing/toonFS.glsl
@@ -0,0 +1,48 @@
+#version 120
+
+varying vec2 v_Texcoords;
+
+uniform sampler2D u_image;
+uniform vec2 u_step;
+
+const int KERNEL_WIDTH = 3; // Odd
+const float offset = 1.0;
+float sobelHorizontal[9] = float[9](-1.0, -2.0, -1.0, 0.0, 0.0, 0.0, 1.0, 2.0, 1.0);
+float sobelVertical[9] = float[9](-1.0, 0.0, 1.0, -2.0, 0.0, 2.0, -1.0, 0.0, 1.0);
+
+void main(void)
+{
+ vec2 accum = vec2(0.0);
+
+ for (int i = 0; i < KERNEL_WIDTH; ++i)
+ {
+ for (int j = 0; j < KERNEL_WIDTH; ++j)
+ {
+ vec2 coord = vec2(v_Texcoords.s + ((float(i) - offset) * u_step.s), v_Texcoords.t + ((float(j) - offset) * u_step.t));
+ vec4 original = texture2D(u_image, coord).xyzw;
+ float luminance = (0.2125 * original.r + 0.7154 * original.g + 0.0721 * original.b);
+ accum.x += luminance * sobelHorizontal[j + i * KERNEL_WIDTH];
+ accum.y += luminance * sobelVertical[j + i * KERNEL_WIDTH];
+ }
+ }
+
+ float accumLength = length(accum);
+ if(accumLength > 1.0)
+ {
+ accumLength = 1.0;
+ }
+ if(accumLength > 0.3)
+ {
+ gl_FragColor.rgb = vec3(0, 0, 0);
+ }
+ else
+ {
+ gl_FragColor = texture2D(u_image, v_Texcoords);
+ float quantize = 4.0;
+ gl_FragColor.rgb *= quantize;
+ gl_FragColor.rgb += vec3(0.5);
+ ivec3 irgb = ivec3(gl_FragColor.rgb);
+ gl_FragColor.rgb = vec3(irgb) / quantize;
+ }
+ gl_FragColor.a = 1.0;
+}
diff --git a/part1/ImageProcessing/SOIL/SOIL.vcxproj.user b/part1/ImageProcessing/SOIL/SOIL.vcxproj.user
new file mode 100644
index 0000000..ace9a86
--- /dev/null
+++ b/part1/ImageProcessing/SOIL/SOIL.vcxproj.user
@@ -0,0 +1,3 @@
+
+
+
\ No newline at end of file
diff --git a/part2/index _custom.html b/part2/index _custom.html
new file mode 100644
index 0000000..6e3b99e
--- /dev/null
+++ b/part2/index _custom.html
@@ -0,0 +1,51 @@
+
+
+
+
+Vertex Wave
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/part2/index.html b/part2/index.html
index 1e3f7e2..4d1dc83 100644
--- a/part2/index.html
+++ b/part2/index.html
@@ -15,20 +15,30 @@
attribute vec2 position;
uniform mat4 u_modelViewPerspective;
+
+ uniform float u_time;
+ varying vec4 fs_color;
+
void main(void)
{
- float height = 0.0;
+ float s_contrib = sin(position.x * 2.0 * 3.14159 + u_time);
+	float t_contrib = cos(position.y * 2.0 * 3.14159 + u_time);
+ float height = s_contrib * t_contrib;
gl_Position = u_modelViewPerspective * vec4(vec3(position, height), 1.0);
+	// Interpolate manually from red (min height) to green (max height);
+	// remap height from [-1, 1] to [0, 1] before blending.
+	fs_color = ((height + 1.0) * 0.5) * (vec4(0, 1, 0, 1) - vec4(1, 0, 0, 1)) + vec4(1, 0, 0, 1);
}
diff --git a/part2/index.js b/part2/index.js
index b5df3fb..cf1445b 100644
--- a/part2/index.js
+++ b/part2/index.js
@@ -1,4 +1,4 @@
-(function() {
+(function () {
"use strict";
/*global window,document,Float32Array,Uint16Array,mat4,vec3,snoise*/
/*global getShaderSource,createWebGLContext,createProgram*/
@@ -31,15 +31,19 @@
var positionLocation = 0;
var heightLocation = 1;
var u_modelViewPerspectiveLocation;
+ var u_timeLocation;
+ var timeValue = 0.0;
(function initializeShader() {
var program;
var vs = getShaderSource(document.getElementById("vs"));
var fs = getShaderSource(document.getElementById("fs"));
- var program = createProgram(context, vs, fs, message);
- context.bindAttribLocation(program, positionLocation, "position");
- u_modelViewPerspectiveLocation = context.getUniformLocation(program,"u_modelViewPerspective");
+ var program = createProgram(context, vs, fs, message);
+ context.bindAttribLocation(program, positionLocation, "position");
+ u_modelViewPerspectiveLocation = context.getUniformLocation(program, "u_modelViewPerspective");
+
+ u_timeLocation = context.getUniformLocation(program, "u_time");
context.useProgram(program);
})();
@@ -56,8 +60,7 @@
context.vertexAttribPointer(positionLocation, 2, context.FLOAT, false, 0, 0);
context.enableVertexAttribArray(positionLocation);
- if (heights)
- {
+ if (heights) {
// Heights
var heightsName = context.createBuffer();
context.bindBuffer(context.ARRAY_BUFFER, heightsName);
@@ -84,49 +87,45 @@
var indicesIndex = 0;
var length;
- for (var j = 0; j < NUM_WIDTH_PTS; ++j)
- {
- positions[positionsIndex++] = j /(NUM_WIDTH_PTS - 1);
+ for (var j = 0; j < NUM_WIDTH_PTS; ++j) {
+ positions[positionsIndex++] = j / (NUM_WIDTH_PTS - 1);
positions[positionsIndex++] = 0.0;
- if (j>=1)
- {
+ if (j >= 1) {
length = positionsIndex / 2;
indices[indicesIndex++] = length - 2;
indices[indicesIndex++] = length - 1;
}
}
- for (var i = 0; i < HEIGHT_DIVISIONS; ++i)
- {
- var v = (i + 1) / (NUM_HEIGHT_PTS - 1);
- positions[positionsIndex++] = 0.0;
- positions[positionsIndex++] = v;
-
- length = (positionsIndex / 2);
- indices[indicesIndex++] = length - 1;
- indices[indicesIndex++] = length - 1 - NUM_WIDTH_PTS;
-
- for (var k = 0; k < WIDTH_DIVISIONS; ++k)
- {
- positions[positionsIndex++] = (k + 1) / (NUM_WIDTH_PTS - 1);
- positions[positionsIndex++] = v;
-
- length = positionsIndex / 2;
- var new_pt = length - 1;
- indices[indicesIndex++] = new_pt - 1; // Previous side
- indices[indicesIndex++] = new_pt;
-
- indices[indicesIndex++] = new_pt - NUM_WIDTH_PTS; // Previous bottom
- indices[indicesIndex++] = new_pt;
- }
+ for (var i = 0; i < HEIGHT_DIVISIONS; ++i) {
+ var v = (i + 1) / (NUM_HEIGHT_PTS - 1);
+ positions[positionsIndex++] = 0.0;
+ positions[positionsIndex++] = v;
+
+ length = (positionsIndex / 2);
+ indices[indicesIndex++] = length - 1;
+ indices[indicesIndex++] = length - 1 - NUM_WIDTH_PTS;
+
+ for (var k = 0; k < WIDTH_DIVISIONS; ++k) {
+ positions[positionsIndex++] = (k + 1) / (NUM_WIDTH_PTS - 1);
+ positions[positionsIndex++] = v;
+
+ length = positionsIndex / 2;
+ var new_pt = length - 1;
+ indices[indicesIndex++] = new_pt - 1; // Previous side
+ indices[indicesIndex++] = new_pt;
+
+ indices[indicesIndex++] = new_pt - NUM_WIDTH_PTS; // Previous bottom
+ indices[indicesIndex++] = new_pt;
+ }
}
uploadMesh(positions, heights, indices);
numberOfIndices = indices.length;
})();
- (function animate(){
+ (function animate() {
///////////////////////////////////////////////////////////////////////////
// Update
@@ -138,14 +137,20 @@
var mvp = mat4.create();
mat4.multiply(persp, mv, mvp);
+ timeValue += 0.001;
+ context.uniform1f(u_timeLocation, timeValue);
+
///////////////////////////////////////////////////////////////////////////
// Render
context.clear(context.COLOR_BUFFER_BIT | context.DEPTH_BUFFER_BIT);
context.uniformMatrix4fv(u_modelViewPerspectiveLocation, false, mvp);
- context.drawElements(context.LINES, numberOfIndices, context.UNSIGNED_SHORT,0);
+ context.drawElements(context.LINES, numberOfIndices, context.UNSIGNED_SHORT, 0);
- window.requestAnimFrame(animate);
+ window.requestAnimFrame(animate);
})();
-}());
+} ());
diff --git a/part2/index_simplex.html b/part2/index_simplex.html
new file mode 100644
index 0000000..507d57f
--- /dev/null
+++ b/part2/index_simplex.html
@@ -0,0 +1,90 @@
+
+
+
+
+Vertex Wave
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/part2/simplex.vert b/part2/simplex.vert
index 13dd1ca..89c4e1a 100644
--- a/part2/simplex.vert
+++ b/part2/simplex.vert
@@ -1,3 +1,4 @@
+
vec3 permute(vec3 x) {
x = ((x*34.0)+1.0)*x;
return x - floor(x * (1.0 / 289.0)) * 289.0;
diff --git a/readmeFiles/Thumbs.db b/readmeFiles/Thumbs.db
new file mode 100644
index 0000000..1716c68
Binary files /dev/null and b/readmeFiles/Thumbs.db differ