diff --git a/README.md b/README.md index 388c810..eb4870b 100644 --- a/README.md +++ b/README.md @@ -5,230 +5,59 @@ Fall 2012 ------------------------------------------------------------------------------- Due Friday 11/09/2012 ------------------------------------------------------------------------------- - -------------------------------------------------------------------------------- -NOTE: -------------------------------------------------------------------------------- -This project requires any graphics card with support for a modern OpenGL pipeline. Any AMD, NVIDIA, or Intel card from the past few years should work fine, and every machine in the SIG Lab and Moore 100 is capable of running this project. - -The first part of this project requires Visual Studio 2010 or newer, and the second part of this project requires a WebGL capable browser, such as the latest versions of Chrome, Firefox, or Safari. - -------------------------------------------------------------------------------- -INTRODUCTION: -------------------------------------------------------------------------------- -In this project, you will get introduced to the world of GLSL in two parts: fragment shading, and vertex shading. The first part of this project is the Image Processor, and the second part of this project is a Wave Vertex Shader. - -In the first part of this project, you will implement various image processing filters using GLSL fragment shaders by rendering a viewport-aligned fullscreen quad where each fragment corresponds to one texel in a texture that stores an image. As you can guess, these filters are embarassingly parallel, as each fragment can be processed in parallel. Although we apply filters to static images in this project, the same technique can be and is widely used in post-processing dynamic scenes. - -In the second part of this project, you will implement a GLSL vertex shader as part of a WebGL demo. 
You will create a dynamic wave animation using code that runs entirely on the GPU; before vertex shading, dynamic vertex buffers could only be implemented on the CPU, and each frame would then be uploaded to the GPU. - +BLOG Link: http://seunghoon-cis565.blogspot.com/2012/11/project-4-image-processingvertex.html ------------------------------------------------------------------------------- -CONTENTS: +A brief description ------------------------------------------------------------------------------- -The Project4 root directory contains the following subdirectories: - -* part1/ contains the base code for the Image Processing half of the assignment. -* part1/ImageProcessing contains a Visual Studio 2010 project for the Image Processor -* part1/shared32 contains libraries that are required to build and run the Image Processor -* part2/ contains the base code for the Wave Vertex Shader in the form of a .html file and several .js files +The goal of this project is to implement several image processing algorithms and +wave effects using GLSL (OpenGL Shading Language). -The Image Processor builds and runs like any other standard Visual Studio project. The Wave Vertex Shader does not require building and can be run by opening the .html file in the web browser of your choice, such as Chrome. 
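Every filter in part 1 follows the fullscreen-quad pattern described in the introduction: each fragment samples the texel it corresponds to and writes a processed color. As a minimal sketch (mirroring the passthrough shader the base code provides; `v_Texcoords` and `u_image` are the names used by the filters in this diff):

```glsl
// Minimal passthrough filter: each fragment samples the texel that
// corresponds to it and writes it out unchanged. Every part 1 filter
// below has this same structure, differing only in how the sampled
// color(s) are transformed.
varying vec2 v_Texcoords;   // interpolated texture coordinate from the quad

uniform sampler2D u_image;  // the image being processed

void main(void)
{
    gl_FragColor = texture2D(u_image, v_Texcoords);
}
```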
------------------------------------------------------------------------------- -PART 1 REQUIREMENTS: +PART 1: Image Processing ------------------------------------------------------------------------------- -In Part 1, you are given code for: - -* Reading and loading an image -* Code for passing a quad to OpenGL -* Passthrough fragment and vertex shaders that take in an input and output the exact original input -* An example box blur filter fragment shader - -You are required to implement the following filters: - +- Basic * Image negative * Gaussian blur * Grayscale * Edge Detection * Toon shading -You are also required to implement at least three of the following filters: - -* Pixelate (http://wtomandev.blogspot.com/2010/01/pixelize-effect.html) -* CMYK conversion (http://en.wikipedia.org/wiki/CMYK_color_model) -* Gamma correction (http://en.wikipedia.org/wiki/Gamma_correction) -* Brightness (http://en.wikipedia.org/wiki/Brightness) -* Contrast (http://en.wikipedia.org/wiki/Contrast_(vision)) -* Night vision (http://wtomandev.blogspot.com/2009/09/night-vision-effect.html) -* Open-ended: make up your own filter! - -You will also have to bind each filter you write to a keyboard key, such that hitting that key will trigger the associated filter. In the base code, hitting "1" will turn off filtering and use the passthrough filter, and hitting "2" will switch to the example box blur filter. - -For this project, the console window will show what shaders are loaded, and will report shader compile/link warnings and errors. The base code does not have any build or shader warnings or errors; neither should your code submission! - -![Example console window](Project4-IntroGLSL/raw/master/readmeFiles/consoleExample.png) - -IMPORTANT: You MAY NOT copy/paste code from other sources for any of your filters! If you choose to make your own filter(s), please document what they are, how they work, and how your implementation works. 
- -------------------------------------------------------------------------------- -PART 1 WALKTHROUGH: -------------------------------------------------------------------------------- -**Image Negative** - -Create a new fragment shader starting with `passthroughFS.glsl`, that performans an image negative when the user presses `3`. Compute the negative color as: - -`vec3(1.0) - rgb` - -![Example negative filter](Project4-IntroGLSL/raw/master/readmeFiles/negativeFilter.png) - -**Gaussian Blur** - -Create a new fragment shader starting with `boxBlurFS.glsl`, that performs a 3x3 Gaussian blur when the user presses `4`. This is similar to the box blur, except we are using a smaller kernel (3x3 instead of 7x7), and instead of weighting each texel evenly, they are weighted using a 2D Gaussian function, making the blur more subtle: - -`1/16 * [[1 2 1][2 4 2][1 2 1]]` - -![Example gaussian filter](Project4-IntroGLSL/raw/master/readmeFiles/gaussianFilter.png) - -**Grayscale** - -Create a new fragment shader starting with `passthroughFS.glsl`, that displays the image in grayscale when the user presses `5`. To create a grayscale filter, determine the luminance - -`const vec3 W = vec3(0.2125, 0.7154, 0.0721);` - -`float luminance = dot(rgb, W);` - -Then set the output `r`, `g`, and `b` components to the lumiance. - -![Example grayscale filter](Project4-IntroGLSL/raw/master/readmeFiles/grayscaleFilter.png) - -**Edge Detection** - -Build on both our Gaussian blur and Grayscale filters to create an edge detection filter when the user presses `6` using a horizontal and vertical sobel filter. Use two 3x3 kernels: - -`Sobel-horizontal = [[-1 -2 -1][0 0 0][1 2 1]]` +- Additional +* Brightness -`Sobel-vertical = [[-1 0 1][-2 0 2][-1 0 1]]` +![Example brightness filter](Project4-IntroGLSL/raw/master/readmeFiles/brightness.png) -Run the kernels on the luminance of each texel, instead of the `rgb` components used in the Gaussian blur. 
The result is two floating-point values, one from each kernel. Create a `vec2` from these values, and set the output `rgb` components to the vector’s length. +* Night Vision -![Example edge filter](Project4-IntroGLSL/raw/master/readmeFiles/edgeFilter.png) +![Example night vision filter](Project4-IntroGLSL/raw/master/readmeFiles/nightVision.png) -**Toon Shading** +* Pixelization -Toon shading is part of non-photorealistic rendering (NPR). Instead of trying to produce a photorealistic image, the goal is to create an image with a certain artistic style. In this case, we will build on our edge detection filter to create a cartoon filter when the user presses `7`. +![Example pixelization filter](Project4-IntroGLSL/raw/master/readmeFiles/pixelization.png) -First, perform edge detection. If the length of the vector is above a threshold (you determine), then output black. Otherwise, quantize the texel’s color and output it. Use the following code to quantize: - -`float quantize = // determine it` - -`rgb *= quantize;` - -`rgb += vec3(0.5);` - -`ivec3 irgb = ivec3(rgb);` - -`rgb = vec3(irgb) / quantize;` - -![Example toon filter](Project4-IntroGLSL/raw/master/readmeFiles/toonFilter.png) ------------------------------------------------------------------------------- -PART 2 REQUIREMENTS: +PART 2: Vertex Shading ------------------------------------------------------------------------------- -In Part 2, you are given code for: - -* Drawing a VBO through WebGL -* Javascript code for interfacing with WebGL -* Functions for generating simplex noise - -You are required to implement the following: - +- Basic * A sin-wave based vertex shader: - -![Example sin wave grid](Project4-IntroGLSL/raw/master/readmeFiles/sinWaveGrid.png) - * A simplex noise based vertex shader: -![Example simplex noise wave grid](Project4-IntroGLSL/raw/master/readmeFiles/oceanWave.png) - -* One interesting vertex shader of your choice - 
-------------------------------------------------------------------------------- -PART 2 WALKTHROUGH: -------------------------------------------------------------------------------- -**Sin Wave** - -* For this assignment, you will need the latest version of either Chrome, Firefox, or Safari. -* Begin by opening index.html. You should see a flat grid of black and white lines on the xy plane: - -![Example boring grid](Project4-IntroGLSL/raw/master/readmeFiles/emptyGrid.png) - -* In this assignment, you will animate the grid in a wave-like pattern using a vertex shader, and determine each vertex’s color based on its height, as seen in the example in the requirements. -* The vertex and fragment shader are located in script tags in `index.html`. -* The JavaScript code that needs to be modified is located in `index.js`. -* Required shader code modifications: - * Add a float uniform named u_time. - * Modify the vertex’s height using the following code: +- Additional +* A sine wave varying with the radius from the center - `float s_contrib = sin(position.x*2.0*3.14159 + u_time);` +What it does: generates a sine wave based on the radius from the center - `float t_contrib = cos(position.y*2.0*3.14159 + u_time);` +How it works: compute the radius from the center (0.5, 0.5) for each vertex and compute the height by passing this radius to a sine function. - `float height = s_contrib*t_contrib;` - - * Use the GLSL mix function to blend together two colors of your choice based on the vertex’s height. The lowest possible height should be assigned one color (for example, `vec3(1.0, 0.2, 0.0)`) and the maximum height should be another (`vec3(0.0, 0.8, 1.0)`). Use a varying variable to pass the color to the fragment shader, where you will assign it `gl_FragColor`. - -* Required JavaScript code modifications: - * A floating-point time value should be increased every animation step. Hint: the delta should be less than one. 
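The radial sine shader described above can be sketched as follows. This is a hypothetical reconstruction, not the committed `index_custom.html` shader (whose body is not reproduced in this diff); the frequency constant `4.0` and the color ramp are assumptions borrowed from the sin-wave walkthrough:

```glsl
// Hypothetical sketch of the radial sine vertex shader: the height of each
// vertex is a sine of its distance from the grid center (0.5, 0.5),
// animated by u_time so the wave ripples outward.
attribute vec2 position;

uniform mat4 u_modelViewPerspective;
uniform float u_time;

varying vec3 v_Color;

void main(void)
{
    float radius = length(position - vec2(0.5, 0.5));
    float height = sin(radius * 4.0 * 3.14159 - u_time);  // 4.0 is an assumed frequency
    gl_Position = u_modelViewPerspective * vec4(vec3(position, height), 1.0);
    // Blend between a low color and a high color based on height, as in the sin wave.
    v_Color = mix(vec3(1.0, 0.2, 0.0), vec3(0.0, 0.8, 1.0), (height + 1.0) * 0.5);
}
```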
- * To pass the time to the vertex shader as a uniform, first query the location of `u_time` using `context.getUniformLocation` in `initializeShader()`. Then, the uniform’s value can be set by calling `context.uniform1f` in `animate()`. - -**Simplex Wave** - -* Now that you have the sin wave working, create a new copy of `index.html`. Call it `index_simplex.html`, or something similar. -* Open up `simplex.vert`, which contains a compact GLSL simplex noise implementation, in a text editor. Copy and paste the functions included inside into your `index_simplex.html`'s vertex shader. -* Try changing s_contrib and t_contrib to use simplex noise instead of sin/cos functions with the following code: - - `vec2 simplexVec = vec2(u_time, position);` - - `float s_contrib = snoise(simplexVec);` - - `float t_contrib = snoise(vec2(s_contrib,u_time));` - -**Wave Of Your Choice** - -* Create another copy of `index.html`. Call it `index_custom.html`, or something similar. -* Implement your own interesting vertex shader! In your README.md with your submission, describe your custom vertex shader, what it does, and how it works. +![Example radial sine](Project4-IntroGLSL/raw/master/readmeFiles/radialSine.png) ------------------------------------------------------------------------------- -BLOG +How to build ------------------------------------------------------------------------------- -As mentioned in class, all students should have student blogs detailing progress on projects. If you already have a blog, you can use it; otherwise, please create a blog using www.blogger.com or any other tool, such as www.wordpress.org. Blog posts on your project are due on the SAME DAY as the project, and should include: +I developed part 1 in Visual Studio 2010. +Its solution file is located at "part1/ImageProcessing/ImageProcessing.sln". +You should be able to build it without modification. -* A brief description of the project and the specific features you implemented. 
-* A link to your github repo if the code is open source. -* At least one screenshot of your project running. -* A 30 second or longer video of your project running. To create the video, use http://www.microsoft.com/expression/products/Encoder4_Overview.aspx - -------------------------------------------------------------------------------- -THIRD PARTY CODE POLICY -------------------------------------------------------------------------------- -* Use of any third-party code must be approved by asking on Piazza. If it is approved, all students are welcome to use it. Generally, we approve use of third-party code that is not a core part of the project. For example, for the ray tracer, we would approve using a third-party library for loading models, but would not approve copying and pasting a CUDA function for doing refraction. -* Third-party code must be credited in README.md. -* Using third-party code without its approval, including using another student's code, is an academic integrity violation, and will result in you receiving an F for the semester. - -------------------------------------------------------------------------------- -SELF-GRADING -------------------------------------------------------------------------------- -* On the submission date, email your grade, on a scale of 0 to 100, to Karl, yiningli@seas.upenn.edu, with a one paragraph explanation. Be concise and realistic. Recall that we reserve 30 points as a sanity check to adjust your grade. Your actual grade will be (0.7 * your grade) + (0.3 * our grade). We hope to only use this in extreme cases when your grade does not realistically reflect your work - it is either too high or too low. In most cases, we plan to give you the exact grade you suggest. -* Projects are not weighted evenly, e.g., Project 0 doesn't count as much as the path tracer. We will determine the weighting at the end of the semester based on the size of each project. 
- -------------------------------------------------------------------------------- -SUBMISSION -------------------------------------------------------------------------------- -As with the previous project, you should fork this project and work inside of your fork. Upon completion, commit your finished project back to your fork, and make a pull request to the master repository. -You should include a README.md file in the root directory detailing the following - -* A brief description of the project and specific features you implemented -* At least one screenshot of your project running, and at least one screenshot of the final rendered output of your raytracer -* Instructions for building and running your project if they differ from the base code -* A link to your blog post detailing the project -* A list of all third-party code used \ No newline at end of file +For part 2, simply open the HTML files in a recent web browser. \ No newline at end of file diff --git a/part1/ImageProcessing/ImageProcessing/ImageProcessing.vcxproj b/part1/ImageProcessing/ImageProcessing/ImageProcessing.vcxproj index 554291c..3fd7af6 100644 --- a/part1/ImageProcessing/ImageProcessing/ImageProcessing.vcxproj +++ b/part1/ImageProcessing/ImageProcessing/ImageProcessing.vcxproj @@ -12,8 +12,16 @@ + + + + + + + + diff --git a/part1/ImageProcessing/ImageProcessing/brightnessFS.glsl b/part1/ImageProcessing/ImageProcessing/brightnessFS.glsl new file mode 100644 index 0000000..c5d6b2f --- /dev/null +++ b/part1/ImageProcessing/ImageProcessing/brightnessFS.glsl @@ -0,0 +1,10 @@ +varying vec2 v_Texcoords; + +uniform sampler2D u_image; + +void main(void) +{ + vec4 originColor = texture2D(u_image, v_Texcoords); + float brightness = (originColor.r + originColor.g + originColor.b) / 3.0; + gl_FragColor = vec4(brightness, brightness, brightness, originColor.a); +} diff --git a/part1/ImageProcessing/ImageProcessing/gaussBlurFS.glsl b/part1/ImageProcessing/ImageProcessing/gaussBlurFS.glsl new file mode 100644 
index 0000000..016668b --- /dev/null +++ b/part1/ImageProcessing/ImageProcessing/gaussBlurFS.glsl @@ -0,0 +1,26 @@ +varying vec2 v_Texcoords; + +uniform sampler2D u_image; +uniform vec2 u_step; + +const int KERNEL_WIDTH = 3; // Odd +const float offset = 1.0; +const mat3 gaussKernel = mat3(1.0, 2.0, 1.0, + 2.0, 4.0, 2.0, + 1.0, 2.0, 1.0); + +void main(void) +{ + vec3 accum = vec3(0.0); + + for (int i = 0; i < KERNEL_WIDTH; ++i) + { + for (int j = 0; j < KERNEL_WIDTH; ++j) + { + vec2 coord = vec2(v_Texcoords.s + ((float(i) - offset) * u_step.s), v_Texcoords.t + ((float(j) - offset) * u_step.t)); + accum += texture2D(u_image, coord).rgb * gaussKernel[i][2-j]; + } + } + + gl_FragColor = vec4(accum / 16.0, 1.0); +} diff --git a/part1/ImageProcessing/ImageProcessing/grayscaleFS.glsl b/part1/ImageProcessing/ImageProcessing/grayscaleFS.glsl new file mode 100644 index 0000000..7b4431b --- /dev/null +++ b/part1/ImageProcessing/ImageProcessing/grayscaleFS.glsl @@ -0,0 +1,12 @@ +varying vec2 v_Texcoords; + +uniform sampler2D u_image; + +const vec3 W = vec3(0.2125, 0.7154, 0.0721); + +void main(void) +{ + vec4 originColor = texture2D(u_image, v_Texcoords); + float luminance = dot(originColor.rgb, W); + gl_FragColor = vec4(luminance, luminance, luminance, originColor.a); +} diff --git a/part1/ImageProcessing/ImageProcessing/main.cpp b/part1/ImageProcessing/ImageProcessing/main.cpp index 3b1444d..d797988 100644 --- a/part1/ImageProcessing/ImageProcessing/main.cpp +++ b/part1/ImageProcessing/ImageProcessing/main.cpp @@ -13,6 +13,14 @@ const char *attributeLocations[] = { "Position", "Tex" }; GLuint passthroughProgram; GLuint boxBlurProgram; +GLuint negativeProgram; +GLuint gaussBlurProgram; +GLuint grayscaleProgram; +GLuint sobelProgram; +GLuint toonShadingProgram; +GLuint brightnessProgram; +GLuint nightVisionProgram; +GLuint pixelizeProgram; GLuint initShader(const char *vertexShaderPath, const char *fragmentShaderPath) { @@ -104,6 +112,30 @@ void keyboard(unsigned char key, int 
x, int y) case '2': glUseProgram(boxBlurProgram); break; + case '3': + glUseProgram(negativeProgram); + break; + case '4': + glUseProgram(gaussBlurProgram); + break; + case '5': + glUseProgram(grayscaleProgram); + break; + case '6': + glUseProgram(sobelProgram); + break; + case '7': + glUseProgram(toonShadingProgram); + break; + case '8': + glUseProgram(brightnessProgram); + break; + case '9': + glUseProgram(nightVisionProgram); + break; + case '0': + glUseProgram(pixelizeProgram); + break; } } @@ -133,6 +165,14 @@ int main(int argc, char* argv[]) initTextures(); passthroughProgram = initShader("passthroughVS.glsl", "passthroughFS.glsl"); boxBlurProgram = initShader("passthroughVS.glsl", "boxBlurFS.glsl"); + negativeProgram = initShader("passthroughVS.glsl", "negativeFS.glsl"); + gaussBlurProgram = initShader("passthroughVS.glsl", "gaussBlurFS.glsl"); + grayscaleProgram = initShader("passthroughVS.glsl", "grayscaleFS.glsl"); + sobelProgram = initShader("passthroughVS.glsl", "sobelFS.glsl"); + toonShadingProgram = initShader("passthroughVS.glsl", "toonShadingFS.glsl"); + brightnessProgram = initShader("passthroughVS.glsl", "brightnessFS.glsl"); + nightVisionProgram = initShader("passthroughVS.glsl", "nightVisionFS.glsl"); + pixelizeProgram = initShader("passthroughVS.glsl", "pixelizeFS.glsl"); glutDisplayFunc(display); glutReshapeFunc(reshape); diff --git a/part1/ImageProcessing/ImageProcessing/negativeFS.glsl b/part1/ImageProcessing/ImageProcessing/negativeFS.glsl new file mode 100644 index 0000000..0a4d796 --- /dev/null +++ b/part1/ImageProcessing/ImageProcessing/negativeFS.glsl @@ -0,0 +1,8 @@ +varying vec2 v_Texcoords; + +uniform sampler2D u_image; + +void main(void) +{ + gl_FragColor = vec4(vec3(1.0) - texture2D(u_image, v_Texcoords).rgb, 1.0); +} diff --git a/part1/ImageProcessing/ImageProcessing/nightVisionFS.glsl b/part1/ImageProcessing/ImageProcessing/nightVisionFS.glsl new file mode 100644 index 0000000..60a5e70 --- /dev/null +++ 
b/part1/ImageProcessing/ImageProcessing/nightVisionFS.glsl @@ -0,0 +1,39 @@ +varying vec2 v_Texcoords; + +uniform sampler2D u_image; + +const vec3 green = vec3(0.0, 1.0, 0.0); + +// Reference: rasterizeKernels.cu by Yining Karl Li in Project3 +uint hash(uint a){ + a = (a+0x7ed55d16u) + (a<<12); + a = (a^0xc761c23cu) ^ (a>>19); + a = (a+0x165667b1u) + (a<<5); + a = (a+0xd3a2646cu) ^ (a<<9); + a = (a+0xfd7046c5u) + (a<<3); + a = (a^0xb55a4f09u) ^ (a>>16); + return a; +} + +// Reference: http://freespace.virgin.net/hugo.elias/models/m_perlin.htm +float noiseFloat1(uint x) +{ + x = (x<<13) ^ x; + return ( 1.0 - float((x * (x * x * 15731u + 789221u) + 1376312589u) & 0x7fffffffu) / 1073741824.0); +} + + +void main(void) +{ + vec4 originRGBA = texture2D(u_image, v_Texcoords); + uint randSeed = hash(floatBitsToUint(v_Texcoords.s)) + hash(floatBitsToUint(v_Texcoords.t)); + + float noiseColor = (noiseFloat1(randSeed) + 1.0) * 0.5; + + if (length(vec2(v_Texcoords.s - 0.5, v_Texcoords.t - 0.5)) >= 0.5) { + gl_FragColor.rgb = vec3(0.0); + } else { + gl_FragColor.rgb = green * (originRGBA.rgb + vec3(noiseColor)) * 0.5; + } + gl_FragColor.a = originRGBA.a; +} diff --git a/part1/ImageProcessing/ImageProcessing/pixelizeFS.glsl b/part1/ImageProcessing/ImageProcessing/pixelizeFS.glsl new file mode 100644 index 0000000..0f85d6b --- /dev/null +++ b/part1/ImageProcessing/ImageProcessing/pixelizeFS.glsl @@ -0,0 +1,18 @@ +varying vec2 v_Texcoords; + +uniform sampler2D u_image; + +// Algorithm reference: http://wtomandev.blogspot.com/2010/01/pixelize-effect.html +const vec2 dimensions = vec2(80.0, 60.0); +const vec2 size = vec2(1.0, 1.0) / dimensions; + +void main(void) +{ + vec2 pixel_base = vec2(0.0); + pixel_base.s = v_Texcoords.s - mod(v_Texcoords.s, size.s); + pixel_base.t = v_Texcoords.t - mod(v_Texcoords.t, size.t); + + pixel_base += 0.5 * size; + + gl_FragColor = texture2D(u_image, pixel_base); +} diff --git a/part1/ImageProcessing/ImageProcessing/sobelFS.glsl 
b/part1/ImageProcessing/ImageProcessing/sobelFS.glsl new file mode 100644 index 0000000..a5cb797 --- /dev/null +++ b/part1/ImageProcessing/ImageProcessing/sobelFS.glsl @@ -0,0 +1,37 @@ +varying vec2 v_Texcoords; + +uniform sampler2D u_image; +uniform vec2 u_step; + +const int KERNEL_WIDTH = 3; // Odd +const float offset = 1.0; +const mat3 sobel_horiz = mat3(-1.0, -2.0, -1.0, + 0.0, 0.0, 0.0, + 1.0, 2.0, 1.0); +const mat3 sobel_vert = mat3(-1.0, 0.0, 1.0, + -2.0, 0.0, 2.0, + -1.0, 0.0, 1.0); + +const vec3 W = vec3(0.2125, 0.7154, 0.0721); + +void main(void) +{ + float horiz_accum = 0.0; + float vert_accum = 0.0; + + for (int i = 0; i < KERNEL_WIDTH; ++i) + { + for (int j = 0; j < KERNEL_WIDTH; ++j) + { + vec2 coord = vec2(v_Texcoords.s + ((float(i) - offset) * u_step.s), v_Texcoords.t + ((float(j) - offset) * u_step.t)); + float luminance = dot(texture2D(u_image, coord).rgb, W); + + horiz_accum += luminance * sobel_horiz[i][2-j]; + vert_accum += luminance * sobel_vert[i][2-j]; + } + } + + float gradient_magnitude = clamp(length(vec2(horiz_accum, vert_accum)), 0.0, 1.0); + + gl_FragColor = vec4(gradient_magnitude, gradient_magnitude, gradient_magnitude, 1.0); +} diff --git a/part1/ImageProcessing/ImageProcessing/toonShadingFS.glsl b/part1/ImageProcessing/ImageProcessing/toonShadingFS.glsl new file mode 100644 index 0000000..0598347 --- /dev/null +++ b/part1/ImageProcessing/ImageProcessing/toonShadingFS.glsl @@ -0,0 +1,47 @@ +varying vec2 v_Texcoords; + +uniform sampler2D u_image; +uniform vec2 u_step; + +const int KERNEL_WIDTH = 3; // Odd +const float offset = 1.0; +const mat3 sobel_horiz = mat3(-1.0, -2.0, -1.0, + 0.0, 0.0, 0.0, + 1.0, 2.0, 1.0); +const mat3 sobel_vert = mat3(-1.0, 0.0, 1.0, + -2.0, 0.0, 2.0, + -1.0, 0.0, 1.0); + +const vec3 W = vec3(0.2125, 0.7154, 0.0721); +const float thresh = 0.4; +const float quantize = 7.0; + +void main(void) +{ + float horiz_accum = 0.0; + float vert_accum = 0.0; + + for (int i = 0; i < KERNEL_WIDTH; ++i) + { + for (int j = 0; 
j < KERNEL_WIDTH; ++j) + { + vec2 coord = vec2(v_Texcoords.s + ((float(i) - offset) * u_step.s), v_Texcoords.t + ((float(j) - offset) * u_step.t)); + float luminance = dot(texture2D(u_image, coord).rgb, W); + + horiz_accum += luminance * sobel_horiz[i][2-j]; + vert_accum += luminance * sobel_vert[i][2-j]; + } + } + + float gradient_magnitude = clamp(length(vec2(horiz_accum, vert_accum)), 0.0, 1.0); + + if (gradient_magnitude > thresh) { + gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); + } else { + vec3 tempColor = texture2D(u_image, v_Texcoords).rgb; + tempColor *= quantize; + tempColor += vec3(0.5); + tempColor = vec3(ivec3(tempColor)) / quantize; + gl_FragColor = vec4(tempColor, 1.0); + } +} diff --git a/part2/index.html b/part2/index.html index 1e3f7e2..6142d8a 100644 --- a/part2/index.html +++ b/part2/index.html @@ -15,20 +15,29 @@ attribute vec2 position; uniform mat4 u_modelViewPerspective; + uniform float u_time; + + varying vec3 v_Color; void main(void) { - float height = 0.0; - gl_Position = u_modelViewPerspective * vec4(vec3(position, height), 1.0); + float s_contrib = sin(position.x*2.0*3.14159 + u_time); + float t_contrib = cos(position.y*2.0*3.14159 + u_time); + float height = s_contrib*t_contrib; + gl_Position = u_modelViewPerspective * vec4(vec3(position, + height), 1.0); + v_Color = mix(vec3(1.0, 0.2, 0.0), vec3(0.0, 0.8, 1.0), (height+1.0)*0.5); } @@ -37,4 +46,4 @@ - \ No newline at end of file + diff --git a/part2/index.js b/part2/index.js index b5df3fb..89252e5 100644 --- a/part2/index.js +++ b/part2/index.js @@ -31,15 +31,18 @@ var positionLocation = 0; var heightLocation = 1; var u_modelViewPerspectiveLocation; + var u_time_location; + var animation_time = 0.0; (function initializeShader() { var program; var vs = getShaderSource(document.getElementById("vs")); var fs = getShaderSource(document.getElementById("fs")); - var program = createProgram(context, vs, fs, message); - context.bindAttribLocation(program, positionLocation, "position"); - 
u_modelViewPerspectiveLocation = context.getUniformLocation(program,"u_modelViewPerspective"); + var program = createProgram(context, vs, fs, message); + context.bindAttribLocation(program, positionLocation, "position"); + u_modelViewPerspectiveLocation = context.getUniformLocation(program,"u_modelViewPerspective"); + u_time_location = context.getUniformLocation(program, "u_time"); context.useProgram(program); })(); @@ -138,14 +141,17 @@ var mvp = mat4.create(); mat4.multiply(persp, mv, mvp); + animation_time += 0.01; + /////////////////////////////////////////////////////////////////////////// // Render context.clear(context.COLOR_BUFFER_BIT | context.DEPTH_BUFFER_BIT); context.uniformMatrix4fv(u_modelViewPerspectiveLocation, false, mvp); + context.uniform1f(u_time_location, animation_time); context.drawElements(context.LINES, numberOfIndices, context.UNSIGNED_SHORT,0); - window.requestAnimFrame(animate); + window.requestAnimFrame(animate); })(); }()); diff --git a/part2/index_custom.html b/part2/index_custom.html new file mode 100644 index 0000000..b9c3061 --- /dev/null +++ b/part2/index_custom.html @@ -0,0 +1,47 @@ + + + + +Vertex Wave + + + + + +
+ + + + + + + + + + + + diff --git a/part2/index_simplex.html b/part2/index_simplex.html new file mode 100644 index 0000000..847009e --- /dev/null +++ b/part2/index_simplex.html @@ -0,0 +1,90 @@ + + + + +Vertex Wave + + + + + +
+ + + + + + + + + + + + diff --git a/readmeFiles/brightness.png b/readmeFiles/brightness.png new file mode 100644 index 0000000..36fa333 Binary files /dev/null and b/readmeFiles/brightness.png differ diff --git a/readmeFiles/nightVision.png b/readmeFiles/nightVision.png new file mode 100644 index 0000000..1c3358a Binary files /dev/null and b/readmeFiles/nightVision.png differ diff --git a/readmeFiles/pixelization.png b/readmeFiles/pixelization.png new file mode 100644 index 0000000..44c0829 Binary files /dev/null and b/readmeFiles/pixelization.png differ diff --git a/readmeFiles/radialSine.png b/readmeFiles/radialSine.png new file mode 100644 index 0000000..9d6e41c Binary files /dev/null and b/readmeFiles/radialSine.png differ diff --git a/video/radial_wave.wmv b/video/radial_wave.wmv new file mode 100644 index 0000000..9843582 Binary files /dev/null and b/video/radial_wave.wmv differ diff --git a/video/simplex_wave.wmv b/video/simplex_wave.wmv new file mode 100644 index 0000000..94d90ed Binary files /dev/null and b/video/simplex_wave.wmv differ diff --git a/video/sine_wave.wmv b/video/sine_wave.wmv new file mode 100644 index 0000000..b7da96b Binary files /dev/null and b/video/sine_wave.wmv differ