diff --git a/docs/exercise1.md b/docs/exercise1.md index a9fe75e..73396f6 100644 --- a/docs/exercise1.md +++ b/docs/exercise1.md @@ -26,7 +26,7 @@ The pipeline shown in the Pipeline browser in the upper-left corner of the ParaV ▶ Another handy thing to check is the metadata in the __Information__ tab, directly below the pipeline browser. It shows you the type of the dataset, the number of points and cells, data ranges, spatial extent, data memory size, etc. -▶ Try to get a bit of a feel for the mouse interaction in the 3D view (LMB = rotate, MMB = drag, RMB = zoom). +▶ Try to get a bit of a feel for the mouse interaction in the 3D view (LMB = rotate, MMB = drag, RMB or wheel = zoom). Right now, your data is visible only as an outline: a box made from a few white lines representing the spatial extent of your data. This does not give us much insight in the data, so we want to switch to something more suitable for the moment. @@ -38,8 +38,7 @@ Right now, your data is visible only as an outline: a box made from a few white ![](images/volume-warning.png) - When enabling Volume representation a dialog box may pop up asking if you are sure. For large datasets volume rendering can be problematic - if your system doesn't have a lot of memory and/or has a slow GPU. For the datasets used here volume rendering on most systems should not be a problem. + When enabling Volume representation for the first time in a session, a dialog box may pop up asking if you are sure, which you can answer with Yes. For large datasets volume rendering can be problematic if your system doesn't have a lot of memory and/or has a slow GPU. For the datasets used here volume rendering on most systems should not be a problem. Especially in Volume representation you can clearly see that indeed we are looking at the scan of a (boy's) head. 
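The memory concern behind that warning is easy to estimate: volume rendering needs the full scalar field resident in (GPU) memory. A quick back-of-the-envelope sketch (the 256³ dimensions and 2-byte scalars below are example values for illustration, not the actual headsq dataset):

```python
# Rough memory estimate for volume rendering a structured dataset.
# Example values only: a 256 x 256 x 256 volume of 16-bit (2-byte) scalars.
dims = (256, 256, 256)
bytes_per_scalar = 2

n_points = dims[0] * dims[1] * dims[2]
memory_bytes = n_points * bytes_per_scalar

print(f"{n_points:,} points, ~{memory_bytes / 2**20:.0f} MiB of scalar data")
```

You can compare such an estimate against the data memory size reported on the __Information__ tab.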
@@ -49,18 +48,21 @@ Especially in Volume representation you can clearly see that indeed we are looki You should now see a color scale in the view area that shows how scalar values in the data are mapped to colors. +▶ Vary the slice position and orientation within the dataset using the Slicing properties (__Slice Direction__ and __Slice__). + ## Isosurfacing -Our next job is to see what this dataset contains. To be more precise, we want to explore certain features of the dataset in order to look for things out of the ordinary. For example, we are interested in physical structures such as bone and skin. +Our next job is to look into what this dataset contains. To be more precise, we want to explore certain features of the dataset in order to look for things out of the ordinary. For example, we are interested in physical structures such as bone and skin. ▶ To do this, we first need to add a __Contour__ filter to the pipeline: select the data source (__headsq_masked.vtk__) in the pipeline browser and add a Contour filter. You can do this by either using the icon symbol in the toolbar (half a sphere) or use __Filters → Common → Contour__ in the menu bar: ![](images/contour-filter.png) -!!! Tip "Quick way to add a filter" +!!! Tip "Quick search to add a filter" When you know the name of the filter you want to add, a quicker way than navigating the Filter menu is to use **Ctrl+Spacebar**. This will show a search dialog where you can type (part of) the filter name and matching filters will get listed. Select the filter you want and press **Enter** to add it. - Of course, make sure to have selected the filter whose inputs you want to use before adding the downstream filter. + + Of course, make sure to have selected the filter whose input you want to use before adding the downstream filter. 
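Under the hood, a Contour filter looks for cell edges whose endpoint scalar values straddle the chosen isovalue, and places a surface vertex on each such edge by linear interpolation. A minimal sketch of that per-edge step (the endpoint positions and scalar values are invented for illustration):

```python
def edge_crossing(p0, p1, v0, v1, isovalue):
    """Interpolate the point on edge (p0, p1) where the scalar field
    (value v0 at p0, value v1 at p1) equals the requested isovalue."""
    t = (isovalue - v0) / (v1 - v0)   # fraction along the edge, in [0, 1]
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# Scalar value 300 at one edge endpoint, 700 at the other: the
# 500-isosurface crosses this edge exactly halfway along.
print(edge_crossing((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 300.0, 700.0, 500.0))
```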
▶ Next, we want to see what this filter does: in the __Properties__ tab, set the value of the contour under __Isosurfaces__ to 500 and click on __Apply__ and you should see a surface more-or-less representing the boy's skin. diff --git a/docs/exercise2.md b/docs/exercise2.md index 9eb238a..368adfe 100644 --- a/docs/exercise2.md +++ b/docs/exercise2.md @@ -12,7 +12,9 @@ The data contained in the CSV file is not automatically identified and understoo ![](images/disconnect.png) -▶ Open the __wervel.csv__ file and click __Apply__. ParaView will ask which reader to use, pick __CSV Reader__ and click __Ok__. Next, use __Apply__ to actually load the data. A new view pane with a table representation of the data will get added next to existing the 3D view. The table is similar to an Excel sheet. +▶ Open the __wervel.csv__ file. ParaView will ask which reader to use, pick __CSV Reader__ and click __Ok__. Next, use __Apply__ to actually load the data. A new view pane with a table representation of the data will get added next to the existing 3D view. The table is similar to an Excel sheet: + +![](images/tornado-spreadsheet-view.png) If you would open the CSV file in a text editor you would see the file contains 7 values per line, with each line representing one grid point. @@ -56,11 +58,11 @@ Next, we need to combine the three separate scalar values __vx__, __vy__ and __v You will probably have understood the name __iHat__ to represent the vector __î__, i.e. (1, 0, 0). Using the Calculator filter fairly complex expressions can be used to augment existing datasets with new and useful values, both scalars and vectors. -▶ Select the Calculator filter in the pipeline and add another Calculator filter, for creating an array __VelocityMag__ and expression `mag(Velocity)`. +▶ Select the Calculator filter in the pipeline and add *another* Calculator filter, for creating an array __VelocityMag__ and expression `mag(Velocity)`. 
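The two Calculator expressions are straightforward to mimic outside ParaView, which can be handy for sanity-checking values. A minimal pure-Python sketch of `vx*iHat+vy*jHat+vz*kHat` followed by `mag(Velocity)` for a single grid point (the sample component values are invented):

```python
import math

def velocity(vx, vy, vz):
    # vx*iHat + vy*jHat + vz*kHat: combine three scalars into one 3D vector
    return (vx, vy, vz)

def mag(v):
    # mag(...): Euclidean length of a vector
    return math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)

v = velocity(3.0, 4.0, 12.0)   # invented sample components for one point
print(v, mag(v))               # magnitude of (3, 4, 12) is 13
```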
![](images/2ndcalculator.png) -▶ The pipeline we built up to this point (shown above) creates a Polygonal Mesh dataset which contains only points: the input point positions with their respective 3D velocity vectors. You can see this on the __Information__ tab of the Calculator filter in the pipeline browser. Note that there's only 1 cell, and it contains all the 25,000 points. We also added two quantities derived from the input data, a velocity vector and its magnitude. +▶ The pipeline we built up to this point, shown above, creates a Polygonal Mesh dataset which contains only points: the input point positions with their respective 3D velocity vectors. You can see this on the __Information__ tab of the Calculator filter in the pipeline browser. Note that there's only 1 cell, and it contains all the 25,000 points. We also added two quantities derived from the input data, a velocity vector and its magnitude. ## Visualizing the flow field @@ -90,9 +92,9 @@ Now lets do some initial particle tracing through the flow field using the strea ![](images/streamtracerparams.png) -This will give you a set of lines, each representing a trace of a particles as it follows the flow in the tornado. +This will give you a set of lines, each representing a trace of a particle as it follows the flow in the tornado. -▶ Note the small white 3D axis and large sphere at the bottom of the tornado: this is the seed point (12,12,0) plus given radius around which the traced particles start. You could experiment with different locations of the seed point to see how this influences the streamlines. +▶ Note the small white 3D axis and large sphere at the bottom of the tornado: this is the seed center (12,12,0) and given radius around which the traced particles start. You could experiment with different locations of the seed point to see how this influences the streamlines. !!! 
Hint "Accidental sphere changes / Resetting a filter to last executed values" @@ -104,32 +106,36 @@ This will give you a set of lines, each representing a trace of a particles as i ▶ To make the streamlines more visually appealing, we add another filter on the output of the StreamTracer filter, namely a __Tube__ filter (__Filters → Alphabetical → Tube__). In the Tube filter's __Properties__ tab, set the radius of the tubes to 0.1 and click __Apply__. Notice how this changes the appearance of the streamlines. -▶ If you like, at this point you can experiment with different colorings of the tubes, based on e.g. velocity, angular velocity or rotation. Use the coloring controls under __Coloring__ for this. +▶ At this point, we'd like to add the original data domain as an outline. For this, enable visibility of the __TableToPoints__ filter and switch its representation to __Outline__. + +▶ You can experiment with different colorings of the tubes, based on e.g. velocity, angular velocity or rotation. Use the coloring controls under __Coloring__ for this. ## Glyphs -Finally, we'll add a different representation instead of the streamlines, called _glyphs_. Glyphs are usually small and simple 3D objects, like arrows or spheres, that get placed at each point position in a dataset to show a particular value. The glyphs are then colored, scaled and/or oriented based on scalar or vector values at that position. We'll use arrow glyphs to show the flow velocity magnitude and direction in the tornado. +Finally, we'll add a different representation instead of the streamlines, called _glyphs_. Glyphs are simple and (usually) small 3D objects, like arrows or spheres. These glyphs get placed at each point position in a dataset to show a particular value. The glyphs are then colored, scaled and/or oriented based on scalar or vector values at the location. We'll use arrow glyphs to show the flow velocity magnitude and direction in the tornado. 
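Conceptually, a Glyph filter walks over the (possibly masked) input points and emits one copy of the glyph source per kept point, oriented by the vector array and scaled by its magnitude. A minimal sketch of that placement logic (the point positions and velocity vectors are invented for illustration):

```python
import math

# Invented sample data: three points with a velocity vector each.
points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
velocity = [(0.0, 0.0, 1.0), (0.0, 2.0, 0.0), (3.0, 0.0, 4.0)]

stride = 1          # "every Nth point" masking; 1 keeps all points
scale_factor = 0.2  # uniform factor applied on top of the data value

glyphs = []
for p, v in list(zip(points, velocity))[::stride]:
    magnitude = math.sqrt(sum(c * c for c in v))
    glyphs.append({"position": p,        # where the arrow is placed
                   "orientation": v,     # direction the arrow points in
                   "scale": scale_factor * magnitude})

print(len(glyphs), glyphs[2]["scale"])  # 3 glyphs; 0.2 * 5.0 = 1.0
```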
-▶ Hide all filter output by clicking the relevant eye icons. +▶ Hide all filter output, except the __TableToPoints__ filter (the domain), by clicking the relevant eye icons. ▶ Select the __Calculator2__ filter and add a __Glyph__ filter (__Filters → Common → Glyph__). Set the __Glyph Type__ to __Arrow__, set the __Orientation Array__ to __Velocity__ (i.e. our computed velocity vectors) and __Scale Mode__ to __No scale array__. Click __Apply__. +![](images/glypsettings.png) + !!! Info "Point versus cell input" - Note that there is no need to base the Glyph filter on the Delaunay 3D output, as the Glyph filter works on 3D *points*, as in the original data set. This is unlike the Streamtracer filter needing cells, which we added using the Delaunay 3D filter. + Note that there is no need to base the Glyph filter on the Delaunay 3D output, as the Glyph filter works on 3D *points*, as in the original data set. This is unlike the Streamtracer filter needing *cells*, which we added using the Delaunay 3D filter. ParaView does provide generic `Point Data to Cell Data` and `Cell Data to Point Data` filters, to convert between the two using interpolation. -![](images/glypsettings.png) - You should now see a large number of arrows nicely distributed over the tornado dataset, indicating the direction of wind flow. As we have set the Scale Mode to Off all arrows are the same size, obscuring the insides and giving less of a visual clue to wind speed. +Let's try to improve the overall visualization, to make it easier to interpret. + ▶ Set the __Scale Array__ to __Velocity__ and the __Scale Factor__ to __0.2__ and press __Apply__. ▶ Make sure the coloring is set to __VelocityMag__ and verify that the size and colors of a glyph arrow corresponds to its velocity value. -You might wonder about the the number of glyphs placed, compared to the 25,000 points in the dataset. There is quite a large number of glyphs, and perhaps still too many to be effective. 
This doesn't help in the overall visual interpretation of the data, but we do need to balance getting enough coverage of the full dataset. +You might wonder about the number of glyphs placed, compared to the 25,000 points in the dataset. There is currently quite a large number of glyphs, and perhaps still too many to be effective. This doesn't help in the overall visual interpretation of the data, but we do need to balance getting enough coverage of the full dataset. -▶ The settings under __Masking__ control the number and distribution of the glyphs placed. See what happens to the resulting visualization when you show a glyph for every 10th point, or 500 glyphs uniformly distributed (and why that means you need choose these kinds of parameters with care). +▶ The settings under __Masking__ control the number and distribution of the glyphs placed. See what happens to the resulting visualization when you show a glyph for every 10th or 11th point, or 500 glyphs uniformly distributed (and why that means you need to choose these kinds of parameters with care). -▶ A useful variant is to apply glyphs *to the output of the Stream Trace filter* (by creating a second Stream Trace filter). This is possible because the generated streamlines are themselves polygonal data, where each streamline consists of a Poly-Line cell that uses a set of 3D points. As a Glyph filter uses point positions to place glyphs we can add Glyphs to streamlines. Experiment with this, using different types of glyphs, like Sphere and Arrow. Also try coloring by IntegrationTime to verify the direction in which the streamlines where "grown". \ No newline at end of file +▶ A useful variant is to apply glyphs *to the output of the Stream Trace filter* (by creating a second Glyph filter). This is possible because the generated streamlines are themselves polygonal data, where each streamline consists of a Poly-Line cell that uses a set of 3D points. 
As a Glyph filter uses point positions to place glyphs we can place them for each streamline. Experiment with this, using different types of glyphs, like Sphere and Arrow. Also try coloring by IntegrationTime to verify the direction in which the streamlines were "grown". \ No newline at end of file diff --git a/docs/exercise3.md b/docs/exercise3.md index 8e34acc..4e5479b 100644 --- a/docs/exercise3.md +++ b/docs/exercise3.md @@ -13,15 +13,15 @@ In this exercise, we are going to look at time-dependent data. The data we use f You should now see a contour surface of one single time point in the growth of the coral. However, what we want is to have the contour plot change dynamically over time, showing us the growth of the coral over time. -▶ To achieve this, we need to open the __Animation View__, which can be enabled from the main menu with __View → Animation View__. In the animation toolbar, set __No. Frames__ to __100__, the __End time__ to __10__. This will set up the animation to be 10 seconds long, playing a total of 100 frames, and thus 10 frames per second. +▶ To achieve this, we need to open the _Time Manager_, which can be enabled from the main menu with __View → Time Manager__. In the time manager toolbar, set __Number of frames__ to __100__. -▶ Link the contour iso-surface value to the time sequence by using the blue __+__ button left of the __Contour1__ and __Isosurfaces__ dropdown menus. +▶ Link the contour iso-surface value to the time sequence by using the blue __+__ button to the right of the __Contour1__ and __Isosurfaces__ dropdown menus. ![](images/animation.png) -▶ Verify that this adds __Contour1__ to the timeline, directly under __TimeKeeper1__ as a second "strip" over the full length of the time bar. The values at the far left and right edges of the strip are the isosurface values used at those time points. These values are based on the min/max values from the input dataset (which can you verify using the __Information__ tab of __ALT_PRPB001A.vtk__). 
+▶ Verify that this adds __Contour1__ to the timeline, directly under __Animations__ as a second "strip" over the full length of the time bar. The values at the far left and right edges of the strip are the isosurface values used at those time points (1280 and 37120, respectively). These values are based on the min/max values from the input dataset, which you can verify using the __Information__ tab of __ALT_PRPB001A.vtk__. -▶ With the animation set up you can now use the playback buttons in the main toolbar to play through the growth of the simulation. The double arrow button controls if the playback loops back to the start. +▶ With the animation set up you can now use the playback buttons, either in the Time Manager or in the main toolbar, to play through the growth of the simulation. The double arrow button controls whether the playback loops back to the start. When the animation is paused you can change the current time by clicking or dragging in the Time Sources bar. ![](images/playback.png) @@ -39,17 +39,17 @@ Since the coral is a three-dimensional structure, it is nice to look at all side Next, we will add a circular camera motion, a so-called _orbit_. -▶ In the Animation View select __Camera__ in the dropdown next to the blue __+__ button, and __Follow Path__ in the dropdown right to it. Then create a camera animation strip by pressing the blue __+__ button. If you play the animation you will see the scene rotates over time. +▶ In the Time Manager select __Camera__ in the Animations row, and __Follow Path__ in the dropdown to the right of it. Then create a camera animation strip by pressing the blue __+__ button. This will add a __Camera - RenderView1__ strip. If you play the animation you will see the scene rotates over time. However, *the data is shown upside down...* -The default camera orbit might need some tweaking, as it is set up based on the current view. For example, the camera might be a bit too close to the coral object and we want to move it back a bit. 
+So the default camera orbit needs some tweaking, as it is set up based on the current view. For example, the camera might be a bit too close to the coral object and we want to move it back a bit. -▶ Bring up the animation parameters by double-clicking on the Camera strip in the Animation View, this will show the __Animation Keyframes__ dialog. +▶ Bring up the animation parameters by double-clicking on the Camera strip in the Time Manager; this will show the __Animation Keyframes__ dialog. ![](images/animation-keyframes.png) -There are currently on two key frames defined, for time 0 and time 10. We will change the values for time 0 to tweak the camera animation. +There are currently only two key frames defined, for time 0 and time 1. We will change the values for time 0 to tweak the camera animation. -▶ Select the row for time 0 in the dialog and click __Create Orbit__. This will show the Create Orbit dialog. The __Center__ value is the point around which the camera is rotated, __Normal__ is the vector used for the rotation and the __Origin__ is the initial camera position. Set the __Normal__ value to be -Y up __(0, -1, 0)__, or else the might be some weird rotation during the orbit. Normally, you will need to experiment in your own scenes to figure out correct values, but use the ones given below here and press __Ok__. +▶ Select the row for time 0 in the dialog and click __Create Orbit__. This will show the Create Orbit dialog. The __Center__ value is the point around which the camera is rotated, __Normal__ is the vector used for the rotation and the __Origin__ is the initial camera position. Set the __Normal__ value to be -Y up __(0, -1, 0)__, or else there might be some weird rotation during the orbit. Normally, you will need to experiment in your own scenes to figure out correct values, **but use the ones given below** and press __Ok__. ![](images/orbitparameters.png) @@ -59,4 +59,4 @@ There are currently on two key frames defined, for time 0 and time 10.
We will c !!! Tip "Saving the animation to a video" - Although not part of this exercise, it is really easy at this point to save the animated view to a movie file from ParaView. Use **File → Save Animation** for this. You can either save to a sequence of images, or directly to a video file. \ No newline at end of file + Although not part of this exercise, it is really easy at this point to save the animated view to a movie file from ParaView. Use **File → Save Animation** for this. You can either save to a sequence of images, or directly to a video file, such as a .avi or .ogv. In this case it's best to set the __Frame Rate__ value to something like 20 (leading to a 5 second animation, as we have 100 frames). \ No newline at end of file diff --git a/docs/images/playback.png b/docs/images/playback.png index 6897142..2e2fe0f 100644 Binary files a/docs/images/playback.png and b/docs/images/playback.png differ diff --git a/docs/images/tornado-spreadsheet-view.png b/docs/images/tornado-spreadsheet-view.png new file mode 100644 index 0000000..82d6d56 Binary files /dev/null and b/docs/images/tornado-spreadsheet-view.png differ
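The frame-rate arithmetic from the Save Animation tip can be sketched as:

```python
# Saved animation duration = number of frames / playback frame rate.
n_frames = 100      # frames set up for the coral animation
frame_rate = 20     # frames per second chosen in the Save Animation dialog

duration_s = n_frames / frame_rate
print(f"{duration_s:.0f} second animation")
```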