Tuesday, November 26, 2013

Transfer functions

DOWNLOAD CODE


Understanding Transfer Functions

I explored both the x-mas and engine example sets in ImageVis3D.  It was more difficult to create a nice image with the x-mas set. Below is an image with the alpha adjusted to hide portions of the image that obscure the tree.
With the alpha adjusted, I attempted to modify the colors. The image below represents one of the more successful attempts; with this adjustment I was able to apply different colors to the tree itself and to the ornaments.

  • What did you like about the transfer function editor?
The ability to drag your mouse around to adjust the channels made it easy to play around with different values. The histogram with the values was also nice.
  • What is difficult about this editor / widget?
It would have been nice to have a reset button.  The checkboxes for the color components took some time to figure out.

  • How would you improve the 1D transfer function editor?
I would require each channel to be set separately. 


Running the Volume Renderer

Volume Visualization and Control Panel

I explored several datasets in the Processing volume visualization application.  Below is the bucky ball dataset.
  • What were you able to find from your volume data set?
The bucky ball's isovalues of interest fall roughly between 128 and 233.  Setting the center outside this range limits the ability to view the volume rendering.
  • What is useful about the step function?
The major attribute that makes it possible to create a good view of any of the datasets is the CENTER value. 
  • What makes this particular function limited?
My assumption is that the CENTER essentially replaces the need to set an alpha value for the dataset.
With this type of input you cannot set the alpha to include several separate isovalue ranges, as sketched below.
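Below is a minimal Processing-style sketch of how I understand the step function; the center and width parameters are my own names for illustration, not necessarily what the application exposes.

```
// Minimal sketch of a step-style transfer function. 'center' and 'width'
// are illustrative parameters, not the application's actual names.
float stepAlpha(float value, float center, float width) {
  // Full opacity only inside a single window around the center, zero elsewhere,
  // so two separate isovalue ranges cannot both be made visible at once.
  if (abs(value - center) <= width / 2) {
    return 255;
  }
  return 0;
}
```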

Code Structure 
Designing Your Own Transfer Function Widget 

It seems to me that the most useful feature of a transfer function widget is a histogram that provides some information about the data values.  With the histogram present, visualizing how the four channels are mapped on top of it provides the most useful basis for adjustment.

Below are my sketches.  My first idea was really simple: just add range sliders for each of the channels.  These could be adjusted, and the resulting change in the image would let the user evaluate which values to use.


The second idea was to add a histogram and a range slider. This is very similar to the first simple widget; however, the histogram would display where the channels are currently mapped.


My final design choice, and the one that I will proceed with, involves four histograms, one for each channel.  Each histogram would have a vertical slider and a horizontal range slider that would adjust the intensity and the range to map, respectively.  In addition, the user will have the ability to add additional sliders and thus map additional value ranges to a single channel.  This design is more limited than the widget provided in ImageVis3D, but it will provide similar functionality and allow me to explore the Controls library.
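As a rough sketch of how one histogram's controls would behave in this design, the helper below maps an isovalue to a single channel's value from a hypothetical range slider and intensity slider; the names are placeholders for whatever the Controls library actually provides.

```
// Illustrative mapping for one channel of the proposed widget.
// 'rangeLow'/'rangeHigh' come from the horizontal range slider and
// 'intensity' from the vertical slider; all names are placeholders.
float channelValue(float isovalue, float rangeLow, float rangeHigh, float intensity) {
  if (isovalue >= rangeLow && isovalue <= rangeHigh) {
    return intensity;   // inside the selected range: the chosen intensity (0-255)
  }
  return 0;             // outside the range: this channel contributes nothing
}
```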






My final control panel is below.  The panel has four histograms, one for each channel.  The color of the controls indicates which channel is being adjusted.  Below each histogram is a range slider that can be adjusted on each end and can be moved by clicking the middle.  The rectangle on the histogram indicates the range currently selected for that channel's values. A slider on the histogram allows the intensity of the channel to be adjusted.  Clicking the 'plus' button under each range slider allows the user to set additional ranges for a single channel.  This interaction can be seen in the alpha channel. The rectangles are numbered to make it easier to differentiate multiple settings for one channel.

Each histogram represents the count of values from 1-255.  The counts are log transformed to compress the data.
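The sketch below shows roughly how each of these histograms is built, assuming the volume values are already in an int array with entries in the 0-255 range.

```
// Rough sketch of the log-scaled histogram, assuming 'data' holds the
// volume values as ints in the 0-255 range.
float[] buildHistogram(int[] data) {
  float[] counts = new float[256];
  for (int i = 0; i < data.length; i++) {
    counts[data[i]]++;                // raw count for each value
  }
  for (int v = 0; v < 256; v++) {
    counts[v] = log(counts[v] + 1);   // log transform compresses the large counts
  }
  return counts;
}
```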
Finding Good Transfer Functions 

I first evaluated the bonsai tree using my transfer function widget.  The image is below; the above control panel was used to generate it.  Like most of the datasets, the values at the low end are overrepresented noise.  The leaves of the bonsai have values from 0-54, while the pot has values from 199-210.  This allowed me to color them differently.
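As an illustration of the kind of mapping this panel produces, the helper below hard-codes the bonsai ranges quoted above; the actual colors come from the sliders, so the green and brown here are only placeholders.

```
// Illustrative color lookup using the bonsai ranges mentioned above.
// The specific colors are placeholders, not the ones from my panel.
color bonsaiColor(float isovalue) {
  if (isovalue <= 54) {
    return color(0, 180, 0, 255);     // leaves (0-54)
  }
  if (isovalue >= 199 && isovalue <= 210) {
    return color(160, 82, 45, 255);   // pot (199-210)
  }
  return color(0, 0, 0, 0);           // everything else fully transparent
}
```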
I then looked at the foot dataset.  The flesh had lower isovalues, which I was able to render as red.  The bone isovalues ranged from 50-255, and the denser portions had higher isovalues.



  • What are the strengths and weaknesses of your design?
I like my design; I think it is easy to use and allows for a lot of adjustment.  However, my choice of sliders does not allow the user to create nuanced adjustments: they are forced to use rectangles, which is a weakness. I attempted to overcome this by allowing multiple boxes.
  • What would you change to make your widget more effective?
I would allow for even more rectangles to be drawn. I would remove the vertical slider and allow the user to drag the highlighting rectangle instead.  They would be able to drag the middle of the top edge to move the box up and down, or grab a corner to move that corner up or down, creating an irregular shape.
  • What are the pros and cons for volume rendering as a technique? What are the challenges?
Volume rendering allows you to focus on different densities in the image and move through the image, which can lead to greater insight.  The con is that it is hard to automate a good rendering, and finding a good representation is often done empirically.

Sunday, November 10, 2013

Scalar_data

DOWNLOAD CODE



DATA READER
I started the assignment by loading the data into a 1-D array and then mapping the pixel values linearly into the range 1-255.  This should create a grey scale image.  I then displayed this image on the screen using the width and height values parsed from the NRRD file. Below is the image as it appeared on the screen, mapped to the same coordinates as the NRRD file.
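A minimal sketch of this step in Processing is below; it assumes the raw values and the width and height have already been parsed out of the NRRD file, so the header parsing itself is omitted.

```
// Sketch of the reader step, assuming 'values', 'w', and 'h' were already
// parsed from the NRRD file. Swap the 1 and 255 endpoints to invert the image.
float[] toGrayscale(float[] values) {
  float lo = min(values);
  float hi = max(values);
  float[] gray = new float[values.length];
  for (int i = 0; i < values.length; i++) {
    gray[i] = map(values[i], lo, hi, 1, 255);   // scale into the 1-255 range
  }
  return gray;
}

void drawData(float[] gray, int w, int h) {
  for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
      stroke(gray[y * w + x]);   // grey scale stroke for this sample
      point(x, y);               // one data sample per screen pixel
    }
  }
}
```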

COLOR MAP
I then changed the grey scale to a color mapped image. I did this by using colorLerp() and choosing between two different saturation levels of the same color (below) and also two different colors, red and green (below).
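The red/green version of the mapping can be sketched with Processing's built-in lerpColor(); the endpoint colors for the single-hue version are not shown here.

```
// Sketch of the two-color mapping, red at the low end and green at the high end.
color mapColor(float gray) {
  float t = map(gray, 1, 255, 0, 1);   // normalize the grey value to 0-1
  return lerpColor(color(255, 0, 0), color(0, 255, 0), t);
}
```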

Questions
  • Where did you get your color map?
    • I used the color sphere linked under the color lecture to choose my colors.  For the red and green I just picked them to see what they looked like.
  • What makes it an appropriate color map for this data?
    • The colors need to show contrast in order for the image details to be visible.

INTERPOLATE THE GRID
Next I adjusted the grid size to have a fixed height of 800 px.  The major change in drawing the image was to use rect() instead of point(), so that I could fill in the white space that was created by stretching the image. Below is the stretched image.
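A sketch of the switch from point() to rect() is below; cellW and cellH are illustrative names for the on-screen size of one data cell after stretching.

```
// Draw each data sample as a filled rectangle so the stretched grid has no gaps.
// 'cellW' and 'cellH' are the on-screen size of one data cell (illustrative names).
void drawStretched(float[] gray, int w, int h, float cellW, float cellH) {
  noStroke();
  for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
      fill(gray[y * w + x]);
      rect(x * cellW, y * cellH, cellW, cellH);
    }
  }
}
```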
To do the bilinear interpolation I changed to the test set so that I could have a smaller data set.  Here is an image of the test set without interpolation.
While in the end I found the bilinear interpolation to be straightforward, I did have a difficult time conceptualizing it.  To make it a little easier I modified my code and put the data into a 2-D array.  Then I looped through each pixel position, getting the values of the four surrounding corners and using map() to convert them to values between 1-255.  I used lerp() to interpolate along x on the top and bottom edges of the cell, and then interpolated between those two results along y.  This was done for every pixel position.  The following image was produced after implementing the bilinear interpolation.
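The interpolation step can be sketched as below, assuming the data is in a 2-D array already mapped to 1-255 and that (gx, gy) is the pixel's position in grid coordinates.

```
// Bilinear interpolation for one pixel. 'grid' is the 2-D data array and
// (gx, gy) is the pixel's floating-point position in grid coordinates.
float bilinear(float[][] grid, float gx, float gy) {
  int x0 = floor(gx);
  int y0 = floor(gy);
  int x1 = min(x0 + 1, grid[0].length - 1);   // clamp to the last column
  int y1 = min(y0 + 1, grid.length - 1);      // clamp to the last row
  float tx = gx - x0;
  float ty = gy - y0;
  // interpolate along x on the top and bottom edges of the cell...
  float top    = lerp(grid[y0][x0], grid[y0][x1], tx);
  float bottom = lerp(grid[y1][x0], grid[y1][x1], tx);
  // ...then between those two results along y
  return lerp(top, bottom, ty);
}
```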
Now I looked at the brain data and produced the following interpolated image.  The image does look smoother than the first stretched image.

Questions
  • What, if anything, makes interpolation of your data tricky?
    • The interpolation is made a little tricky by having to work with values that are not only integers.
  • Do you notice anything odd about the data? Do any values stick out? 
    • I did not notice anything odd; the data seemed to work fine once the algorithm was implemented.

ISOCONTOURS - MARCHING SQUARES
I went back to the test data to test marching squares.  I first implemented the algorithm without interpolating the data, just by figuring out each cell's binary value and then using those values to draw the possible cases.  I drew the following image using this approach.
This matched the example on the assignment page, with the exception that some of my ambiguous cell cases were flipped.  I then used the map() function to help map the values to the grid range. The following image was achieved after this adjustment.
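A sketch of the cell classification step is below, again assuming a 2-D data array; the 16-case lookup that actually draws the line segments is omitted.

```
// Classify one cell for marching squares by testing its four corners
// against the isovalue; the result (0-15) indexes the case table.
int cellCase(float[][] grid, int x, int y, float isovalue) {
  int c = 0;
  if (grid[y][x]         >= isovalue) c |= 1;   // top-left corner
  if (grid[y][x + 1]     >= isovalue) c |= 2;   // top-right corner
  if (grid[y + 1][x + 1] >= isovalue) c |= 4;   // bottom-right corner
  if (grid[y + 1][x]     >= isovalue) c |= 8;   // bottom-left corner
  return c;
}
```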
Next I loaded the brain data set with an isovalue of 176.
To explore the isovalues, I added an up-down-arrow interaction which increments the isovalue by 2.  On the low end the image disappears when the isovalue is set around 100, and on the high end it disappears around 230.  Below are images with isovalues of 210 and 144.


I also implemented the 'c' keystroke to switch between the marching squares image and the bilinear interpolation image.
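The keyboard interaction can be sketched as below; isovalue and showContours are assumed globals that draw() reads when rendering.

```
float isovalue = 176;          // assumed global read by draw()
boolean showContours = true;   // assumed global read by draw()

void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP)   isovalue += 2;   // step the isovalue up
    if (keyCode == DOWN) isovalue -= 2;   // step the isovalue down
  } else if (key == 'c') {
    showContours = !showContours;         // toggle contours vs. interpolated image
  }
}
```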

Questions
  • Are there any problems with your marching squares algorithm?
    • It works as described.
  • What is an interesting isovalue on the brain data set? Why?
    • The isovalue 210 is interesting because it highlights the skull rather than the brain.
  • Compared to a color map, are there any tasks that isocontours seem more effective for? Why or why not? Which technique do you think is better?
    • I don't think one technique is better than the other.  The ability to change the isovalue is useful for the isocontours; it can be used to focus attention on a particular feature.
DATA EXPLORATION
Below is the Mt. Hood data rendered with the color values I used for the brain data.
The following is the Mt. Hood data with a three toned color map.  Negative values in the file make a clear demarcation between the high contrast areas.
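A three-tone map of this kind can be sketched as below; the specific colors and the 0.5 midpoint are illustrative rather than the exact values I used.

```
// Illustrative three-tone color map; 't' is the normalized data value (0-1).
color threeTone(float t) {
  color low  = color(30, 60, 150);     // placeholder low tone
  color mid  = color(240, 240, 240);   // placeholder middle tone
  color high = color(120, 90, 60);     // placeholder high tone
  if (t < 0.5) {
    return lerpColor(low, mid, t / 0.5);
  }
  return lerpColor(mid, high, (t - 0.5) / 0.5);
}
```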
For marching squares the image did not show up until the isovalue was set to 126 or lower.  The highest peaks are not seen until the isovalue is less than zero.

Questions
  • How did you adjust your color map for the Mt. Hood data set?
    • I added a third tone to the color map to make the peak area less saturated.
  • Did the isocontours in the Mt. Hood dataset differ from the brain data set? Why?
    • Yes, there are far fewer points with similar values. The effect is that only a few parts of the image are contoured at any one isovalue.
  • For the brain and Mt. Hood data sets, were either color maps or isocontours more effective? Why?
    • I think the brain set worked much better with the contours because of the issue mentioned in the previous question. The color map that I had set for the brain was not as effective with the Mt. Hood data. Once adjusted, the color maps seemed to be similar for both.
The Mt. Hood adjusted color map applied to the brain data.