Assignment 3 - Replicating a Classic Experiment
===
Justin Moy, Sophia Strano, Carley Gilmore

For the scope of this project, assume the role of a scientist who runs experiments for a living.

Q: How do we know that bar charts are "better" than pie charts?
A: Controlled experiments!

In this assignment you'll implement a simple controlled experiment using some of the visualizations you've been building in this class.
You'll need to develop support code for the experiment sequence, results file output, and other experiment components.
(These are all simple with JavaScript buttons and forms.)
The main goals are to a) test three competing visualizations, b) implement data generation and error calculation functions inspired by Cleveland and McGill's 1984 paper, c) run the experiment with 10 participants (or a trial equivalent), and d) do some basic analysis and reporting of the results.

For this assignment you should aim to write everything from scratch, since experimentation often requires controlling all elements of the chart.
You should definitely *reference* demo programs from books or the web, and if you do, provide a References section with links at the end of your README.

Here is a link to our [experiment](https://users.wpi.edu/~jmmoy/BCB4002/a3/a3.html), and a screenshot of the page it brings you to:

![image](https://user-images.githubusercontent.com/64103447/154109871-df09d4fc-7a6c-4516-9d55-0a8da7b1dc8a.png)

Going Beyond Cleveland-McGill
---

Several students have expressed interest in conducting surveys of various sorts. You are encouraged to move beyond Cleveland and McGill if you can think of other interesting visualization experiment designs and corresponding analyses.
You might study how people interpret COVID visualizations, for example.
If you decide to go a custom route, simply contact staff so we can help you set acceptable parameters.
Basically, we still want you to do a multi-trial study with each participant, to raise the chance that you get solid results.
How you measure "error" and similar facets also matters. But you can't go wrong with finding a visualization study online to start from. :)

Experiment
---

We are testing how effectively people perceive three different visualizations of the same data: a bar chart, a pie chart, and a circle chart.
For each experiment, we generate 5 random numbers between 0 and 100.

Each visualization encodes the data points in a different way:

- Bar chart: height of the bars
- Pie chart: area of the slices
- Circle chart: radius of the circles
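As a rough sketch, the data-generation step above can be written in plain JavaScript (the function and field names here are our own, not taken from the actual experiment code):

```javascript
// Generate n random data points between 0 and 100 and mark two
// distinct ones for comparison, as in the Cleveland & McGill setup.
function generateTrialData(n = 5) {
  const values = Array.from({ length: n }, () => Math.random() * 100);
  // Choose two distinct indices to mark.
  const first = Math.floor(Math.random() * n);
  let second = Math.floor(Math.random() * (n - 1));
  if (second >= first) second += 1;
  return { values, marked: [first, second] };
}

// The true answer for a trial: the smaller marked value as a
// percentage of the larger one.
function truePercent(trial) {
  const [a, b] = trial.marked.map(i => trial.values[i]);
  return (Math.min(a, b) / Math.max(a, b)) * 100;
}
```

Each page load regenerates the data, which matches how our trials rerandomize every time.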

During the experiment, we labeled sections of each chart with letters to indicate which portions are being compared.
In a random order, we show each participant the visualizations and have them estimate, as a percentage, what portion the smaller outlined section is of the larger.

Requirements
---

- Look over Cleveland and McGill's original experiment (see the section below) and [watch this video](experiment-example.mp4) to get a sense of the experiment structure and where your visualizations will go.
- When viewing the example experiment video, note the following:
- Trials are in random order.
- Each trial has a randomly generated set of 5-10 data points.
- Two of these data points are marked.
- (Note: the experiment UI and User Experience could be better. Plenty of design achievements here).
- Implement the data generation code **as described in the Cleveland & McGill paper**.
- The goal is to generate a set of random data points (usually 5 or 10, with values between 0 and 100) and to mark two of them for comparison in the trial.
- Add 3 visualizations (i.e. conditions) to your experiment. When you are adding these visualizations, think about *why* these visualizations are interesting to test. In other words, keep in mind a *testable hypothesis* for each of the added visualization. Some good options include bar charts, pie charts, stacked-bar charts, and treemaps. You can also rotate your bar chart to be horizontal or upside-down as one of your conditions. You are encouraged to test unorthodox charts -- radar charts come to mind, but there are MANY possibilities here-- feel free to be creative!
- Follow the style from Cleveland and McGill closely (e.g. no color, simple lines) unless you are specifically testing a hypothesis (e.g. color versus no color). Pay attention to spacing between elements like bars. Do not mark bars for comparison using color-- this makes the perceptual task too easy.
- After each trial, implement code that grades and stores participant’s responses.
- At the end of the experiment, to get the data, one easy option is to use JavaScript to show the data from the current experiment\* (i.e. a comma-separated list in a text box) and copy it into your master datafile. See the Background section below for an example of what this file should look like. (\*Alternately, implement a server, if you're experienced with that sort of thing.)

- Figure out how to calculate "Error", the difference between the true percentage and the reported percentage.
- Scale this error using Cleveland and McGill's log-base-2 error equation. For details, see the background section (there's a figure with the equation). This becomes your "Error" column in the output. Make sure you use whole percentages (not decimals) in the log-base-2 equation, and make sure you handle the case where a person gets the exact percentage correct (log-base-2 of 1/8 is -3; it is better to set this to 0).
- Run your experiment with 10 or more participants, or-- make sure you get at least 200 trials **per visualization type** in total.
- Grab friends or people in the class.
- Run at least 20 trials per visualization type, per participant. This is to ensure that you cover the range of possible answers (e.g. 5%, 10%, ..., 95%)
- Make sure to save the resulting CSV after each participant. Compile the results into a master csv file (all participants, all trials).
- Produce a README with figures that shows the visualizations you tested and results, ordered by best performance to worst performance. Follow the modern Cleveland-McGill figure below -- though note that using names instead of icons is fine.
- To obtain the ranking, calculate and report the average log2Error for each visualization across all trials and participants. This should be straightforward to do in a spreadsheet.
- Use Bootstrapped 95\% confidence intervals for your error upper and lower bounds. Include these in your figures. Bootstrapped confidence intervals are easily implemented in R + ggplot2 using the `stat_summary` geom. You can also use Excel, Python, or many many other tools. Bootstrapped 95% CIs are **very** useful in modern experiment practice.
- Include example images of each visualization as they appeared in your experiment (i.e. if you used a pie chart show the actual pie chart you used in the experiment along with the markings, not an example from Google Images).
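For the bootstrapped confidence intervals, R + ggplot2 is one option; as a sketch, a basic percentile bootstrap can also be done directly in JavaScript (the helper names and resample count below are our own choices):

```javascript
// Percentile-bootstrap 95% confidence interval for the mean of an
// array of error scores: resample with replacement, take the mean of
// each resample, and read off the 2.5th and 97.5th percentiles.
function bootstrapCI(data, resamples = 2000) {
  const mean = xs => xs.reduce((a, b) => a + b, 0) / xs.length;
  const means = [];
  for (let i = 0; i < resamples; i++) {
    const sample = Array.from(
      { length: data.length },
      () => data[Math.floor(Math.random() * data.length)]
    );
    means.push(mean(sample));
  }
  means.sort((a, b) => a - b);
  return {
    lower: means[Math.floor(0.025 * resamples)],
    upper: means[Math.floor(0.975 * resamples)],
  };
}
```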

## General Requirements

1. Your code should be forked from the GitHub repo and linked using GitHub Pages.
2. Your project should use d3 to build visualizations.
3. Your writeup (README.md in the repo) should contain the following:

- Working link to the experiment hosted on gh-pages or some other site.
- Concise description and screenshot of your experiment.
- Description of the technical achievements you attempted with this project.
- Description of the design achievements you attempted with this project.

Background
---

We hypothesized that, in agreement with the Cleveland-McGill paper, when users are presented with different data visualizations, the data values can be most easily judged, and therefore best understood, when presented in a bar chart.
As a design choice, we made the visualizations black and white to make it easier to see and distinguish which sections to compare, so that color variance wouldn't skew the results.

In 1984, William Cleveland and Robert McGill published the results of several controlled experiments that pitted bar charts against pies and stacked-bar variants.
Their paper ([copy 1](http://www.cs.ubc.ca/~tmm/courses/cpsc533c-04-spr/readings/cleveland.pdf), [copy 2](http://info.slis.indiana.edu/~katy/S637-S11/cleveland84.pdf)) is considered a seminal paper in data visualization.
In particular, they ran a psychology-style experiment where users were shown a series of randomly generated charts with two graphical elements marked like this:

![cleveland bar chart](img/cleveland-bar.png)

In our replication, we decided to show only one trial at a time so as not to lose our participants' focus.
Since we did not want to show the charts repeatedly, participants instead have the option to retake the trials as many times as they like by simply reloading the page.

As far as implementation was concerned, the Google Sheet submission code was based on [these instructions](https://github.com/levinunnink/html-form-to-google-sheet), the cookie handling was adapted from [this tutorial](https://linuxhint.com/set-check-read-cookie-javascript/), and the pie chart was adapted from [this tutorial](https://www.geeksforgeeks.org/d3-js-pie-function/).

In Cleveland and McGill's study, participants were asked, "What percentage is the smaller of the larger?"
This was repeated hundreds of times with varying data and charts.
By the end of the study, Cleveland and McGill had amassed a large dataset that looked like this:

![cleveland table](img/cleveland-table.png)

__Log-base-2 or "cm-error"__: The true percent is the actual percentage of the smaller to the larger, while the reported percent is what participants reported.
Cleveland and McGill recognized that their analyses would be biased if they took `abs(ReportedPercent – TruePercent)` as their score for error.
To compensate, they came up with a logarithmic scale for error with this equation:

![cleveland equation](img/cleveland-equation.png)

After obtaining results from our participants, we created the following visualization using 95% bootstrapped confidence intervals:

![thumbnail_Capture](https://user-images.githubusercontent.com/64103447/154147199-453b93d1-b32a-4384-a9a3-b5c6949b59fe.png)

Our resulting graph was created in Tableau to model the box-and-whisker plot shown in the Cleveland-McGill experiment. Despite the use of the absolute value, a few outlying guesses produced negative values; nevertheless, the results largely mirror Cleveland and McGill's: bar charts are the visualization participants judge most accurately. This supports our hypothesis that users find the bar chart easiest to comprehend.

You'll be implementing this error score as part of the lab.
(Hint: it's not a trick question; this is just to familiarize you with the experiment protocol.)
With this Cleveland-McGill error score you can better compare the performance of the charts you test and figure out which one performs best.
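As a sketch, the error score could be computed like this in JavaScript (the function name is ours; the exactly-correct case is clamped to 0, as the requirements suggest):

```javascript
// Cleveland-McGill log-base-2 error: log2(|reported - true| + 1/8),
// computed on whole percentages. An exactly correct answer would give
// log2(1/8) = -3, so that case is set to 0 instead.
function log2Error(reportedPercent, truePercent) {
  const diff = Math.abs(reportedPercent - truePercent);
  if (diff === 0) return 0;
  return Math.log2(diff + 1 / 8);
}
```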
Technical Achievements
---
- Cookie feature: cookies are stored properly on desktop so that participant information persists between visits.
- Google Sheets integration: a Google Sheet populates with the results. We use two different forms, a hidden form that carries the data we want and the visible form the participant fills out. Google Sheets makes the resulting CSV file much more accessible to anyone who wants it.
- Calculations comparing the expected versus actual user responses are included in the code.
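A rough sketch of how a username cookie like ours can be set and read in the browser (the helper names are illustrative; our actual code followed the tutorials linked in the Background section):

```javascript
// Store the participant's username in document.cookie so returning
// visitors can be greeted with a personalized welcome message.
function setUsernameCookie(name, days = 30) {
  const expires = new Date(Date.now() + days * 864e5).toUTCString();
  document.cookie =
    `username=${encodeURIComponent(name)}; expires=${expires}; path=/`;
}

// Read the username back out of document.cookie, or null if unset.
function getUsernameCookie() {
  const row = document.cookie
    .split('; ')
    .find(part => part.startsWith('username='));
  return row ? decodeURIComponent(row.split('=')[1]) : null;
}
```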

Design Achievements
---
Our visualizations include a prompt where you can input a username, giving the interface a more personal feel. When you log back in, you are greeted with a welcome message and can take the trials as many times as you'd like.
You can seamlessly click from visualization to visualization, responding all on one web page. The data is randomly regenerated for each trial to ensure that the user's ability to judge differences is accurately measured every time.

As a baseline, compare your average error scores to the following chart, which includes both Cleveland and McGill's results and more recent extensions of this experiment (lower error indicates better performance; error bars are bootstrapped [95% confidence intervals](http://en.wikipedia.org/wiki/Confidence_interval#Meaning_and_interpretation)):

![cleveland results](img/cleveland-results.png)
GitHub Details
---

- Fork the GitHub Repository. You now have a copy associated with your username.
- Make changes to index.html to fulfill the project requirements.
- Make sure your "master" branch matches your "gh-pages" branch. See the GitHub Guides referenced above if you need help.
- Edit this README.md with a link to your gh-pages site: e.g. http://YourUsernameGoesHere.github.io/Experiment/index.html
- Replace this file (README.md) with your writeup and Design/Technical achievements.
- To submit, make a [Pull Request](https://help.github.com/articles/using-pull-requests/) on the original repository.
- Name your submission using the following scheme:
```
a3-FirstLastnameMember1-FirstLastnameMember2-FirstLastnameMember3-...
```

Our submission is named a3-SophiaStrano1-JustinMoy-CarleyGilmore.

![image](https://user-images.githubusercontent.com/64103447/154149224-9f50405d-a526-4e7f-96db-805ae4b3047a.png)

Thanks for watching!
