Commit

Typo fixes for week 8 material

NickCH-K authored Aug 25, 2020
1 parent efe7027 commit fbedef3
Showing 8 changed files with 4,770 additions and 1 deletion.
26 changes: 26 additions & 0 deletions Week_08/Week_08_Experiments.Rmd
@@ -175,6 +175,13 @@ knitr::include_graphics('opera_study.png')
- As well as the procedural stuff we'll talk about here
- They're not automatically better than other methods. The problems they face are just different

---

# Concept Checks

- Even if we have an experiment that *statistically* picks up the exact effects we're looking for, what are some ways in which we might expect that it's picking up *unnatural behavior*?
- Design a hypothetical experiment. It can be about anything you like. But also, select *what you want the outcome to be*. How might you be able to tip the scales in favor of that outcome by selecting a non-representative group to do the experiment on?
- What are some things that would need to be true for us to assume that the results from an experiment apply to the whole population?

---

@@ -267,6 +274,15 @@ knitr::include_graphics('opera_study.png')

---

# Concept Checks

- How many observations, roughly, would we need to get 90% power to detect a one-unit effect when the standard deviation is 10 and we have an experiment where half the sample is randomized into treatment and control? (A sketch checking this in R follows this list.)
- Intuitively, why do we need more observations to have power to detect a small effect?
- Which of these analyses is likely to have higher statistical power: detecting the effect of a job-training program on whether you find a job in the next month, or detecting the effect of that same job-training program on the proportion of the rest of your life that you spend unemployed?
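
A sketch of checking the first question above with base R's `power.t.test()` (the one-unit effect, SD of 10, and 90% power come from the question; the 5% significance level and equal group sizes are the function's defaults):

```r
# Two-sided, two-sample t-test with equal group sizes:
# detect an effect of 1 unit when the outcome's SD is 10
power.t.test(delta = 1, sd = 10, power = 0.9, sig.level = 0.05)

# Returns n of roughly 2,100 *per group*, so about 4,200
# observations in total for the experiment
```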

---


# How to Randomize

- The point of randomization is that the treatment and control groups are basically the same on everything, even the stuff we can't measure (a sketch of a simple way to do this in R follows)
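
As a minimal sketch of what that might look like in R (the data frame `df` and its 500 observations are hypothetical):

```r
set.seed(1000)  # arbitrary seed so the assignment is reproducible
df <- data.frame(id = 1:500)

# Shuffle a vector containing exactly 250 TRUEs and 250 FALSEs,
# so exactly half the sample lands in treatment
df$treatment <- sample(rep(c(TRUE, FALSE), length.out = nrow(df)))

# Assignment depends on nothing about the observations, so the two
# groups are balanced in expectation on measured *and* unmeasured traits
```
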
@@ -497,6 +513,16 @@ export_summs(itt, twosls, statistics = c(N = 'nobs'))

---

# Concept Checks

- Why do attrition and non-compliance only cause *really bad* problems when they're non-random?
- How do we know that random non-compliance *shrinks* the effect, as opposed to estimating it to be too big?
- Say we have 50 covariates, and we create a balance table for them. We get 3 statistically significant differences. Should we be concerned about our randomization procedure? Should we add controls to our analysis? (A back-of-the-envelope check follows this list.)
- Say we are doing an experiment where we have no covariates. We get some attrition. What should we do?
- Say we are doing an experiment where we don't have any information on whether someone was actually treated, just whether we assigned them to treatment. What should we be concerned about? (A sketch after this list contrasts intent-to-treat with 2SLS.)
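
For the balance-table question, a quick back-of-the-envelope check (treating the 50 tests as roughly independent, each with a 5% false-positive rate):

```r
# Significant differences expected by pure chance:
50 * 0.05   # = 2.5, so seeing 3 is unremarkable

# Probability of 3 or more significant results out of 50 tests
# even when the randomization worked perfectly:
pbinom(2, size = 50, prob = 0.05, lower.tail = FALSE)   # about 0.46
```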

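And for the non-compliance questions, a minimal sketch contrasting the intent-to-treat regression with 2SLS, in the spirit of the `export_summs(itt, twosls, ...)` context line above (the data are simulated and hypothetical; `ivreg()` is from the AER package):

```r
library(AER)     # ivreg(), for two-stage least squares
library(jtools)  # export_summs(), for the side-by-side table

# Simulated experiment: random assignment, random (one-sided) non-compliance
set.seed(1000)
n <- 2000
df <- data.frame(assigned = sample(rep(c(1, 0), n / 2)))
df$treated <- df$assigned * rbinom(n, 1, .7)  # only 70% of assigned take it up
df$y <- 2 * df$treated + rnorm(n)             # true treatment effect of 2

# Intent-to-treat: outcome on *assignment*. Random non-compliance
# shrinks this toward zero (about .7 * 2 = 1.4 here), which is why
# it understates rather than overstates the effect
itt <- lm(y ~ assigned, data = df)

# 2SLS: assignment instruments for actual treatment, recovering
# the effect of treatment itself (about 2)
twosls <- ivreg(y ~ treated | assigned, data = df)

export_summs(itt, twosls, statistics = c(N = 'nobs'))
```
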
---

# Next!

To complete the module (due by the end of the second-to-last week of class):
28 changes: 27 additions & 1 deletion Week_08/Week_08_Experiments.html
@@ -395,7 +395,7 @@

# Experiments
## You’re in Control!
- ### Updated 2020-08-01
+ ### Updated 2020-08-21

---

@@ -493,6 +493,13 @@
- As well as the procedural stuff we'll talk about here
- They're not automatically better than other methods. The problems they face are just different

---

# Concept Checks

- Even if we have an experiment that *statistically* picks up the exact effects we're looking for, what are some ways in which we might expect that it's picking up *unnatural behavior*?
- Design a hypothetical experiment. It can be about anything you like. But also, select *what you want the outcome to be*. How might you be able to tip the scales in favor of that outcome by selecting a non-representative group to do the experiment on?
- What are some things that would need to be true for us to assume that the results from an experiment apply to the whole population?

---

@@ -585,6 +592,15 @@

---

# Concept Checks

- How many observations, roughly, would we need to get 90% power to detect a one-unit effect when the standard deviation is 10 and we have an experiment where half the sample is randomized into treatment and control?
- Intuitively, why do we need more observations to have power to detect a small effect?
- Which of these analyses is likely to have higher statistical power: detecting the effect of a job-training program on whether you find a job in the next month, or detecting the effect of that same job-training program on the proportion of the rest of your life that you spend unemployed?

---


# How to Randomize

- The point of randomization is that the treatment and control groups are basically the same on everything, even the stuff we can't measure
@@ -973,6 +989,16 @@
</table>


---

# Concept Checks

- Why do attrition and non-compliance only cause *really bad* problems when they're non-random?
- How do we know that random non-compliance *shrinks* the effect, as opposed to estimating it to be too big?
- Say we have 50 covariates, and we create a balance table for them. We get 3 statistically significant differences. Should we be concerned about our randomization procedure? Should we add controls to our analysis?
- Say we are doing an experiment where we have no covariates. We get some attrition. What should we do?
- Say we are doing an experiment where we don't have any information on whether someone was actually treated, just whether we assigned them to treatment. What should we be concerned about?

---

# Next!
