
Commit

fix: make paths relative and update readme and test
jaanphare committed Apr 10, 2024
1 parent 2d6ed41 commit c7b337e
Showing 113 changed files with 363 additions and 302 deletions.
60 changes: 47 additions & 13 deletions README.md
Original file line number Diff line number Diff line change
@@ -53,14 +53,14 @@ A typical Framework project looks like this:

## Command reference

| Command | Description |
| ----------------- | -------------------------------------------------------- |
| `yarn install` | Install or reinstall dependencies |
| `yarn dev` | Start local preview server |
| `yarn build` | Build your static site, generating `./dist` |
| `yarn deploy` | Deploy your project to Observable |
| `yarn clean` | Clear the local data loader cache |
| `yarn observable` | Run commands like `observable help` |
| Command | Description |
| ----------------- | ------------------------------------------- |
| `yarn install` | Install or reinstall dependencies |
| `yarn dev` | Start local preview server |
| `yarn build` | Build your static site, generating `./dist` |
| `yarn deploy` | Deploy your project to Observable |
| `yarn clean` | Clear the local data loader cache |
| `yarn observable` | Run commands like `observable help` |

## GPT-4 reference

@@ -123,22 +123,56 @@ brew install duckdb
## Usage for 2022 ACS Public Use Microdata Sample (PUMS) Data

To retrieve the list of URLs for all 50 states' PUMS file archives from the Census Bureau's server, run the following:

```sh
cd data_processing
dbt run --exclude "public_use_microdata_sample.generated+" --vars '{"public_use_microdata_sample_url": "https://www2.census.gov/programs-surveys/acs/data/pums/2022/1-Year/", "public_use_microdata_sample_data_dictionary_url": "https://www2.census.gov/programs-surveys/acs/tech_docs/pums/data_dict/PUMS_Data_Dictionary_2022.csv", "output_path": "~/data/american_community_survey"}'
dbt run --select "public_use_microdata_sample.list_urls" \
--vars '{"public_use_microdata_sample_url": "https://www2.census.gov/programs-surveys/acs/data/pums/2021/1-Year/", "public_use_microdata_sample_data_dictionary_url": "https://www2.census.gov/programs-surveys/acs/tech_docs/pums/data_dict/PUMS_Data_Dictionary_2021.csv", "output_path": "~/data/american_community_survey"}'
```

Then save the URLs:

```sh
dbt run --select "public_use_microdata_sample.urls" \
--vars '{"public_use_microdata_sample_url": "https://www2.census.gov/programs-surveys/acs/data/pums/2021/1-Year/", "public_use_microdata_sample_data_dictionary_url": "https://www2.census.gov/programs-surveys/acs/tech_docs/pums/data_dict/PUMS_Data_Dictionary_2021.csv", "output_path": "~/data/american_community_survey"}' \
--threads 8
```
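Every step below repeats the same long `--vars` JSON payload. As a convenience, a small hedged sketch of how one might build that payload once in Python and render each `dbt run` invocation from it (the `dbt_run` helper and its defaults are illustrative, not part of this project):

```python
import json
import shlex

# Shared --vars payload, mirroring the values used throughout this README.
# Adjust the year in both URLs together when targeting another vintage.
DBT_VARS = {
    "public_use_microdata_sample_url": "https://www2.census.gov/programs-surveys/acs/data/pums/2022/1-Year/",
    "public_use_microdata_sample_data_dictionary_url": "https://www2.census.gov/programs-surveys/acs/tech_docs/pums/data_dict/PUMS_Data_Dictionary_2022.csv",
    "output_path": "~/data/american_community_survey",
}

def dbt_run(select: str, threads: int = 8) -> str:
    """Render a shell-ready `dbt run` command for one model selection."""
    return (
        f"dbt run --select {shlex.quote(select)} "
        f"--vars {shlex.quote(json.dumps(DBT_VARS))} "
        f"--threads {threads}"
    )

print(dbt_run("public_use_microdata_sample.urls"))
```

Printing the command rather than executing it keeps the sketch side-effect free; piping it to a shell (or `subprocess.run`) is left to the reader.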

Then execute the dbt model that downloads and extracts the archives of the microdata (takes ~2 minutes on a MacBook):

```sh
dbt run --select "public_use_microdata_sample.download_and_extract_archives" \
--vars '{"public_use_microdata_sample_url": "https://www2.census.gov/programs-surveys/acs/data/pums/2022/1-Year/", "public_use_microdata_sample_data_dictionary_url": "https://www2.census.gov/programs-surveys/acs/tech_docs/pums/data_dict/PUMS_Data_Dictionary_2022.csv", "output_path": "~/data/american_community_survey"}' \
--threads 8
```

Then generate the CSV paths:

```sh
dbt run --select "public_use_microdata_sample.csv_paths" \
--vars '{"public_use_microdata_sample_url": "https://www2.census.gov/programs-surveys/acs/data/pums/2021/1-Year/", "public_use_microdata_sample_data_dictionary_url": "https://www2.census.gov/programs-surveys/acs/tech_docs/pums/data_dict/PUMS_Data_Dictionary_2022.json", "output_path": "~/data/american_community_survey"}' \
--threads 8
```

Then parse the data dictionary:

```sh
dbt run --select "public_use_microdata_sample.parse_data_dictionary" \
--vars '{"public_use_microdata_sample_url": "https://www2.census.gov/programs-surveys/acs/data/pums/2021/1-Year/", "public_use_microdata_sample_data_dictionary_url": "https://www2.census.gov/programs-surveys/acs/tech_docs/pums/data_dict/PUMS_Data_Dictionary_2021.csv", "output_path": "~/data/american_community_survey"}' \
--threads 8
```
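The parsing step above turns the Census data dictionary into a machine-readable mapping. A minimal sketch of the idea, assuming a simplified layout (the real `PUMS_Data_Dictionary_2022.csv` uses a richer multi-record format with `NAME` and `VAL` record types, field widths, and value ranges; the rows below are invented for illustration):

```python
import csv
import io

# Hypothetical, simplified dictionary rows: record type, variable code, description.
sample = """\
NAME,SERIALNO,Housing unit/GQ person serial number
NAME,PUMA,Public use microdata area code
NAME,ST,State code based on 2020 Census definitions
"""

def parse_data_dictionary(text: str) -> dict:
    """Map each variable code to its human-readable description."""
    mapping = {}
    for record_type, code, description in csv.reader(io.StringIO(text)):
        if record_type == "NAME":
            mapping[code] = description
    return mapping

print(parse_data_dictionary(sample))
```

The resulting dict is what later steps can consult to rename terse PUMS column codes.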

Then generate the SQL commands needed to map every state's person- or housing-unit-level variables to easier-to-use (and more readable) names:

```sh
python scripts/generate_sql_data_dictionary_mapping_for_extracted_csv_files.py \
~/data/american_community_survey/public_use_microdata_sample_csv_paths.parquet \
~/data/american_community_survey/PUMS_Data_Dictionary_2022.json
python scripts/generate_sql_with_enum_types_and_mapped_values_renamed.py ~/data/american_community_survey/csv_paths.parquet ~/data/american_community_survey/PUMS_Data_Dictionary_2022.json
```
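To make the shape of the generated SQL concrete, here is a hedged sketch of the core idea: given a code-to-description mapping, emit a DuckDB `SELECT` that renames each column, in the same style as the generated models shown later in this diff. The `render_select` helper and the mapping entries are illustrative, not the scripts' actual implementation:

```python
# Illustrative mapping entries; the real ones come from the parsed data dictionary.
mapping = {
    "WGTP": "Housing Unit Weight",
    "NP": "Number of persons in this household",
}

def render_select(csv_path: str, mapping: dict) -> str:
    """Emit a DuckDB SELECT that renames raw PUMS codes to readable names."""
    cols = ",\n    ".join(
        f'{code}::VARCHAR AS "{desc}"' for code, desc in mapping.items()
    )
    return (
        "SELECT\n    "
        + cols
        + f"\nFROM read_csv('{csv_path}',\n"
        "              parallel=False,\n"
        "              all_varchar=True,\n"
        "              auto_detect=True)"
    )

sql = render_select(
    "~/data/american_community_survey/2022/1-Year/csv_hal/psam_h01.csv", mapping
)
print(sql)
```

Casting everything to `VARCHAR` with `all_varchar=True` defers type decisions, matching the generated models in this commit.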

Then execute these generated SQL queries (adjust the `--threads` value to match the available processor cores on your system):

```sh
dbt run --select "public_use_microdata_sample.generated+" --vars '{"public_use_microdata_sample_url": "https://www2.census.gov/programs-surveys/acs/data/pums/2022/1-Year/", "public_use_microdata_sample_data_dictionary_url": "https://www2.census.gov/programs-surveys/acs/tech_docs/pums/data_dict/PUMS_Data_Dictionary_2022.csv", "output_path": "~/data/american_community_survey"}' --threads 1
dbt run --select "public_use_microdata_sample.generated+" \
--vars '{"public_use_microdata_sample_url": "https://www2.census.gov/programs-surveys/acs/data/pums/2022/1-Year/", "public_use_microdata_sample_data_dictionary_url": "https://www2.census.gov/programs-surveys/acs/tech_docs/pums/data_dict/PUMS_Data_Dictionary_2022.csv", "output_path": "~/data/american_community_survey"}' \
--threads 8
```

Inspect the output folder to see what has been created in the `output_path` specified in the previous command:
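A quick way to take that inventory programmatically; this sketch walks a directory and lists what it finds (the temporary directory and file names here are stand-ins for `~/data/american_community_survey` and its real contents):

```python
import os
import tempfile

# Stand-in for the pipeline's output_path; the file names are invented.
root = tempfile.mkdtemp()
for name in ("urls.parquet", "csv_paths.parquet", "PUMS_Data_Dictionary_2022.json"):
    open(os.path.join(root, name), "w").close()

def summarize(path: str) -> list:
    """Return the relative paths of all files under `path`, sorted."""
    found = []
    for dirpath, _dirs, files in os.walk(path):
        for f in files:
            found.append(os.path.relpath(os.path.join(dirpath, f), path))
    return sorted(found)

print(summarize(root))
```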
52 changes: 28 additions & 24 deletions data_processing/models/public_use_microdata_sample/config.yml
@@ -1,27 +1,31 @@
version: 2

models:
- name: list_urls
config:
public_use_microdata_sample_url: "{{ var('public_use_microdata_sample_url') }}"
output_path: "{{ var('output_path') }}"
- name: download_and_extract_archives
config:
public_use_microdata_sample_url: "{{ var('public_use_microdata_sample_url') }}"
output_path: "{{ var('output_path') }}"
- name: parse_data_dictionary
config:
public_use_microdata_sample_data_dictionary_url: "{{ var('public_use_microdata_sample_data_dictionary_url') }}"
output_path: "{{ var('output_path') }}"
- name: list_shapefile_urls
config:
microdata_area_shapefile_url: "{{ var('microdata_area_shapefile_url') }}"
output_path: "{{ var('output_path') }}"
- name: download_and_extract_shapefiles
config:
microdata_area_shapefile_url: "{{ var('microdata_area_shapefile_url') }}"
output_path: "{{ var('output_path') }}"
- name: combine_shapefiles
config:
microdata_area_shapefile_url: "{{ var('microdata_area_shapefile_url') }}"
output_path: "{{ var('output_path') }}"
- name: list_urls
config:
public_use_microdata_sample_url: "{{ var('public_use_microdata_sample_url') }}"
output_path: "{{ var('output_path') }}"
- name: download_and_extract_archives
config:
public_use_microdata_sample_url: "{{ var('public_use_microdata_sample_url') }}"
output_path: "{{ var('output_path') }}"
- name: csv_paths
config:
public_use_microdata_sample_url: "{{ var('public_use_microdata_sample_url') }}"
output_path: "{{ var('output_path') }}"
- name: parse_data_dictionary
config:
public_use_microdata_sample_data_dictionary_url: "{{ var('public_use_microdata_sample_data_dictionary_url') }}"
output_path: "{{ var('output_path') }}"
- name: list_shapefile_urls
config:
microdata_area_shapefile_url: "{{ var('microdata_area_shapefile_url') }}"
output_path: "{{ var('output_path') }}"
- name: download_and_extract_shapefiles
config:
microdata_area_shapefile_url: "{{ var('microdata_area_shapefile_url') }}"
output_path: "{{ var('output_path') }}"
- name: combine_shapefiles
config:
microdata_area_shapefile_url: "{{ var('microdata_area_shapefile_url') }}"
output_path: "{{ var('output_path') }}"
@@ -25,7 +25,7 @@ def model(dbt, session):
base_url = dbt.config.get('public_use_microdata_sample_url') # Assuming this is correctly set

# Fetch URLs from your table or view
query = "SELECT * FROM list_urls"
query = "SELECT * FROM list_urls "
result = session.execute(query).fetchall()
columns = [desc[0] for desc in session.description]
url_df = pd.DataFrame(result, columns=columns)
@@ -50,25 +50,4 @@ def model(dbt, session):
paths_df = pd.DataFrame(extracted_files, columns=['csv_path'])

# Return the DataFrame with paths to the extracted CSV files
return paths_df

# Mock dbt and session for demonstration; replace with actual dbt and session in your environment
class MockDBT:
def config(self, key):
return {
'public_use_microdata_sample_url': 'https://example.com/path/to/your/csv/files',
'output_path': '~/path/to/your/output/directory'
}.get(key, '')

class MockSession:
def execute(self, query):
# Mock response; replace with actual fetching logic
return [{"URL": "https://example.com/path/to/your/csv_file.zip"} for _ in range(10)]

dbt = MockDBT()
session = MockSession()

if __name__ == "__main__":
# Directly calling model function for demonstration; integrate properly within your dbt project
df = model(dbt, session)
print(df)
return paths_df
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hal/psam_h01.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hal/psam_h01.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hak/psam_h02.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hak/psam_h02.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_haz/psam_h04.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_haz/psam_h04.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_har/psam_h05.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_har/psam_h05.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hca/psam_h06.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hca/psam_h06.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hco/psam_h08.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hco/psam_h08.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hct/psam_h09.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hct/psam_h09.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hde/psam_h10.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hde/psam_h10.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hdc/psam_h11.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hdc/psam_h11.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hfl/psam_h12.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hfl/psam_h12.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hga/psam_h13.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hga/psam_h13.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hhi/psam_h15.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hhi/psam_h15.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hid/psam_h16.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hid/psam_h16.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hil/psam_h17.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hil/psam_h17.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hin/psam_h18.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hin/psam_h18.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hia/psam_h19.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hia/psam_h19.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hks/psam_h20.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hks/psam_h20.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hky/psam_h21.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hky/psam_h21.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
@@ -905,7 +905,7 @@ CASE FYRBLTP
WGTP78::VARCHAR AS "Housing Weight replicate 78",
WGTP79::VARCHAR AS "Housing Weight replicate 79",
WGTP80::VARCHAR AS "Housing Weight replicate 80",
FROM read_csv('/Users/me/data/american_community_survey/2022/1-Year/csv_hla/psam_h22.csv',
FROM read_csv('~/data/american_community_survey/2022/1-Year/csv_hla/psam_h22.csv',
parallel=False,
all_varchar=True,
auto_detect=True)
auto_detect=True)
