Download PyCharm: https://www.jetbrains.com/pycharm/download/?section=windows (Windows) or https://www.jetbrains.com/pycharm/download/?section=mac (macOS)
- Open PyCharm and create a new project
- Create a virtual environment using Python 3.12
- Install the dependencies: `pip install -r requirements.txt`
- Get an OpenAI API key
- Create a `.env` file inside the project and paste `OPENAI_API_KEY="sk-..."` into it (see the loading sketch after these steps)
- Run the app: `streamlit run app/app.py`
- We will also perform fine-tuning of the model today. After the fine-tuning job completes, set `GEN_MODEL=ft:gpt-4o-mini-2024-07-18:personal:resume-cover-ft:C3HhrPnR` in `.env`
- Once fine-tuning is done, run the A/B test UI: `streamlit run scripts/ab_test_UI.py`. Fine-tuning reference: https://platform.openai.com/docs/guides/supervised-fine-tuning
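A minimal sketch of how these settings are typically loaded, assuming the app uses python-dotenv (the actual code in app/app.py may load them differently):

```python
# Sketch: read OPENAI_API_KEY and GEN_MODEL from the project's .env file.
# Assumes python-dotenv and the openai package are installed via requirements.txt.
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # pulls key=value pairs from .env into the process environment

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))  # OpenAI() also reads this env var by default
gen_model = os.getenv("GEN_MODEL", "gpt-4o-mini")      # falls back to the base model before fine-tuning
print("Using model:", gen_model)
```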
- Upload/paste the job description and resume details
- Provide few-shot examples (optional): https://learnprompting.org/docs/basics/few_shot?srsltid=AfmBOopYaZb7zlugXPrUWgLXxuUTpOLuDn8zEJtmXHpYZqdu4o_OD_O9 and https://platform.openai.com/docs/guides/prompt-engineering (a minimal sketch appears after this list)
- Generate tailored bullet points for the resume
- Generate a tailored cover letter
- Enhance the output with few-shot examples
- Fine-tune GPT-4o-mini
- Compare performance and store the results
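For the optional few-shot step, here is a minimal sketch of how example pairs can be prepended to the chat messages; the generate_bullets helper, prompts, and example content are illustrative assumptions, not the app's actual code:

```python
# Sketch: few-shot prompting for tailored resume bullets (illustrative only)
import os
from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# One hand-written example pair; in the app you paste your own examples in the UI.
few_shot = [
    {"role": "user", "content": "JD: Data engineer, Spark, AWS.\nResume: Built ETL pipelines in Python."},
    {"role": "assistant", "content": "- Built Spark-based ETL pipelines on AWS, cutting batch runtime by 40%."},
]

def generate_bullets(job_description: str, resume: str) -> str:
    messages = (
        [{"role": "system", "content": "You write concise, quantified resume bullets tailored to a job description."}]
        + few_shot
        + [{"role": "user", "content": f"JD: {job_description}\nResume: {resume}"}]
    )
    resp = client.chat.completions.create(
        model=os.getenv("GEN_MODEL", "gpt-4o-mini"),
        messages=messages,
    )
    return resp.choices[0].message.content
```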
Key metric definitions:
a. keyword_coverage: fraction of important JD terms that show up in the model's output (0–1 scale). We extract likely keywords (TitleCase words, long words, common skills) and check for their presence. Higher = better JD alignment.
b. quantify_score: measures how "quantified" the writing is by counting numbers, percentages, and impact verbs per line. Higher means more measurable outcomes (great for resume bullets).
c. length_ok (cover letters only): Boolean check that the letter lands in a sensible range (default 120–200 words). Helps keep letters concise yet substantial.
d. composite_score: a single yardstick combining the above: for bullets, ~60% keyword coverage + 40% quantification; for cover letters, ~50% keyword coverage + 40% quantification plus a small bonus if length_ok. Capped at 1.0 for easy comparison (see the sketch after this list).
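A minimal sketch of how the composite could be computed from the other metrics; the exact weights and the 0.1 length bonus are assumptions based on the description above, and the repo's scoring code may differ:

```python
# Sketch: composite_score as described above (weights and length bonus are assumptions)
def composite_score(keyword_coverage: float, quantify_score: float,
                    kind: str = "bullets", length_ok: bool = False) -> float:
    if kind == "bullets":
        score = 0.6 * keyword_coverage + 0.4 * quantify_score
    else:  # cover letter
        score = 0.5 * keyword_coverage + 0.4 * quantify_score + (0.1 if length_ok else 0.0)
    return min(score, 1.0)  # capped at 1.0 for easy comparison

# Example: strong keyword coverage, moderate quantification, acceptable length
print(composite_score(0.8, 0.5, kind="cover_letter", length_ok=True))  # 0.70
```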
Steps:
- Run `python scripts/prep_dataset.py` to create finetune.jsonl
- Run `python scripts/run_finetune.py` to launch the fine-tuning job and write the tuned model ID to data/tuned_model.txt (both steps are sketched below)
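For reference, a minimal sketch of what these two steps amount to with the OpenAI fine-tuning API; the example JSONL line, file paths, and polling loop are illustrative assumptions, and the actual scripts may structure this differently:

```python
# Sketch: upload finetune.jsonl and run a supervised fine-tuning job (illustrative only)
import time
from openai import OpenAI

client = OpenAI()

# Each line of finetune.jsonl is one chat example in the supervised fine-tuning format, e.g.:
# {"messages": [{"role": "system", "content": "..."},
#               {"role": "user", "content": "JD + resume details"},
#               {"role": "assistant", "content": "tailored bullets / cover letter"}]}

training_file = client.files.create(file=open("finetune.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-4o-mini-2024-07-18")

# Poll until the job finishes, then persist the tuned model ID for the A/B test UI.
while True:
    job = client.fine_tuning.jobs.retrieve(job.id)
    if job.status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(30)

if job.status == "succeeded":
    with open("data/tuned_model.txt", "w") as f:
        f.write(job.fine_tuned_model)  # e.g. ft:gpt-4o-mini-2024-07-18:personal:resume-cover-ft:...
```

The tuned model ID written here is what goes into .env as GEN_MODEL before running scripts/ab_test_UI.py.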