diff --git a/ch05/01_main-chapter-code/exercise-solutions.ipynb b/ch05/01_main-chapter-code/exercise-solutions.ipynb
index dfc923c8..67cb3d40 100644
--- a/ch05/01_main-chapter-code/exercise-solutions.ipynb
+++ b/ch05/01_main-chapter-code/exercise-solutions.ipynb
@@ -660,7 +660,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "from gpt_generate import assign, load_weights_into_gpt\n",
+    "from gpt_generate import load_weights_into_gpt\n",
     "\n",
     "\n",
     "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
@@ -788,10 +788,10 @@
     "NEW_CONFIG.update({\"context_length\": 1024, \"qkv_bias\": True})\n",
     "\n",
     "gpt = GPTModel(NEW_CONFIG)\n",
-    "gpt.eval();\n",
+    "gpt.eval()\n",
     "\n",
     "load_weights_into_gpt(gpt, params)\n",
-    "gpt.to(device);\n",
+    "gpt.to(device)\n",
     "\n",
     "torch.manual_seed(123)\n",
     "train_loss = calc_loss_loader(train_loader, gpt, device)\n",
@@ -816,7 +816,7 @@
    "source": [
     "In the main chapter, we experimented with the smallest GPT-2 model, which has only 124M parameters. The reason was to keep the resource requirements as low as possible. However, you can easily experiment with larger models with minimal code changes. For example, to load the 1558M model instead of the 124M model in chapter 5, the only two lines of code that we have to change are\n",
     "\n",
-    "```\n",
+    "```python\n",
     "settings, params = download_and_load_gpt2(model_size=\"124M\", models_dir=\"gpt2\")\n",
     "model_name = \"gpt2-small (124M)\"\n",
     "```\n",
@@ -824,7 +824,7 @@
     "The updated code becomes\n",
     "\n",
     "\n",
-    "```\n",
+    "```python\n",
     "settings, params = download_and_load_gpt2(model_size=\"1558M\", models_dir=\"gpt2\")\n",
     "model_name = \"gpt2-xl (1558M)\"\n",
     "```"
   ]
  },
  {
@@ -907,8 +907,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "from gpt_generate import generate, text_to_token_ids, token_ids_to_text\n",
-    "from previous_chapters import generate_text_simple"
+    "from gpt_generate import generate, text_to_token_ids, token_ids_to_text"
   ]
  },
  {
@@ -958,7 +957,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.10.6"
+   "version": "3.10.11"
  }
 },
 "nbformat": 4,
diff --git a/ch07/02_dataset-utilities/README.md b/ch07/02_dataset-utilities/README.md
index df04422a..2a6b8aaf 100644
--- a/ch07/02_dataset-utilities/README.md
+++ b/ch07/02_dataset-utilities/README.md
@@ -18,7 +18,7 @@ The `find-near-duplicates.py` function can be used to identify duplicates and ne
 
 
 
-```python
+```bash
 python find-near-duplicates.py --json_file instruction-examples.json
 ```
 
diff --git a/setup/02_installing-python-libraries/README.md b/setup/02_installing-python-libraries/README.md
index f46b8ffb..d951a773 100644
--- a/setup/02_installing-python-libraries/README.md
+++ b/setup/02_installing-python-libraries/README.md
@@ -6,14 +6,14 @@ I used the following libraries listed [here](https://github.com/rasbt/LLMs-from-
 
 To install these requirements most conveniently, you can use the `requirements.txt` file in the root directory for this code repository and execute the following command:
 
-```
+```bash
 pip install -r requirements.txt
 ```
 
 Then, after completing the installation, please check if all the packages are installed and are up to date using
 
-```
+```bash
 python python_environment_check.py
 ```
 
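
Note on the ch05 notebook hunks above: the two changed lines slot into the chapter's larger model-loading flow. Below is a minimal sketch of that flow for the gpt2-xl swap, assembled from the lines visible in this diff; the `GPT_CONFIG_124M` base dictionary, the `model_configs` lookup, and the `gpt_download`/`previous_chapters` import paths follow the chapter's conventions but are assumptions here, not part of this patch.

```python
import torch
from gpt_download import download_and_load_gpt2  # assumed module path, per the chapter's conventions
from previous_chapters import GPTModel           # assumed module path, per the chapter's conventions
from gpt_generate import load_weights_into_gpt

# The only two lines that change when swapping model sizes (per the markdown cell above):
settings, params = download_and_load_gpt2(model_size="1558M", models_dir="gpt2")
model_name = "gpt2-xl (1558M)"

# Assumed base config and per-size overrides, following the chapter's conventions:
GPT_CONFIG_124M = {
    "vocab_size": 50257, "context_length": 256, "emb_dim": 768,
    "n_heads": 12, "n_layers": 12, "drop_rate": 0.1, "qkv_bias": False,
}
model_configs = {
    "gpt2-small (124M)": {"emb_dim": 768, "n_layers": 12, "n_heads": 12},
    "gpt2-xl (1558M)": {"emb_dim": 1600, "n_layers": 48, "n_heads": 25},
}

NEW_CONFIG = GPT_CONFIG_124M.copy()
NEW_CONFIG.update(model_configs[model_name])
NEW_CONFIG.update({"context_length": 1024, "qkv_bias": True})  # as in the -788 hunk above

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
gpt = GPTModel(NEW_CONFIG)
gpt.eval()  # trailing semicolon removed by this patch; in a notebook it only suppressed the repr output

load_weights_into_gpt(gpt, params)
gpt.to(device)
```

The dropped `assign` and `generate_text_simple` imports in the other two hunks look like unused-import cleanups; no executed code path changes.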