Local Code Assistance Install Guide For macOS

Target Device

  • Base model of the new Mac Mini (M4)
    • Apple M4 chip
      • 10-core CPU
      • 10-core GPU
      • 16-core Neural Engine
    • 16GB unified memory
    • 256GB SSD storage

Packages to Install

  • Ollama
  • Visual Studio Code
  • Continue (Visual Studio Code extension)

Steps

1. Install Ollama

  1. Download the Ollama installer from the GitHub repository.

  2. Installation steps:

    • Click on the Download for macOS button.

    • Once the download is complete, locate the .zip file in your ~/Downloads folder.

    • Double-click the .zip file to extract its contents. This should create Ollama.app.

    • Drag Ollama.app to your Applications folder.

    • Open the Applications folder and double-click on Ollama.app.

    • If you see a warning, click Open to proceed.
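
  3. Optionally, verify the installation from the terminal. The ollama command-line tool should be available once the app has run:

    # Print the installed version to confirm the CLI is on your PATH
    ollama --version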

2. Run a model (optional; use it as a general chatbot to test the setup):

  • Open Terminal.

  • Run the following command:

    ollama run [model_name]

    The model will be downloaded automatically if it is not already present.

    • For example: ollama run llama3.2

  • Browse the full list of available models in the Ollama Model Library.

  • For recommended models, see: Model Recommendations
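
  • Putting it together, a quick end-to-end check might look like this (llama3.2 is just the example model from above):

    # Show models already downloaded locally
    ollama list

    # Start an interactive chat; type /bye to exit
    ollama run llama3.2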

3. Install Visual Studio Code
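
  • The usual route is to download the macOS build from the Visual Studio Code website and drag it into Applications. If you already use Homebrew, installing the cask is a common alternative (this assumes Homebrew is set up):

    # Install VS Code via Homebrew instead of the manual download
    brew install --cask visual-studio-code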

4. Install Continue (Visual Studio Code extension)

  1. Open Visual Studio Code.

  2. Click on the Extensions icon on the left sidebar.

  3. Search for "Continue" in the search bar.

  4. Click on the Install button.

  5. Once installed, there will be a Continue icon on the left sidebar.
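
  • Alternatively, if the code command is on your PATH, you can install the extension from the terminal (assuming Continue.continue is still the extension's marketplace ID):

    # Install the Continue extension via the VS Code CLI
    code --install-extension Continue.continue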

5. Configurations for Continue

  1. Click on the Continue icon on the left sidebar.

  2. Click the gear icon in the top-right corner.

  3. The config.json file will open.

    • In the "models" key, add the models you want to use for the chatbot, for example:

      {
          "models": [
              {
                  "title": "model_nickname",
                  "model": "actual_model_name_from_ollama_library",
                  "provider": "ollama"
              }
          ]
      }
    • To change the autocomplete model, set the "tabAutocompleteModel" key to the model you want to use, for example:

      {
          "tabAutocompleteModel": {
              "title": "model_nickname",
              "model": "actual_model_name_from_ollama_library",
              "provider": "ollama",
              "apiBase": "http://127.0.0.1:11434/v1"
          }
      }
    • "apiBase" is optional, sometimes you may need to add "/v1" to the end of the URL

    • Again, for recommended models, see: Model Recommendations (a complete example config is sketched after the warning below).

    WARNING: Do not use any model that requires an apiKey; it will connect to external servers and violate security policies.
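
    Putting the pieces together, a complete config.json might look like the sketch below, which uses models from the Model Recommendations section (swap in whichever models you actually pulled):

      {
          "models": [
              {
                  "title": "Qwen 2.5 Coder 7B",
                  "model": "qwen2.5-coder:7b",
                  "provider": "ollama"
              }
          ],
          "tabAutocompleteModel": {
              "title": "Qwen 2.5 Coder 3B",
              "model": "qwen2.5-coder:3b",
              "provider": "ollama"
          }
      }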

  4. Save the file.

  5. Before using the chatbot, make sure the Ollama server is running. You can start it by running the following command in the terminal:

    ollama serve
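
    Note: if the Ollama menu-bar app is already running, ollama serve may report that the address is already in use; that just means the server is already up. You can also check directly (the default local endpoint replies with a short status message):

    # Prints "Ollama is running" when the server is up
    curl http://127.0.0.1:11434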
  6. Before using the chatbot, make sure you have downloaded the model you want to use with Ollama:

    ollama pull [model_name]
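
    For example, to fetch one of the recommended chat models and confirm it is available locally:

    # Download the model, then list everything installed
    ollama pull qwen2.5-coder:7b
    ollama list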
  7. Select the model you want to use from the dropdown menu at the bottom of the chatbot text box.

  8. Enjoy using the chatbot and code assistance!

Additional Notes

  1. Using external models will cause the chatbot to connect to external servers and violate security policies.

  2. Please disable GitHub Copilot when using Continue, as their keyboard shortcuts conflict with each other.

Model Recommendations

Recommended models to use with the base model of the new Mac Mini (M4):

Chatbot Models

ollama pull qwen2.5-coder:14b # larger option
ollama pull qwen2.5-coder:7b # if 14b is too heavy for mac mini
ollama pull opencoder:8b # also a very new model
ollama pull codeqwen:7b-code # similar to opencoder performance
ollama pull codeqwen:7b-code-v1.5-q4_1 # a bit bigger with different quantization

Code Autocomplete Models

ollama pull qwen2.5-coder:3b # small but relatively powerful
ollama pull deepseek-coder:1.3b-instruct-fp16 # scores surprisingly high on EvalPlus; the model is small, so I chose the fp16 version
ollama pull phi3.5 # another very small model with a high score on EvalPlus
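
With only 16GB of unified memory, it is worth checking how much memory a model actually occupies once loaded. Recent versions of Ollama include a ps command for this (run it while a model is loaded; output format may vary by version):

ollama ps # shows loaded models and their memory footprint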
