To access the latest features, keep your code editor plugin up to date.
Starting with VS Code 1.91 or greater, and with an active GitHub Copilot subscription, you can use Navie with the Copilot Language Model as a supported backend model. This allows you to leverage the powerful, runtime-powered Navie AI Architect with your existing Copilot subscription. This is the recommended option for users in corporate environments where Copilot is the only approved and supported language model.
The following items are required to use the GitHub Copilot Language Model with Navie:
- VS Code version 1.91 or greater
- AppMap extension version v0.123.0 or greater

Open the VS Code Settings and search for `navie vscode`.
Click the box to use the VS Code language model.
After clicking the box to enable the VS Code LM, you’ll be prompted to reload VS Code to apply the change.
After VS Code finishes reloading, open the AppMap extension.
Select New Navie Chat, and confirm the model listed is (via copilot).
You’ll need to allow the AppMap extension access to the Copilot Language Models. After asking your first question to Navie, click Allow in the popup to grant the necessary access.
If you attempt to enable the Copilot language models without the Copilot Extension installed, you’ll see the following error in your code editor.
Click Install Copilot to complete the installation for language model support.
If you have the Copilot extension installed, but have not signed in, you’ll see the following notice.
Click Sign in to GitHub and log in with an account that has a valid paid or trial GitHub Copilot subscription.
GitHub Copilot supports a variety of language models. To choose which model Navie uses, open the VS Code Command Palette with a hotkey:

- Mac: `Cmd + Shift + P`
- Windows/Linux: `Ctrl + Shift + P`

Search for `AppMap: Select Copilot Model` in the Command Palette, then select the specific model you’d like to use with AppMap Navie.
After configuring your Google Cloud authentication keys and ensuring you have access to the Google Gemini services on your Google Cloud account, configure the following environment variables in your VS Code editor. Refer to the Navie documentation for more details on where to set the Navie environment variables.
| Environment Variable | Value |
| --- | --- |
| `GOOGLE_WEB_CREDENTIALS` | [contents of downloaded JSON] |
| `APPMAP_NAVIE_MODEL` | `gemini-1.5-pro-002` |
| `APPMAP_NAVIE_COMPLETION_BACKEND` | `vertex-ai` |
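As a quick sanity check, a short Python sketch like the following (an illustrative addition, not part of Navie) can confirm that the JSON you placed in `GOOGLE_WEB_CREDENTIALS` parses and contains the fields a Google service account key normally carries:

```python
import json
import os

# Illustrative check: parse the credentials stored in GOOGLE_WEB_CREDENTIALS
# and verify the fields a Google service account key is expected to contain.
creds = json.loads(os.environ["GOOGLE_WEB_CREDENTIALS"])
for field in ("type", "project_id", "private_key", "client_email"):
    assert field in creds, f"missing expected field: {field}"
print(f"Credentials parsed for project: {creds['project_id']}")
```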
NOTE: If your code editor previously used the default GitHub Copilot backend, click the “gear” icon in the Navie chat window and select “Use AppMap Hosted Provider” to reset the language model setting to use the environment variables instead. This will disable the GitHub Copilot Language Model backend and use your environment variable configuration by default.
You can confirm your model and API endpoint after making this change in the Navie chat window, which will display the currently configured language model backend.
Note: We recommend configuring your OpenAI key using the code editor extension. Follow the Bring Your Own Key docs for instructions. The configuration options below are for advanced users.
Only `OPENAI_API_KEY` needs to be set; the other settings can stay at their defaults:

| Environment Variable | Value |
| --- | --- |
| `OPENAI_API_KEY` | `sk-9spQsnE3X7myFHnjgNKKgIcGAdaIG78I3HZB4DFDWQGM` |
When using your own OpenAI API key, you can also modify the OpenAI model for Navie to use, for example `gpt-3.5` or a preview model like `gpt-4-vision-preview`:

| Environment Variable | Value |
| --- | --- |
| `APPMAP_NAVIE_MODEL` | `gpt-4-vision-preview` |
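If you want to confirm the key and model outside of Navie, a minimal sketch using the official `openai` Python package (v1 or later; the model name is just the example from the table above) is one way to do it:

```python
from openai import OpenAI

# Minimal smoke test: OpenAI() reads OPENAI_API_KEY from the environment.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # the example model configured above
    messages=[{"role": "user", "content": "Reply with the word: ready"}],
)
print(response.choices[0].message.content)
```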
AppMap supports the Anthropic suite of large language models such as Claude Sonnet or Claude Opus.
To use AppMap Navie with Anthropic LLMs, you need to generate an API key for your account.
Log in to your Anthropic dashboard, and choose the option to “Get API Keys”.
Click “Create Key”.
In the next box, give your key an easy-to-recognize name.
In your VS Code or JetBrains editor, configure the following environment variables. For more details on configuring these environment variables in your VS Code or JetBrains editor, refer to the AppMap BYOK documentation.
| Environment Variable | Value |
| --- | --- |
| `ANTHROPIC_API_KEY` | `sk-ant-api03-8SgtgQrGB0vTSsB_DeeIZHvDrfmrg` |
| `APPMAP_NAVIE_MODEL` | `claude-3-5-sonnet-20240620` |
When setting `APPMAP_NAVIE_MODEL`, refer to the Anthropic documentation for the latest available models to choose from.
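To verify the key outside of Navie, a minimal sketch with the official `anthropic` Python package (an illustrative check, using the example model above) could look like:

```python
import anthropic

# Minimal smoke test: Anthropic() reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # the example model configured above
    max_tokens=32,
    messages=[{"role": "user", "content": "Reply with the word: ready"}],
)
print(message.content[0].text)
```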
Assuming you created a `navie` GPT-4 deployment on your `contoso.openai.azure.com` Azure OpenAI instance:
| Environment Variable | Value |
| --- | --- |
| `AZURE_OPENAI_API_KEY` | `e50edc22e83f01802893d654c4268c4f` |
| `AZURE_OPENAI_API_VERSION` | `2024-02-01` |
| `AZURE_OPENAI_API_INSTANCE_NAME` | `contoso` |
| `AZURE_OPENAI_API_DEPLOYMENT_NAME` | `navie` |
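A hedged sketch of how you might verify this deployment outside of Navie, using the `openai` Python package's Azure client and the hypothetical `contoso`/`navie` values above:

```python
import os
from openai import AzureOpenAI

# Minimal smoke test against the example deployment above; the endpoint and
# deployment names are the hypothetical "contoso"/"navie" values.
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    azure_endpoint="https://contoso.openai.azure.com",
)
response = client.chat.completions.create(
    model="navie",  # Azure uses the deployment name as the model
    messages=[{"role": "user", "content": "Reply with the word: ready"}],
)
print(response.choices[0].message.content)
```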
AnyScale Endpoints allows querying a selection of open-source LLMs. After you create an account, you can use it by setting:
| Environment Variable | Value |
| --- | --- |
| `OPENAI_API_KEY` | `esecret_myxfwgl1iinbz9q5hkexemk8f4xhcou8` |
| `OPENAI_BASE_URL` | `https://api.endpoints.anyscale.com/v1` |
| `APPMAP_NAVIE_MODEL` | `mistralai/Mixtral-8x7B-Instruct-v0.1` |
Consult the AnyScale documentation for model names. Note that we recommend using Mixtral models with Navie.
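Because AnyScale, like Fireworks AI, Ollama, and LM Studio below, exposes an OpenAI-compatible API, the same `openai` Python package can smoke-test any of these backends; only the `base_url`, key, and model name change. A sketch with the AnyScale values above:

```python
import os
from openai import OpenAI

# The same pattern works for any OpenAI-compatible endpoint: point base_url
# at the provider and pass the provider's key and model name.
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://api.endpoints.anyscale.com/v1",
)
response = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[{"role": "user", "content": "Reply with the word: ready"}],
)
print(response.choices[0].message.content)
```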
You can use Fireworks AI and their serverless or on-demand models as a compatible backend for AppMap Navie AI.
After creating an account on Fireworks AI, you can configure your Navie environment settings:
| Environment Variable | Value |
| --- | --- |
| `OPENAI_API_KEY` | `WBYq2mKlK8I16ha21k233k2EwzGAJy3e0CLmtNZadJ6byfpu7c` |
| `OPENAI_BASE_URL` | `https://api.fireworks.ai/inference/v1` |
| `APPMAP_NAVIE_MODEL` | `accounts/fireworks/models/mixtral-8x22b-instruct` |
Consult the Fireworks AI documentation for a full list of the available models they currently support.
You can use Ollama to run Navie with local models. After you’ve successfully run a model with the `ollama run` command, you can configure Navie to use it:
| Environment Variable | Value |
| --- | --- |
| `OPENAI_API_KEY` | `dummy` |
| `OPENAI_BASE_URL` | `http://127.0.0.1:11434/v1` |
| `APPMAP_NAVIE_MODEL` | `mixtral` |
Note: Even though it’s running locally, a dummy placeholder API key is still required.
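As a quick, dependency-free check that the local server is reachable, a sketch like the following (assuming Ollama exposes the standard OpenAI-style `/v1/models` listing) uses only the Python standard library:

```python
import json
import urllib.request

# List the models Ollama exposes on its OpenAI-compatible API; the dummy
# key satisfies clients that insist on an Authorization header.
request = urllib.request.Request(
    "http://127.0.0.1:11434/v1/models",
    headers={"Authorization": "Bearer dummy"},
)
with urllib.request.urlopen(request) as response:
    models = json.load(response)
print([model["id"] for model in models["data"]])
```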
You can use LM Studio to run Navie with local models.
After downloading a model to run, select the option to run a local server.
In the next window, select which model you want to load into the local inference server.
After loading your model, you can confirm it’s successfully running in the logs.
NOTE: Save the URL the server is running under; you’ll use it for the `OPENAI_BASE_URL` environment variable. For example: `http://localhost:1234/v1`
In the Model Inspector, copy the name of the model and use this for the `APPMAP_NAVIE_MODEL` environment variable. For example: `Meta-Llama-3-8B-Instruct-imatrix`
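If you’d rather read the model identifier programmatically, a small sketch against LM Studio’s local server (assuming the example URL above) can list the exact ids it exposes:

```python
from openai import OpenAI

# List the model ids LM Studio's local server exposes; use the exact id
# shown here for APPMAP_NAVIE_MODEL. The dummy key is a placeholder.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="dummy")
for model in client.models.list().data:
    print(model.id)
```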
Continue to configure your local environment with the following environment variables based on your LM Studio configuration. Refer to the documentation above for steps specific to your code editor.
| Environment Variable | Value |
| --- | --- |
| `OPENAI_API_KEY` | `dummy` |
| `OPENAI_BASE_URL` | `http://localhost:1234/v1` |
| `APPMAP_NAVIE_MODEL` | `Meta-Llama-3-8B-Instruct-imatrix` |
Note: Even though it’s running locally, a dummy placeholder API key is still required.