To access the latest features, keep your code editor plugin up to date.
By default, when you ask Navie a question, your code editor interacts with the AppMap hosted proxy for OpenAI. If you have a requirement to bring your own key or otherwise use your own OpenAI account, you can specify your own OpenAI API key; this causes Navie to connect to OpenAI directly, without the AppMap proxy acting as an intermediary.
You can also use any OpenAI-compatible LLM, either running locally or via a third-party provider. Finally, VS Code users with an active GitHub Copilot subscription can use the Copilot Language Models as a supported Navie backend. Refer to the Navie docs for more examples of using alternative language models.
With VS Code 1.91 or greater and an active GitHub Copilot subscription, you can use Navie with the Copilot Language Model as a supported LLM backend. This lets you use the runtime-powered Navie AI Architect with your existing Copilot subscription. This is the recommended option for users in corporate environments where Copilot is the only approved and supported language model.
The following items are required to use the GitHub Copilot Language Model with Navie:
VS Code version 1.91 or greater
AppMap extension version v0.123.0 or greater
No OPENAI_API_KEY or other LLM-related environment variables set; these override any settings chosen from within the code editor extension. Unset these environment variables before changing your LLM or API key in your code editor (a minimal shell sketch follows below).
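A minimal sketch of that last step, assuming you launch VS Code from a POSIX shell and use the variable names from this guide:

# Clear any LLM-related variables for the current shell session
# before launching VS Code (adjust the list for your own setup).
unset OPENAI_API_KEY OPENAI_BASE_URL APPMAP_NAVIE_MODEL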
Open the VS Code Settings, search for navie vscode, and click the box to use the VS Code language model.
After clicking the box to enable the VS Code LM, you’ll be instructed to reload VS Code to apply the change.
For more details about using the GitHub Copilot Language Model as a supported Navie backend, refer to the Navie reference guide.
Navie AI uses the AppMap hosted proxy with an AppMap managed OpenAI API key. If you are required to use your own OpenAI API key, you can configure it within AppMap. This ensures all Navie requests interact with your own OpenAI account.
In your code editor, open the Navie Chat window. If the model displays (default), Navie is configured to use the AppMap hosted OpenAI proxy. Click the gear icon at the top of the Navie Chat window to change the model.
In the modal, select the option to Use your own OpenAI API key.
After you enter your OpenAI API key and hit enter, your code editor will prompt you to reload.
In VS Code:
In JetBrains:
NOTE: Instead of using the gear icon in the Navie chat window, you can also store your API key in an environment variable, as described in the configuration section.
After your code editor reloads, you can confirm your requests are being routed directly to OpenAI in the Navie Chat window. It will list the model OpenAI and the location, in this case via OpenAI.
AppMap generally uses the latest OpenAI models as the default, but if you want to use an alternative model like gpt-3.5 or a preview model like gpt-4-vision-preview, you can set the APPMAP_NAVIE_MODEL environment variable after configuring your own OpenAI API key.
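For example, with both values supplied as environment variables (the values below are illustrative; any OpenAI model name works):

OPENAI_API_KEY=sk-...            # your own OpenAI API key
APPMAP_NAVIE_MODEL=gpt-4o        # the OpenAI model Navie should use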
After setting APPMAP_NAVIE_MODEL to your chosen model, reload or restart your code editor, then confirm its configuration by opening a new Navie chat window. In this example, the model is configured as gpt-4o with a personal OpenAI API key.
At any time, you can unset your OpenAI API key and revert to using the AppMap hosted OpenAI proxy. Select the gear icon in the Navie Chat window and select Use Navie Backend in the modal.
AppMap supports the Anthropic suite of large language models such as Claude Sonnet or Claude Opus.
To use AppMap Navie with Anthropic LLMs, you need to generate an API key for your account.
Log in to your Anthropic dashboard and choose the option to “Get API Keys”
Click the box to “Create Key”
In the next box, give your key an easy-to-recognize name.
In your VS Code or JetBrains editor, configure the following environment variables. For more details on setting environment variables in your editor, refer to the AppMap BYOK documentation.
ANTHROPIC_API_KEY=sk-ant-api03-12...
APPMAP_NAVIE_MODEL=claude-3-5-sonnet-20240620
When setting APPMAP_NAVIE_MODEL, refer to the Anthropic documentation for the latest available models to choose from.
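For instance, to switch the same Anthropic key to a different Claude model, only the model variable changes. The model ID below is one published by Anthropic; check their documentation for the current list:

APPMAP_NAVIE_MODEL=claude-3-opus-20240229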
This feature is in early access. We recommend choosing a model that is trained on a large corpus of both human-written natural language and code.
Navie currently supports any OpenAI-compatible model running locally or remotely. When configured this way, as in the BYOK case, Navie does not contact the AppMap hosted proxy, and your conversations stay private between you and the model provider.
To configure Navie for your own LLM, certain environment variables need to be set for the AppMap services.
You can use the following variables to direct Navie to use any LLM with an OpenAI-compatible API. If only the API key is set, Navie connects to OpenAI.com by default.
OPENAI_API_KEY — API key to use with the OpenAI API.
OPENAI_BASE_URL — base URL for the OpenAI API (defaults to the OpenAI.com endpoint).
APPMAP_NAVIE_MODEL — name of the model to use (the default is GPT-4).
APPMAP_NAVIE_TOKEN_LIMIT — maximum context size in tokens (default 8000).
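For example, a sketch pointing Navie at a local OpenAI-compatible server (assuming an Ollama install serving its OpenAI-compatible API on the default port, with a model already pulled):

OPENAI_API_KEY=dummy                         # many local servers accept any non-empty key
OPENAI_BASE_URL=http://localhost:11434/v1    # local OpenAI-compatible endpoint
APPMAP_NAVIE_MODEL=mixtral                   # whichever model your server hosts
APPMAP_NAVIE_TOKEN_LIMIT=8000                # optional; default shown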
For Azure OpenAI, you need to create a deployment and use these variables instead:
AZURE_OPENAI_API_KEY — API key to use with the Azure OpenAI API.
AZURE_OPENAI_API_VERSION — API version to use when communicating with Azure OpenAI, e.g. 2024-02-01.
AZURE_OPENAI_API_INSTANCE_NAME — Azure OpenAI instance name (i.e. the part of the URL before openai.azure.com).
AZURE_OPENAI_API_DEPLOYMENT_NAME — Azure OpenAI deployment name.
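As an illustration, an assumed instance named contoso with a deployment named navie-gpt4 might be configured as:

AZURE_OPENAI_API_KEY=...                      # key from the Azure portal
AZURE_OPENAI_API_VERSION=2024-02-01
AZURE_OPENAI_API_INSTANCE_NAME=contoso        # i.e. https://contoso.openai.azure.com
AZURE_OPENAI_API_DEPLOYMENT_NAME=navie-gpt4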
Configuring in JetBrains
In JetBrains, go to settings.
Go to Tools → AppMap.
Enter the environment editor.
Use the editor to define the relevant environment variables according to the BYOM documentation.
Reload your IDE for the changes to take effect.
After reloading you can confirm the model is configured correctly in the Navie Chat window.
Configuring in VS Code
In VS Code, go to settings.
Search for “appmap environment” to reveal the “AppMap: Command Line Environment” setting.
Use Add Item to define the relevant environment variables according to the BYOM documentation.
Reload your VS Code for the changes to take effect.
After reloading you can confirm the model is configured correctly in the Navie Chat window.
Refer to the Navie Reference Guide for detailed examples of using Navie with your own LLM backend.