Bring Your Own LLM Model

By default, when you ask Navie a question, your code editor interacts with the AppMap hosted proxy for OpenAI. If you are required to bring your own key, or otherwise want to use your own OpenAI account, you can specify your own OpenAI API key; Navie will then connect to OpenAI directly, without the AppMap proxy acting as an intermediary.

You can also use any OpenAI API-compatible LLM, either running locally or via a 3rd-party provider. Finally, VS Code users with an active GitHub Copilot subscription can leverage the Copilot Language Models as a supported Navie backend. Refer to the Navie docs for more examples of using alternative language models.

Bring Your Own OpenAI API Key (BYOK)

Navie AI uses the AppMap hosted proxy with an AppMap-managed OpenAI API key. If you are required to use your existing OpenAI API key, you can configure it within AppMap. This ensures that all Navie requests interact with your own OpenAI account.

Configuring Your OpenAI Key

In your code editor, open the Navie Chat window. If the model displays (default), Navie is configured to use the AppMap hosted OpenAI proxy. Click the gear icon at the top of the Navie Chat window to change the model.

Navie configuration gear

In the modal, select the option to Use your own OpenAI API key.

Use your own key modal

After you enter your OpenAI API key and hit enter, you will be prompted to reload your code editor.

In VS Code: VS Code popup to store API Key

In JetBrains: JetBrains popup to store API Key

NOTE: Instead of using the gear icon in the Navie Chat window, you can also store your API key as an environment variable, as described in the configuration section below.

After your code editor reloads, you can confirm in the Navie Chat window that your requests are being routed directly to OpenAI. The window lists the model and the location, in this case via OpenAI.

OpenAI location

Modify which OpenAI Model to use

AppMap generally uses the latest OpenAI models as the default. If you want to use an alternative model, such as gpt-3.5, or a preview model, such as gpt-4-vision-preview, set the APPMAP_NAVIE_MODEL environment variable after configuring your own OpenAI API key.

After setting APPMAP_NAVIE_MODEL to your chosen model, reload/restart your code editor, then confirm the configuration by opening a new Navie Chat window. In this example, I've configured the model to be gpt-4o with my personal OpenAI API key.
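
For example, using the environment variable method described in the Configuration section below, the entry for this walkthrough would be:

  APPMAP_NAVIE_MODEL=gpt-4o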

JetBrains OpenAI key modal

Reset Navie AI to use Default Navie Backend

At any time, you can unset your OpenAI API key and revert to using the AppMap hosted OpenAI proxy. Select the gear icon in the Navie Chat window and choose Use Navie Backend in the modal.

Bring Your Own Model (BYOM)

This feature is in early access. We recommend choosing a model that is trained on a large corpus of both human-written natural language and code.

Navie currently supports any OpenAI-compatible model, running either locally or remotely. When configured this way, as in the BYOK case, Navie won't contact the AppMap hosted proxy, and your conversations will stay private between you and the model provider.

Configuration

To configure Navie for your own LLM, certain environment variables need to be set for the AppMap services.

You can use the following variables to direct Navie to use any LLM with an OpenAI-compatible API. If only the API key is set, Navie will connect to OpenAI.com by default.

  • OPENAI_API_KEY — API key to use with OpenAI API.
  • OPENAI_BASE_URL — base URL for OpenAI API (defaults to the OpenAI.com endpoint).
  • APPMAP_NAVIE_MODEL — name of the model to use (the default is GPT-4).
  • APPMAP_NAVIE_TOKEN_LIMIT — maximum context size in tokens (default 8000).
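
As a sketch, a configuration pointing Navie at a locally running OpenAI-compatible server might look like the following. The URL, model name, and token limit are placeholder values for illustration; local servers typically ignore the key value, but the variable generally still needs to be set.

  OPENAI_API_KEY=dummy
  OPENAI_BASE_URL=http://localhost:8080/v1
  APPMAP_NAVIE_MODEL=mixtral-8x7b
  APPMAP_NAVIE_TOKEN_LIMIT=16000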

For Azure OpenAI, you need to create a deployment and use these variables instead:

  • AZURE_OPENAI_API_KEY — API key to use with Azure OpenAI API.
  • AZURE_OPENAI_API_VERSION — API version to use when communicating with Azure OpenAI, e.g. 2024-02-01.
  • AZURE_OPENAI_API_INSTANCE_NAME — Azure OpenAI instance name (i.e. the part of the URL before openai.azure.com).
  • AZURE_OPENAI_API_DEPLOYMENT_NAME — Azure OpenAI deployment name.
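
For instance, if your Azure resource is reachable at https://contoso-openai.openai.azure.com and your deployment is named navie-gpt4 (both placeholder values for illustration), the variables would be:

  AZURE_OPENAI_API_KEY=<your Azure OpenAI key>
  AZURE_OPENAI_API_VERSION=2024-02-01
  AZURE_OPENAI_API_INSTANCE_NAME=contoso-openai
  AZURE_OPENAI_API_DEPLOYMENT_NAME=navie-gpt4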

Configuring in JetBrains

In JetBrains, go to settings.

a screenshot of the JetBrains menu

Go to Tools > AppMap.

a screenshot of the AppMap settings in JetBrains

Enter the environment editor.

a screenshot of entering the AppMap environment editor in JetBrains

Use the editor to define the relevant environment variables according to the BYOM documentation.

a screenshot of the environment editor in JetBrains
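
For example, to use your own OpenAI key with a specific model, the editor entries might look like this (both values are placeholders):

  OPENAI_API_KEY=sk-...
  APPMAP_NAVIE_MODEL=gpt-4o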

Reload your IDE for the changes to take effect.

After reloading, you can confirm the model is configured correctly in the Navie Chat window.

Configuring in VS Code

Editing AppMap services environment

In VS Code, go to settings.

a screenshot of the Visual Studio Code menu

Search for “appmap environment” to reveal the “AppMap: Command Line Environment” setting.

a screenshot of the AppMap: Command Line Environment settings section

Use Add Item to define the relevant environment variables according to the BYOM documentation.

a screenshot showing an example of the bring your own model key value entry
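
Equivalently, you can add the entries to your settings.json directly. A sketch, assuming the setting ID is appmap.commandLineEnvironment (verify the exact ID in your Settings UI; the values shown are placeholders):

  "appmap.commandLineEnvironment": {
    "OPENAI_API_KEY": "sk-...",
    "APPMAP_NAVIE_MODEL": "gpt-4o"
  }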

Reload VS Code for the changes to take effect.

After reloading, you can confirm the model is configured correctly in the Navie Chat window.

Using GitHub Copilot Language Models

Starting with VS Code 1.91, and with an active GitHub Copilot subscription, you can use Navie with the Copilot language model as a supported LLM backend. This allows you to leverage the powerful, runtime-powered Navie AI Architect with your existing Copilot subscription. This is the recommended option for users in corporate environments where Copilot is the only approved and supported language model.

Requirements

The following items are required to use the GitHub Copilot Language Model with Navie:

  • VS Code version 1.91 or greater
  • AppMap extension version v0.123.0 or greater
  • GitHub Copilot VS Code extension installed
  • An active paid or trial GitHub Copilot subscription, signed in

Setup

Open the VS Code Settings and search for navie vscode.

Check the box to use the VS Code language model.

After checking the box to enable the VS Code language model, you'll be instructed to reload VS Code for the change to take effect.

For more details about using the GitHub Copilot language model as a supported Navie backend, refer to the Navie reference guide.

Examples

Refer to the Navie Reference Guide for detailed examples of using Navie with your own LLM backend.

