Bring Your Own Model Examples

GitHub Copilot Language Model

Starting with VS Code 1.91, and with an active GitHub Copilot subscription, you can use the Copilot language model as a supported backend for Navie. This lets you pair the runtime-powered Navie AI Architect with your existing Copilot subscription, and it is the recommended option for users in corporate environments where Copilot is the only approved and supported language model.

Requirements

The following items are required to use the GitHub Copilot Language Model with Navie:

  • VS Code version 1.91 or greater
  • AppMap extension version 0.123.0 or greater
  • GitHub Copilot VS Code extension installed
  • An active paid or trial GitHub Copilot subscription, signed in

Setup

Open the VS Code settings and search for navie vscode.

Check the box to use the VS Code language model as the backend for Navie.

After enabling the VS Code language model, you’ll be prompted to reload VS Code to apply the change.

After VS Code finishes reloading, open the AppMap extension.

Select New Navie Chat, and confirm that the model listed shows (via copilot).

You’ll need to grant the AppMap extension access to the Copilot language models. After asking your first question to Navie, click Allow in the popup to grant the necessary access.

Troubleshooting

If you attempt to enable the Copilot language model without the Copilot extension installed, your code editor will display an error.

Click Install Copilot to complete the installation for language model support.

If you have the Copilot extension installed but have not signed in, you’ll see a notice prompting you to sign in.

Click Sign in to GitHub and log in with an account that has a valid paid or trial GitHub Copilot subscription.


OpenAI

Note: We recommend configuring your OpenAI key using the code editor extension. Follow the Bring Your Own Key docs for instructions.

Only OPENAI_API_KEY needs to be set; other settings can keep their defaults:

OPENAI_API_KEY sk-9spQsnE3X7myFHnjgNKKgIcGAdaIG78I3HZB4DFDWQGM

When using your own OpenAI API key, you can also change the OpenAI model Navie uses. For example, you might want to use gpt-3.5 or a preview model like gpt-4-vision-preview.

APPMAP_NAVIE_MODEL gpt-4-vision-preview
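If you want to confirm the key is valid outside of Navie, you can call the OpenAI API directly. This is a minimal sketch using curl; it assumes OPENAI_API_KEY is exported in your shell and simply lists the models the key can access:

  curl https://api.openai.com/v1/models \
    -H "Authorization: Bearer $OPENAI_API_KEY"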

Anthropic (Claude)

AppMap supports the Anthropic suite of large language models such as Claude Sonnet or Claude Opus.

To use AppMap Navie with Anthropic LLMs, you need to generate an API key for your account.

Log in to your Anthropic dashboard and choose the option to “Get API Keys”.

Click the box to “Create Key”


In the next box, give your key an easy-to-recognize name.


In your VS Code or JetBrains editor, configure the following environment variables. For more details on configuring these environment variables in your editor, refer to the AppMap BYOK documentation.

ANTHROPIC_API_KEY sk-ant-api03-8SgtgQrGB0vTSsB_DeeIZHvDrfmrg
APPMAP_NAVIE_MODEL claude-3-5-sonnet-20240620

When setting APPMAP_NAVIE_MODEL, refer to the Anthropic documentation for the latest available models to choose from.
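Before reloading your editor, you can confirm the key works by calling Anthropic’s Messages API directly. This is a minimal sketch using curl, reusing the model name from above; a successful JSON response confirms both the key and the model name are valid:

  curl https://api.anthropic.com/v1/messages \
    -H "x-api-key: $ANTHROPIC_API_KEY" \
    -H "anthropic-version: 2023-06-01" \
    -H "content-type: application/json" \
    -d '{
      "model": "claude-3-5-sonnet-20240620",
      "max_tokens": 32,
      "messages": [{"role": "user", "content": "Say hello"}]
    }'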


Azure OpenAI

Assuming you created a navie GPT-4 deployment on the contoso.openai.azure.com Azure OpenAI instance, set the following:

AZURE_OPENAI_API_KEY e50edc22e83f01802893d654c4268c4f
AZURE_OPENAI_API_VERSION 2024-02-01
AZURE_OPENAI_API_INSTANCE_NAME contoso
AZURE_OPENAI_API_DEPLOYMENT_NAME navie
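These four values map onto a single Azure OpenAI endpoint: AZURE_OPENAI_API_INSTANCE_NAME supplies the contoso subdomain and AZURE_OPENAI_API_DEPLOYMENT_NAME the navie path segment. As a rough illustration of how they compose (and a way to confirm the deployment is reachable before configuring Navie), the equivalent raw request looks like this:

  curl "https://contoso.openai.azure.com/openai/deployments/navie/chat/completions?api-version=2024-02-01" \
    -H "api-key: $AZURE_OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"messages": [{"role": "user", "content": "Say hello"}]}'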

AnyScale Endpoints

AnyScale Endpoints allows querying a selection of open-source LLMs. After you create an account, you can use it by setting:

OPENAI_API_KEY esecret_myxfwgl1iinbz9q5hkexemk8f4xhcou8
OPENAI_BASE_URL https://api.endpoints.anyscale.com/v1
APPMAP_NAVIE_MODEL mistralai/Mixtral-8x7B-Instruct-v0.1

Consult the AnyScale documentation for model names. Note that we recommend using Mixtral models with Navie.
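Because AnyScale Endpoints expose an OpenAI-compatible API (which is why OPENAI_BASE_URL is used above), you can sanity-check the key and model name with a direct request before pointing Navie at them. A minimal sketch:

  curl https://api.endpoints.anyscale.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{
      "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",
      "messages": [{"role": "user", "content": "Say hello"}]
    }'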


Fireworks AI

You can use Fireworks AI and their serverless or on-demand models as a compatible backend for AppMap Navie AI.

After creating an account on Fireworks AI, you can configure your Navie environment settings:

OPENAI_API_KEY WBYq2mKlK8I16ha21k233k2EwzGAJy3e0CLmtNZadJ6byfpu7c
OPENAI_BASE_URL https://api.fireworks.ai/inference/v1
APPMAP_NAVIE_MODEL accounts/fireworks/models/mixtral-8x22b-instruct

Consult the Fireworks AI documentation for a full list of the available models they currently support.
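Since the configuration above points OPENAI_BASE_URL at Fireworks’ OpenAI-compatible API, a quick way to check your key and to see the exact model identifiers is to list the available models. A minimal sketch, assuming the endpoint follows the same OpenAI-compatible scheme:

  curl https://api.fireworks.ai/inference/v1/models \
    -H "Authorization: Bearer $OPENAI_API_KEY"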


Ollama

You can use Ollama to run Navie with local models. After you’ve successfully run a model with the ollama run command, you can configure Navie to use it:

OPENAI_API_KEY dummy
OPENAI_BASE_URL http://127.0.0.1:11434/v1
APPMAP_NAVIE_MODEL mixtral

Note: Even though the model runs locally, a placeholder API key is still required.
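For example, to serve Mixtral locally and confirm the endpoint Navie will use, you can pull the model and then issue a test request. This is a minimal sketch; recent versions of Ollama expose this OpenAI-compatible endpoint on port 11434:

  ollama pull mixtral
  curl http://127.0.0.1:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "mixtral",
      "messages": [{"role": "user", "content": "Say hello"}]
    }'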

LM Studio

You can use LM Studio to run Navie with local models.

After downloading a model to run, select the option to run a local server.

In the next window, select which model you want to load into the local inference server.

After loading your model, you can confirm it’s successfully running in the logs.

NOTE: Save the URL the server is running under; you’ll use it for the OPENAI_BASE_URL environment variable.

For example: http://localhost:1234/v1

In the Model Inspector, copy the name of the model and use it for the APPMAP_NAVIE_MODEL environment variable.

For example: Meta-Llama-3-8B-Instruct-imatrix

Continue to configure your local environment with the following environment variables based on your LM Studio configuration. Refer to the documentation above for steps specific to your code editor.

OPENAI_API_KEY dummy
OPENAI_BASE_URL http://localhost:1234/v1
APPMAP_NAVIE_MODEL Meta-Llama-3-8B-Instruct-imatrix

Note: Even though the model runs locally, a placeholder API key is still required.
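To double-check these values before reloading your editor, you can query the local server directly. A minimal sketch, assuming the server is running on port 1234 as above; the model identifiers it returns are what APPMAP_NAVIE_MODEL should match:

  curl http://localhost:1234/v1/models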

