AppMap Navie AI supports all programming languages and frameworks, providing coding assistance with static analysis and static diagrams of your software.
AppMap supports the following languages for advanced runtime analysis and automated deep tracing of APIs, packages, classes, functions, databases, and more.
To learn how to record AppMap data for these languages, refer to the AppMap Navie getting started documentation.
You can ask free-form questions, or start your question with one of these commands:
@plan

The `@plan` command prefix within Navie focuses the AI response on building a detailed implementation plan for the relevant query. It directs Navie to concentrate on understanding the problem and the application in order to generate a step-by-step plan. The response will generally not include code implementation details; consider using the `@generate` command, which can implement code based on the plan.
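For example, you might ask (a hypothetical prompt):

@plan add rate limiting to the password reset endpoint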
@generate

The `@generate` prefix focuses the Navie AI response on new code creation. This is useful when you want Navie to respond with code implementations across your entire code base. It reduces the amount of code explanation; generally, the AI will respond only with the specific files and functions that need to be changed in order to implement a specific plan.
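For example (a hypothetical prompt, typically issued after reviewing a @plan response):

@generate implement the rate limiting plan described above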
@test

The `@test` command prefix focuses the Navie AI response on test case creation, such as unit testing or integration testing. This prefix takes into account how your tests are currently written and provides updated tests based on the features or code provided. You can use this command along with the `@generate` command to create test cases for newly generated code.
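For example (a hypothetical prompt):

@test write unit tests for the new rate limiting middleware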
@explain

The `@explain` command prefix within Navie serves as the default option, focused on helping you learn more about your project. Using the `@explain` prefix makes the Navie AI response more explanatory, diving into architecture-level questions across your entire code base. You can also use it to ask for ways to improve the performance of a feature.
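For example (a hypothetical prompt):

@explain how does user authentication work in this application?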
@diagram

The `@diagram` command prefix within Navie focuses the AI response on generating Mermaid-compatible diagrams. Mermaid is an open source diagramming and charting utility with wide support across tools such as GitHub, Atlassian, and more. Use the `@diagram` command, and Navie will create and render a Mermaid-compatible diagram within the Navie chat window. You can open this diagram in the Mermaid Live Editor, copy the Mermaid definitions to your clipboard, save it to disk, or expand to a full-window view. You can also save the Mermaid diagram into any supported tool such as GitHub Issues, Atlassian Confluence, and more.
@diagram the functional steps involved when a new user registers for the service.
@diagram the entity relationships between products and other important data objects.
@diagram using a flow chart how product sales tax is calculated.
@diagram create a detailed class map of the users, stores, products, and other associated classes used in the project
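As an illustration, a request like the first example above might produce a Mermaid definition along these lines (a hypothetical sketch; the actual output depends on your code base):

```mermaid
flowchart TD
    A[User submits registration form] --> B{Input valid?}
    B -- No --> C[Return validation errors]
    B -- Yes --> D[Create user record]
    D --> E[Send confirmation email]
    E --> F[Redirect to welcome page]
```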
Below are a series of open source projects you can use to try out the `@diagram` feature using prebuilt AppMap data in a sample project. Simply clone one of the following projects, open it in your code editor with the AppMap extension installed, and ask Navie to generate diagrams.
@help
Navie will help you set up AppMap, including generating AppMap recordings and diagrams. This prefix focuses the Navie AI response on help with using AppMap products and features. It leverages the AppMap documentation as part of the context related to your question and provides guidance for using AppMap features or diving into advanced AppMap topics.
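For example (a hypothetical prompt):

@help how do I record AppMap data for my Rails application?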
Navie supports forward-slash options that can be included at the beginning of your questions to control various aspects of text generation.
/tokenlimit

The `/tokenlimit` option is used to specify a limit on the number of tokens processed by the system. This parameter can help control the length of the generated text or manage resource consumption effectively.

Syntax

/tokenlimit=<value>

`<value>`: The maximum number of tokens to be processed. This can be either a string or a number. If provided as a string, it will be automatically converted to an integer.

Description

When executing commands, the `/tokenlimit` option sets the upper limit on the number of tokens the system should utilize. The default token limit is 8000. Increasing the token limit allows more space for context.

Example

To set the token limit to 16000, you can use:

@explain /tokenlimit=16000 <question>
Notes

- `/tokenlimit` must be a valid positive integer.
- `/tokenlimit` can directly impact the performance and output length of text generation processes.
- `/tokenlimit` cannot be increased above the fundamental limit of the LLM backend. Some backends, such as Copilot, may have a lower token limit than others.

/temperature

The `/temperature` option is used to control the randomness of the text generation process. This parameter can help adjust the creativity and diversity of the generated text.
Syntax

/temperature=<value>

`<value>`: The temperature value to be set. This can be either a string or a number. If provided as a string, it will be automatically converted to a float.

Description

When executing commands, the `/temperature` option sets the randomness of the text generation process. The default temperature value is 0.2. Lower values result in more deterministic outputs, while higher values lead to more creative and diverse outputs.

Example

To set the temperature to 0, you can use:

@generate /temperature=0 <question>
Notes

- `/temperature` must be a valid float.
- `/temperature` can directly impact the creativity and diversity of the generated text.

/include and /exclude

The `/include` and `/exclude` options are used to include or exclude specific file patterns from the retrieved context.
Syntax

/include=<word-or-pattern>|<word-or-pattern> /exclude=<word-or-pattern>|<word-or-pattern>

`<word-or-pattern>`: The word or pattern to be included or excluded. Multiple values or patterns can be separated by a pipe (`|`), because the entire string is treated as a regular expression.

Description

When executing commands, the `/include` option includes files matching the specified words or patterns, while the `/exclude` option excludes them. This can help control the context used by the system to generate text.

Example

To include only Python files and exclude files containing the word "test":

@plan /include=\.py /exclude=test
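Because the entire value is treated as a regular expression, you can also combine several patterns with a pipe. For instance (hypothetical paths):

@explain /include=app/models|app/services /exclude=spec|test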
Starting with VS Code 1.91, and with an active GitHub Copilot subscription, you can use Navie with the Copilot Language Model as a supported backend model. This allows you to leverage the powerful runtime-powered Navie AI Architect with your existing Copilot subscription. This is the recommended option for users in corporate environments where Copilot is the only approved and supported language model.
The following items are required to use the GitHub Copilot Language Model with Navie:

- VS Code version 1.91 or greater
- AppMap extension version v0.123.0 or greater

Open the VS Code Settings and search for `navie vscode`. Click the box to use the VS Code language model.
After clicking the box to enable the VS Code LM, you’ll be instructed to reload your VS Code to enable these changes.
After VS Code finishes reloading, open the AppMap extension.
Select `New Navie Chat`, and confirm that the model listed is `(via copilot)`.
You'll need to allow the AppMap extension access to the Copilot Language Models. After asking your first question to Navie, click `Allow` in the popup to grant the necessary access.
If you attempt to enable the Copilot language models without the Copilot Extension installed, you’ll see the following error in your code editor.
Click `Install Copilot` to complete the installation for language model support.
If you have the Copilot extension installed, but have not signed in, you’ll see the following notice.
Click `Sign in to GitHub` and log in with an account that has a valid paid or trial GitHub Copilot subscription.
Note: We recommend configuring your OpenAI key using the code editor extension. Follow the Bring Your Own Key docs for instructions.
Only `OPENAI_API_KEY` needs to be set; other settings can stay at their defaults:

| Environment variable | Example value |
| --- | --- |
| `OPENAI_API_KEY` | `sk-9spQsnE3X7myFHnjgNKKgIcGAdaIG78I3HZB4DFDWQGM` |
When using your own OpenAI API key, you can also change the OpenAI model for Navie to use. For example, if you wanted to use `gpt-3.5` or a preview model like `gpt-4-vision-preview`:

| Environment variable | Example value |
| --- | --- |
| `APPMAP_NAVIE_MODEL` | `gpt-4-vision-preview` |
AppMap supports the Anthropic suite of large language models such as Claude Sonnet or Claude Opus.
To use AppMap Navie with Anthropic LLMs, you need to generate an API key for your account.
Log in to your Anthropic dashboard and choose the option to "Get API Keys".
Click the box to "Create Key".
In the next box, give your key an easy-to-recognize name.
In your VS Code or JetBrains editor, configure the following environment variables. For details on how to set them, refer to the AppMap BYOK documentation.
| Environment variable | Example value |
| --- | --- |
| `ANTHROPIC_API_KEY` | `sk-ant-api03-8SgtgQrGB0vTSsB_DeeIZHvDrfmrg` |
| `APPMAP_NAVIE_MODEL` | `claude-3-5-sonnet-20240620` |
When setting the `APPMAP_NAVIE_MODEL`, refer to the Anthropic documentation for the latest available models to choose from.
Assuming you created a `navie` GPT-4 deployment on your `contoso.openai.azure.com` Azure OpenAI instance:
| Environment variable | Example value |
| --- | --- |
| `AZURE_OPENAI_API_KEY` | `e50edc22e83f01802893d654c4268c4f` |
| `AZURE_OPENAI_API_VERSION` | `2024-02-01` |
| `AZURE_OPENAI_API_INSTANCE_NAME` | `contoso` |
| `AZURE_OPENAI_API_DEPLOYMENT_NAME` | `navie` |
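If you launch your editor from a terminal, one way to provide these settings is to export them in your shell first. This is a hypothetical setup using the example values above; configuring the variables through the editor extension, as described in the BYOK documentation, is the documented approach.

```sh
# Hypothetical: export the Azure OpenAI settings before launching the editor
export AZURE_OPENAI_API_KEY=e50edc22e83f01802893d654c4268c4f
export AZURE_OPENAI_API_VERSION=2024-02-01
export AZURE_OPENAI_API_INSTANCE_NAME=contoso
export AZURE_OPENAI_API_DEPLOYMENT_NAME=navie
```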
AnyScale Endpoints allows querying a selection of open-source LLMs. After you create an account, you can use it by setting:
| Environment variable | Example value |
| --- | --- |
| `OPENAI_API_KEY` | `esecret_myxfwgl1iinbz9q5hkexemk8f4xhcou8` |
| `OPENAI_BASE_URL` | `https://api.endpoints.anyscale.com/v1` |
| `APPMAP_NAVIE_MODEL` | `mistralai/Mixtral-8x7B-Instruct-v0.1` |
Consult the AnyScale documentation for model names. Note that we recommend using Mixtral models with Navie.
You can use Fireworks AI and their serverless or on-demand models as a compatible backend for AppMap Navie AI.
After creating an account on Fireworks AI, you can configure your Navie environment settings:
| Environment variable | Example value |
| --- | --- |
| `OPENAI_API_KEY` | `WBYq2mKlK8I16ha21k233k2EwzGAJy3e0CLmtNZadJ6byfpu7c` |
| `OPENAI_BASE_URL` | `https://api.fireworks.ai/inference/v1` |
| `APPMAP_NAVIE_MODEL` | `accounts/fireworks/models/mixtral-8x22b-instruct` |
Consult the Fireworks AI documentation for a full list of the available models they currently support.
You can use Ollama to run Navie with local models; after you've successfully run a model with the `ollama run` command, you can configure Navie to use it:
| Environment variable | Example value |
| --- | --- |
| `OPENAI_API_KEY` | `dummy` |
| `OPENAI_BASE_URL` | `http://127.0.0.1:11434/v1` |
| `APPMAP_NAVIE_MODEL` | `mixtral` |
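For example, to start the model referenced above (assuming Ollama is installed locally):

ollama run mixtral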
Note: Even though the model is running locally, a dummy placeholder API key is still required.
You can use LM Studio to run Navie with local models.
After downloading a model to run, select the option to run a local server.
In the next window, select which model you want to load into the local inference server.
After loading your model, you can confirm it’s successfully running in the logs.
NOTE: Save the URL it's running under to use for the `OPENAI_BASE_URL` environment variable. For example: http://localhost:1234/v1
In the Model Inspector, copy the name of the model and use this for the `APPMAP_NAVIE_MODEL` environment variable. For example: Meta-Llama-3-8B-Instruct-imatrix
Continue to configure your local environment with the following environment variables based on your LM Studio configuration. Refer to the documentation above for steps specific to your code editor.
| Environment variable | Example value |
| --- | --- |
| `OPENAI_API_KEY` | `dummy` |
| `OPENAI_BASE_URL` | `http://localhost:1234/v1` |
| `APPMAP_NAVIE_MODEL` | `Meta-Llama-3-8B-Instruct-imatrix` |
Note: Even though the model is running locally, a dummy placeholder API key is still required.
The standard way to add an OpenAI API key in VS Code is to use the gear icon in the Navie chat window, but you can alternatively set the key using the VS Code Command Palette with an AppMap command option.
In VS Code, open the Command Palette.
You can use a hotkey to open the VS Code Command Palette:

- Mac: `Cmd + Shift + P`
- Windows/Linux: `Ctrl + Shift + P`

Or you can select `View` -> `Command Palette`.
Search for AppMap Set OpenAPI Key
Paste your key into the new field and hit enter.
You’ll get a notification in VS Code that your key is set.
NOTE: You will need to reload your window for the setting to take effect. Use the Command Palette command `Developer: Reload Window`.
To delete your key, simply open the Command Palette again.

You can use a hotkey:

- Mac: `Cmd + Shift + P`
- Windows/Linux: `Ctrl + Shift + P`

Or you can select `View` -> `Command Palette`.

Search for `AppMap Set OpenAPI Key`, and simply hit enter with the field blank. VS Code will notify you that the key has been unset.
NOTE: You will need to reload your window for the setting to take effect. Use the Command Palette command `Developer: Reload Window`.
For secure storage of API key secrets within AppMap, we use the default VS Code secret storage, which leverages Electron's safeStorage API to ensure the confidentiality of sensitive information. Upon encryption, secrets are stored within the user data directory in a SQLite database, alongside other VS Code state information. This encryption process involves generating a unique encryption key, which, on macOS, is securely stored within Keychain Access under "Code Safe Storage" or "Code - Insiders Safe Storage," depending on the version.

This method provides a robust layer of protection, preventing unauthorized access by other applications or users with full disk access. The safeStorage API, accessible in the main process, supports operations such as checking encryption availability, encrypting and decrypting strings, and selecting storage backends on Linux. This approach ensures that your secrets are securely encrypted and stored, safeguarding them from potential threats while maintaining application integrity.
The standard way to add an OpenAI API key in JetBrains is to use the gear icon in the Navie chat window, but you can alternatively set the key directly in the JetBrains settings.
In JetBrains, open the Settings option.

In the Settings window, search for `appmap` in the search bar on the side. Under `Tools -> AppMap`, you will see a configuration option for your OpenAI API key in the AppMap Services section. This is the same section where you can add, edit, or modify your other environment settings for using your own custom models.
AppMap follows JetBrains best practices for storing sensitive data. The AppMap JetBrains plugin uses the `PasswordSafe` package to securely persist your OpenAI API key. The default storage format for `PasswordSafe` is operating-system dependent. Refer to the JetBrains developer documentation for more information.
You can access the Navie logs in VS Code by opening the Output tab and selecting `AppMap: Services` from the list of available output logs.

To open the Output window, on the menu bar choose View > Output, or press `Ctrl+Shift+U` on Windows or `Shift+Command+U` on Mac.
Click on the output log dropdown in the right corner to view a list of all the available output logs.

Select the `AppMap: Services` log to view the logs from Navie.
You can enable debug logging of Navie in your JetBrains code editor by first opening Help > Diagnostic Tools > Debug Log Settings.

In the Custom Debug Log Configuration, enter `appland` to enable DEBUG-level logging for the AppMap plugin.

Next, open Help > Show Log... to open the IDE log file.
https://github.com/getappmap/appmap