🚚 change /docs to docs.

commit 749165183c
parent dc0b3b4fa3

@@ -15,7 +15,7 @@ body:
 required: false
 - label: I'm not able to find an [open issue](https://github.com/continuedev/continue/issues?q=is%3Aopen+is%3Aissue) that reports the same bug
 required: false
-- label: I've seen the [troubleshooting guide](https://continue.dev/docs/troubleshooting) on the Continue Docs
+- label: I've seen the [troubleshooting guide](https://docs.continue.dev/troubleshooting) on the Continue Docs
 required: false
 - type: textarea
 attributes:
@@ -58,5 +58,5 @@ body:
 attributes:
 label: Log output
 description: |
-Please refer to the [troubleshooting guide](https://continue.dev/docs/troubleshooting) in the Continue Docs for instructions on obtaining the logs. Copy either the relevant lines or the last 100 lines or so.
+Please refer to the [troubleshooting guide](https://docs.continue.dev/troubleshooting) in the Continue Docs for instructions on obtaining the logs. Copy either the relevant lines or the last 100 lines or so.
 render: Shell

@@ -48,7 +48,7 @@ Continue is quickly adding features, and we'd love to hear which are the most im

 ## 📖 Updating / Improving Documentation

-Continue is continuously improving, but a feature isn't complete until it is reflected in the documentation! If you see something out-of-date or missing, you can help by clicking "Edit this page" at the bottom of any page on [continue.dev/docs](https://continue.dev/docs).
+Continue is continuously improving, but a feature isn't complete until it is reflected in the documentation! If you see something out-of-date or missing, you can help by clicking "Edit this page" at the bottom of any page on [docs.continue.dev](https://docs.continue.dev).

 ## 🧑‍💻 Contributing Code

README.md (12 changed lines)
@@ -8,7 +8,7 @@

 <div align="center">

-**[Continue](https://continue.dev/docs) keeps developers in flow. Our open-source [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue-extension) extensions enable you to easily create your own modular AI software development system that you can improve.**
+**[Continue](https://docs.continue.dev) keeps developers in flow. Our open-source [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue-extension) extensions enable you to easily create your own modular AI software development system that you can improve.**

 </div>

@@ -17,7 +17,7 @@
 <a target="_blank" href="https://opensource.org/licenses/Apache-2.0" style="background:none">
 <img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" style="height: 22px;" />
 </a>
-<a target="_blank" href="https://continue.dev/docs" style="background:none">
+<a target="_blank" href="https://docs.continue.dev" style="background:none">
 <img src="https://img.shields.io/badge/continue_docs-%23BE1B55" style="height: 22px;" />
 </a>
 <a target="_blank" href="https://discord.gg/vapESyrFmJ" style="background:none">
@@ -58,9 +58,9 @@ Open a blank file and let Continue start new Python scripts, React components, e

 ### And much more!

-- Try out [experimental support for local tab autocomplete](https://continue.dev/docs/walkthroughs/tab-autocomplete) in VS Code
-- Use [built-in context providers](https://continue.dev/docs/customization/context-providers#built-in-context-providers) or create your own [custom context providers](https://continue.dev/docs/customization/context-providers#building-your-own-context-provider)
-- Use [built-in slash commands](https://arc.net/l/quote/zbhwfjmp) or create your own [custom slash commands](https://continue.dev/docs/customization/slash-commands#custom-slash-commands)
+- Try out [experimental support for local tab autocomplete](https://docs.continue.dev/walkthroughs/tab-autocomplete) in VS Code
+- Use [built-in context providers](https://docs.continue.dev/customization/context-providers#built-in-context-providers) or create your own [custom context providers](https://docs.continue.dev/customization/context-providers#building-your-own-context-provider)
+- Use [built-in slash commands](https://arc.net/l/quote/zbhwfjmp) or create your own [custom slash commands](https://docs.continue.dev/customization/slash-commands#custom-slash-commands)

 ## Getting Started

@@ -68,7 +68,7 @@ Open a blank file and let Continue start new Python scripts, React components, e

 You can try out Continue for free using a proxy server that securely makes calls with our API key to models like GPT-4, Gemini Pro, and Phind CodeLlama via OpenAI, Google, and Together respectively.

-Once you're ready to use your own API key or a different model / provider, press the `+` button in the bottom left to add a new model to your `config.json`. Learn more about the models and providers [here](https://continue.dev/docs/model-setup/overview).
+Once you're ready to use your own API key or a different model / provider, press the `+` button in the bottom left to add a new model to your `config.json`. Learn more about the models and providers [here](https://docs.continue.dev/model-setup/overview).

 ## Contributing

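
(For context: a `config.json` model entry of the kind this paragraph describes might look like the sketch below — the provider, model, and key values are illustrative, not taken from this commit.)

```json
{
  "models": [
    {
      "title": "GPT-4",
      "provider": "openai",
      "model": "gpt-4",
      "apiKey": "<YOUR_API_KEY>"
    }
  ]
}
```
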
@@ -137,7 +137,7 @@ export async function getTabCompletion(
 ) {
 shownGptClaudeWarning = true;
 throw new Error(
-`Warning: ${llm.model} is not trained for tab-autocomplete, and will result in low-quality suggestions. See the docs to learn more about why: https://continue.dev/docs/walkthroughs/tab-autocomplete#i-want-better-completions-should-i-use-gpt-4`,
+`Warning: ${llm.model} is not trained for tab-autocomplete, and will result in low-quality suggestions. See the docs to learn more about why: https://docs.continue.dev/walkthroughs/tab-autocomplete#i-want-better-completions-should-i-use-gpt-4`,
 );
 }

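
(The warning above nudges users toward models actually trained for autocomplete. A sketch of selecting one in `config.json` — the `tabAutocompleteModel` key comes from the tab-autocomplete walkthrough and the model name is illustrative; neither is shown in this diff.)

```json
{
  "tabAutocompleteModel": {
    "title": "Starcoder",
    "provider": "ollama",
    "model": "starcoder:1b"
  }
}
```
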
@@ -719,7 +719,7 @@ declare global {
 };

 export interface Config {
-/** If set to true, Continue will collect anonymous usage data to improve the product. If set to false, we will collect nothing. Read here to learn more: https://continue.dev/docs/telemetry */
+/** If set to true, Continue will collect anonymous usage data to improve the product. If set to false, we will collect nothing. Read here to learn more: https://docs.continue.dev/telemetry */
 allowAnonymousTelemetry?: boolean;
 /** Each entry in this array will originally be a ModelDescription, the same object from your config.json, but you may add CustomLLMs.
 * A CustomLLM requires you only to define an AsyncGenerator that calls the LLM and yields string updates. You can choose to define either \`streamCompletion\` or \`streamChat\` (or both).
@@ -733,7 +733,7 @@ export type ContinueRcJson = Partial<SerializedContinueConfig> & {
 };

 export interface Config {
-/** If set to true, Continue will collect anonymous usage data to improve the product. If set to false, we will collect nothing. Read here to learn more: https://continue.dev/docs/telemetry */
+/** If set to true, Continue will collect anonymous usage data to improve the product. If set to false, we will collect nothing. Read here to learn more: https://docs.continue.dev/telemetry */
 allowAnonymousTelemetry?: boolean;
 /** Each entry in this array will originally be a ModelDescription, the same object from your config.json, but you may add CustomLLMs.
 * A CustomLLM requires you only to define an AsyncGenerator that calls the LLM and yields string updates. You can choose to define either `streamCompletion` or `streamChat` (or both).

@@ -143,8 +143,8 @@ const configs: SiteIndexingConfig[] = [
 },
 {
 title: "Continue",
-startUrl: "https://continue.dev/docs/intro",
-rootUrl: "https://continue.dev/docs",
+startUrl: "https://docs.continue.dev/intro",
+rootUrl: "https://docs.continue.dev",
 },
 {
 title: "jQuery",
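
(Relatedly, end users can index additional documentation sites themselves; a sketch of a `config.json` `docs` entry using the updated URLs — the `docs` key mirrors the `SiteIndexingConfig` shape above but is an assumption, not part of this commit.)

```json
{
  "docs": [
    {
      "title": "Continue",
      "startUrl": "https://docs.continue.dev/intro",
      "rootUrl": "https://docs.continue.dev"
    }
  ]
}
```
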
@@ -151,22 +151,22 @@
 "groq"
 ],
 "markdownEnumDescriptions": [
-"### OpenAI\nUse gpt-4, gpt-3.5-turbo, or any other OpenAI model. See [here](https://openai.com/product#made-for-developers) to obtain an API key.\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/openai)",
-"### Free Trial\nNew users can try out Continue for free using a proxy server that securely makes calls to OpenAI using our API key. If you are ready to use your own API key or have used all 250 free uses, you can enter your API key in config.py where it says `apiKey=\"\"` or select another model provider.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/freetrial)",
-"### Anthropic\nTo get started with Anthropic models, you first need to sign up for the open beta [here](https://claude.ai/login) to obtain an API key.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/anthropicllm)",
-"### Cohere\nTo use Cohere, visit the [Cohere dashboard](https://dashboard.cohere.com/api-keys) to create an API key.\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/cohere)",
+"### OpenAI\nUse gpt-4, gpt-3.5-turbo, or any other OpenAI model. See [here](https://openai.com/product#made-for-developers) to obtain an API key.\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/openai)",
+"### Free Trial\nNew users can try out Continue for free using a proxy server that securely makes calls to OpenAI using our API key. If you are ready to use your own API key or have used all 250 free uses, you can enter your API key in config.py where it says `apiKey=\"\"` or select another model provider.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/freetrial)",
+"### Anthropic\nTo get started with Anthropic models, you first need to sign up for the open beta [here](https://claude.ai/login) to obtain an API key.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/anthropicllm)",
+"### Cohere\nTo use Cohere, visit the [Cohere dashboard](https://dashboard.cohere.com/api-keys) to create an API key.\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/cohere)",
 "### Bedrock\nTo get started with Bedrock you need to sign up on AWS [here](https://aws.amazon.com/bedrock/claude/)",
-"### Together\nTogether is a hosted service that provides extremely fast streaming of open-source language models. To get started with Together:\n1. Obtain an API key from [here](https://together.ai)\n2. Paste below\n3. Select a model preset\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/togetherllm)",
-"### Ollama\nTo get started with Ollama, follow these steps:\n1. Download from [ollama.ai](https://ollama.ai/) and open the application\n2. Open a terminal and run `ollama run <MODEL_NAME>`. Example model names are `codellama:7b-instruct` or `llama2:7b-text`. You can find the full list [here](https://ollama.ai/library).\n3. Make sure that the model name used in step 2 is the same as the one in config.py (e.g. `model=\"codellama:7b-instruct\"`)\n4. Once the model has finished downloading, you can start asking questions through Continue.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/ollama)",
-"### Huggingface TGI\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/huggingfacetgi)",
-"### Huggingface Inference API\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/huggingfaceinferenceapi)",
-"### Llama.cpp\nllama.cpp comes with a [built-in server](https://github.com/ggerganov/llama.cpp/tree/master/examples/server#llamacppexampleserver) that can be run from source. To do this:\n\n1. Clone the repository with `git clone https://github.com/ggerganov/llama.cpp`.\n2. `cd llama.cpp`\n3. Run `make` to build the server.\n4. Download the model you'd like to use and place it in the `llama.cpp/models` directory (the best place to find models is [The Bloke on HuggingFace](https://huggingface.co/TheBloke))\n5. Run the llama.cpp server with the command below (replacing with the model you downloaded):\n\n```shell\n.\\server.exe -c 4096 --host 0.0.0.0 -t 16 --mlock -m models/codellama-7b-instruct.Q8_0.gguf\n```\n\nAfter it's up and running, you can start using Continue.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/llamacpp)",
-"### Replicate\nReplicate is a hosted service that makes it easy to run ML models. To get started with Replicate:\n1. Obtain an API key from [here](https://replicate.com)\n2. Paste below\n3. Select a model preset\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/replicatellm)",
-"### Gemini API\nTo get started with Google Makersuite, obtain your API key from [here](https://makersuite.google.com) and paste it below.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/googlepalmapi)",
-"### LMStudio\nLMStudio provides a professional and well-designed GUI for exploring, configuring, and serving LLMs. It is available on both Mac and Windows. To get started:\n1. Download from [lmstudio.ai](https://lmstudio.ai/) and open the application\n2. Search for and download the desired model from the home screen of LMStudio.\n3. In the left-bar, click the '<->' icon to open the Local Inference Server and press 'Start Server'.\n4. Once your model is loaded and the server has started, you can begin using Continue.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/lmstudio)",
-"### Llamafile\nTo get started with llamafiles, find and download a binary on their [GitHub repo](https://github.com/Mozilla-Ocho/llamafile#binary-instructions). Then run it with the following command:\n\n```shell\nchmod +x ./llamafile\n./llamafile\n```\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/llamafile)",
+"### Together\nTogether is a hosted service that provides extremely fast streaming of open-source language models. To get started with Together:\n1. Obtain an API key from [here](https://together.ai)\n2. Paste below\n3. Select a model preset\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/togetherllm)",
+"### Ollama\nTo get started with Ollama, follow these steps:\n1. Download from [ollama.ai](https://ollama.ai/) and open the application\n2. Open a terminal and run `ollama run <MODEL_NAME>`. Example model names are `codellama:7b-instruct` or `llama2:7b-text`. You can find the full list [here](https://ollama.ai/library).\n3. Make sure that the model name used in step 2 is the same as the one in config.py (e.g. `model=\"codellama:7b-instruct\"`)\n4. Once the model has finished downloading, you can start asking questions through Continue.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/ollama)",
+"### Huggingface TGI\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/huggingfacetgi)",
+"### Huggingface Inference API\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/huggingfaceinferenceapi)",
+"### Llama.cpp\nllama.cpp comes with a [built-in server](https://github.com/ggerganov/llama.cpp/tree/master/examples/server#llamacppexampleserver) that can be run from source. To do this:\n\n1. Clone the repository with `git clone https://github.com/ggerganov/llama.cpp`.\n2. `cd llama.cpp`\n3. Run `make` to build the server.\n4. Download the model you'd like to use and place it in the `llama.cpp/models` directory (the best place to find models is [The Bloke on HuggingFace](https://huggingface.co/TheBloke))\n5. Run the llama.cpp server with the command below (replacing with the model you downloaded):\n\n```shell\n.\\server.exe -c 4096 --host 0.0.0.0 -t 16 --mlock -m models/codellama-7b-instruct.Q8_0.gguf\n```\n\nAfter it's up and running, you can start using Continue.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/llamacpp)",
+"### Replicate\nReplicate is a hosted service that makes it easy to run ML models. To get started with Replicate:\n1. Obtain an API key from [here](https://replicate.com)\n2. Paste below\n3. Select a model preset\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/replicatellm)",
+"### Gemini API\nTo get started with Google Makersuite, obtain your API key from [here](https://makersuite.google.com) and paste it below.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/googlepalmapi)",
+"### LMStudio\nLMStudio provides a professional and well-designed GUI for exploring, configuring, and serving LLMs. It is available on both Mac and Windows. To get started:\n1. Download from [lmstudio.ai](https://lmstudio.ai/) and open the application\n2. Search for and download the desired model from the home screen of LMStudio.\n3. In the left-bar, click the '<->' icon to open the Local Inference Server and press 'Start Server'.\n4. Once your model is loaded and the server has started, you can begin using Continue.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/lmstudio)",
+"### Llamafile\nTo get started with llamafiles, find and download a binary on their [GitHub repo](https://github.com/Mozilla-Ocho/llamafile#binary-instructions). Then run it with the following command:\n\n```shell\nchmod +x ./llamafile\n./llamafile\n```\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/llamafile)",
 "### Mistral API\n\nTo get access to the Mistral API, obtain your API key from the [Mistral platform](https://docs.mistral.ai/)",
-"### DeepInfra\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/deepinfra)"
+"### DeepInfra\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/deepinfra)"
 ],
 "type": "string"
 },
@@ -215,7 +215,7 @@
 },
 "promptTemplates": {
 "title": "Prompt Templates",
-"markdownDescription": "A mapping of prompt template name ('edit' is currently the only one used in Continue) to a string giving the prompt template. See [here](https://continue.dev/docs/model-setup/configuration#customizing-the-edit-prompt) for an example.",
+"markdownDescription": "A mapping of prompt template name ('edit' is currently the only one used in Continue) to a string giving the prompt template. See [here](https://docs.continue.dev/model-setup/configuration#customizing-the-edit-prompt) for an example.",
 "type": "object",
 "additionalProperties": {
 "type": "string"
@@ -1132,9 +1132,7 @@
 "if": {
 "properties": {
 "name": {
-"enum": [
-"share"
-]
+"enum": ["share"]
 }
 }
 },
@@ -1601,13 +1599,13 @@
 "properties": {
 "allowAnonymousTelemetry": {
 "title": "Allow Anonymous Telemetry",
-"markdownDescription": "If this field is set to True, we will collect anonymous telemetry as described in the documentation page on telemetry. If set to `false`, we will not collect any data. Learn more in [the docs](https://continue.dev/docs/telemetry).",
+"markdownDescription": "If this field is set to True, we will collect anonymous telemetry as described in the documentation page on telemetry. If set to `false`, we will not collect any data. Learn more in [the docs](https://docs.continue.dev/telemetry).",
 "default": true,
 "type": "boolean"
 },
 "models": {
 "title": "Models",
-"markdownDescription": "Learn about setting up models in [the documentation](https://continue.dev/docs/model-setup/overview).",
+"markdownDescription": "Learn about setting up models in [the documentation](https://docs.continue.dev/model-setup/overview).",
 "default": [
 {
 "title": "GPT-4 (trial)",
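
(A `config.json` sketch of the two fields this part of the schema validates — values illustrative, provider name taken from the schema's enum:)

```json
{
  "allowAnonymousTelemetry": false,
  "models": [{ "title": "GPT-4 (trial)", "provider": "free-trial", "model": "gpt-4" }]
}
```
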
@@ -1655,7 +1653,7 @@
 },
 "slashCommands": {
 "title": "Slash Commands",
-"markdownDescription": "An array of slash commands that let you take custom actions from the sidebar. Learn more in the [documentation](https://continue.dev/docs/customization/slash-commands).",
+"markdownDescription": "An array of slash commands that let you take custom actions from the sidebar. Learn more in the [documentation](https://docs.continue.dev/customization/slash-commands).",
 "default": [],
 "type": "array",
 "items": {
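
(Sketch of a `slashCommands` entry — the "share" name appears in this schema's enum elsewhere in the diff; the description is illustrative:)

```json
{
  "slashCommands": [{ "name": "share", "description": "Export the current chat session" }]
}
```
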
@@ -1664,7 +1662,7 @@
 },
 "customCommands": {
 "title": "Custom Commands",
-"markdownDescription": "An array of custom commands that allow you to reuse prompts. Each has name, description, and prompt properties. When you enter /<name> in the text input, it will act as a shortcut to the prompt. Learn more in the [documentation](https://continue.dev/docs/customization/slash-commands#custom-commands-use-natural-language).",
+"markdownDescription": "An array of custom commands that allow you to reuse prompts. Each has name, description, and prompt properties. When you enter /<name> in the text input, it will act as a shortcut to the prompt. Learn more in the [documentation](https://docs.continue.dev/customization/slash-commands#custom-commands-use-natural-language).",
 "default": [
 {
 "name": "test",
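
(Per the description above, each custom command carries name, description, and prompt; a sketch building on the schema's default "test" command, with illustrative wording:)

```json
{
  "customCommands": [
    {
      "name": "test",
      "description": "Write unit tests for the highlighted code",
      "prompt": "Write a suite of unit tests for the selected code, covering edge cases."
    }
  ]
}
```
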
@@ -1679,7 +1677,7 @@
 },
 "contextProviders": {
 "title": "Context Providers",
-"markdownDescription": "A list of ContextProvider objects that can be used to provide context to the LLM by typing '@'. Read more about ContextProviders in [the documentation](https://continue.dev/docs/customization/context-providers).",
+"markdownDescription": "A list of ContextProvider objects that can be used to provide context to the LLM by typing '@'. Read more about ContextProviders in [the documentation](https://docs.continue.dev/customization/context-providers).",
 "default": [],
 "type": "array",
 "items": {
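
(Sketch of a `contextProviders` entry — the "diff" provider name is an assumption drawn from the built-in providers docs, not this commit:)

```json
{
  "contextProviders": [{ "name": "diff", "params": {} }]
}
```
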
@@ -1717,11 +1715,17 @@
 },
 "embeddingsProvider": {
 "title": "Embeddings Provider",
-"markdownDescription": "The method that will be used to generate codebase embeddings. The default is transformers.js, which will run locally in the browser. Learn about the other options [here](https://continue.dev/docs/walkthroughs/codebase-embeddings#embeddings-providers).",
+"markdownDescription": "The method that will be used to generate codebase embeddings. The default is transformers.js, which will run locally in the browser. Learn about the other options [here](https://docs.continue.dev/walkthroughs/codebase-embeddings#embeddings-providers).",
 "type": "object",
 "properties": {
 "provider": {
-"enum": ["transformers.js", "ollama", "openai", "cohere", "free-trial"]
+"enum": [
+"transformers.js",
+"ollama",
+"openai",
+"cohere",
+"free-trial"
+]
 },
 "model": {
 "type": "string"
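
(A `config.json` sketch choosing one of the enum's providers — the model name is illustrative:)

```json
{
  "embeddingsProvider": { "provider": "ollama", "model": "nomic-embed-text" }
}
```
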
@@ -1875,7 +1879,7 @@
 "tabAutocompleteOptions": {
 "title": "TabAutocompleteOptions",
 "type": "object",
-"markdownDescription": "These options let you customize your tab-autocomplete experience. Read about all options in [the docs](https://continue.dev/docs/walkthroughs/tab-autocomplete#configuration-options).",
+"markdownDescription": "These options let you customize your tab-autocomplete experience. Read about all options in [the docs](https://docs.continue.dev/walkthroughs/tab-autocomplete#configuration-options).",
 "properties": {
 "disable": {
 "type": "boolean",
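
(Sketch using the one option visible here, `disable`; the remaining options are listed in the linked walkthrough:)

```json
{
  "tabAutocompleteOptions": { "disable": false }
}
```
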
@@ -1,6 +1,6 @@
 <!-- Plugin description -->

-**[Continue](https://continue.dev/docs) is the open-source autopilot for software development—an extension that brings the power of ChatGPT to your IDE**
+**[Continue](https://docs.continue.dev) is the open-source autopilot for software development—an extension that brings the power of ChatGPT to your IDE**

 ### Get possible explanations

@@ -30,6 +30,6 @@ Open a blank file and let Continue start new Python scripts, React components, e

 You can try out Continue for free using a proxy server that securely makes calls with our API key to models like GPT-4, Gemini Pro, and Phind CodeLlama via OpenAI, Google, and Together respectively.

-Once you're ready to use your own API key or a different model / provider, press the `+` button in the bottom left to add a new model to your `config.json`. Learn more about the models and providers [here](https://continue.dev/docs/model-setup/overview).
+Once you're ready to use your own API key or a different model / provider, press the `+` button in the bottom left to add a new model to your `config.json`. Learn more about the models and providers [here](https://docs.continue.dev/model-setup/overview).

 <!-- Plugin description end -->

@@ -151,22 +151,22 @@
 "groq"
 ],
 "markdownEnumDescriptions": [
-"### OpenAI\nUse gpt-4, gpt-3.5-turbo, or any other OpenAI model. See [here](https://openai.com/product#made-for-developers) to obtain an API key.\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/openai)",
-"### Free Trial\nNew users can try out Continue for free using a proxy server that securely makes calls to OpenAI using our API key. If you are ready to use your own API key or have used all 250 free uses, you can enter your API key in config.py where it says `apiKey=\"\"` or select another model provider.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/freetrial)",
-"### Anthropic\nTo get started with Anthropic models, you first need to sign up for the open beta [here](https://claude.ai/login) to obtain an API key.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/anthropicllm)",
-"### Cohere\nTo use Cohere, visit the [Cohere dashboard](https://dashboard.cohere.com/api-keys) to create an API key.\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/cohere)",
+"### OpenAI\nUse gpt-4, gpt-3.5-turbo, or any other OpenAI model. See [here](https://openai.com/product#made-for-developers) to obtain an API key.\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/openai)",
+"### Free Trial\nNew users can try out Continue for free using a proxy server that securely makes calls to OpenAI using our API key. If you are ready to use your own API key or have used all 250 free uses, you can enter your API key in config.py where it says `apiKey=\"\"` or select another model provider.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/freetrial)",
+"### Anthropic\nTo get started with Anthropic models, you first need to sign up for the open beta [here](https://claude.ai/login) to obtain an API key.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/anthropicllm)",
+"### Cohere\nTo use Cohere, visit the [Cohere dashboard](https://dashboard.cohere.com/api-keys) to create an API key.\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/cohere)",
 "### Bedrock\nTo get started with Bedrock you need to sign up on AWS [here](https://aws.amazon.com/bedrock/claude/)",
-"### Together\nTogether is a hosted service that provides extremely fast streaming of open-source language models. To get started with Together:\n1. Obtain an API key from [here](https://together.ai)\n2. Paste below\n3. Select a model preset\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/togetherllm)",
-"### Ollama\nTo get started with Ollama, follow these steps:\n1. Download from [ollama.ai](https://ollama.ai/) and open the application\n2. Open a terminal and run `ollama run <MODEL_NAME>`. Example model names are `codellama:7b-instruct` or `llama2:7b-text`. You can find the full list [here](https://ollama.ai/library).\n3. Make sure that the model name used in step 2 is the same as the one in config.py (e.g. `model=\"codellama:7b-instruct\"`)\n4. Once the model has finished downloading, you can start asking questions through Continue.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/ollama)",
-"### Huggingface TGI\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/huggingfacetgi)",
-"### Huggingface Inference API\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/huggingfaceinferenceapi)",
-"### Llama.cpp\nllama.cpp comes with a [built-in server](https://github.com/ggerganov/llama.cpp/tree/master/examples/server#llamacppexampleserver) that can be run from source. To do this:\n\n1. Clone the repository with `git clone https://github.com/ggerganov/llama.cpp`.\n2. `cd llama.cpp`\n3. Run `make` to build the server.\n4. Download the model you'd like to use and place it in the `llama.cpp/models` directory (the best place to find models is [The Bloke on HuggingFace](https://huggingface.co/TheBloke))\n5. Run the llama.cpp server with the command below (replacing with the model you downloaded):\n\n```shell\n.\\server.exe -c 4096 --host 0.0.0.0 -t 16 --mlock -m models/codellama-7b-instruct.Q8_0.gguf\n```\n\nAfter it's up and running, you can start using Continue.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/llamacpp)",
-"### Replicate\nReplicate is a hosted service that makes it easy to run ML models. To get started with Replicate:\n1. Obtain an API key from [here](https://replicate.com)\n2. Paste below\n3. Select a model preset\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/replicatellm)",
-"### Gemini API\nTo get started with Google Makersuite, obtain your API key from [here](https://makersuite.google.com) and paste it below.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/googlepalmapi)",
-"### LMStudio\nLMStudio provides a professional and well-designed GUI for exploring, configuring, and serving LLMs. It is available on both Mac and Windows. To get started:\n1. Download from [lmstudio.ai](https://lmstudio.ai/) and open the application\n2. Search for and download the desired model from the home screen of LMStudio.\n3. In the left-bar, click the '<->' icon to open the Local Inference Server and press 'Start Server'.\n4. Once your model is loaded and the server has started, you can begin using Continue.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/lmstudio)",
-"### Llamafile\nTo get started with llamafiles, find and download a binary on their [GitHub repo](https://github.com/Mozilla-Ocho/llamafile#binary-instructions). Then run it with the following command:\n\n```shell\nchmod +x ./llamafile\n./llamafile\n```\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/llamafile)",
+"### Together\nTogether is a hosted service that provides extremely fast streaming of open-source language models. To get started with Together:\n1. Obtain an API key from [here](https://together.ai)\n2. Paste below\n3. Select a model preset\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/togetherllm)",
+"### Ollama\nTo get started with Ollama, follow these steps:\n1. Download from [ollama.ai](https://ollama.ai/) and open the application\n2. Open a terminal and run `ollama run <MODEL_NAME>`. Example model names are `codellama:7b-instruct` or `llama2:7b-text`. You can find the full list [here](https://ollama.ai/library).\n3. Make sure that the model name used in step 2 is the same as the one in config.py (e.g. `model=\"codellama:7b-instruct\"`)\n4. Once the model has finished downloading, you can start asking questions through Continue.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/ollama)",
+"### Huggingface TGI\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/huggingfacetgi)",
+"### Huggingface Inference API\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/huggingfaceinferenceapi)",
+"### Llama.cpp\nllama.cpp comes with a [built-in server](https://github.com/ggerganov/llama.cpp/tree/master/examples/server#llamacppexampleserver) that can be run from source. To do this:\n\n1. Clone the repository with `git clone https://github.com/ggerganov/llama.cpp`.\n2. `cd llama.cpp`\n3. Run `make` to build the server.\n4. Download the model you'd like to use and place it in the `llama.cpp/models` directory (the best place to find models is [The Bloke on HuggingFace](https://huggingface.co/TheBloke))\n5. Run the llama.cpp server with the command below (replacing with the model you downloaded):\n\n```shell\n.\\server.exe -c 4096 --host 0.0.0.0 -t 16 --mlock -m models/codellama-7b-instruct.Q8_0.gguf\n```\n\nAfter it's up and running, you can start using Continue.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/llamacpp)",
+"### Replicate\nReplicate is a hosted service that makes it easy to run ML models. To get started with Replicate:\n1. Obtain an API key from [here](https://replicate.com)\n2. Paste below\n3. Select a model preset\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/replicatellm)",
+"### Gemini API\nTo get started with Google Makersuite, obtain your API key from [here](https://makersuite.google.com) and paste it below.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/googlepalmapi)",
+"### LMStudio\nLMStudio provides a professional and well-designed GUI for exploring, configuring, and serving LLMs. It is available on both Mac and Windows. To get started:\n1. Download from [lmstudio.ai](https://lmstudio.ai/) and open the application\n2. Search for and download the desired model from the home screen of LMStudio.\n3. In the left-bar, click the '<->' icon to open the Local Inference Server and press 'Start Server'.\n4. Once your model is loaded and the server has started, you can begin using Continue.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/lmstudio)",
+"### Llamafile\nTo get started with llamafiles, find and download a binary on their [GitHub repo](https://github.com/Mozilla-Ocho/llamafile#binary-instructions). Then run it with the following command:\n\n```shell\nchmod +x ./llamafile\n./llamafile\n```\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/llamafile)",
 "### Mistral API\n\nTo get access to the Mistral API, obtain your API key from the [Mistral platform](https://docs.mistral.ai/)",
-"### DeepInfra\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/deepinfra)"
+"### DeepInfra\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/deepinfra)"
 ],
 "type": "string"
 },
@@ -215,7 +215,7 @@
 },
 "promptTemplates": {
 "title": "Prompt Templates",
-"markdownDescription": "A mapping of prompt template name ('edit' is currently the only one used in Continue) to a string giving the prompt template. See [here](https://continue.dev/docs/model-setup/configuration#customizing-the-edit-prompt) for an example.",
+"markdownDescription": "A mapping of prompt template name ('edit' is currently the only one used in Continue) to a string giving the prompt template. See [here](https://docs.continue.dev/model-setup/configuration#customizing-the-edit-prompt) for an example.",
 "type": "object",
 "additionalProperties": {
 "type": "string"
@@ -1132,9 +1132,7 @@
 "if": {
 "properties": {
 "name": {
-"enum": [
-"share"
-]
+"enum": ["share"]
 }
 }
 },
@@ -1601,13 +1599,13 @@
 "properties": {
 "allowAnonymousTelemetry": {
 "title": "Allow Anonymous Telemetry",
-"markdownDescription": "If this field is set to True, we will collect anonymous telemetry as described in the documentation page on telemetry. If set to `false`, we will not collect any data. Learn more in [the docs](https://continue.dev/docs/telemetry).",
+"markdownDescription": "If this field is set to True, we will collect anonymous telemetry as described in the documentation page on telemetry. If set to `false`, we will not collect any data. Learn more in [the docs](https://docs.continue.dev/telemetry).",
 "default": true,
 "type": "boolean"
 },
 "models": {
 "title": "Models",
-"markdownDescription": "Learn about setting up models in [the documentation](https://continue.dev/docs/model-setup/overview).",
+"markdownDescription": "Learn about setting up models in [the documentation](https://docs.continue.dev/model-setup/overview).",
 "default": [
 {
 "title": "GPT-4 (trial)",
@@ -1655,7 +1653,7 @@
 },
 "slashCommands": {
 "title": "Slash Commands",
-"markdownDescription": "An array of slash commands that let you take custom actions from the sidebar. Learn more in the [documentation](https://continue.dev/docs/customization/slash-commands).",
+"markdownDescription": "An array of slash commands that let you take custom actions from the sidebar. Learn more in the [documentation](https://docs.continue.dev/customization/slash-commands).",
 "default": [],
 "type": "array",
 "items": {
@@ -1664,7 +1662,7 @@
 },
 "customCommands": {
 "title": "Custom Commands",
-"markdownDescription": "An array of custom commands that allow you to reuse prompts. Each has name, description, and prompt properties. When you enter /<name> in the text input, it will act as a shortcut to the prompt. Learn more in the [documentation](https://continue.dev/docs/customization/slash-commands#custom-commands-use-natural-language).",
+"markdownDescription": "An array of custom commands that allow you to reuse prompts. Each has name, description, and prompt properties. When you enter /<name> in the text input, it will act as a shortcut to the prompt. Learn more in the [documentation](https://docs.continue.dev/customization/slash-commands#custom-commands-use-natural-language).",
 "default": [
 {
 "name": "test",
@@ -1679,7 +1677,7 @@
 },
 "contextProviders": {
 "title": "Context Providers",
-"markdownDescription": "A list of ContextProvider objects that can be used to provide context to the LLM by typing '@'. Read more about ContextProviders in [the documentation](https://continue.dev/docs/customization/context-providers).",
+"markdownDescription": "A list of ContextProvider objects that can be used to provide context to the LLM by typing '@'. Read more about ContextProviders in [the documentation](https://docs.continue.dev/customization/context-providers).",
 "default": [],
 "type": "array",
 "items": {
@@ -1717,11 +1715,17 @@
 },
 "embeddingsProvider": {
 "title": "Embeddings Provider",
-"markdownDescription": "The method that will be used to generate codebase embeddings. The default is transformers.js, which will run locally in the browser. Learn about the other options [here](https://continue.dev/docs/walkthroughs/codebase-embeddings#embeddings-providers).",
+"markdownDescription": "The method that will be used to generate codebase embeddings. The default is transformers.js, which will run locally in the browser. Learn about the other options [here](https://docs.continue.dev/walkthroughs/codebase-embeddings#embeddings-providers).",
 "type": "object",
 "properties": {
 "provider": {
-"enum": ["transformers.js", "ollama", "openai", "cohere", "free-trial"]
+"enum": [
+"transformers.js",
+"ollama",
+"openai",
+"cohere",
+"free-trial"
+]
 },
 "model": {
 "type": "string"
@@ -1875,7 +1879,7 @@
 "tabAutocompleteOptions": {
 "title": "TabAutocompleteOptions",
 "type": "object",
-"markdownDescription": "These options let you customize your tab-autocomplete experience. Read about all options in [the docs](https://continue.dev/docs/walkthroughs/tab-autocomplete#configuration-options).",
+"markdownDescription": "These options let you customize your tab-autocomplete experience. Read about all options in [the docs](https://docs.continue.dev/walkthroughs/tab-autocomplete#configuration-options).",
 "properties": {
 "disable": {
 "type": "boolean",

@@ -66,4 +66,4 @@ accept [⌥ ⇧ Y] or reject [⌥ ⇧ N] the edit"""

 # endregion

-# Ready to learn more? Check out the Continue documentation: https://continue.dev/docs
+# Ready to learn more? Check out the Continue documentation: https://docs.continue.dev

@@ -6,7 +6,7 @@

 <div align="center">

-**[Continue](https://continue.dev/docs) is an open-source autopilot for VS Code and JetBrains—the easiest way to code with any LLM**
+**[Continue](https://docs.continue.dev) is an open-source autopilot for VS Code and JetBrains—the easiest way to code with any LLM**

 </div>

@@ -15,7 +15,7 @@
 <a target="_blank" href="https://opensource.org/licenses/Apache-2.0" style="background:none">
 <img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" style="height: 20px;" />
 </a>
-<a target="_blank" href="https://continue.dev/docs" style="background:none">
+<a target="_blank" href="https://docs.continue.dev" style="background:none">
 <img src="https://img.shields.io/badge/continue_docs-%23BE1B55" style="height: 20px;" />
 </a>
 <a target="_blank" href="https://discord.gg/vapESyrFmJ" style="background:none">
|
|||
|
||||
You can try out Continue for free using a proxy server that securely makes calls with our API key to models like GPT-4, Gemini Pro, and Phind CodeLlama via OpenAI, Google, and Together respectively.
|
||||
|
||||
Once you're ready to use your own API key or a different model / provider, press the `+` button in the bottom left to add a new model to your `config.json`. Learn more about the models and providers [here](https://continue.dev/docs/model-setup/overview).
|
||||
Once you're ready to use your own API key or a different model / provider, press the `+` button in the bottom left to add a new model to your `config.json`. Learn more about the models and providers [here](https://docs.continue.dev/model-setup/overview).
|
||||
|
||||
## License
|
||||
|
||||
|
|
|
@@ -151,22 +151,22 @@
 "groq"
 ],
 "markdownEnumDescriptions": [
-"### OpenAI\nUse gpt-4, gpt-3.5-turbo, or any other OpenAI model. See [here](https://openai.com/product#made-for-developers) to obtain an API key.\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/openai)",
-"### Free Trial\nNew users can try out Continue for free using a proxy server that securely makes calls to OpenAI using our API key. If you are ready to use your own API key or have used all 250 free uses, you can enter your API key in config.py where it says `apiKey=\"\"` or select another model provider.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/freetrial)",
-"### Anthropic\nTo get started with Anthropic models, you first need to sign up for the open beta [here](https://claude.ai/login) to obtain an API key.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/anthropicllm)",
-"### Cohere\nTo use Cohere, visit the [Cohere dashboard](https://dashboard.cohere.com/api-keys) to create an API key.\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/cohere)",
+"### OpenAI\nUse gpt-4, gpt-3.5-turbo, or any other OpenAI model. See [here](https://openai.com/product#made-for-developers) to obtain an API key.\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/openai)",
+"### Free Trial\nNew users can try out Continue for free using a proxy server that securely makes calls to OpenAI using our API key. If you are ready to use your own API key or have used all 250 free uses, you can enter your API key in config.py where it says `apiKey=\"\"` or select another model provider.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/freetrial)",
+"### Anthropic\nTo get started with Anthropic models, you first need to sign up for the open beta [here](https://claude.ai/login) to obtain an API key.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/anthropicllm)",
+"### Cohere\nTo use Cohere, visit the [Cohere dashboard](https://dashboard.cohere.com/api-keys) to create an API key.\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/cohere)",
 "### Bedrock\nTo get started with Bedrock you need to sign up on AWS [here](https://aws.amazon.com/bedrock/claude/)",
-"### Together\nTogether is a hosted service that provides extremely fast streaming of open-source language models. To get started with Together:\n1. Obtain an API key from [here](https://together.ai)\n2. Paste below\n3. Select a model preset\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/togetherllm)",
-"### Ollama\nTo get started with Ollama, follow these steps:\n1. Download from [ollama.ai](https://ollama.ai/) and open the application\n2. Open a terminal and run `ollama run <MODEL_NAME>`. Example model names are `codellama:7b-instruct` or `llama2:7b-text`. You can find the full list [here](https://ollama.ai/library).\n3. Make sure that the model name used in step 2 is the same as the one in config.py (e.g. `model=\"codellama:7b-instruct\"`)\n4. Once the model has finished downloading, you can start asking questions through Continue.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/ollama)",
-"### Huggingface TGI\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/huggingfacetgi)",
-"### Huggingface Inference API\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/huggingfaceinferenceapi)",
-"### Llama.cpp\nllama.cpp comes with a [built-in server](https://github.com/ggerganov/llama.cpp/tree/master/examples/server#llamacppexampleserver) that can be run from source. To do this:\n\n1. Clone the repository with `git clone https://github.com/ggerganov/llama.cpp`.\n2. `cd llama.cpp`\n3. Run `make` to build the server.\n4. Download the model you'd like to use and place it in the `llama.cpp/models` directory (the best place to find models is [The Bloke on HuggingFace](https://huggingface.co/TheBloke))\n5. Run the llama.cpp server with the command below (replacing with the model you downloaded):\n\n```shell\n.\\server.exe -c 4096 --host 0.0.0.0 -t 16 --mlock -m models/codellama-7b-instruct.Q8_0.gguf\n```\n\nAfter it's up and running, you can start using Continue.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/llamacpp)",
-"### Replicate\nReplicate is a hosted service that makes it easy to run ML models. To get started with Replicate:\n1. Obtain an API key from [here](https://replicate.com)\n2. Paste below\n3. Select a model preset\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/replicatellm)",
-"### Gemini API\nTo get started with Google Makersuite, obtain your API key from [here](https://makersuite.google.com) and paste it below.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/googlepalmapi)",
-"### LMStudio\nLMStudio provides a professional and well-designed GUI for exploring, configuring, and serving LLMs. It is available on both Mac and Windows. To get started:\n1. Download from [lmstudio.ai](https://lmstudio.ai/) and open the application\n2. Search for and download the desired model from the home screen of LMStudio.\n3. In the left-bar, click the '<->' icon to open the Local Inference Server and press 'Start Server'.\n4. Once your model is loaded and the server has started, you can begin using Continue.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/lmstudio)",
-"### Llamafile\nTo get started with llamafiles, find and download a binary on their [GitHub repo](https://github.com/Mozilla-Ocho/llamafile#binary-instructions). Then run it with the following command:\n\n```shell\nchmod +x ./llamafile\n./llamafile\n```\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/llamafile)",
+"### Together\nTogether is a hosted service that provides extremely fast streaming of open-source language models. To get started with Together:\n1. Obtain an API key from [here](https://together.ai)\n2. Paste below\n3. Select a model preset\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/togetherllm)",
+"### Ollama\nTo get started with Ollama, follow these steps:\n1. Download from [ollama.ai](https://ollama.ai/) and open the application\n2. Open a terminal and run `ollama run <MODEL_NAME>`. Example model names are `codellama:7b-instruct` or `llama2:7b-text`. You can find the full list [here](https://ollama.ai/library).\n3. Make sure that the model name used in step 2 is the same as the one in config.py (e.g. `model=\"codellama:7b-instruct\"`)\n4. Once the model has finished downloading, you can start asking questions through Continue.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/ollama)",
+"### Huggingface TGI\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/huggingfacetgi)",
+"### Huggingface Inference API\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/huggingfaceinferenceapi)",
+"### Llama.cpp\nllama.cpp comes with a [built-in server](https://github.com/ggerganov/llama.cpp/tree/master/examples/server#llamacppexampleserver) that can be run from source. To do this:\n\n1. Clone the repository with `git clone https://github.com/ggerganov/llama.cpp`.\n2. `cd llama.cpp`\n3. Run `make` to build the server.\n4. Download the model you'd like to use and place it in the `llama.cpp/models` directory (the best place to find models is [The Bloke on HuggingFace](https://huggingface.co/TheBloke))\n5. Run the llama.cpp server with the command below (replacing with the model you downloaded):\n\n```shell\n.\\server.exe -c 4096 --host 0.0.0.0 -t 16 --mlock -m models/codellama-7b-instruct.Q8_0.gguf\n```\n\nAfter it's up and running, you can start using Continue.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/llamacpp)",
+"### Replicate\nReplicate is a hosted service that makes it easy to run ML models. To get started with Replicate:\n1. Obtain an API key from [here](https://replicate.com)\n2. Paste below\n3. Select a model preset\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/replicatellm)",
+"### Gemini API\nTo get started with Google Makersuite, obtain your API key from [here](https://makersuite.google.com) and paste it below.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/googlepalmapi)",
+"### LMStudio\nLMStudio provides a professional and well-designed GUI for exploring, configuring, and serving LLMs. It is available on both Mac and Windows. To get started:\n1. Download from [lmstudio.ai](https://lmstudio.ai/) and open the application\n2. Search for and download the desired model from the home screen of LMStudio.\n3. In the left-bar, click the '<->' icon to open the Local Inference Server and press 'Start Server'.\n4. Once your model is loaded and the server has started, you can begin using Continue.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/lmstudio)",
+"### Llamafile\nTo get started with llamafiles, find and download a binary on their [GitHub repo](https://github.com/Mozilla-Ocho/llamafile#binary-instructions). Then run it with the following command:\n\n```shell\nchmod +x ./llamafile\n./llamafile\n```\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/llamafile)",
 "### Mistral API\n\nTo get access to the Mistral API, obtain your API key from the [Mistral platform](https://docs.mistral.ai/)",
-"### DeepInfra\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/deepinfra)"
+"### DeepInfra\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/deepinfra)"
 ],
 "type": "string"
 },
@@ -215,7 +215,7 @@
 },
 "promptTemplates": {
 "title": "Prompt Templates",
-"markdownDescription": "A mapping of prompt template name ('edit' is currently the only one used in Continue) to a string giving the prompt template. See [here](https://continue.dev/docs/model-setup/configuration#customizing-the-edit-prompt) for an example.",
+"markdownDescription": "A mapping of prompt template name ('edit' is currently the only one used in Continue) to a string giving the prompt template. See [here](https://docs.continue.dev/model-setup/configuration#customizing-the-edit-prompt) for an example.",
 "type": "object",
 "additionalProperties": {
 "type": "string"
@@ -1132,9 +1132,7 @@
 "if": {
 "properties": {
 "name": {
-"enum": [
-"share"
-]
+"enum": ["share"]
 }
 }
 },
@@ -1601,13 +1599,13 @@
 "properties": {
 "allowAnonymousTelemetry": {
 "title": "Allow Anonymous Telemetry",
-"markdownDescription": "If this field is set to True, we will collect anonymous telemetry as described in the documentation page on telemetry. If set to `false`, we will not collect any data. Learn more in [the docs](https://continue.dev/docs/telemetry).",
+"markdownDescription": "If this field is set to True, we will collect anonymous telemetry as described in the documentation page on telemetry. If set to `false`, we will not collect any data. Learn more in [the docs](https://docs.continue.dev/telemetry).",
 "default": true,
 "type": "boolean"
 },
 "models": {
 "title": "Models",
-"markdownDescription": "Learn about setting up models in [the documentation](https://continue.dev/docs/model-setup/overview).",
+"markdownDescription": "Learn about setting up models in [the documentation](https://docs.continue.dev/model-setup/overview).",
 "default": [
 {
 "title": "GPT-4 (trial)",
@@ -1655,7 +1653,7 @@
 },
 "slashCommands": {
 "title": "Slash Commands",
-"markdownDescription": "An array of slash commands that let you take custom actions from the sidebar. Learn more in the [documentation](https://continue.dev/docs/customization/slash-commands).",
+"markdownDescription": "An array of slash commands that let you take custom actions from the sidebar. Learn more in the [documentation](https://docs.continue.dev/customization/slash-commands).",
 "default": [],
 "type": "array",
 "items": {
@@ -1664,7 +1662,7 @@
 },
 "customCommands": {
 "title": "Custom Commands",
-"markdownDescription": "An array of custom commands that allow you to reuse prompts. Each has name, description, and prompt properties. When you enter /<name> in the text input, it will act as a shortcut to the prompt. Learn more in the [documentation](https://continue.dev/docs/customization/slash-commands#custom-commands-use-natural-language).",
+"markdownDescription": "An array of custom commands that allow you to reuse prompts. Each has name, description, and prompt properties. When you enter /<name> in the text input, it will act as a shortcut to the prompt. Learn more in the [documentation](https://docs.continue.dev/customization/slash-commands#custom-commands-use-natural-language).",
 "default": [
 {
 "name": "test",
@@ -1679,7 +1677,7 @@
       },
       "contextProviders": {
         "title": "Context Providers",
-        "markdownDescription": "A list of ContextProvider objects that can be used to provide context to the LLM by typing '@'. Read more about ContextProviders in [the documentation](https://continue.dev/docs/customization/context-providers).",
+        "markdownDescription": "A list of ContextProvider objects that can be used to provide context to the LLM by typing '@'. Read more about ContextProviders in [the documentation](https://docs.continue.dev/customization/context-providers).",
         "default": [],
         "type": "array",
         "items": {
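A `contextProviders` sketch, assuming built-in provider names such as `diff` and `open` (the actual list of names lives on the linked documentation page, not in this hunk):

```json
{
  "contextProviders": [
    { "name": "diff" },
    { "name": "open" }
  ]
}
```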
@@ -1717,11 +1715,17 @@
       },
       "embeddingsProvider": {
         "title": "Embeddings Provider",
-        "markdownDescription": "The method that will be used to generate codebase embeddings. The default is transformers.js, which will run locally in the browser. Learn about the other options [here](https://continue.dev/docs/walkthroughs/codebase-embeddings#embeddings-providers).",
+        "markdownDescription": "The method that will be used to generate codebase embeddings. The default is transformers.js, which will run locally in the browser. Learn about the other options [here](https://docs.continue.dev/walkthroughs/codebase-embeddings#embeddings-providers).",
         "type": "object",
         "properties": {
           "provider": {
-            "enum": ["transformers.js", "ollama", "openai", "cohere", "free-trial"]
+            "enum": [
+              "transformers.js",
+              "ollama",
+              "openai",
+              "cohere",
+              "free-trial"
+            ]
           },
           "model": {
             "type": "string"
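Combining the two properties in this hunk, an `embeddingsProvider` block could look like the sketch below; `ollama` is one of the enum values above, while the model name is an illustrative assumption.

```json
{
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}
```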
@@ -1875,7 +1879,7 @@
     "tabAutocompleteOptions": {
       "title": "TabAutocompleteOptions",
       "type": "object",
-      "markdownDescription": "These options let you customize your tab-autocomplete experience. Read about all options in [the docs](https://continue.dev/docs/walkthroughs/tab-autocomplete#configuration-options).",
+      "markdownDescription": "These options let you customize your tab-autocomplete experience. Read about all options in [the docs](https://docs.continue.dev/walkthroughs/tab-autocomplete#configuration-options).",
       "properties": {
         "disable": {
           "type": "boolean",
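Only the `disable` property is visible in this hunk, so a minimal `tabAutocompleteOptions` entry using it would be:

```json
{
  "tabAutocompleteOptions": {
    "disable": false
  }
}
```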
File diff suppressed because it is too large
@@ -76,4 +76,4 @@ print_sum(["a", "b", "c"])

 # endregion

-# Ready to learn more? Check out the Continue documentation: https://continue.dev/docs
+# Ready to learn more? Check out the Continue documentation: https://docs.continue.dev
@@ -58,7 +58,7 @@
         "continue.telemetryEnabled": {
           "type": "boolean",
           "default": true,
-          "markdownDescription": "Continue collects anonymous usage data, cleaned of PII, to help us improve the product for our users. Read more at [continue.dev › Telemetry](https://continue.dev/docs/telemetry)."
+          "markdownDescription": "Continue collects anonymous usage data, cleaned of PII, to help us improve the product for our users. Read more at [continue.dev › Telemetry](https://docs.continue.dev/telemetry)."
         },
         "continue.showInlineTip": {
           "type": "boolean",
@@ -68,7 +68,7 @@
         "continue.enableTabAutocomplete": {
           "type": "boolean",
           "default": true,
-          "markdownDescription": "Enable Continue's tab autocomplete feature. Read our walkthrough to learn about configuration and how to share feedback: [continue.dev › Walkthrough: Tab Autocomplete](https://continue.dev/docs/walkthroughs/tab-autocomplete)"
+          "markdownDescription": "Enable Continue's tab autocomplete feature. Read our walkthrough to learn about configuration and how to share feedback: [continue.dev › Walkthrough: Tab Autocomplete](https://docs.continue.dev/walkthroughs/tab-autocomplete)"
         },
         "continue.remoteConfigServerUrl": {
           "type": "string",
@@ -24,7 +24,7 @@ export class ContinueCompletionProvider
     if (val === "Documentation") {
       vscode.env.openExternal(
         vscode.Uri.parse(
-          "https://continue.dev/docs/walkthroughs/tab-autocomplete",
+          "https://docs.continue.dev/walkthroughs/tab-autocomplete",
         ),
       );
     } else if (val === "Download Ollama") {
@@ -40,7 +40,7 @@ export class TabAutocompleteModel {
     if (value === "Documentation") {
       vscode.env.openExternal(
         vscode.Uri.parse(
-          "https://continue.dev/docs/walkthroughs/tab-autocomplete",
+          "https://docs.continue.dev/walkthroughs/tab-autocomplete",
        ),
      );
    } else if (value === "Copy Command") {
@@ -63,7 +63,7 @@ export class TabAutocompleteModel {
     if (value === "Documentation") {
       vscode.env.openExternal(
         vscode.Uri.parse(
-          "https://continue.dev/docs/walkthroughs/tab-autocomplete",
+          "https://docs.continue.dev/walkthroughs/tab-autocomplete",
        ),
      );
    } else if (value === "Download Ollama") {
@@ -117,9 +117,9 @@ export class VsCodeWebviewProtocol {
         let message = e.message;
         if (e.cause) {
           if (e.cause.name === "ConnectTimeoutError") {
-            message = `Connection timed out. If you expect it to take a long time to connect, you can increase the timeout in config.json by setting "requestOptions": { "timeout": 10000 }. You can find the full config reference here: https://continue.dev/docs/reference/config`;
+            message = `Connection timed out. If you expect it to take a long time to connect, you can increase the timeout in config.json by setting "requestOptions": { "timeout": 10000 }. You can find the full config reference here: https://docs.continue.dev/reference/config`;
           } else if (e.cause.code === "ECONNREFUSED") {
-            message = `Connection was refused. This likely means that there is no server running at the specified URL. If you are running your own server you may need to set the "apiBase" parameter in config.json. For example, you can set up an OpenAI-compatible server like here: https://continue.dev/docs/reference/Model%20Providers/openai#openai-compatible-servers--apis`;
+            message = `Connection was refused. This likely means that there is no server running at the specified URL. If you are running your own server you may need to set the "apiBase" parameter in config.json. For example, you can set up an OpenAI-compatible server like here: https://docs.continue.dev/reference/Model%20Providers/openai#openai-compatible-servers--apis`;
           } else {
             message = `The request failed with "${e.cause.name}": ${e.cause.message}. If you're having trouble setting up Continue, please see the troubleshooting guide for help.`;
           }
@@ -134,7 +134,7 @@ export class VsCodeWebviewProtocol {
           );
         } else if (selection === "Troubleshooting") {
           vscode.env.openExternal(
-            vscode.Uri.parse("https://continue.dev/docs/troubleshooting"),
+            vscode.Uri.parse("https://docs.continue.dev/troubleshooting"),
           );
         }
       });
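The two error messages above name concrete `config.json` settings. Here is a sketch combining them for a self-hosted OpenAI-compatible server: `"requestOptions": { "timeout": 10000 }` and `apiBase` are taken verbatim from the messages, while the title, model name, localhost URL, and the per-model placement are illustrative assumptions.

```json
{
  "models": [
    {
      "title": "My OpenAI-compatible server",
      "provider": "openai",
      "model": "local-model",
      "apiBase": "http://localhost:8000/v1",
      "requestOptions": { "timeout": 10000 }
    }
  ]
}
```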
@@ -27,7 +27,7 @@ function FTCDialog() {
             OpenAI API key. To keep using Continue, you can either use your own API
             key, or use a local LLM. To read more about the options, see our{" "}
             <a
-              href="https://continue.dev/docs/customization/models"
+              href="https://docs.continue.dev/customization/models"
               target="_blank"
             >
               documentation
@@ -46,7 +46,7 @@ const ProgressBar = ({ completed, total }: ProgressBarProps) => {
   return (
     <>
       <a
-        href="https://continue.dev/docs/reference/Model%20Providers/freetrial"
+        href="https://docs.continue.dev/reference/Model%20Providers/freetrial"
         className="no-underline ml-2"
       >
         <GridDiv data-tooltip-id="usage_progress_bar">
@@ -144,7 +144,7 @@ export function getMentionSuggestion(
       action: () => {
         ideRequest(
           "openUrl",
-          "https://continue.dev/docs/customization/context-providers#built-in-context-providers",
+          "https://docs.continue.dev/customization/context-providers#built-in-context-providers",
         );
       },
       description: "",
@@ -112,7 +112,7 @@ function HelpPage() {
           </IconDiv>
           <IconDiv backgroundColor={"#1bbe84a8"}>
             <a
-              href="https://continue.dev/docs/how-to-use-continue"
+              href="https://docs.continue.dev/how-to-use-continue"
               target="_blank"
             >
               <svg
@@ -1,6 +1,5 @@
-import React from "react";
-import ContinueButton from "../components/mainInput/ContinueButton";
 import { useNavigate } from "react-router-dom";
+import ContinueButton from "../components/mainInput/ContinueButton";

 function MigrationPage() {
   const navigate = useNavigate();
@@ -23,7 +22,7 @@ function MigrationPage() {
       <p>
         For a summary of what changed and examples of <code>config.json</code>,
         please see the{" "}
-        <a href="https://continue.dev/docs/walkthroughs/config-file-migration">
+        <a href="https://docs.continue.dev/walkthroughs/config-file-migration">
          migration walkthrough
        </a>
        , and if you have any questions please reach out to us on{" "}
@@ -73,7 +73,7 @@ function Models() {
           <li>a model (the LLM being run, e.g. GPT-4, CodeLlama).</li>
         </ul>
         To read more about the options, check out our{" "}
-        <a href="https://continue.dev/docs/model-setup/overview">overview</a> in
+        <a href="https://docs.continue.dev/model-setup/overview">overview</a> in
         the docs.
       </IntroDiv>
       {providersSelected ? (
@@ -84,7 +84,7 @@ function Models() {
               description={modelInfo.description}
               tags={modelInfo.tags}
               icon={modelInfo.icon}
-              refUrl={`https://continue.dev/docs/reference/Model%20Providers/${
+              refUrl={`https://docs.continue.dev/reference/Model%20Providers/${
                 modelInfo.refPage || modelInfo.provider.toLowerCase()
               }`}
               onClick={(e) => {
@@ -81,7 +81,7 @@ function Onboarding() {
       )}
       <br></br>
       {/* <p>
-        <a href="https://continue.dev/docs/customization/overview">
+        <a href="https://docs.continue.dev/customization/overview">
          Read the docs
        </a>{" "}
        to learn more and fully customize Continue by opening config.json.
@@ -101,7 +101,7 @@ function Onboarding() {
           <h3>⚙️ Your own models</h3>
           <p>
             Continue lets you use your own API key or self-hosted LLMs.{" "}
-            <a href="https://continue.dev/docs/customization/overview">
+            <a href="https://docs.continue.dev/customization/overview">
              Read the docs
            </a>{" "}
            to learn more about using config.json to customize Continue. This can
@@ -111,15 +111,15 @@ function Onboarding() {
         {selected === 2 && (
           <p className="px-3">
             Use <code>config.json</code> to configure your own{" "}
-            <a href="https://continue.dev/docs/model-setup/overview">models</a>,{" "}
-            <a href="https://continue.dev/docs/customization/context-providers">
+            <a href="https://docs.continue.dev/model-setup/overview">models</a>,{" "}
+            <a href="https://docs.continue.dev/customization/context-providers">
               context providers
             </a>
             ,{" "}
-            <a href="https://continue.dev/docs/customization/slash-commands">
+            <a href="https://docs.continue.dev/customization/slash-commands">
               slash commands
             </a>
-            , and <a href="https://continue.dev/docs/reference/config">more</a>.
+            , and <a href="https://docs.continue.dev/reference/config">more</a>.
           </p>
         )}
