⚡️ Preview (#1228)
* 🐛 fix off-by-one errors
* 🚧 swap out button on enter
* 💄 styling and auto-resize
* 💄 box shadow
* 🚧 fix keyboard shortcuts to accept/reject diff
* 💄 improve small interactions
* 💄 loading icon, cancellation logic
* 🐛 handle next.value being undefined
* ✨ latex support
* Bug Fix: Add ternary operator to prevent nonexistent value error (#1052)
* add ternary operator
* Removing logging
* remove comment
---------
Co-authored-by: Justin Milner <jmilner@jmilner-lt2.deka.local>
Co-authored-by: Nate Sesti <33237525+sestinj@users.noreply.github.com>
* 🎨 small formatting change
* 🩹 tweak /edit solution
* ✨ Dropdown to select model
* 🔊 print when SSL verification disabled
* 📌 pin esbuild version to match our hosted binary
* 🔥 remove unused package folder
* 👷 add note about pinning esbuild
* 🚚 rename pkg to binary
* ⚡️ update an important stop word for starcoder2, improve dev data
* 🐛 fix autocomplete bug
* Update completionProvider.ts
Add \r\n\r\n stop to tab completion
* 📌 update package-locks
* 🐛 fix bug in edit prompt
* 🔊 log extension version
* 🐛 handle repo undefined in vscode
* ⏪ revert to esbuild ^0.17.19 to solve "no backend found" error with onnxruntime
* 🩹 set default autocomplete temp to 0.01 to be strictly positive
* make the useCopyBuffer option effective (#1062)
* Con-1037: Toggle full screen bug (#1065)
* webview reset
* add warning
---------
Co-authored-by: Justin Milner <jmilner@jmilner-lt2.deka.local>
* Update completionProvider.ts
as @rootedbox suggested
* Resolve conflict, accept branch being merged in (#1076)
* Resolve conflict, accept branch being merged in
* remove accidental .gitignore add
* whoops, put gitignore back
* fix
---------
Co-authored-by: Justin Milner <jmilner@jmilner-lt2.deka.local>
* #1073: update outdated documentation (#1074)
* 🩹 small tweaks to stop words
* Add abstraction for fetch to easily allow using request options (#1059)
* add fetch helper function with request options
* add support for request options for Jira context provider
* Add a new slash command to review code. (#1071)
* Add a new slash command to review code.
* clean code
* 🩹 add new starcoder artifact as stopword
* 💄 slight improvements to inline edit UI
* 🔖 update default models, bump gradle version
* 📝 recommend starcoder2
* 🐛 fix jetbrains encoding issue
* 🩹 don't index site-packages
* 🩹 error handling in JetBrains
* 🐛 fix copy to clipboard in jetbrains
* fix: cursor focus issue causing unwanted return to text area (#1086)
* 📝 mention autocomplete in jetbrains
* 📝 Tab-autocomplete README
* 🔥 remove note about custom ctx providers only being on VS Code
* 📝 docs about http context provider
* 👥 pull request template
* Update from Claude 2 to Claude 3 (#1078)
* 📝 add FAQ about single-line completions
* 📝 update autocomplete docs
* fix cursor focus issue causing unwanted return to text area
---------
Co-authored-by: Nate Sesti <sestinj@gmail.com>
Co-authored-by: Ty Dunn <ty@tydunn.com>
Co-authored-by: Nate Sesti <33237525+sestinj@users.noreply.github.com>
* Update tree-sitter-wasms to 0.1.11 (which includes Solidity)
* Make use of solidity tree-sitter parser
* 🔧 option to disable autocomplete from config.json
* ✨ option to disable streaming with anthropic
* ✅ Test to verify that files are packaged
* Add FIM template for CodeGemma (#1097)
Also pass stop tokens to llama.cpp.
* ✨ customizable rerankers (#1088)
* ✨ customizable rerankers
* 💄 fix early truncation button
* ⚡️ improvements to full text search + reranking
* ⚡️ only use starcoder2 stop words for starcoder2
* ⚡️ crawl code graph for call expressions
* 🚧 starcoder2-7b free trial
* 🚧 free trial client for embeddings and re-ranking
* 🚧 embeddings provider
* ✅ test for presence of files in CI
* 🐛 fixes to reranking
* ✨ new onboarding experience
* ✨ new onboarding experience
* 💄 small tweaks to onboarding
* 🩹 add stopAtLines filter to /edit
* 🐛 clean up vite build errors
* 👷 make vscode external in binary build
* 💄 improved models onboarding for existing users
* 💄 default indexing progress to 0.0
* 🐛 small fixes to reranking
* 👷 clear folders before prepackage
* 👷 say where .vsix is output
* 👷 also download arm packages outside of gh actions
* 🎨 add AbortSignal to indexing
* 🔧 starcoder, not 2 in config_schema
* 🚚 again, starcoder, not 2
* 🐛 fix bug when reranker undefined
* 🩹 fix binary tsc error
* ✨ configure context menu prompts
* 🐛 acknowledge useLegacyCompletionsEndpoint
* 🚑 fix keep existing config option
* 🔊 learn about selection
* ⚡️ improvements to indexing reporting when not in git repo
* 🥅 handle situation where git doesn't exist in workspace
* ✨ support for gemini 1.5 pro
* 🐛 handle embeddingProvider name not found
* ✨ Gemini 1.5 and GPT-4 Turbo
* 👷 fix os, arch undefined in prepackage.js
* ⚡️ better detection of terminal code blocks
* 🧑‍💻 solve tailwind css warnings
* ✨ cmd/ctrl+L to select terminal contents
* 🐛 correctly handle remotes not found
* ✨ allow templating for custom commands
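A minimal sketch of what this enables, grounded in the default-config change later in this changeset (the built-in "test" prompt now begins with `{{{ input }}}`); the actual template engine may differ, so this helper is illustrative only:
```ts
// Hypothetical helper: substitute the user's chat input into a custom
// command prompt at the mustache-style `{{{ input }}}` placeholder.
function renderCustomCommand(prompt: string, input: string): string {
  return prompt.replace(/\{\{\{\s*input\s*\}\}\}/g, input);
}

const testPrompt =
  "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code.";
console.log(renderCustomCommand(testPrompt, "Focus on edge cases in parseDate()"));
// => "Focus on edge cases in parseDate()\n\nWrite a comprehensive set of unit tests..."
```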
* 🔥 temporarily remove cmd+L to select terminal contents
* 🐛 remove quotes around Ollama stop words
* ✨ add Cohere as Model Provider (#1119)
* 🩹 add gpt-4-turbo to list of chat_only models
* feat: use exponential backoff in llm chat (#1115)
Signed-off-by: inimaz <93inigo93@gmail.com>
* 🩹 update exponential backoff timing
* 💄 spell out Alt in keyboard shortcuts
* 🩹 don't set edit prompt for templateType "none"
* Adds additional ignores for C-family langs (#1129)
Ignored:
- cache directory `.cache`, used by clangd
- dependency files `*o.d`, used by object files
- LLVM and GNU coverage files: `*.profraw`, `*.gcda` and `*.gcno`
* 🔥 temporarily remove problematic expandSnippet import
* 👷 add npx to prefix vsce in build
* 🐛 handle messages sent in multiple parts over stdin
* 🔖 update gradle version
* 🩹 for now, skip onboarding in jetbrains
* 🩹 temporarily don't show use codebase on jetbrains
* 🐛 use system certificates in binary
* 🔖 update jetbrains version
* 🩹 correctly construct set of certs
* 🔖 bump intellij version to 0.0.45
* 🩹 update to support images for gpt-4-turbo
* 🐛 fix image support autodetection
* ⚡️ again, improve image support autodetection
* 🐛 set supportsCompletions based on useLegacyCompletionsEndpoint model setting
Closes #1132
* 📝 useLegacyCompletionsEndpoint within OpenAI docs
* 🔧 forceCompletionsEndpointType option
* Revert "🔧 forceCompletionsEndpointType option"
This reverts commit dd51fcbb7f.
* 🩹 set default useLegacyCompletionsEndpoint to undefined
* 🩹 look for bedrock credentials in homedir
* 🩹 use title for autodetect
* Fix slash command params loading
Existing slash commands expect an object named
"params" so mapping to "options" here caused
params to be undefined within the run scope. I
renamed from 'm' to 's' just to avoid potential
confusion with the model property mapping above.
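A hedged sketch of the bug and fix (interface and field shapes simplified from the config types in the diff below): slash commands destructure `params` in their run scope, so serializing the field under any other key leaves it undefined.
```ts
interface SlashCommandDescription {
  name: string;
  description: string;
  params?: { [key: string]: unknown };
}

// Simplified version of the browser-config mapping: the key must stay
// `params`, because run({ params }) reads it by that name.
function toBrowserConfig(s: SlashCommandDescription) {
  return {
    name: s.name,
    description: s.description,
    params: s.params, // previously `options: s.params`, which broke run({ params })
  };
}

const share = toBrowserConfig({
  name: "share",
  description: "Export the current chat session to markdown",
  params: { outputDir: "~/sessions" },
});
console.log(share.params?.outputDir); // "~/sessions" — defined again in the run scope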
* Add outputDir param to /share
* Enable basic tilde expansion for /share outputDir
* Add ability to specify workspace for /share
* Add datetimestamp to exported session filename
* Use `.`, `./`, or `.\` for current workspace
* Add description of outputDir param for /share
* Ensure replacement only at start of string
* Create user-specified directory if necessary
* Change "Continue" to "Assistant" in export
* Consolidate to single replace regex
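Taken together, these commits resolve the /share `outputDir` roughly as sketched below (simplified from the slash-command diff later in this message; the workspace lookup is reduced to a plain parameter):
```ts
import { homedir } from "os";

// Expand a user-supplied outputDir: `~` maps to the home directory, and
// `.`, `./`, or `.\` anchor the path to the current workspace folder.
function resolveOutputDir(outputDir: string, workspaceDir: string): string {
  if (outputDir.startsWith("~")) {
    return outputDir.replace(/^~/, homedir());
  }
  if (
    outputDir === "." ||
    outputDir.startsWith("./") ||
    outputDir.startsWith(".\\")
  ) {
    // Only the leading "." is replaced, so "./notes" -> "<workspace>/notes".
    return outputDir.replace(/^./, workspaceDir);
  }
  return outputDir;
}

console.log(resolveOutputDir("./exports", "/home/me/project")); // "/home/me/project/exports"
```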
* Reformat markdown code blocks
Currently, user-selected code blocks are formatted
with range in file (rif) info on the same line as
the triple backticks, which means that when
exported to markdown they don't have the language
info needed on that line for syntax highlighting.
This update moves the rif info to the following
line as a comment in the language of the file and
with the language info in the correct place.
Before:
```example.ts (3-6)
function fib(n) {
if (n <= 1) return n;
return fib(n - 2) + fib(n - 1);
}
```
After:
```ts
// example.ts (3-6)
function fib(n) {
if (n <= 1) return n;
return fib(n - 2) + fib(n - 1);
}
```
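A runnable sketch of that transformation (the regex mirrors the `reformatCodeBlocks` helper in the /share diff below; using `//` as the comment leader here is a simplification, since the real helper picks the comment syntax per language):
```ts
const FENCE = "`".repeat(3);

// Move "file (range)" metadata off the fence line into a comment, leaving
// only the language tag on the fence so markdown highlighting works.
function reformatCodeBlocks(msgText: string): string {
  // Captures: metadata = "example.ts (3-6)", filename = "example.ts", extension = "ts"
  const codeBlockFenceRegex = new RegExp(FENCE + "((.*?\\.(\\w+))\\s*.*)\\n", "g");
  return msgText.replace(
    codeBlockFenceRegex,
    (_match, metadata, _filename, extension) =>
      FENCE + extension + "\n// " + metadata + "\n",
  );
}

const before = FENCE + "example.ts (3-6)\nfunction fib(n) { return n; }\n" + FENCE + "\n";
console.log(reformatCodeBlocks(before));
// The fence line now carries only the language tag ("ts"), with the
// file/range info moved to a comment on the next line.
```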
* Tidy regex to capture filename
* Tidy regex to capture filename
* Ensure adjacent codeblocks separated by newline
* Aesthetic tweaks to output format
* ✨ disableInFiles option for autocomplete
* feat(httpContextProvider): load AC on fetch client (#1150)
Co-authored-by: Bertrand Pinel <bertrand.pinel@pole-emploi.fr>
* ✨ global filewatcher for config.json/ts changes
* 🐛 retry webview requests so that first cmd+L works
* ✨ Improved onboarding experience (#1155)
* 🚸 onboarding improvements
* 🧑‍💻 keyboard shortcuts to toggle autocomplete and open config.json
* ⚡️ improve detection of terminal code blocks
* 🚧 onboarding improvements
* 🚧 more onboarding improvements
* 💄 last session button
* 🚸 show more fallback options in dropdown
* 💄 add sectioning to models page
* 💄 clean up delete model button
* 💄 make tooltip look nicer
* 🚸 download Ollama button
* 💄 local LLM onboarding
* 🐛 select correct terminal on "runCommand" message
* 💄 polish onboarding
* 💚 fix gui build errors
* 📝 add /v1 to OpenAI examples in docs
* 🚑 hotfix for "not iterable" error
* ✨ add Cohere as Embeddings Provider
* 💄 add llama3 to UI
* 🔥 remove disable indexing
* 🍱 update continue logo
* 🐛 fix language undefined bug
* 🐛 fix merge mistake
* 📝 update mistral models
* ✨ global request options (#1153)
* ✨ global request options
* 🐛 fix jira context provider by injecting fetch
* ✨ request options for embeddings providers
* ✨ add Cohere as Reranker (#1159)
* ♻️ use custom requestOptions with CohereEmbeddingsProvider
* Update preIndexedDocs.ts (#1154)
Add WordPress and WooCommerce as preIndexedDocs.
* 🩹 remove example "outputDir" from default config
* Fix slash command params loading (#1084)
Existing slash commands expect an object named
"params" so mapping to "options" here caused
params to be undefined within the run scope. I
renamed from 'm' to 's' just to avoid potential
confusion with the model property mapping above.
* 🐛 don't index if no open workspace folders
* 💄 improve onboarding language
* 🚸 improve onboarding
* 🐛 stop loading when error
* 💄 replace text in input box
* Respect Retry-After header when available from 429 responses (#1182)
* 🩹 remove dead code for exponential backoff
This has been replaced by the withExponentialBackoff helper
* 🩹 respect Retry-After header when available
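A self-contained sketch of the combined behavior (names here are illustrative; the real helper is the `withExponentialBackoff` wrapper mentioned above): prefer the server's `Retry-After` value on a 429, and fall back to exponential delays when the header is absent.
```ts
// Illustrative retry loop: respect Retry-After (seconds) on HTTP 429,
// otherwise back off exponentially (1s, 2s, 4s, ...).
async function fetchWithRetry(url: string, maxRetries = 5): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const resp = await fetch(url);
    if (resp.status !== 429 || attempt >= maxRetries) {
      return resp;
    }
    const retryAfter = Number(resp.headers.get("Retry-After"));
    const delaySeconds =
      Number.isFinite(retryAfter) && retryAfter > 0 ? retryAfter : 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, delaySeconds * 1000));
  }
}
```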
* 🚸 update inline tips language
* ✨ input box history
* 📌 update package-locks
* 🔊 log errors in prepackage
* 🐛 err to string
* 📌 pin llama-tokenizer-js
* 📌 update lockfile
* 🚚 change continue.dev/docs to docs.continue.dev
* 📦 package win-ca dependencies in binary
* 🔥 remove unpopular models from UI
* 🍱 new logo in jetbrains
* 🎨 use node-fetch everywhere
* 🚸 immediately select newly added models
* 🚸 spell out Alt instead of using symbol
* 🔥 remove config shortcut
* 🐛 fix changing model bug
* 🩹 de-duplicate before adding models
* 🔧 add embeddingsProvider specific request options
* 🎨 refactor to always use node-fetch from LLM
* 🔥 remove duplicate tokens generated
* 🔊 add timestamp to JetBrains logs
* 🎨 maxStopWords for Groq
* 🐛 fix groq provider calling /completions
* 🐛 correctly adhere to LanceDB table name spec
* 🐛 fix sqlite NOT NULL constraint failed error with custom model
* Fix issue where Accept/Reject All only accepts/rejects a single diff hunk. (#1197)
* Fix issues parsing Ollama /api/show endpoint payloads. (#1199)
* ✨ model role for inlineEdit
* 🩹 various small updates
* 🐛 fix openai image support
* 🔖 update gradle version
* 🍱 update jetbrains icon
* 🐛 fix autocomplete in notebook cells
* 🔥 remove unused media
* 🔥 remove unused files
* Fix schema to allow 'AUTODETECT' sentinel for model when provider is 'groq'. (#1203)
* 🐛 small improvements
* Fix issue with @codebase provider when n becomes odd due to a divide by 2 during the full text search portion of the query. (#1204)
* 🐛 add skipLines
* ✨ URLContextProvider
* 🥅 improved error handling for codebase indexing
* 🏷️ use official Git extension types
* ➕ declare vscode.git extension dependency
* ⚡️ use reranker for docs context provider
* 🚸 Use templating in default customCommand
* 🎨 use U+23CE
* 🚸 disable autocomplete in commit message box
* 🩹 add gems to default ignored paths
* ⚡️ format markdown blocks as comments in .ipynb completions
* 🐛 don't strip port in URL
* 🐛 fix "gemini" provider spacing issues
* 📦 update posthog version
* 🏷️ update types.ts
* 🐛 fix copy/paste/cut behavior in VS Code notebooks
* ✨ llama3 prompt template
* 🐛 fix undefined prefix, suffix and language for `/edit` (#1216)
* 🐛 add .bind to fix templating in systemMessage
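For context, a small illustration of why `.bind` is needed when passing methods around (the config-handler diff below passes `this.ide.readFile.bind(this.ide)` for the same reason); the `Ide` class here is hypothetical:
```ts
class Ide {
  root = "/workspace";
  readFile(path: string): string {
    return `${this.root}/${path}`;
  }
}

const ide = new Ide();
const detached = ide.readFile; // loses `this` when invoked later
const bound = ide.readFile.bind(ide); // keeps `this` attached

console.log(bound("config.json")); // "/workspace/config.json"
// detached("config.json") would throw: `this` is undefined inside the method
```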
* 🐛 small improvements to autocomplete
* Update DocsContextProvider.ts (#1217)
I fixed a bug where the query variable (which holds the base URL of the doc) was being sent to the rerank method; it made no sense to rerank the chunks based on a URL. I changed it to extras.fullInput, since reranking should be based on the user input, which should provide better results.
* 📝 select-provider.md update
* 🐛 fix merge errors
---------
Signed-off-by: inimaz <93inigo93@gmail.com>
Co-authored-by: Justin Milner <42585006+justinmilner1@users.noreply.github.com>
Co-authored-by: Justin Milner <jmilner@jmilner-lt2.deka.local>
Co-authored-by: lmaosweqf1 <138042737+lmaosweqf1@users.noreply.github.com>
Co-authored-by: ading2210 <71154407+ading2210@users.noreply.github.com>
Co-authored-by: Martin Mois <martin.mois@googlemail.com>
Co-authored-by: Tobias Jung <102594442+tobiajung@users.noreply.github.com>
Co-authored-by: Jason Jacobs <nerfnerd@gmail.com>
Co-authored-by: Nithish <83941930+Nithishvb@users.noreply.github.com>
Co-authored-by: Ty Dunn <ty@tydunn.com>
Co-authored-by: Riccardo Schirone <riccardo.schirone@trailofbits.com>
Co-authored-by: postmasters <namnguyen@google.com>
Co-authored-by: Maxime Brunet <max@brnt.mx>
Co-authored-by: inimaz <49730431+inimaz@users.noreply.github.com>
Co-authored-by: SR_team <me@sr.team>
Co-authored-by: Roger Meier <r.meier@siemens.com>
Co-authored-by: Peter Zaback <pzaback@gmail.com>
Co-authored-by: Bertrand P <49525332+Berber31@users.noreply.github.com>
Co-authored-by: Bertrand Pinel <bertrand.pinel@pole-emploi.fr>
Co-authored-by: Jose Vega <bloguea.y.gana@gmail.com>
Co-authored-by: Nejc Habjan <hab.nejc@gmail.com>
Co-authored-by: Chad Yates <cyates@dynfxdigital.com>
Co-authored-by: 小颚虫 <123357481+5eqn@users.noreply.github.com>
@@ -15,7 +15,7 @@ body:
        required: false
      - label: I'm not able to find an [open issue](https://github.com/continuedev/continue/issues?q=is%3Aopen+is%3Aissue) that reports the same bug
        required: false
-      - label: I've seen the [troubleshooting guide](https://continue.dev/docs/troubleshooting) on the Continue Docs
+      - label: I've seen the [troubleshooting guide](https://docs.continue.dev/troubleshooting) on the Continue Docs
        required: false
  - type: textarea
    attributes:

@@ -58,5 +58,5 @@ body:
    attributes:
      label: Log output
      description: |
-        Please refer to the [troubleshooting guide](https://continue.dev/docs/troubleshooting) in the Continue Docs for instructions on obtaining the logs. Copy either the relevant lines or the last 100 lines or so.
+        Please refer to the [troubleshooting guide](https://docs.continue.dev/troubleshooting) in the Continue Docs for instructions on obtaining the logs. Copy either the relevant lines or the last 100 lines or so.
      render: Shell
@@ -152,4 +152,6 @@ continue_server.build
 continue_server.dist
 
 Icon
 Icon?
+
+.continue
@@ -1,4 +1,5 @@
 {
   "tabWidth": 2,
-  "useTabs": false
+  "useTabs": false,
+  "trailingComma": "all"
 }
@@ -48,7 +48,7 @@ Continue is quickly adding features, and we'd love to hear which are the most im
 
 ## 📖 Updating / Improving Documentation
 
-Continue is continuously improving, but a feature isn't complete until it is reflected in the documentation! If you see something out-of-date or missing, you can help by clicking "Edit this page" at the bottom of any page on [continue.dev/docs](https://continue.dev/docs).
+Continue is continuously improving, but a feature isn't complete until it is reflected in the documentation! If you see something out-of-date or missing, you can help by clicking "Edit this page" at the bottom of any page on [docs.continue.dev](https://docs.continue.dev).
 
 ## 🧑‍💻 Contributing Code
@@ -8,7 +8,7 @@
 <div align="center">
 
-**[Continue](https://continue.dev/docs) keeps developers in flow. Our open-source [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue-extension) extensions enable you to easily create your own modular AI software development system that you can improve.**
+**[Continue](https://docs.continue.dev) keeps developers in flow. Our open-source [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue-extension) extensions enable you to easily create your own modular AI software development system that you can improve.**
 
 </div>

@@ -17,7 +17,7 @@
 <a target="_blank" href="https://opensource.org/licenses/Apache-2.0" style="background:none">
     <img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" style="height: 22px;" />
 </a>
-<a target="_blank" href="https://continue.dev/docs" style="background:none">
+<a target="_blank" href="https://docs.continue.dev" style="background:none">
     <img src="https://img.shields.io/badge/continue_docs-%23BE1B55" style="height: 22px;" />
 </a>
 <a target="_blank" href="https://discord.gg/vapESyrFmJ" style="background:none">
@@ -11,7 +11,10 @@
     "../../../core/node_modules/sqlite3/**/*",
     "../../node_modules/@lancedb/vectordb-win32-x64-msvc/index.node",
     "../../out/tree-sitter.wasm",
-    "../../out/tree-sitter-wasms/*"
+    "../../out/tree-sitter-wasms/*",
+    "../../node_modules/win-ca/lib/crypt32-ia32.node",
+    "../../node_modules/win-ca/lib/crypt32-x64.node",
+    "../../node_modules/win-ca/lib/roots.exe"
   ],
   "targets": [
     "node18-win-arm64"
@@ -11,7 +11,10 @@
     "../../../core/node_modules/sqlite3/**/*",
     "../../node_modules/@lancedb/vectordb-win32-x64-msvc/index.node",
     "../../out/tree-sitter.wasm",
-    "../../out/tree-sitter-wasms/*"
+    "../../out/tree-sitter-wasms/*",
+    "../../node_modules/win-ca/lib/crypt32-ia32.node",
+    "../../node_modules/win-ca/lib/crypt32-x64.node",
+    "../../node_modules/win-ca/lib/roots.exe"
   ],
   "targets": [
     "node18-win-x64"
@@ -6,6 +6,7 @@ import { indexDocs } from "core/indexing/docs";
 import TransformersJsEmbeddingsProvider from "core/indexing/embeddings/TransformersJsEmbeddingsProvider";
 import { CodebaseIndexer, PauseToken } from "core/indexing/indexCodebase";
 import { logDevData } from "core/util/devdata";
+import { fetchwithRequestOptions } from "core/util/fetchWithOptions";
 import historyManager from "core/util/history";
 import { Message } from "core/util/messenger";
 import { Telemetry } from "core/util/posthog";

@@ -138,7 +139,11 @@ export class Core {
       const config = await this.config();
       const items = config.contextProviders
         ?.find((provider) => provider.description.title === msg.data.title)
-        ?.loadSubmenuItems({ ide: this.ide });
+        ?.loadSubmenuItems({
+          ide: this.ide,
+          fetch: (url, init) =>
+            fetchwithRequestOptions(url, init, config.requestOptions),
+        });
       return items || [];
     });
     on("context/getContextItems", async (msg) => {

@@ -160,6 +165,8 @@ export class Core {
         ide,
         selectedCode: msg.data.selectedCode,
         reranker: config.reranker,
+        fetch: (url, init) =>
+          fetchwithRequestOptions(url, init, config.requestOptions),
       });
 
       Telemetry.capture("useContextProvider", {

@@ -278,6 +285,8 @@ export class Core {
         },
         selectedCode,
         config,
+        fetch: (url, init) =>
+          fetchwithRequestOptions(url, init, config.requestOptions),
       })) {
         if (content) {
           yield { content };
@@ -12,7 +12,10 @@ export class IpcMessenger {
   constructor() {
     const logger = (message: any, ...optionalParams: any[]) => {
       const logFilePath = getCoreLogsPath();
-      const logMessage = `${message} ${optionalParams.join(" ")}\n`;
+      const timestamp = new Date().toISOString().split(".")[0];
+      const logMessage = `[${timestamp}] ${message} ${optionalParams.join(
+        " ",
+      )}\n`;
       fs.appendFileSync(logFilePath, logMessage);
     };
     console.log = logger;
@@ -38,6 +38,10 @@ export interface AutocompleteInput {
   recentlyEditedFiles: RangeInFileWithContents[];
   recentlyEditedRanges: RangeInFileWithContents[];
   clipboardText: string;
+  // Used for notebook files
+  manuallyPassFileContents?: string;
+  // Used for VS Code git commit input box
+  manuallyPassPrefix?: string;
 }
 
 export interface AutocompleteOutcome extends TabAutocompleteOptions {

@@ -57,13 +61,20 @@ const DOUBLE_NEWLINE = "\n\n";
 const WINDOWS_DOUBLE_NEWLINE = "\r\n\r\n";
 const SRC_DIRECTORY = "/src/";
 // Starcoder2 tends to output artifacts starting with the letter "t"
-const STARCODER2_T_ARTIFACTS = ["t.", "\nt"];
+const STARCODER2_T_ARTIFACTS = ["t.", "\nt", "<file_sep>"];
 const PYTHON_ENCODING = "#- coding: utf-8";
 const CODE_BLOCK_END = "```";
 
 const multilineStops = [DOUBLE_NEWLINE, WINDOWS_DOUBLE_NEWLINE];
 const commonStops = [SRC_DIRECTORY, PYTHON_ENCODING, CODE_BLOCK_END];
 
+// Errors that can be expected on occasion even during normal functioning should not be shown.
+// Not worth disrupting the user to tell them that a single autocomplete request didn't go through
+const ERRORS_TO_IGNORE = [
+  // From Ollama
+  "unexpected server status",
+];
+
 function formatExternalSnippet(
   filepath: string,
   snippet: string,

@@ -105,8 +116,11 @@ export async function getTabCompletion(
     recentlyEditedFiles,
     recentlyEditedRanges,
     clipboardText,
+    manuallyPassFileContents,
+    manuallyPassPrefix,
   } = input;
-  const fileContents = await ide.readFile(filepath);
+  const fileContents =
+    manuallyPassFileContents ?? (await ide.readFile(filepath));
   const fileLines = fileContents.split("\n");
 
   // Filter

@@ -137,7 +151,7 @@ export async function getTabCompletion(
   ) {
     shownGptClaudeWarning = true;
     throw new Error(
-      `Warning: ${llm.model} is not trained for tab-autocomplete, and will result in low-quality suggestions. See the docs to learn more about why: https://continue.dev/docs/walkthroughs/tab-autocomplete#i-want-better-completions-should-i-use-gpt-4`,
+      `Warning: ${llm.model} is not trained for tab-autocomplete, and will result in low-quality suggestions. See the docs to learn more about why: https://docs.continue.dev/walkthroughs/tab-autocomplete#i-want-better-completions-should-i-use-gpt-4`,
     );
   }
 

@@ -187,6 +201,12 @@ export async function getTabCompletion(
     extrasSnippets,
   );
 
+  // If prefix is manually passed
+  if (manuallyPassPrefix) {
+    prefix = manuallyPassPrefix;
+    suffix = "";
+  }
+
   // Template prompt
   const { template, completionOptions } = options.template
     ? { template: options.template, completionOptions: {} }

@@ -281,8 +301,16 @@ export async function getTabCompletion(
   lineGenerator = streamWithNewLines(lineGenerator);
 
   const finalGenerator = stopAtSimilarLine(lineGenerator, lineBelowCursor);
-  for await (const update of finalGenerator) {
-    completion += update;
+
+  try {
+    for await (const update of finalGenerator) {
+      completion += update;
+    }
+  } catch (e: any) {
+    if (ERRORS_TO_IGNORE.some((err) => e.includes(err))) {
+      return undefined;
+    }
+    throw e;
   }
 
   if (cancelled) {
@@ -13,7 +13,9 @@ export const Typescript = {
 
 // Python
 export const Python = {
-  stopWords: ["def", "class"],
+  // """"#" is for .ipynb files, where we add '"""' surrounding markdown blocks.
+  // This stops the model from trying to complete the start of a new markdown block
+  stopWords: ["def", "class", '"""#'],
   comment: "#",
   endOfLine: [],
 };

@@ -211,6 +213,7 @@ export const LANGUAGES: { [extension: string]: AutocompleteLanguageInfo } = {
   js: Typescript,
   tsx: Typescript,
   jsx: Typescript,
+  ipynb: Python,
   py: Python,
   pyi: Python,
   java: Java,
@@ -81,6 +81,16 @@ export async function* stopAtLines(stream: LineStream): LineStream {
   }
 }
 
+const LINES_TO_SKIP = ["</START EDITING HERE>"];
+
+export async function* skipLines(stream: LineStream): LineStream {
+  for await (const line of stream) {
+    if (!LINES_TO_SKIP.some((skipAt) => line.startsWith(skipAt))) {
+      yield line;
+    }
+  }
+}
+
 function shouldRemoveLineBeforeStart(line: string): boolean {
   return (
     line.trimStart().startsWith("```") ||

@@ -129,7 +139,7 @@ export async function* filterCodeBlockLines(rawLines: LineStream): LineStream {
       return;
     }
 
-    if (line === "```") {
+    if (line.startsWith("```")) {
       waitingToSeeIfLineIsLast = line;
     } else {
       yield line;
@@ -25,9 +25,17 @@ const stableCodeFimTemplate: AutocompleteTemplate = {
 };
 
 const codegemmaFimTemplate: AutocompleteTemplate = {
-  template: "<|fim_prefix|>{{{prefix}}}<|fim_suffix|>{{{suffix}}}<|fim_middle|>",
+  template:
+    "<|fim_prefix|>{{{prefix}}}<|fim_suffix|>{{{suffix}}}<|fim_middle|>",
   completionOptions: {
-    stop: ["<|fim_prefix|>", "<|fim_suffix|>", "<|fim_middle|>", "<|file_separator|>", "<end_of_turn>", "<eos>"],
+    stop: [
+      "<|fim_prefix|>",
+      "<|fim_suffix|>",
+      "<|fim_middle|>",
+      "<|file_separator|>",
+      "<end_of_turn>",
+      "<eos>",
+    ],
   },
 };
 

@@ -106,7 +114,8 @@ export function getTemplateForModel(model: string): AutocompleteTemplate {
     lowerCaseModel.includes("star-coder") ||
     lowerCaseModel.includes("starchat") ||
     lowerCaseModel.includes("octocoder") ||
-    lowerCaseModel.includes("stable")
+    lowerCaseModel.includes("stable") ||
+    lowerCaseModel.includes("codeqwen")
   ) {
     return stableCodeFimTemplate;
   }
@@ -9,7 +9,7 @@ import {
 } from "../../autocomplete/lineStream";
 import { streamLines } from "../../diff/util";
 import { stripImages } from "../../llm/countTokens";
-import { dedentAndGetCommonWhitespace } from "../../util";
+import { dedentAndGetCommonWhitespace, getMarkdownLanguageTagForFile } from "../../util";
 import {
   RangeInFileWithContents,
   contextItemToRangeInFileWithContents,

@@ -226,7 +226,7 @@ const EditSlashCommand: SlashCommand = {
     }
 
     if (!contextItemToEdit) {
-      yield "Select (highlight and press `cmd+shift+L` (MacOS) / `ctrl+shift+L` (Windows)) the code that you want to edit first";
+      yield "Please highlight the code you want to edit, then press `cmd/ctrl+shift+L` to add it to chat";
       return;
     }
 

@@ -456,6 +456,12 @@ const EditSlashCommand: SlashCommand = {
       userInput,
       filePrefix: filePrefix,
       fileSuffix: fileSuffix,
+
+      // Some built-in templates use these instead of the above
+      prefix: filePrefix,
+      suffix: fileSuffix,
+
+      language: getMarkdownLanguageTagForFile(rif.filepath),
       systemMessage: llm.systemMessage ?? "",
       // "contextItems": (await sdk.getContextItemChatMessages()).map(x => x.content || "").join("\n\n"),
     },
@@ -4,7 +4,7 @@ import { removeQuotesAndEscapes } from "../../util";
 const HttpSlashCommand: SlashCommand = {
   name: "http",
   description: "Call an HTTP endpoint to serve response",
-  run: async function* ({ ide, llm, input, params }) {
+  run: async function* ({ ide, llm, input, params, fetch }) {
     const url = params?.url;
     if (!url) {
       throw new Error("URL is not defined in params");
@@ -1,24 +1,98 @@
+import path from "path";
+import * as fs from "fs";
+import { homedir } from "os";
 import { SlashCommand } from "../..";
+import { languageForFilepath } from "../../autocomplete/constructPrompt";
 import { stripImages } from "../../llm/countTokens";
 
+// If useful elsewhere, helper funcs should move to core/util/index.ts or similar
+function getOffsetDatetime(date: Date): Date {
+  const offset = date.getTimezoneOffset();
+  const offsetHours = Math.floor(offset / 60);
+  const offsetMinutes = offset % 60;
+  date.setHours(date.getHours() - offsetHours);
+  date.setMinutes(date.getMinutes() - offsetMinutes);
+
+  return date;
+}
+
+function asBasicISOString(date: Date): string {
+  const isoString = date.toISOString();
+
+  return isoString.replace(/[-:]|(\.\d+Z)/g, "");
+}
+
+function reformatCodeBlocks(msgText: string): string {
+  const codeBlockFenceRegex = /```((.*?\.(\w+))\s*.*)\n/g;
+  msgText = msgText.replace(codeBlockFenceRegex,
+    (match, metadata, filename, extension) => {
+      const lang = languageForFilepath(filename);
+      return `\`\`\`${extension}\n${lang.comment} ${metadata}\n`;
+    },
+  );
+  // Appease the markdown linter
+  return msgText.replace(/```\n```/g, '```\n\n```');
+}
+
 const ShareSlashCommand: SlashCommand = {
   name: "share",
-  description: "Download and share this session",
-  run: async function* ({ ide, history }) {
-    let content = `This is a session transcript from [Continue](https://continue.dev) on ${new Date().toLocaleString()}.`;
+  description: "Export the current chat session to markdown",
+  run: async function* ({ ide, history, params }) {
+    const now = new Date();
 
-    for (const msg of history) {
-      content += `\n\n## ${
-        msg.role === "user" ? "User" : "Continue"
-      }\n\n${stripImages(msg.content)}`;
+    let content = `### [Continue](https://continue.dev) session transcript\n Exported: ${now.toLocaleString()}`;
+
+    // As currently implemented, the /share command is by definition the last
+    // message in the chat history, this will omit it
+    for (const msg of history.slice(0, history.length - 1)) {
+      let msgText = msg.content;
+      msgText = stripImages(msg.content);
+
+      if (msg.role === "user" && msgText.search("```") > -1) {
+        msgText = reformatCodeBlocks(msgText);
+      }
+
+      // format messages as blockquotes
+      msgText = msgText.replace(/^/gm, "> ");
+
+      content += `\n\n#### ${
+        msg.role === "user" ? "_User_" : "_Assistant_"
+      }\n\n${msgText}`;
     }
 
-    const continueDir = await ide.getContinueDir();
-    const path = `${continueDir}/session.md`;
-    await ide.writeFile(path, content);
-    await ide.openFile(path);
+    let outputDir: string = params?.outputDir;
+    if (!outputDir) {
+      outputDir = await ide.getContinueDir();
+    }
 
-    yield `The session transcript has been saved to a markdown file at \`${path}\`.`;
+    if (outputDir.startsWith("~")) {
+      outputDir = outputDir.replace(/^~/, homedir);
+    } else if (
+      outputDir.startsWith("./") ||
+      outputDir.startsWith(`.\\`) ||
+      outputDir === "."
+    ) {
+      const workspaceDirs = await ide.getWorkspaceDirs();
+      // Although the most common situation is to have one directory open in a
+      // workspace it's also possible to have just a file open without an
+      // associated directory or to use multi-root workspaces in which multiple
+      // folders are included. We default to using the first item in the list, if
+      // it exists.
+      const workspaceDirectory = workspaceDirs?.[0] || "";
+      outputDir = outputDir.replace(/^./, workspaceDirectory);
+    }
+
+    if (!fs.existsSync(outputDir)) {
+      fs.mkdirSync(outputDir, { recursive: true });
+    }
+
+    const dtString = asBasicISOString(getOffsetDatetime(now));
+    const outPath = path.join(outputDir, `${dtString}_session.md`); //TODO: more flexible naming?
+
+    await ide.writeFile(outPath, content);
+    await ide.openFile(outPath);
+
+    yield `The session transcript has been saved to a markdown file at \`${outPath}\`.`;
   },
 };
@@ -1,4 +1,4 @@
-import { ChatMessageRole, SlashCommand } from "../..";
+import { ChatMessageRole, FetchFunction, SlashCommand } from "../..";
 import { pruneStringFromBottom, stripImages } from "../../llm/countTokens";
 
 const SERVER_URL = "https://proxy-server-l6vsfbzhba-uw.a.run.app";

@@ -9,7 +9,7 @@ const PROMPT = (
 ${input}
 `;
 
-async function getResults(q: string): Promise<any> {
+async function getResults(q: string, fetch: FetchFunction): Promise<any> {
   const payload = JSON.stringify({
     q: `${q} site:stackoverflow.com`,
   });

@@ -24,7 +24,10 @@ async function getResults(q: string): Promise<any> {
   return await resp.json();
 }
 
-async function fetchData(url: string): Promise<string | undefined> {
+async function fetchData(
+  url: string,
+  fetch: FetchFunction,
+): Promise<string | undefined> {
   const response = await fetch(url, {
     headers: {
       Accept: "text/html",

@@ -60,16 +63,16 @@ ${answer}
 const StackOverflowSlashCommand: SlashCommand = {
   name: "so",
   description: "Search Stack Overflow",
-  run: async function* ({ llm, input, addContextItem, history }) {
+  run: async function* ({ llm, input, addContextItem, history, fetch }) {
     const contextLength = llm.contextLength;
 
     const sources: string[] = [];
-    const results = await getResults(input);
+    const results = await getResults(input, fetch);
     const links = results.organic.map((result: any) => result.link);
     let totalTokens = llm.countTokens(input) + 200;
 
     for (const link of links) {
-      const contents = await fetchData(link);
+      const contents = await fetchData(link, fetch);
       if (!contents) {
         continue;
       }
@@ -1,4 +1,4 @@
-import { SerializedContinueConfig } from "..";
+import { ContextProviderWithParams, SerializedContinueConfig } from "..";
 
 export const defaultConfig: SerializedContinueConfig = {
   models: [

@@ -28,42 +28,14 @@ export const defaultConfig: SerializedContinueConfig = {
       model: "mistral-8x7b",
     },
   ],
-  slashCommands: [
-    {
-      name: "edit",
-      description: "Edit selected code",
-    },
-    {
-      name: "comment",
-      description: "Write comments for the selected code",
-    },
-    {
-      name: "share",
-      description: "Export this session as markdown",
-    },
-    {
-      name: "cmd",
-      description: "Generate a shell command",
-    },
-  ],
   customCommands: [
     {
       name: "test",
       prompt:
-        "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
+        "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
       description: "Write unit tests for highlighted code",
     },
   ],
-  contextProviders: [
-    { name: "code", params: {} },
-    { name: "docs", params: {} },
-    { name: "diff", params: {} },
-    { name: "open", params: {} },
-    { name: "terminal", params: {} },
-    { name: "problems", params: {} },
-    { name: "folder", params: {} },
-    { name: "codebase", params: {} },
-  ],
   tabAutocompleteModel: {
     title: "Starcoder2 3b",
     provider: "ollama",

@@ -99,32 +71,66 @@ export const defaultConfigJetBrains: SerializedContinueConfig = {
       model: "mistral-8x7b",
     },
   ],
-  slashCommands: [
-    {
-      name: "edit",
-      description: "Edit selected code",
-    },
-    {
-      name: "comment",
-      description: "Write comments for the selected code",
-    },
-    {
-      name: "share",
-      description: "Export this session as markdown",
-    },
-  ],
   customCommands: [
     {
       name: "test",
       prompt:
-        "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
+        "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
       description: "Write unit tests for highlighted code",
     },
   ],
   contextProviders: [{ name: "open", params: {} }],
   tabAutocompleteModel: {
     title: "Starcoder2 3b",
     provider: "ollama",
     model: "starcoder2:3b",
   },
 };
+
+export const defaultSlashCommandsVscode = [
+  {
+    name: "edit",
+    description: "Edit selected code",
+  },
+  {
+    name: "comment",
+    description: "Write comments for the selected code",
+  },
+  {
+    name: "share",
+    description: "Export the current chat session to markdown",
+  },
+  {
+    name: "cmd",
+    description: "Generate a shell command",
+  },
+];
+
+export const defaultSlashCommandsJetBrains = [
+  {
+    name: "edit",
+    description: "Edit selected code",
+  },
+  {
+    name: "comment",
+    description: "Write comments for the selected code",
+  },
+  {
+    name: "share",
+    description: "Export the current chat session to markdown",
+  },
+];
+
+export const defaultContextProvidersVsCode: ContextProviderWithParams[] = [
+  { name: "code", params: {} },
+  { name: "docs", params: {} },
+  { name: "diff", params: {} },
+  { name: "open", params: {} },
+  { name: "terminal", params: {} },
+  { name: "problems", params: {} },
+  { name: "folder", params: {} },
+  { name: "codebase", params: {} },
+];
+
+export const defaultContextProvidersJetBrains: ContextProviderWithParams[] = [
+  { name: "open", params: {} },
+];
@@ -1,6 +1,5 @@
 import { ContinueConfig, ContinueRcJson, IDE, ILLM } from "..";
 import { IdeSettings } from "../protocol";
-import { fetchwithRequestOptions } from "../util/fetchWithOptions";
 import { Telemetry } from "../util/posthog";
 import {
   BrowserSerializedContinueConfig,

@@ -15,7 +14,7 @@ export class ConfigHandler {
   constructor(
     private readonly ide: IDE,
     private ideSettingsPromise: Promise<IdeSettings>,
-    private readonly writeLog: (text: string) => void,
+    private readonly writeLog: (text: string) => Promise<void>,
     private readonly onConfigUpdate: () => void,
   ) {
     this.ide = ide;

@@ -73,10 +72,11 @@ export class ConfigHandler {
     } catch (e) {}
 
     this.savedConfig = await loadFullConfigNode(
-      this.ide.readFile,
+      this.ide.readFile.bind(this.ide),
       workspaceConfigs,
       remoteConfigServerUrl,
       ideInfo.ideType,
+      this.writeLog,
     );
     this.savedConfig.allowAnonymousTelemetry =
       this.savedConfig.allowAnonymousTelemetry &&

@@ -92,53 +92,6 @@ export class ConfigHandler {
     return this.savedConfig;
   }
 
-  setupLlm(llm: ILLM): ILLM {
-    llm._fetch = async (input, init) => {
-      try {
-        const resp = await fetchwithRequestOptions(
-          new URL(input),
-          { ...init },
-          llm.requestOptions,
-        );
-        if (!resp.ok) {
-          let text = await resp.text();
-          if (resp.status === 404 && !resp.url.includes("/v1")) {
-            if (text.includes("try pulling it first")) {
-              const model = JSON.parse(text).error.split(" ")[1].slice(1, -1);
-              text = `The model "${model}" was not found. To download it, run \`ollama run ${model}\`.`;
-            } else if (text.includes("/api/chat")) {
-              text =
-                "The /api/chat endpoint was not found. This may mean that you are using an older version of Ollama that does not support /api/chat. Upgrading to the latest version will solve the issue.";
-            } else {
-              text =
-                "This may mean that you forgot to add '/v1' to the end of your 'apiBase' in config.json.";
-            }
-          }
-          throw new Error(
-            `HTTP ${resp.status} ${resp.statusText} from ${resp.url}\n\n${text}`,
-          );
-        }
-
-        return resp;
-      } catch (e: any) {
-        if (
-          e.code === "ECONNREFUSED" &&
-          e.message.includes("http://127.0.0.1:11434")
-        ) {
-          throw new Error(
-            "Failed to connect to local Ollama instance. To start Ollama, first download it at https://ollama.ai.",
-          );
-        }
-        throw new Error(`${e}`);
-      }
-    };
-
-    llm.writeLog = async (log: string) => {
-      this.writeLog(log);
-    };
-    return llm;
-  }
-
   async llmFromTitle(title?: string): Promise<ILLM> {
     const config = await this.loadConfig();
     const model =

@@ -147,6 +100,6 @@ export class ConfigHandler {
       throw new Error("No model found");
     }
 
-    return this.setupLlm(model);
+    return model;
   }
 }
@@ -32,6 +32,7 @@ import { BaseLLM } from "../llm";
 import { llmFromDescription } from "../llm/llms";
 import CustomLLMClass from "../llm/llms/CustomLLM";
 import { copyOf } from "../util";
+import { fetchwithRequestOptions } from "../util/fetchWithOptions";
 import mergeJson from "../util/merge";
 import {
   getConfigJsPath,

@@ -42,6 +43,12 @@ import {
   getContinueDotEnv,
   migrate,
 } from "../util/paths";
+import {
+  defaultContextProvidersJetBrains,
+  defaultContextProvidersVsCode,
+  defaultSlashCommandsJetBrains,
+  defaultSlashCommandsVscode,
+} from "./default";
 const { execSync } = require("child_process");
 
 function resolveSerializedConfig(filepath: string): SerializedContinueConfig {

@@ -138,6 +145,16 @@ function loadSerializedConfig(
     );
   }
 
+  // Set defaults if undefined (this lets us keep config.json uncluttered for new users)
+  config.contextProviders ??=
+    ideType === "vscode"
+      ? defaultContextProvidersVsCode
+      : defaultContextProvidersJetBrains;
+  config.slashCommands ??=
+    ideType === "vscode"
+      ? defaultSlashCommandsVscode
+      : defaultSlashCommandsJetBrains;
+
   return config;
 }
 

@@ -180,13 +197,16 @@ function isContextProviderWithParams(
 async function intermediateToFinalConfig(
   config: Config,
   readFile: (filepath: string) => Promise<string>,
+  writeLog: (log: string) => Promise<void>,
 ): Promise<ContinueConfig> {
+  // Auto-detect models
   const models: BaseLLM[] = [];
   for (const desc of config.models) {
     if (isModelDescription(desc)) {
       const llm = await llmFromDescription(
         desc,
         readFile,
+        writeLog,
         config.completionOptions,
         config.systemMessage,
       );

@@ -204,6 +224,7 @@ async function intermediateToFinalConfig(
           title: llm.title + " - " + modelName,
         },
         readFile,
+        writeLog,
         copyOf(config.completionOptions),
         config.systemMessage,
       );

@@ -221,7 +242,10 @@ async function intermediateToFinalConfig(
         models.push(llm);
       }
     } else {
-      const llm = new CustomLLMClass(desc);
+      const llm = new CustomLLMClass({
+        ...desc,
+        options: { ...desc.options, writeLog } as any,
+      });
       if (llm.model === "AUTODETECT") {
         try {
           const modelNames = await llm.listModels();

@@ -229,7 +253,7 @@ async function intermediateToFinalConfig(
             (modelName) =>
               new CustomLLMClass({
                 ...desc,
-                options: { ...desc.options, model: modelName },
+                options: { ...desc.options, model: modelName, writeLog },
               }),
           );
 

@@ -243,12 +267,22 @@ async function intermediateToFinalConfig(
     }
   }
 
+  // Prepare models
+  for (const model of models) {
+    model.requestOptions = {
+      ...model.requestOptions,
+      ...config.requestOptions,
+    };
+  }
+
   // Tab autocomplete model
   let autocompleteLlm: BaseLLM | undefined = undefined;
   if (config.tabAutocompleteModel) {
     if (isModelDescription(config.tabAutocompleteModel)) {
       autocompleteLlm = await llmFromDescription(
         config.tabAutocompleteModel,
         readFile,
+        writeLog,
         config.completionOptions,
         config.systemMessage,
       );

@@ -257,6 +291,7 @@ async function intermediateToFinalConfig(
     }
   }
 
+  // Context providers
   const contextProviders: IContextProvider[] = [new FileContextProvider({})];
   for (const provider of config.contextProviders || []) {
     if (isContextProviderWithParams(provider)) {

@@ -279,7 +314,14 @@ async function intermediateToFinalConfig(
     const { provider, ...options } = embeddingsProviderDescription;
     const embeddingsProviderClass = AllEmbeddingsProviders[provider];
     if (embeddingsProviderClass) {
-      config.embeddingsProvider = new embeddingsProviderClass(options);
+      config.embeddingsProvider = new embeddingsProviderClass(
+        options,
+        (url: string | URL, init: any) =>
+          fetchwithRequestOptions(url, init, {
+            ...config.requestOptions,
+            ...options.requestOptions,
+          }),
+      );
     }
   }
 

@@ -334,10 +376,10 @@ function finalToBrowserConfig(
     })),
     systemMessage: final.systemMessage,
     completionOptions: final.completionOptions,
-    slashCommands: final.slashCommands?.map((m) => ({
-      name: m.name,
-      description: m.description,
-      options: m.params,
+    slashCommands: final.slashCommands?.map((s) => ({
+      name: s.name,
+      description: s.description,
+      params: s.params, //PZTODO: is this why params aren't referenced properly by slash commands?
     })),
     contextProviders: final.contextProviders?.map((c) => c.description),
     disableIndexing: final.disableIndexing,

@@ -430,6 +472,7 @@ async function loadFullConfigNode(
   workspaceConfigs: ContinueRcJson[],
   remoteConfigServerUrl: URL | undefined,
   ideType: IdeType,
+  writeLog: (log: string) => Promise<void>,
 ): Promise<ContinueConfig> {
   let serialized = loadSerializedConfig(
     workspaceConfigs,

@@ -471,7 +514,11 @@ async function loadFullConfigNode(
     }
   }
 
-  const finalConfig = await intermediateToFinalConfig(intermediate, readFile);
+  const finalConfig = await intermediateToFinalConfig(
+    intermediate,
+    readFile,
+    writeLog,
+  );
   return finalConfig;
 }
@@ -77,8 +77,6 @@ declare global {
     region?: string;
     projectId?: string;
 
-    _fetch?: (input: any, init?: any) => Promise<any>;
-
     complete(prompt: string, options?: LLMFullCompletionOptions): Promise<string>;
 
     streamComplete(

@@ -124,6 +122,8 @@ declare global {
     type: ContextProviderType;
   }
 
+  export type FetchFunction = (url: string | URL, init?: any) => Promise<any>;
+
   export interface ContextProviderExtras {
     fullInput: string;
     embeddingsProvider: EmbeddingsProvider;

@@ -131,10 +131,12 @@ declare global {
     llm: ILLM;
     ide: IDE;
     selectedCode: RangeInFile[];
+    fetch: FetchFunction;
   }
 
   export interface LoadSubmenuItemsArgs {
     ide: IDE;
+    fetch: FetchFunction;
   }
 
   export interface CustomContextProvider {

@@ -311,7 +313,7 @@ declare global {
   }[Keys];
 
   export interface CustomLLMWithOptionals {
-    options?: LLMOptions;
+    options: LLMOptions;
     streamCompletion?: (
       prompt: string,
       options: CompletionOptions,

@@ -428,6 +430,7 @@ declare global {
     contextItems: ContextItemWithId[];
     selectedCode: RangeInFile[];
     config: ContinueConfig;
+    fetch: FetchFunction;
   }
 
   export interface SlashCommand {

@@ -497,6 +500,7 @@ declare global {
     | "openai"
    | "free-trial"
    | "anthropic"
+    | "cohere"
    | "together"
    | "ollama"
    | "huggingface-tgi"

@@ -504,7 +508,6 @@ declare global {
    | "llama.cpp"
    | "replicate"
    | "text-gen-webui"
-    | "gemini"
    | "lmstudio"
    | "llamafile"
    | "gemini"

@@ -512,7 +515,8 @@ declare global {
    | "bedrock"
    | "deepinfra"
    | "flowise"
-    | "groq";
+    | "groq"
+    | "custom";
 
   export type ModelName =
    | "AUTODETECT"

@@ -522,11 +526,13 @@ declare global {
    | "gpt-4"
    | "gpt-3.5-turbo-0613"
    | "gpt-4-32k"
+    | "gpt-4-turbo"
    | "gpt-4-turbo-preview"
    | "gpt-4-vision-preview"
    // Open Source
+    // Mistral
    | "mistral-7b"
    | "mistral-8x7b"
    // Llama 2
    | "llama2-7b"
    | "llama2-13b"
    | "llama2-70b"

@@ -534,6 +540,10 @@ declare global {
    | "codellama-13b"
    | "codellama-34b"
    | "codellama-70b"
+    // Llama 3
+    | "llama3-8b"
+    | "llama3-70b"
+    // Other Open-source
    | "phi2"
    | "phind-codellama-34b"
    | "wizardcoder-7b"

@@ -550,6 +560,9 @@ declare global {
    | "claude-3-sonnet-20240229"
    | "claude-3-haiku-20240307"
    | "claude-2.1"
+    // Cohere
+    | "command-r"
+    | "command-r-plus"
    // Gemini
    | "gemini-pro"
    | "gemini-1.5-pro-latest"

@@ -629,12 +642,14 @@ declare global {
    | "transformers.js"
    | "ollama"
    | "openai"
+    | "cohere"
    | "free-trial";
 
   export interface EmbedOptions {
     apiBase?: string;
     apiKey?: string;
     model?: string;
+    requestOptions?: RequestOptions;
   }
 
   export interface EmbeddingsProviderDescription extends EmbedOptions {

@@ -646,7 +661,7 @@ declare global {
     embed(chunks: string[]): Promise<number[][]>;
   }
 
-  export type RerankerName = "voyage" | "llm" | "free-trial";
+  export type RerankerName = "cohere" | "voyage" | "llm" | "free-trial";
 
   export interface RerankerDescription {
     name: RerankerName;

@@ -675,6 +690,7 @@ declare global {
     useCache: boolean;
     onlyMyCode: boolean;
     useOtherFiles: boolean;
+    disableInFiles?: string[];
   }
 
   export interface ContinueUIConfig {

@@ -688,8 +704,14 @@ declare global {
     optimize?: string;
     fixGrammar?: string;
   }
 
+  interface ModelRoles {
+    inlineEdit?: string;
+  }
+
   interface ExperimantalConfig {
     contextMenuPrompts?: ContextMenuConfig;
+    modelRoles?: ModelRoles;
   }
 
   export interface SerializedContinueConfig {

@@ -698,6 +720,7 @@ declare global {
     models: ModelDescription[];
     systemMessage?: string;
     completionOptions?: BaseCompletionOptions;
+    requestOptions?: RequestOptions;
     slashCommands?: SlashCommandDescription[];
     customCommands?: CustomCommand[];
     contextProviders?: ContextProviderWithParams[];

@@ -719,7 +742,7 @@ declare global {
   };
 
   export interface Config {
-    /** If set to true, Continue will collect anonymous usage data to improve the product. If set to false, we will collect nothing. Read here to learn more: https://continue.dev/docs/telemetry */
+    /** If set to true, Continue will collect anonymous usage data to improve the product. If set to false, we will collect nothing. Read here to learn more: https://docs.continue.dev/telemetry */
     allowAnonymousTelemetry?: boolean;
     /** Each entry in this array will originally be a ModelDescription, the same object from your config.json, but you may add CustomLLMs.
     * A CustomLLM requires you only to define an AsyncGenerator that calls the LLM and yields string updates. You can choose to define either \`streamCompletion\` or \`streamChat\` (or both).

@@ -730,6 +753,8 @@ declare global {
     systemMessage?: string;
     /** The default completion options for all models */
     completionOptions?: BaseCompletionOptions;
+    /** Request options that will be applied to all models and context providers */
+    requestOptions?: RequestOptions;
     /** The list of slash commands that will be available in the sidebar */
     slashCommands?: SlashCommand[];
     /** Each entry in this array will originally be a ContextProviderWithParams, the same object from your config.json, but you may add CustomContextProviders.

@@ -761,6 +786,7 @@ declare global {
     models: ILLM[];
     systemMessage?: string;
     completionOptions?: BaseCompletionOptions;
+    requestOptions?: RequestOptions;
     slashCommands?: SlashCommand[];
     contextProviders?: IContextProvider[];
     disableSessionTitles?: boolean;

@@ -779,6 +805,7 @@ declare global {
     models: ModelDescription[];
     systemMessage?: string;
     completionOptions?: BaseCompletionOptions;
+    requestOptions?: RequestOptions;
     slashCommands?: SlashCommandDescription[];
     contextProviders?: ContextProviderDescription[];
     disableIndexing?: boolean;
@@ -2,17 +2,30 @@ import { readFileSync, writeFileSync } from "fs";
import { ModelDescription } from "..";
import { editConfigJson, getConfigJsonPath } from "../util/paths";

export function addModel(model: ModelDescription) {
const config = readFileSync(getConfigJsonPath(), "utf8");
const configJson = JSON.parse(config);
configJson.models.push(model);
const newConfigString = JSON.stringify(
configJson,
function stringify(obj: any, indentation?: number): string {
return JSON.stringify(
obj,
(key, value) => {
return value === null ? undefined : value;
},
2,
indentation,
);
}

export function addModel(model: ModelDescription) {
const config = readFileSync(getConfigJsonPath(), "utf8");
const configJson = JSON.parse(config);

// De-duplicate
if (configJson.models?.some((m: any) => stringify(m) === stringify(model))) {
return config;
}
if (configJson.models?.some((m: any) => m?.title === model.title)) {
model.title = `${model.title} (1)`;
}

configJson.models.push(model);
const newConfigString = stringify(configJson, 2);
writeFileSync(getConfigJsonPath(), newConfigString);
return newConfigString;
}
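The de-duplication above compares models structurally rather than by title alone: `stringify` drops `null` values and normalizes indentation, so two `config.json` entries that differ only in formatting or explicit nulls are treated as the same model. A minimal standalone sketch of the idea (the `ModelDescription` shape here is abbreviated; the real interface has more fields):

```ts
interface ModelDescription {
  title: string;
  provider: string;
  model: string;
  apiKey?: string | null;
}

// Canonical JSON form: null values dropped, stable 2-space indentation.
function canonical(obj: unknown): string {
  return JSON.stringify(obj, (_key, value) => (value === null ? undefined : value), 2);
}

const existing: ModelDescription = { title: "GPT-4", provider: "openai", model: "gpt-4", apiKey: null };
const incoming: ModelDescription = { title: "GPT-4", provider: "openai", model: "gpt-4" };

// true — the explicit null is stripped, so the two entries are structurally equal
console.log(canonical(existing) === canonical(incoming));
```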
@ -10,6 +10,8 @@ import configs from "../../indexing/docs/preIndexedDocs";
|
|||
import TransformersJsEmbeddingsProvider from "../../indexing/embeddings/TransformersJsEmbeddingsProvider";
|
||||
|
||||
class DocsContextProvider extends BaseContextProvider {
|
||||
static DEFAULT_N_RETRIEVE = 30;
|
||||
static DEFAULT_N_FINAL = 15;
|
||||
static description: ContextProviderDescription = {
|
||||
title: "docs",
|
||||
displayTitle: "Docs",
|
||||
|
@ -26,13 +28,32 @@ class DocsContextProvider extends BaseContextProvider {
|
|||
const embeddingsProvider = new TransformersJsEmbeddingsProvider();
|
||||
const [vector] = await embeddingsProvider.embed([extras.fullInput]);
|
||||
|
||||
const chunks = await retrieveDocs(
|
||||
let chunks = await retrieveDocs(
|
||||
query,
|
||||
vector,
|
||||
this.options?.nRetrieve || 15,
|
||||
this.options?.nRetrieve ?? DocsContextProvider.DEFAULT_N_RETRIEVE,
|
||||
embeddingsProvider.id,
|
||||
);
|
||||
|
||||
if (extras.reranker) {
|
||||
try {
|
||||
const scores = await extras.reranker.rerank(extras.fullInput, chunks);
|
||||
chunks.sort(
|
||||
(a, b) => scores[chunks.indexOf(b)] - scores[chunks.indexOf(a)],
|
||||
);
|
||||
chunks = chunks.splice(
|
||||
0,
|
||||
this.options?.nFinal ?? DocsContextProvider.DEFAULT_N_FINAL,
|
||||
);
|
||||
} catch (e) {
|
||||
console.warn(`Failed to rerank docs results: ${e}`);
|
||||
chunks = chunks.splice(
|
||||
0,
|
||||
this.options?.nFinal ?? DocsContextProvider.DEFAULT_N_FINAL,
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
return [
|
||||
...chunks
|
||||
.map((chunk) => ({
|
||||
|
|
|
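The pattern in this hunk — rerank when a reranker is configured, fall back to retrieval order when it throws, truncate to `nFinal` either way — generalizes cleanly. A hedged sketch (the real `Chunk` and `Reranker` interfaces carry more fields), which pairs scores with chunks up front instead of the repeated `indexOf` lookups used above:

```ts
interface Chunk { content: string; }
interface Reranker { rerank(query: string, chunks: Chunk[]): Promise<number[]>; }

// Rerank if possible; on any failure keep retrieval order. Either way,
// return only the top nFinal chunks.
async function rankAndTruncate(
  query: string,
  chunks: Chunk[],
  nFinal: number,
  reranker?: Reranker,
): Promise<Chunk[]> {
  if (reranker) {
    try {
      const scores = await reranker.rerank(query, chunks);
      chunks = chunks
        .map((chunk, i) => ({ chunk, score: scores[i] }))
        .sort((a, b) => b.score - a.score)
        .map(({ chunk }) => chunk);
    } catch (e) {
      console.warn(`Failed to rerank docs results: ${e}`);
    }
  }
  return chunks.slice(0, nFinal);
}
```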
@@ -24,6 +24,9 @@ class GitHubIssuesContextProvider extends BaseContextProvider {

const octokit = new Octokit({
auth: this.options?.githubToken,
request: {
fetch: extras.fetch,
},
});

const { owner, repo, issue_number } = JSON.parse(issueId);

@@ -64,6 +67,9 @@ class GitHubIssuesContextProvider extends BaseContextProvider {

const octokit = new Octokit({
auth: this.options?.githubToken,
request: {
fetch: args.fetch,
},
});

const allIssues = [];

@@ -32,7 +32,7 @@ class GoogleContextProvider extends BaseContextProvider {
"Content-Type": "application/json",
};

const response = await fetch(url, {
const response = await extras.fetch(url, {
method: "POST",
headers: headers,
body: payload,

@@ -4,7 +4,6 @@ import {
ContextProviderDescription,
ContextProviderExtras,
} from "../..";
import { fetchwithRequestOptions } from "../../util/fetchWithOptions";

class HttpContextProvider extends BaseContextProvider {
static description: ContextProviderDescription = {

@@ -29,20 +28,17 @@ class HttpContextProvider extends BaseContextProvider {
query: string,
extras: ContextProviderExtras,
): Promise<ContextItem[]> {
const response = await fetchwithRequestOptions(
new URL(this.options.url),
{
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({
query: query || "",
fullInput: extras.fullInput,
}),
}
);

const response = await extras.fetch(new URL(this.options.url), {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({
query: query || "",
fullInput: extras.fullInput,
}),
});

const json: any = await response.json();
return [
{

@@ -1,5 +1,4 @@
import { RequestOptions } from "../../..";
import { fetchwithRequestOptions } from "../../../util/fetchWithOptions";
const { convert: adf2md } = require("adf-to-md");

interface JiraClientOptions {

@@ -85,12 +84,15 @@ export class JiraClient {
};
}

async issue(issueId: string): Promise<Issue> {
async issue(
issueId: string,
customFetch: (url: string | URL, init: any) => Promise<any>,
): Promise<Issue> {
const result = {} as Issue;

const response = await fetchwithRequestOptions(
const response = await customFetch(
new URL(
this.baseUrl + `/issue/${issueId}?fields=description,comment,summary`
this.baseUrl + `/issue/${issueId}?fields=description,comment,summary`,
),
{
method: "GET",

@@ -99,7 +101,6 @@ export class JiraClient {
...this.authHeader,
},
},
this.options.requestOptions
);

const issue = (await response.json()) as any;

@@ -133,14 +134,16 @@ export class JiraClient {
return result;
}

async listIssues(): Promise<Array<QueryResult>> {
const response = await fetchwithRequestOptions(
async listIssues(
customFetch: (url: string | URL, init: any) => Promise<any>,
): Promise<Array<QueryResult>> {
const response = await customFetch(
new URL(
this.baseUrl +
`/search?fields=summary&jql=${
this.options.issueQuery ??
`assignee = currentUser() AND resolution = Unresolved order by updated DESC`
}`
}`,
),
{
method: "GET",

@@ -149,13 +152,12 @@ export class JiraClient {
...this.authHeader,
},
},
this.options.requestOptions
);

if (response.status != 200) {
console.warn(
"Unable to get jira tickets. Response code from API is",
response.status
response.status,
);
return Promise.resolve([]);
}

@@ -29,12 +29,12 @@ class JiraIssuesContextProvider extends BaseContextProvider {

async getContextItems(
query: string,
extras: ContextProviderExtras
extras: ContextProviderExtras,
): Promise<ContextItem[]> {
const issueId = query;

const api = this.getApi();
const issue = await api.issue(query);
const issue = await api.issue(query, extras.fetch);

const parts = [
`# Jira Issue ${issue.key}: ${issue.summary}`,

@@ -48,7 +48,7 @@ class JiraIssuesContextProvider extends BaseContextProvider {
parts.push(
...issue.comments.map((comment) => {
return `### ${comment.author.displayName} on ${comment.created}\n\n${comment.body}`;
})
}),
);
}

@@ -64,12 +64,12 @@ class JiraIssuesContextProvider extends BaseContextProvider {
}

async loadSubmenuItems(
args: LoadSubmenuItemsArgs
args: LoadSubmenuItemsArgs,
): Promise<ContextSubmenuItem[]> {
const api = await this.getApi();

try {
const issues = await api.listIssues();
const issues = await api.listIssues(args.fetch);

return issues.map((issue) => ({
id: issue.id,

@@ -0,0 +1,54 @@
import { Readability } from "@mozilla/readability";
import { JSDOM } from "jsdom";
import { NodeHtmlMarkdown } from "node-html-markdown";
import { BaseContextProvider } from "..";
import {
ContextItem,
ContextProviderDescription,
ContextProviderExtras,
} from "../..";

class URLContextProvider extends BaseContextProvider {
static description: ContextProviderDescription = {
title: "url",
displayTitle: "URL",
description: "Reference a webpage at a given URL",
type: "query",
};

async getContextItems(
query: string,
extras: ContextProviderExtras,
): Promise<ContextItem[]> {
try {
const url = new URL(query);
const resp = await extras.fetch(url);
const html = await resp.text();

const dom = new JSDOM(html);
let reader = new Readability(dom.window.document);
let article = reader.parse();
const content = article?.content || "";
const markdown = NodeHtmlMarkdown.translate(
content,
{},
undefined,
undefined,
);

const title = article?.title || url.pathname;
return [
{
description: title,
content: markdown,
name: title,
},
];
} catch (e) {
console.log(e);
return [];
}
}
}

export default URLContextProvider;

@@ -19,6 +19,7 @@ import PostgresContextProvider from "./PostgresContextProvider";
import ProblemsContextProvider from "./ProblemsContextProvider";
import SearchContextProvider from "./SearchContextProvider";
import TerminalContextProvider from "./TerminalContextProvider";
import URLContextProvider from "./URLContextProvider";

const Providers: (typeof BaseContextProvider)[] = [
DiffContextProvider,

@@ -42,6 +43,7 @@ const Providers: (typeof BaseContextProvider)[] = [
PostgresContextProvider,
DatabaseContextProvider,
CodeContextProvider,
URLContextProvider,
];

export function contextProviderClassFromName(

@@ -0,0 +1,47 @@
import fetch from "node-fetch";
import { Chunk, Reranker } from "../..";

export class CohereReranker implements Reranker {
name = "cohere";

static defaultOptions = {
apiBase: "https://api.cohere.ai/v1/",
model: "rerank-english-v3.0",
};

constructor(
private readonly params: {
apiBase?: string;
apiKey: string;
model?: string;
},
) {}

async rerank(query: string, chunks: Chunk[]): Promise<number[]> {
let apiBase = this.params.apiBase ?? CohereReranker.defaultOptions.apiBase
if (!apiBase.endsWith("/")) {
apiBase += "/";
}

const resp = await fetch(new URL("rerank", apiBase), {
method: "POST",
headers: {
Authorization: `Bearer ${this.params.apiKey}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
model: this.params.model ?? CohereReranker.defaultOptions.model,
query,
documents: chunks.map((chunk) => chunk.content),
}),
});

if (!resp.ok) {
throw new Error(await resp.text());
}

const data = (await resp.json()) as any;
const results = data.results.sort((a: any, b: any) => a.index - b.index);
return results.map((result: any) => result.relevance_score);
}
}
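A minimal usage sketch for the new reranker (hedged: it assumes a `COHERE_API_KEY` environment variable, and the real `Chunk` type carries more fields than `content`, hence the cast):

```ts
import { CohereReranker } from "./cohere";

async function demo() {
  const reranker = new CohereReranker({ apiKey: process.env.COHERE_API_KEY! });
  const chunks = [
    { content: "function add(a, b) { return a + b; }" },
    { content: "README: installation instructions" },
    { content: "function sum(xs) { return xs.reduce((a, b) => a + b, 0); }" },
  ];
  // rerank() returns one relevance score per chunk, in input order.
  const scores = await reranker.rerank("how do I sum an array?", chunks as any);
  chunks
    .map((c, i) => ({ ...c, score: scores[i] }))
    .sort((a, b) => b.score - a.score)
    .forEach((c) => console.log(c.score.toFixed(3), c.content));
}

demo().catch(console.error);
```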
@@ -1,9 +1,11 @@
import { RerankerName } from "../..";
import { CohereReranker } from "./cohere";
import { FreeTrialReranker } from "./freeTrial";
import { LLMReranker } from "./llm";
import { VoyageReranker } from "./voyage";

export const AllRerankers: { [key in RerankerName]: any } = {
cohere: CohereReranker,
llm: LLMReranker,
voyage: VoyageReranker,
"free-trial": FreeTrialReranker,

@@ -1,3 +1,4 @@
import fetch from "node-fetch";
import { Chunk, Reranker } from "../..";

export class VoyageReranker implements Reranker {

@@ -23,7 +24,7 @@ export class VoyageReranker implements Reranker {
model: this.params.model ?? "rerank-lite-1",
}),
});
const data = await resp.json();
const data: any = await resp.json();
const results = data.data.sort((a: any, b: any) => a.index - b.index);
return results.map((result: any) => result.relevance_score);
}

@@ -75,8 +75,6 @@ export interface ILLM extends LLMOptions {
region?: string;
projectId?: string;

_fetch?: (input: any, init?: any) => Promise<any>;

complete(prompt: string, options?: LLMFullCompletionOptions): Promise<string>;

streamComplete(

@@ -122,6 +120,8 @@ export interface ContextProviderDescription {
type: ContextProviderType;
}

export type FetchFunction = (url: string | URL, init?: any) => Promise<any>;

export interface ContextProviderExtras {
fullInput: string;
embeddingsProvider: EmbeddingsProvider;

@@ -129,10 +129,12 @@ export interface ContextProviderExtras {
llm: ILLM;
ide: IDE;
selectedCode: RangeInFile[];
fetch: FetchFunction;
}

export interface LoadSubmenuItemsArgs {
ide: IDE;
fetch: FetchFunction;
}

export interface CustomContextProvider {
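Exposing `fetch` on `ContextProviderExtras` means a custom provider in `config.ts` can make HTTP calls that automatically respect the user's `requestOptions` (proxy, custom CA certs, extra headers). A hedged sketch relying on the ambient `config.ts` types; the intranet URL and item contents are illustrative only:

```ts
// config.ts — a custom context provider that pulls team guidelines from an
// internal endpoint (the URL is made up for this example).
const guidelinesProvider: CustomContextProvider = {
  title: "guidelines",
  displayTitle: "Team Guidelines",
  description: "Fetch team coding guidelines",
  getContextItems: async (query: string, extras: ContextProviderExtras) => {
    // extras.fetch routes through fetchwithRequestOptions, so proxies and
    // custom certificates from requestOptions are honored.
    const resp = await extras.fetch("https://intranet.example.com/guidelines");
    const text = await resp.text();
    return [
      { name: "Guidelines", description: "Team coding guidelines", content: text },
    ];
  },
};

export function modifyConfig(config: Config): Config {
  config.contextProviders?.push(guidelinesProvider);
  return config;
}
```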
@@ -309,7 +311,7 @@ type RequireAtLeastOne<T, Keys extends keyof T = keyof T> = Pick<
}[Keys];

export interface CustomLLMWithOptionals {
options?: LLMOptions;
options: LLMOptions;
streamCompletion?: (
prompt: string,
options: CompletionOptions,

@@ -426,6 +428,7 @@ export interface ContinueSDK {
contextItems: ContextItemWithId[];
selectedCode: RangeInFile[];
config: ContinueConfig;
fetch: FetchFunction;
}

export interface SlashCommand {

@@ -489,7 +492,8 @@ type TemplateType =
| "neural-chat"
| "codellama-70b"
| "llava"
| "gemma";
| "gemma"
| "llama3";

type ModelProvider =
| "openai"

@@ -510,7 +514,8 @@ type ModelProvider =
| "bedrock"
| "deepinfra"
| "flowise"
| "groq";
| "groq"
| "custom";

export type ModelName =
| "AUTODETECT"

@@ -636,12 +641,14 @@ export type EmbeddingsProviderName =
| "transformers.js"
| "ollama"
| "openai"
| "cohere"
| "free-trial";

export interface EmbedOptions {
apiBase?: string;
apiKey?: string;
model?: string;
requestOptions?: RequestOptions;
}

export interface EmbeddingsProviderDescription extends EmbedOptions {

@@ -653,7 +660,7 @@ export interface EmbeddingsProvider {
embed(chunks: string[]): Promise<number[][]>;
}

export type RerankerName = "voyage" | "llm" | "free-trial";
export type RerankerName = "cohere" | "voyage" | "llm" | "free-trial";

export interface RerankerDescription {
name: RerankerName;

@@ -696,8 +703,14 @@ interface ContextMenuConfig {
optimize?: string;
fixGrammar?: string;
}

interface ModelRoles {
inlineEdit?: string;
}

interface ExperimantalConfig {
contextMenuPrompts?: ContextMenuConfig;
modelRoles?: ModelRoles;
}

export interface SerializedContinueConfig {

@@ -706,6 +719,7 @@ export interface SerializedContinueConfig {
models: ModelDescription[];
systemMessage?: string;
completionOptions?: BaseCompletionOptions;
requestOptions?: RequestOptions;
slashCommands?: SlashCommandDescription[];
customCommands?: CustomCommand[];
contextProviders?: ContextProviderWithParams[];

@@ -727,7 +741,7 @@ export type ContinueRcJson = Partial<SerializedContinueConfig> & {
};

export interface Config {
/** If set to true, Continue will collect anonymous usage data to improve the product. If set to false, we will collect nothing. Read here to learn more: https://continue.dev/docs/telemetry */
/** If set to true, Continue will collect anonymous usage data to improve the product. If set to false, we will collect nothing. Read here to learn more: https://docs.continue.dev/telemetry */
allowAnonymousTelemetry?: boolean;
/** Each entry in this array will originally be a ModelDescription, the same object from your config.json, but you may add CustomLLMs.
* A CustomLLM requires you only to define an AsyncGenerator that calls the LLM and yields string updates. You can choose to define either `streamCompletion` or `streamChat` (or both).

@@ -738,6 +752,8 @@ export interface Config {
systemMessage?: string;
/** The default completion options for all models */
completionOptions?: BaseCompletionOptions;
/** Request options that will be applied to all models and context providers */
requestOptions?: RequestOptions;
/** The list of slash commands that will be available in the sidebar */
slashCommands?: SlashCommand[];
/** Each entry in this array will originally be a ContextProviderWithParams, the same object from your config.json, but you may add CustomContextProviders.

@@ -769,6 +785,7 @@ export interface ContinueConfig {
models: ILLM[];
systemMessage?: string;
completionOptions?: BaseCompletionOptions;
requestOptions?: RequestOptions;
slashCommands?: SlashCommand[];
contextProviders?: IContextProvider[];
disableSessionTitles?: boolean;

@@ -787,6 +804,7 @@ export interface BrowserSerializedContinueConfig {
models: ModelDescription[];
systemMessage?: string;
completionOptions?: BaseCompletionOptions;
requestOptions?: RequestOptions;
slashCommands?: SlashCommandDescription[];
contextProviders?: ContextProviderDescription[];
disableIndexing?: boolean;

@@ -104,10 +104,16 @@ export class CodeSnippetsCodebaseIndex implements CodebaseIndex {

for (let i = 0; i < results.compute.length; i++) {
const compute = results.compute[i];
const snippets = await this.getSnippetsInFile(
compute.path,
await this.ide.readFile(compute.path),
);

let snippets: (ChunkWithoutID & { title: string })[] = [];
try {
snippets = await this.getSnippetsInFile(
compute.path,
await this.ide.readFile(compute.path),
);
} catch (e) {
// If can't parse, assume malformatted code
}

// Add snippets to sqlite
for (const snippet of snippets) {

@@ -121,7 +121,7 @@ export class FullTextSearchCodebaseIndex implements CodebaseIndex {
let results = await db.all(query, [
...tagStrings,
...(filterPaths || []),
n,
Math.ceil(n),
]);

results = results.filter((result) => result.rank <= bm25Threshold);

@@ -44,10 +44,7 @@ export class LanceDbIndex implements CodebaseIndex {
) {}

private tableNameForTag(tag: IndexTag) {
return tagToString(tag)
.replace(/\//g, "")
.replace(/\\/g, "")
.replace(/\:/g, "");
return tagToString(tag).replace(/[^\w-_.]/g, "");
}

private async createSqliteCacheTable(db: DatabaseConnection) {

@@ -101,6 +98,12 @@
chunks.map((c) => c.content),
);

if (embeddings.some((emb) => emb === undefined)) {
throw new Error(
`Failed to generate embedding for ${chunks[0]?.filepath} with provider: ${this.embeddingsProvider.id}`,
);
}

// Create row format
for (let j = 0; j < chunks.length; j++) {
const progress = (i + j / chunks.length) / items.length;

@@ -44,8 +44,8 @@ async function crawlGithubRepo(baseUrl: URL) {
);

const paths = tree.data.tree
.filter((file) => file.type === "blob" && file.path?.endsWith(".md"))
.map((file) => baseUrl.pathname + "/tree/main/" + file.path);
.filter((file: any) => file.type === "blob" && file.path?.endsWith(".md"))
.map((file: any) => baseUrl.pathname + "/tree/main/" + file.path);

return paths;
}

@@ -113,7 +113,9 @@ async function getLinksFromUrl(url: string, path: string) {
}

function splitUrl(url: URL) {
const baseUrl = `${url.protocol}//${url.hostname}`;
const baseUrl = `${url.protocol}//${url.hostname}${
url.port ? ":" + url.port : ""
}`;
const basePath = url.pathname;
return {
baseUrl,
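The `splitUrl` change preserves non-default ports, so link crawling on a locally served docs site stays on the same origin. For example:

```ts
function splitUrl(url: URL) {
  const baseUrl = `${url.protocol}//${url.hostname}${url.port ? ":" + url.port : ""}`;
  return { baseUrl, basePath: url.pathname };
}

console.log(splitUrl(new URL("http://localhost:3000/docs/intro")));
// { baseUrl: "http://localhost:3000", basePath: "/docs/intro" }
// Previously the port was dropped, yielding "http://localhost".
```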
@@ -143,8 +143,8 @@ const configs: SiteIndexingConfig[] = [
},
{
title: "Continue",
startUrl: "https://continue.dev/docs/intro",
rootUrl: "https://continue.dev/docs",
startUrl: "https://docs.continue.dev/intro",
rootUrl: "https://docs.continue.dev",
},
{
title: "jQuery",

@@ -216,6 +216,16 @@ const configs: SiteIndexingConfig[] = [
startUrl: "https://python.langchain.com/docs/get_started/introduction",
rootUrl: "https://python.langchain.com/docs",
},
{
title: "WooCommerce",
startUrl: "https://developer.woocommerce.com/docs/",
rootUrl: "https://developer.woocommerce.com/docs/",
},
{
title: "WordPress",
startUrl: "https://developer.wordpress.org/reference/",
rootUrl: "https://developer.wordpress.org/reference/",
},
];

export default configs;

@@ -1,18 +1,20 @@
import { EmbedOptions, EmbeddingsProvider } from "../..";
import { EmbedOptions, EmbeddingsProvider, FetchFunction } from "../..";

class BaseEmbeddingsProvider implements EmbeddingsProvider {
options: EmbedOptions;
fetch: FetchFunction;
static defaultOptions: Partial<EmbedOptions> | undefined = undefined;

get id(): string {
throw new Error("Method not implemented.");
}

constructor(options: EmbedOptions) {
constructor(options: EmbedOptions, fetch: FetchFunction) {
this.options = {
...(this.constructor as typeof BaseEmbeddingsProvider).defaultOptions,
...options,
};
this.fetch = fetch;
}

embed(chunks: string[]): Promise<number[][]> {
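Because the fetch implementation is now injected through the constructor, embeddings providers can be unit-tested without touching the network. A hedged sketch (the canned body mirrors the OpenAI embeddings response shape, which is an assumption here):

```ts
import OpenAIEmbeddingsProvider from "./OpenAIEmbeddingsProvider";

// A fake fetch that returns one canned embedding; no network I/O happens.
const fakeFetch = async (_url: string | URL, _init?: any) => ({
  ok: true,
  json: async () => ({ data: [{ index: 0, embedding: [0.1, 0.2, 0.3] }] }),
});

const provider = new OpenAIEmbeddingsProvider(
  { apiKey: "test", model: "text-embedding-3-small" },
  fakeFetch,
);

provider.embed(["hello world"]).then((vectors) => console.log(vectors));
// [[0.1, 0.2, 0.3]] — no request ever left the process
```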
@@ -0,0 +1,67 @@
import { Response } from "node-fetch";
import { EmbedOptions } from "../..";
import { withExponentialBackoff } from "../../util/withExponentialBackoff";
import BaseEmbeddingsProvider from "./BaseEmbeddingsProvider";

class CohereEmbeddingsProvider extends BaseEmbeddingsProvider {
static maxBatchSize = 96;

static defaultOptions: Partial<EmbedOptions> | undefined = {
apiBase: "https://api.cohere.ai/v1/",
model: "embed-english-v3.0",
};

get id(): string {
return this.options.model ?? "cohere";
}

async embed(chunks: string[]) {
if (!this.options.apiBase?.endsWith("/")) {
this.options.apiBase += "/";
}

const batchedChunks = [];
for (
let i = 0;
i < chunks.length;
i += CohereEmbeddingsProvider.maxBatchSize
) {
batchedChunks.push(
chunks.slice(i, i + CohereEmbeddingsProvider.maxBatchSize),
);
}
return (
await Promise.all(
batchedChunks.map(async (batch) => {
const fetchWithBackoff = () =>
withExponentialBackoff<Response>(() =>
this.fetch(new URL("embed", this.options.apiBase), {
method: "POST",
body: JSON.stringify({
texts: batch,
model: this.options.model,
input_type: "search_document",
embedding_types: ["float"],
truncate: "END",
}),
headers: {
Authorization: `Bearer ${this.options.apiKey}`,
"Content-Type": "application/json",
},
}),
);
const resp = await fetchWithBackoff();

if (!resp.ok) {
throw new Error(await resp.text());
}

const data = (await resp.json()) as any;
return data.embeddings.float;
}),
)
).flat();
}
}

export default CohereEmbeddingsProvider;
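Cohere's embed endpoint caps the number of texts per request, hence `maxBatchSize = 96` and the batching loop; per-batch results are flattened back in input order. The arithmetic, as a standalone sketch:

```ts
// Generic form of the batching above: split items into API-sized batches.
function toBatches<T>(items: T[], maxBatchSize: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += maxBatchSize) {
    batches.push(items.slice(i, i + maxBatchSize));
  }
  return batches;
}

console.log(toBatches([...Array(200).keys()], 96).map((b) => b.length));
// [96, 96, 8] — 200 chunks become three requests, then .flat() restores order
```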
@@ -14,13 +14,16 @@ class DeepInfraEmbeddingsProvider extends BaseEmbeddingsProvider {
async embed(chunks: string[]) {
const fetchWithBackoff = () =>
withExponentialBackoff<Response>(() =>
fetch(`https://api.deepinfra.com/v1/inference/${this.options.model}`, {
method: "POST",
headers: {
Authorization: `bearer ${this.options.apiKey}`,
this.fetch(
`https://api.deepinfra.com/v1/inference/${this.options.model}`,
{
method: "POST",
headers: {
Authorization: `bearer ${this.options.apiKey}`,
},
body: JSON.stringify({ inputs: chunks }),
},
body: JSON.stringify({ inputs: chunks }),
}),
),
);
const resp = await fetchWithBackoff();
const data = await resp.json();

@@ -1,4 +1,4 @@
import fetch, { Response } from "node-fetch";
import { Response } from "node-fetch";
import { EmbedOptions } from "../..";
import { getHeaders } from "../../continueServer/stubs/headers";
import { SERVER_URL } from "../../util/parameters";

@@ -31,7 +31,7 @@ class FreeTrialEmbeddingsProvider extends BaseEmbeddingsProvider {
batchedChunks.map(async (batch) => {
const fetchWithBackoff = () =>
withExponentialBackoff<Response>(() =>
fetch(new URL("embeddings", SERVER_URL), {
this.fetch(new URL("embeddings", SERVER_URL), {
method: "POST",
body: JSON.stringify({
input: batch,

@@ -1,11 +1,15 @@
import { EmbedOptions } from "../..";
import { EmbedOptions, FetchFunction } from "../..";
import { withExponentialBackoff } from "../../util/withExponentialBackoff";
import BaseEmbeddingsProvider from "./BaseEmbeddingsProvider";

async function embedOne(chunk: string, options: EmbedOptions) {
async function embedOne(
chunk: string,
options: EmbedOptions,
customFetch: FetchFunction,
) {
const fetchWithBackoff = () =>
withExponentialBackoff<Response>(() =>
fetch(new URL("api/embeddings", options.apiBase), {
customFetch(new URL("api/embeddings", options.apiBase), {
method: "POST",
body: JSON.stringify({
model: options.model,

@@ -33,7 +37,7 @@ class OllamaEmbeddingsProvider extends BaseEmbeddingsProvider {
async embed(chunks: string[]) {
const results: any = [];
for (const chunk of chunks) {
results.push(await embedOne(chunk, this.options));
results.push(await embedOne(chunk, this.options, this.fetch));
}
return results;
}

@@ -1,4 +1,4 @@
import fetch, { Response } from "node-fetch";
import { Response } from "node-fetch";
import { EmbedOptions } from "../..";
import { withExponentialBackoff } from "../../util/withExponentialBackoff";
import BaseEmbeddingsProvider from "./BaseEmbeddingsProvider";

@@ -37,7 +37,7 @@ class OpenAIEmbeddingsProvider extends BaseEmbeddingsProvider {
batchedChunks.map(async (batch) => {
const fetchWithBackoff = () =>
withExponentialBackoff<Response>(() =>
fetch(new URL("embeddings", this.options.apiBase), {
this.fetch(new URL("embeddings", this.options.apiBase), {
method: "POST",
body: JSON.stringify({
input: batch,

@@ -32,7 +32,7 @@ export class TransformersJsEmbeddingsProvider extends BaseEmbeddingsProvider {
static MaxGroupSize: number = 4;

constructor() {
super({ model: "all-MiniLM-L2-v6" });
super({ model: "all-MiniLM-L2-v6" }, () => Promise.resolve(null));
}

get id(): string {

@@ -1,4 +1,5 @@
import { EmbeddingsProviderName } from "../..";
import CohereEmbeddingsProvider from "./CohereEmbeddingsProvider";
import FreeTrialEmbeddingsProvider from "./FreeTrialEmbeddingsProvider";
import OllamaEmbeddingsProvider from "./OllamaEmbeddingsProvider";
import OpenAIEmbeddingsProvider from "./OpenAIEmbeddingsProvider";

@@ -10,5 +11,6 @@ export const AllEmbeddingsProviders: {
ollama: OllamaEmbeddingsProvider,
"transformers.js": TransformersJsEmbeddingsProvider,
openai: OpenAIEmbeddingsProvider,
cohere: CohereEmbeddingsProvider,
"free-trial": FreeTrialEmbeddingsProvider,
};

@@ -1,23 +0,0 @@
export const fetchWithExponentialBackoff = async (
url: string,
options: RequestInit,
retries: number = 5,
delay: number = 1000,
): Promise<Response> => {
try {
const response = await fetch(url, options);
if (!response.ok && response.status === 429 && retries > 0) {
// Wait for delay milliseconds and retry
await new Promise((resolve) => setTimeout(resolve, delay));
return fetchWithExponentialBackoff(url, options, retries - 1, delay * 2);
}
return response;
} catch (error) {
if (retries > 0) {
// Wait for delay milliseconds and retry
await new Promise((resolve) => setTimeout(resolve, delay));
return fetchWithExponentialBackoff(url, options, retries - 1, delay * 2);
}
throw error;
}
};
@@ -81,5 +81,6 @@ export const DEFAULT_IGNORE_DIRS = [
"__pycache__",
"site-packages",
".cache",
"gems",
];
export const defaultIgnoreDir = ignore().add(DEFAULT_IGNORE_DIRS);

@@ -61,6 +61,10 @@ export class CodebaseIndexer {
workspaceDirs: string[],
abortSignal: AbortSignal,
): AsyncGenerator<IndexingProgressUpdate> {
if (workspaceDirs.length === 0) {
return;
}

const config = await this.configHandler.loadConfig();
if (config.disableIndexing) {
return;

@@ -72,9 +76,8 @@ export class CodebaseIndexer {

// Wait until Git Extension has loaded to report progress
// so we don't appear stuck at 0% while waiting
if (workspaceDirs.length > 0) {
await this.ide.getRepoName(workspaceDirs[0]);
}
await this.ide.getRepoName(workspaceDirs[0]);

yield {
progress: 0,
desc: "Starting indexing...",

@@ -86,21 +89,21 @@ export class CodebaseIndexer {
const repoName = await this.ide.getRepoName(directory);
let completedIndexes = 0;

try {
for (let codebaseIndex of indexesToBuild) {
// TODO: IndexTag type should use repoName rather than directory
const tag: IndexTag = {
directory,
branch,
artifactId: codebaseIndex.artifactId,
};
const [results, markComplete] = await getComputeDeleteAddRemove(
tag,
{ ...stats },
(filepath) => this.ide.readFile(filepath),
repoName,
);
for (let codebaseIndex of indexesToBuild) {
// TODO: IndexTag type should use repoName rather than directory
const tag: IndexTag = {
directory,
branch,
artifactId: codebaseIndex.artifactId,
};
const [results, markComplete] = await getComputeDeleteAddRemove(
tag,
{ ...stats },
(filepath) => this.ide.readFile(filepath),
repoName,
);

try {
for await (let { progress, desc } of codebaseIndex.update(
tag,
results,

@@ -134,9 +137,11 @@ export class CodebaseIndexer {
workspaceDirs.length,
desc: "Completed indexing " + codebaseIndex.artifactId,
};
} catch (e) {
console.warn(
`Error updating the ${codebaseIndex.artifactId} index: ${e}`,
);
}
} catch (e) {
console.warn("Error refreshing index: ", e);
}

completedDirs++;

@@ -6,6 +6,7 @@ import {
deepseekTemplateMessages,
gemmaTemplateMessage,
llama2TemplateMessages,
llama3TemplateMessages,
llavaTemplateMessages,
neuralChatTemplateMessages,
openchatTemplateMessages,

@@ -22,6 +23,7 @@ import {
deepseekEditPrompt,
gemmaEditPrompt,
gptEditPrompt,
llama3EditPrompt,
mistralEditPrompt,
neuralChatEditPrompt,
openchatEditPrompt,

@@ -124,6 +126,10 @@ function autodetectTemplateType(model: string): TemplateType | undefined {
return undefined;
}

if (lower.includes("llama3")) {
return "llama3";
}

if (lower.includes("llava")) {
return "llava";
}

@@ -218,6 +224,7 @@ function autodetectTemplateFunction(
llava: llavaTemplateMessages,
"codellama-70b": codeLlama70bTemplateMessages,
gemma: gemmaTemplateMessage,
llama3: llama3TemplateMessages,
none: null,
};

@@ -241,6 +248,7 @@ const USES_OS_MODELS_EDIT_PROMPT: TemplateType[] = [
"phind",
"xwin-coder",
"zephyr",
"llama3",
];

function autodetectPromptTemplates(

@@ -284,6 +292,8 @@ function autodetectPromptTemplates(
editTemplate = claudeEditPrompt;
} else if (templateType === "gemma") {
editTemplate = gemmaEditPrompt;
} else if (templateType === "llama3") {
editTemplate = llama3EditPrompt;
} else if (templateType === "none") {
editTemplate = null;
} else if (templateType) {
@@ -13,6 +13,7 @@ import {
TemplateType,
} from "..";
import { DevDataSqliteDb } from "../util/devdataSqlite";
import { fetchwithRequestOptions } from "../util/fetchWithOptions";
import mergeJson from "../util/merge";
import { Telemetry } from "../util/posthog";
import { withExponentialBackoff } from "../util/withExponentialBackoff";

@@ -59,6 +60,9 @@ export abstract class BaseLLM implements ILLM {
return false;
}
}
if (this.providerName === "groq") {
return false;
}
return true;
}

@@ -219,44 +223,55 @@ ${prompt}`;
provider: this.providerName,
tokens: tokens,
});
Telemetry.capture("tokensGenerated", {
model: model,
provider: this.providerName,
tokens: tokens,
});
DevDataSqliteDb.logTokensGenerated(model, this.providerName, tokens);
}

_fetch?: (input: RequestInfo | URL, init?: RequestInit) => Promise<Response> =
undefined;
fetch(url: RequestInfo | URL, init?: RequestInit): Promise<Response> {
// Custom Node.js fetch
const customFetch = async (input: URL | RequestInfo, init: any) => {
try {
const resp = await fetchwithRequestOptions(
new URL(input as any),
{ ...init },
{ ...this.requestOptions },
);

protected fetch(
url: RequestInfo | URL,
init?: RequestInit,
): Promise<Response> {
if (this._fetch) {
// Custom Node.js fetch
const customFetch = this._fetch;
return withExponentialBackoff<Response>(
() => customFetch(url, init),
5,
0.5,
);
}
if (!resp.ok) {
let text = await resp.text();
if (resp.status === 404 && !resp.url.includes("/v1")) {
if (text.includes("try pulling it first")) {
const model = JSON.parse(text).error.split(" ")[1].slice(1, -1);
text = `The model "${model}" was not found. To download it, run \`ollama run ${model}\`.`;
} else if (text.includes("/api/chat")) {
text =
"The /api/chat endpoint was not found. This may mean that you are using an older version of Ollama that does not support /api/chat. Upgrading to the latest version will solve the issue.";
} else {
text =
"This may mean that you forgot to add '/v1' to the end of your 'apiBase' in config.json.";
}
}
throw new Error(
`HTTP ${resp.status} ${resp.statusText} from ${resp.url}\n\n${text}`,
);
}

// Most of the requestOptions aren't available in the browser
const headers = new Headers(init?.headers);
for (const [key, value] of Object.entries(
this.requestOptions?.headers ?? {},
)) {
headers.append(key, value as string);
}

return withExponentialBackoff<Response>(() =>
fetch(url, {
...init,
headers,
}),
return resp;
} catch (e: any) {
if (
e.code === "ECONNREFUSED" &&
e.message.includes("http://127.0.0.1:11434")
) {
throw new Error(
"Failed to connect to local Ollama instance. To start Ollama, first download it at https://ollama.ai.",
);
}
throw new Error(`${e}`);
}
};
return withExponentialBackoff<Response>(
() => customFetch(url, init) as any,
5,
0.5,
);
}
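The net effect of this refactor: every LLM request goes through one code path that prefers an injected `_fetch`, otherwise uses `fetchwithRequestOptions`, and wraps whichever is chosen in retry-with-backoff. A reduced sketch of that control flow (the retry constants mirror the ones above):

```ts
import { withExponentialBackoff } from "../util/withExponentialBackoff";

type FetchLike = (url: RequestInfo | URL, init?: RequestInit) => Promise<Response>;

// Prefer an injected fetch; otherwise fall back to the default one.
// Either way, wrap the call in exponential backoff (5 tries, 0.5s base delay).
function makeResilientFetch(injected?: FetchLike): FetchLike {
  const base: FetchLike = injected ?? fetch;
  return (url, init) =>
    withExponentialBackoff<Response>(() => base(url, init), 5, 0.5);
}
```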
@@ -1,7 +1,16 @@
import { BaseLLM } from "..";
import { ChatMessage, CompletionOptions, CustomLLM } from "../..";
import {
ChatMessage,
CompletionOptions,
CustomLLM,
ModelProvider,
} from "../..";

class CustomLLMClass extends BaseLLM {
get providerName(): ModelProvider {
return "custom";
}

private customStreamCompletion?: (
prompt: string,
options: CompletionOptions,

@@ -94,10 +94,8 @@ class FreeTrial extends BaseLLM {
async listModels(): Promise<string[]> {
return [
"gpt-3.5-turbo",
"gpt-4",
"gemini-1.5-pro-latest",
"gpt-4-turbo",
"codellama-70b",
"gemini-1.5-pro-latest",
"claude-3-opus-20240229",
"claude-3-sonnet-20240229",
"claude-3-haiku-20240307",

@@ -133,13 +133,13 @@ class Gemini extends BaseLLM {

// Incrementally stream the content to make it smoother
const content = data.candidates[0].content.parts[0].text;
const words = content.split(" ");
const words = content.split(/(\s+)/);
const delaySeconds = Math.min(4.0 / (words.length + 1), 0.1);
while (words.length > 0) {
const wordsToYield = Math.min(3, words.length);
yield {
role: "assistant",
content: words.splice(0, wordsToYield).join(" ") + " ",
content: words.splice(0, wordsToYield).join(""),
};
await delay(delaySeconds);
}
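The Gemini change swaps `split(" ")` for `split(/(\s+)/)`: the capture group keeps each whitespace run as its own array element, so re-joining yielded slices with `""` reproduces the original text exactly, newlines included. For example:

```ts
const text = "line one\nline two";

// Old: "one\nline" is treated as one word, and re-joining with " " plus a
// trailing space mangles the original whitespace.
console.log(text.split(" "));
// ["line", "one\nline", "two"]

// New: separators survive as array items, so joining with "" is lossless.
console.log(text.split(/(\s+)/));
// ["line", " ", "one", "\n", "line", " ", "two"]
console.log(text.split(/(\s+)/).join("") === text); // true
```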
@@ -6,6 +6,7 @@ class Groq extends OpenAI {
static defaultOptions: Partial<LLMOptions> = {
apiBase: "https://api.groq.com/openai/v1/",
};
protected maxStopWords: number | undefined = 4;

private static modelConversion: { [key: string]: string } = {
"llama2-70b": "llama2-70b-4096",

@@ -38,12 +38,12 @@ class Ollama extends BaseLLM {
if (body.parameters) {
const params = [];
for (let line of body.parameters.split("\n")) {
let parts = line.split(" ");
let parts = line.match(/^(\S+)\s+((?:".*")|\S+)$/);
if (parts.length < 2) {
continue;
}
let key = parts[0];
let value = parts[parts.length - 1];
let key = parts[1];
let value = parts[2];
switch (key) {
case "num_ctx":
this.contextLength = parseInt(value);

@@ -52,7 +52,13 @@ class Ollama extends BaseLLM {
if (!this.completionOptions.stop) {
this.completionOptions.stop = [];
}
this.completionOptions.stop.push(JSON.parse(value));
try {
this.completionOptions.stop.push(JSON.parse(value));
} catch (e) {
console.warn(
`Error parsing stop parameter value "${value}": ${e}`,
);
}
break;
default:
break;
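The new regex handles Modelfile `PARAMETER` values that contain spaces inside quotes, which the old `split(" ")` broke apart. A quick check of what it captures:

```ts
const re = /^(\S+)\s+((?:".*")|\S+)$/;

console.log("num_ctx 4096".match(re)?.slice(1));
// ["num_ctx", "4096"]

console.log('stop "### User:"'.match(re)?.slice(1));
// ["stop", "\"### User:\""] — the space inside the quotes no longer splits the value
```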
@@ -40,6 +40,8 @@ const CHAT_ONLY_MODELS = [
class OpenAI extends BaseLLM {
public useLegacyCompletionsEndpoint: boolean | undefined = undefined;

protected maxStopWords: number | undefined = undefined;

constructor(options: LLMOptions) {
super(options);
this.useLegacyCompletionsEndpoint = options.useLegacyCompletionsEndpoint;

@@ -62,6 +64,7 @@ class OpenAI extends BaseLLM {
};
if (part.type === "imageUrl") {
msg.image_url = { ...part.imageUrl, detail: "low" };
msg.type = "image_url";
}
return msg;
});

@@ -87,9 +90,11 @@ class OpenAI extends BaseLLM {
presence_penalty: options.presencePenalty,
stop:
// Jan + Azure OpenAI don't truncate and will throw an error
url.port === "1337" ||
url.host === "api.openai.com" ||
this.apiType === "azure"
this.maxStopWords !== undefined
? options.stop?.slice(0, this.maxStopWords)
: url.port === "1337" ||
url.host === "api.openai.com" ||
this.apiType === "azure"
? options.stop?.slice(0, 4)
: options.stop,
};

@@ -101,6 +101,7 @@ const LLMs = [
export async function llmFromDescription(
desc: ModelDescription,
readFile: (filepath: string) => Promise<string>,
writeLog: (log: string) => Promise<void>,
completionOptions?: BaseCompletionOptions,
systemMessage?: string,
): Promise<BaseLLM | undefined> {

@@ -131,6 +132,7 @@ export async function llmFromDescription(
DEFAULT_MAX_TOKENS,
},
systemMessage,
writeLog,
};

return new cls(options);

@@ -254,6 +254,14 @@ function codeLlama70bTemplateMessages(msgs: ChatMessage[]): string {
return prompt;
}

const llama3TemplateMessages = templateFactory(
(msg: ChatMessage) =>
`<|begin_of_text|><|start_header_id|>${msg.role}<|end_header_id|>\n${msg.content}<|eot_id|>\n`,
"<|start_header_id|>user<|end_header_id|>\n",
"<|start_header_id|>assistant<|end_header_id|>\n",
"<|eot_id|>",
);

/**
<start_of_turn>user
What is Cramer's Rule?<end_of_turn>
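For orientation, here is what the llama3 template plausibly produces for a one-turn chat. This is a hedged rendering — it assumes `templateFactory` concatenates the per-message template and then appends the assistant header as the generation prefix, which is not shown in this hunk:

```ts
// Hypothetical rendering for [{ role: "user", content: "Hi" }]:
const rendered =
  "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n" +
  "Hi<|eot_id|>\n" +
  "<|start_header_id|>assistant<|end_header_id|>\n";
// Generation stops when the model emits the "<|eot_id|>" stop token.
```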
@@ -273,6 +281,7 @@ export {
deepseekTemplateMessages,
gemmaTemplateMessage,
llama2TemplateMessages,
llama3TemplateMessages,
llavaTemplateMessages,
neuralChatTemplateMessages,
openchatTemplateMessages,

@@ -259,6 +259,15 @@ Output only a code block with the rewritten code:
},
];

const llama3EditPrompt: PromptTemplate = `<|begin_of_text|><|start_header_id|>user<|end_header_id|>
\`\`\`{{{language}}}
{{{codeToEdit}}}
\`\`\`

Rewrite the above code to satisfy this request: "{{{userInput}}}"<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Sure! Here's the code you requested:
\`\`\`{{{language}}}`;

const gemmaEditPrompt = `<start_of_turn>user
You are an expert programmer and write code on the first attempt without any errors or fillers. Rewrite the code to satisfy this request: "{{{userInput}}}"

@@ -279,6 +288,7 @@ export {
deepseekEditPrompt,
gemmaEditPrompt,
gptEditPrompt,
llama3EditPrompt,
mistralEditPrompt,
neuralChatEditPrompt,
openchatEditPrompt,

@@ -40,14 +40,14 @@
"ignore": "^5.3.1",
"js-tiktoken": "^1.0.8",
"jsdom": "^24.0.0",
"llama-tokenizer-js": "^1.1.3",
"llama-tokenizer-js": "1.1.3",
"llm-code-highlighter": "^0.0.14",
"node-fetch": "^3.3.2",
"node-html-markdown": "^1.3.0",
"ollama": "^0.4.6",
"openai": "^4.20.1",
"pg": "^8.11.3",
"posthog-node": "^3.6.2",
"posthog-node": "^3.6.3",
"replicate": "^0.26.0",
"request": "^2.88.2",
"socket.io-client": "^4.7.3",
@@ -1,4 +1,5 @@
import * as dotenv from "dotenv";
import fetch from "node-fetch";
import { ContinueSDK } from "..";
import EditSlashCommand, { getPromptParts } from "../commands/slash/edit";
import { contextItemToRangeInFileWithContents } from "../commands/util";

@@ -64,6 +65,7 @@ describe("/edit slash command", () => {
contextItems: [TEST_CONTEXT_ITEM],
selectedCode: [],
config: {} as any,
fetch,
};

let total = "";

@@ -82,7 +84,7 @@ describe("/edit slash command", () => {
...
...
..
. .`
. .`,
);

expect(dedented).toEqual(`\

@@ -110,7 +112,7 @@ describe("/edit slash command", () => {
fullFile,
new FreeTrial({ model: "gpt-3.5-turbo" }),
"implement this function",
1200
1200,
);

expect(filePrefix).toEqual(`${f1}`);

@@ -8,10 +8,15 @@ import tls from "tls";
import { RequestOptions } from "..";

export function fetchwithRequestOptions(
url: URL,
init: RequestInit,
url_: URL | string,
init?: RequestInit,
requestOptions?: RequestOptions,
): Promise<Response> {
let url = url_;
if (typeof url === "string") {
url = new URL(url);
}

const TIMEOUT = 7200; // 7200 seconds = 2 hours

let globalCerts: string[] = [];

@@ -55,7 +60,7 @@ export function fetchwithRequestOptions(
: new protocol.Agent(agentOptions);

const headers: { [key: string]: string } = requestOptions?.headers || {};
for (const [key, value] of Object.entries(init.headers || {})) {
for (const [key, value] of Object.entries(init?.headers || {})) {
headers[key] = value as string;
}

@@ -67,7 +72,7 @@ export function fetchwithRequestOptions(
// add extra body properties if provided
let updatedBody: string | undefined = undefined;
try {
if (requestOptions?.extraBodyProperties && typeof init.body === "string") {
if (requestOptions?.extraBodyProperties && typeof init?.body === "string") {
const parsedBody = JSON.parse(init.body);
updatedBody = JSON.stringify({
...parsedBody,

@@ -81,7 +86,7 @@ export function fetchwithRequestOptions(
// fetch the request with the provided options
let resp = fetch(url, {
...init,
body: updatedBody ?? init.body,
body: updatedBody ?? init?.body,
headers: headers,
agent: agent,
});
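With the relaxed signature — string URLs accepted, `init` now optional — a bare GET becomes a one-liner. A hedged usage sketch (the proxy address and header values are illustrative):

```ts
import { fetchwithRequestOptions } from "./fetchWithOptions";

async function demo() {
  // A bare GET: the string is normalized to a URL, init defaults to undefined.
  const resp = await fetchwithRequestOptions("https://example.com/health");
  console.log(resp.status);

  // requestOptions still apply per call, e.g. a proxy plus an extra header.
  const resp2 = await fetchwithRequestOptions(
    "https://api.example.com/v1/chat",
    { method: "POST", body: JSON.stringify({ ping: true }) },
    { proxy: "http://localhost:8888", headers: { "X-Team": "dev" } },
  );
  console.log(resp2.status);
}

demo().catch(console.error);
```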
@@ -4,6 +4,7 @@ import {
filterEnglishLinesAtEnd,
filterEnglishLinesAtStart,
filterLeadingAndTrailingNewLineInsertion,
skipLines,
stopAtLines,
} from "../autocomplete/lineStream";
import { streamDiff } from "../diff/streamDiff";

@@ -90,6 +91,7 @@ export async function* streamDiffLines(
lines = filterEnglishLinesAtStart(lines);
lines = filterCodeBlockLines(lines);
lines = stopAtLines(lines);
lines = skipLines(lines);
if (inept) {
// lines = fixCodeLlamaFirstLineIndentation(lines);
lines = filterEnglishLinesAtEnd(lines);

@@ -2,6 +2,8 @@ interface APIError extends Error {
response?: Response;
}

const RETRY_AFTER_HEADER = "Retry-After";

const withExponentialBackoff = async <T>(
apiCall: () => Promise<T>,
maxRetries = 5,

@@ -16,7 +18,8 @@ const withExponentialBackoff = async <T>(
(error as APIError).response?.status === 429 &&
attempt < maxRetries - 1
) {
const delay = initialDelaySeconds * 2 ** attempt;
const retryAfter = (error as APIError).response?.headers.get(RETRY_AFTER_HEADER);
const delay = retryAfter ? parseInt(retryAfter, 10) : initialDelaySeconds * 2 ** attempt;
console.log(
`Hit rate limit. Retrying in ${delay} seconds (attempt ${
attempt + 1

core/yarn.lock: 5,962 lines changed (lockfile diff not shown)
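On a 429 the util now prefers the server's `Retry-After` header (in seconds) over the computed exponential delay. The delay selection, as a standalone sketch (the `Number.isFinite` guard against a malformed header is a small hardening added here, not in the diff):

```ts
// Honor Retry-After when the server provides it; otherwise grow exponentially.
function retryDelaySeconds(
  attempt: number,
  initialDelaySeconds: number,
  retryAfterHeader: string | null,
): number {
  const retryAfter = retryAfterHeader ? parseInt(retryAfterHeader, 10) : NaN;
  return Number.isFinite(retryAfter) ? retryAfter : initialDelaySeconds * 2 ** attempt;
}

console.log(retryDelaySeconds(2, 0.5, null)); // 2    (0.5 * 2^2)
console.log(retryDelaySeconds(2, 0.5, "30")); // 30   (server override)
```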
@@ -131,7 +131,7 @@ After the "Full example" these examples will only show the relevant portion of t
},
{
"name": "share",
"description": "Download and share this session",
"description": "Export the current chat session to markdown",
"step": "ShareSessionStep"
},
{

@@ -143,7 +143,7 @@ After the "Full example" these examples will only show the relevant portion of t
"custom_commands": [
{
"name": "test",
"prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
"prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
"description": "Write unit tests for highlighted code"
}
],

@@ -82,6 +82,14 @@ Type '@search' to reference the results of codebase search, just like the result
{ "name": "search" }
```

### URL

Type '@url' and input a URL, then Continue will convert it to a markdown document to pass to the model.

```json
{ "name": "url" }
```

### File Tree

Type '@tree' to reference the structure of your current workspace. The LLM will be able to see the nested directory structure of your project.

@@ -43,10 +43,13 @@ Type "/share" to generate a shareable markdown transcript of your current chat h
```json
{
"name": "share",
"description": "Download and share this session"
"description": "Export the current chat session to markdown",
"params": { "ouputDir": "~/.continue/session-transcripts" }
}
```

Use the `ouputDir` parameter to specify where you want the markdown file to be saved.

### `/cmd`

Generate a shell command from natural language and (only in VS Code) automatically paste it into the terminal.

@@ -56,7 +56,7 @@ You can use commercial LLMs via APIs using:

- [Anthropic API](../reference/Model%20Providers/anthropicllm.md)
- [OpenAI API](../reference/Model%20Providers/openai.md)
- [Azure OpenAI Service](../reference/Model%20Providers/openai.md) (OpenAI compatible API)
- [Azure OpenAI Service](../reference/Model%20Providers/openai.md)
- [Google Gemini API](../reference/Model%20Providers/googlepalmapi.md)
- [Mistral API](../reference/Model%20Providers/mistral.md)
- [Voyage AI API](../walkthroughs/codebase-embeddings.md#openai)

@@ -156,6 +156,22 @@ For certain scenarios, you may still find the text-embedding-ada-002 model relev
}
```

### Cohere

Configuration for the `embed-english-v3.0` model. This is the default.

```json title="~/.continue/config.json"
{
"embeddingsProvider": {
"provider": "cohere",
"model": "embed-english-v3.0",
"apiKey": "YOUR_API_KEY"
}
}
```

See Cohere's [embeddings](https://docs.cohere.com/docs/embed-2) for available models. Only embedding models v3 and higher are supported.

### Writing a custom `EmbeddingsProvider`

If you have your own API capable of generating embeddings, Continue makes it easy to write a custom `EmbeddingsProvider`. All you have to do is write a function that converts strings to arrays of numbers, and add this to your config in `config.ts`. Here's an example:
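The example itself is cut off by the hunk boundary; a hedged sketch of what such a provider looks like in `config.ts` (the endpoint and the `{ "vectors": ... }` response shape are invented for illustration; only the `id`/`embed` shape follows the `EmbeddingsProvider` interface above):

```ts
// config.ts — a custom EmbeddingsProvider backed by a hypothetical endpoint.
export function modifyConfig(config: Config): Config {
  config.embeddingsProvider = {
    id: "my-embedder",
    embed: async (chunks: string[]) => {
      const resp = await fetch("https://internal.example.com/embed", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ texts: chunks }),
      });
      const data = await resp.json();
      return data.vectors; // one number[] per input chunk
    },
  };
  return config;
}
```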
@@ -130,7 +130,7 @@ After the "Full example" these examples will only show the relevant portion of t
},
{
"name": "share",
"description": "Download and share this session",
"description": "Export the current chat session to markdown",
"step": "ShareSessionStep"
},
{

@@ -142,7 +142,7 @@ After the "Full example" these examples will only show the relevant portion of t
"custom_commands": [
{
"name": "test",
"prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
"prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
"description": "Write unit tests for highlighted code"
}
],

@@ -151,22 +151,22 @@
"groq"
],
"markdownEnumDescriptions": [
"### OpenAI\nUse gpt-4, gpt-3.5-turbo, or any other OpenAI model. See [here](https://openai.com/product#made-for-developers) to obtain an API key.\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/openai)",
"### Free Trial\nNew users can try out Continue for free using a proxy server that securely makes calls to OpenAI using our API key. If you are ready to use your own API key or have used all 250 free uses, you can enter your API key in config.py where it says `apiKey=\"\"` or select another model provider.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/freetrial)",
"### Anthropic\nTo get started with Anthropic models, you first need to sign up for the open beta [here](https://claude.ai/login) to obtain an API key.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/anthropicllm)",
"### Cohere\nTo use Cohere, visit the [Cohere dashboard](https://dashboard.cohere.com/api-keys) to create an API key.\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/cohere)",
"### OpenAI\nUse gpt-4, gpt-3.5-turbo, or any other OpenAI model. See [here](https://openai.com/product#made-for-developers) to obtain an API key.\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/openai)",
"### Free Trial\nNew users can try out Continue for free using a proxy server that securely makes calls to OpenAI using our API key. If you are ready to use your own API key or have used all 250 free uses, you can enter your API key in config.py where it says `apiKey=\"\"` or select another model provider.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/freetrial)",
"### Anthropic\nTo get started with Anthropic models, you first need to sign up for the open beta [here](https://claude.ai/login) to obtain an API key.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/anthropicllm)",
"### Cohere\nTo use Cohere, visit the [Cohere dashboard](https://dashboard.cohere.com/api-keys) to create an API key.\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/cohere)",
"### Bedrock\nTo get started with Bedrock you need to sign up on AWS [here](https://aws.amazon.com/bedrock/claude/)",
"### Together\nTogether is a hosted service that provides extremely fast streaming of open-source language models. To get started with Together:\n1. Obtain an API key from [here](https://together.ai)\n2. Paste below\n3. Select a model preset\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/togetherllm)",
"### Ollama\nTo get started with Ollama, follow these steps:\n1. Download from [ollama.ai](https://ollama.ai/) and open the application\n2. Open a terminal and run `ollama run <MODEL_NAME>`. Example model names are `codellama:7b-instruct` or `llama2:7b-text`. You can find the full list [here](https://ollama.ai/library).\n3. Make sure that the model name used in step 2 is the same as the one in config.py (e.g. `model=\"codellama:7b-instruct\"`)\n4. Once the model has finished downloading, you can start asking questions through Continue.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/ollama)",
"### Huggingface TGI\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/huggingfacetgi)",
"### Huggingface Inference API\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/huggingfaceinferenceapi)",
"### Llama.cpp\nllama.cpp comes with a [built-in server](https://github.com/ggerganov/llama.cpp/tree/master/examples/server#llamacppexampleserver) that can be run from source. To do this:\n\n1. Clone the repository with `git clone https://github.com/ggerganov/llama.cpp`.\n2. `cd llama.cpp`\n3. Run `make` to build the server.\n4. Download the model you'd like to use and place it in the `llama.cpp/models` directory (the best place to find models is [The Bloke on HuggingFace](https://huggingface.co/TheBloke))\n5. Run the llama.cpp server with the command below (replacing with the model you downloaded):\n\n```shell\n.\\server.exe -c 4096 --host 0.0.0.0 -t 16 --mlock -m models/codellama-7b-instruct.Q8_0.gguf\n```\n\nAfter it's up and running, you can start using Continue.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/llamacpp)",
"### Replicate\nReplicate is a hosted service that makes it easy to run ML models. To get started with Replicate:\n1. Obtain an API key from [here](https://replicate.com)\n2. Paste below\n3. Select a model preset\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/replicatellm)",
"### Gemini API\nTo get started with Google Makersuite, obtain your API key from [here](https://makersuite.google.com) and paste it below.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/googlepalmapi)",
"### LMStudio\nLMStudio provides a professional and well-designed GUI for exploring, configuring, and serving LLMs. It is available on both Mac and Windows. To get started:\n1. Download from [lmstudio.ai](https://lmstudio.ai/) and open the application\n2. Search for and download the desired model from the home screen of LMStudio.\n3. In the left-bar, click the '<->' icon to open the Local Inference Server and press 'Start Server'.\n4. Once your model is loaded and the server has started, you can begin using Continue.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/lmstudio)",
"### Llamafile\nTo get started with llamafiles, find and download a binary on their [GitHub repo](https://github.com/Mozilla-Ocho/llamafile#binary-instructions). Then run it with the following command:\n\n```shell\nchmod +x ./llamafile\n./llamafile\n```\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/llamafile)",
"### Together\nTogether is a hosted service that provides extremely fast streaming of open-source language models. To get started with Together:\n1. Obtain an API key from [here](https://together.ai)\n2. Paste below\n3. Select a model preset\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/togetherllm)",
"### Ollama\nTo get started with Ollama, follow these steps:\n1. Download from [ollama.ai](https://ollama.ai/) and open the application\n2. Open a terminal and run `ollama run <MODEL_NAME>`. Example model names are `codellama:7b-instruct` or `llama2:7b-text`. You can find the full list [here](https://ollama.ai/library).\n3. Make sure that the model name used in step 2 is the same as the one in config.py (e.g. `model=\"codellama:7b-instruct\"`)\n4. Once the model has finished downloading, you can start asking questions through Continue.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/ollama)",
"### Huggingface TGI\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/huggingfacetgi)",
"### Huggingface Inference API\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/huggingfaceinferenceapi)",
|
||||
"### Llama.cpp\nllama.cpp comes with a [built-in server](https://github.com/ggerganov/llama.cpp/tree/master/examples/server#llamacppexampleserver) that can be run from source. To do this:\n\n1. Clone the repository with `git clone https://github.com/ggerganov/llama.cpp`.\n2. `cd llama.cpp`\n3. Run `make` to build the server.\n4. Download the model you'd like to use and place it in the `llama.cpp/models` directory (the best place to find models is [The Bloke on HuggingFace](https://huggingface.co/TheBloke))\n5. Run the llama.cpp server with the command below (replacing with the model you downloaded):\n\n```shell\n.\\server.exe -c 4096 --host 0.0.0.0 -t 16 --mlock -m models/codellama-7b-instruct.Q8_0.gguf\n```\n\nAfter it's up and running, you can start using Continue.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/llamacpp)",
|
||||
"### Replicate\nReplicate is a hosted service that makes it easy to run ML models. To get started with Replicate:\n1. Obtain an API key from [here](https://replicate.com)\n2. Paste below\n3. Select a model preset\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/replicatellm)",
|
||||
"### Gemini API\nTo get started with Google Makersuite, obtain your API key from [here](https://makersuite.google.com) and paste it below.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/googlepalmapi)",
|
||||
"### LMStudio\nLMStudio provides a professional and well-designed GUI for exploring, configuring, and serving LLMs. It is available on both Mac and Windows. To get started:\n1. Download from [lmstudio.ai](https://lmstudio.ai/) and open the application\n2. Search for and download the desired model from the home screen of LMStudio.\n3. In the left-bar, click the '<->' icon to open the Local Inference Server and press 'Start Server'.\n4. Once your model is loaded and the server has started, you can begin using Continue.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/lmstudio)",
|
||||
"### Llamafile\nTo get started with llamafiles, find and download a binary on their [GitHub repo](https://github.com/Mozilla-Ocho/llamafile#binary-instructions). Then run it with the following command:\n\n```shell\nchmod +x ./llamafile\n./llamafile\n```\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/llamafile)",
|
||||
"### Mistral API\n\nTo get access to the Mistral API, obtain your API key from the [Mistral platform](https://docs.mistral.ai/)",
|
||||
"### DeepInfra\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/deepinfra)"
|
||||
"### DeepInfra\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/deepinfra)"
|
||||
],
|
||||
"type": "string"
|
||||
},
|
||||
|
@@ -209,13 +209,14 @@
 "neural-chat",
 "codellama-70b",
 "llava",
-"gemma"
+"gemma",
+"llama3"
 ],
 "type": "string"
 },
 "promptTemplates": {
 "title": "Prompt Templates",
-"markdownDescription": "A mapping of prompt template name ('edit' is currently the only one used in Continue) to a string giving the prompt template. See [here](https://continue.dev/docs/setup/configuration#customizing-the-edit-prompt) for an example.",
+"markdownDescription": "A mapping of prompt template name ('edit' is currently the only one used in Continue) to a string giving the prompt template. See [here](https://docs.continue.dev/model-setup/configuration#customizing-the-edit-prompt) for an example.",
 "type": "object",
 "additionalProperties": {
 "type": "string"
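For reference, a minimal sketch of what `promptTemplates` accepts in config.json. The template body and the placeholder names (`{{{userInput}}}`, `{{{codeToEdit}}}`) are illustrative assumptions, not values taken from this hunk; the linked docs page is the authority:

```json
{
  "promptTemplates": {
    "edit": "Rewrite the code below to satisfy this request: {{{userInput}}}\n\n{{{codeToEdit}}}"
  }
}
```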
@@ -785,7 +786,14 @@
 "then": {
 "properties": {
 "model": {
-"enum": ["mistral-tiny", "mistral-small", "mistral-medium"]
+"enum": [
+"open-mistral-7b",
+"open-mixtral-8x7b",
+"open-mixtral-8x22b",
+"mistral-small-latest",
+"mistral-medium-latest",
+"mistral-large-latest"
+]
 }
 }
 }
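A config.json sketch that opts into one of the new Mistral model names. Only the `model` value comes from this hunk; the `title`, `provider`, and `apiKey` fields are assumed from the usual shape of a `models` entry:

```json
{
  "models": [
    {
      "title": "Mixtral 8x22B",
      "provider": "mistral",
      "model": "open-mixtral-8x22b",
      "apiKey": "<MISTRAL_API_KEY>"
    }
  ]
}
```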
@@ -807,7 +815,8 @@
 "mistral-8x7b",
 "gemma",
 "llama3-8b",
-"llama3-70b"
+"llama3-70b",
+"AUTODETECT"
 ]
 }
 }
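The provider this enum belongs to is not visible in the hunk; assuming it is the Groq model list (Groq was added to the provider enum above), a sketch of using the new autodetect option:

```json
{
  "models": [
    {
      "title": "Groq autodetect",
      "provider": "groq",
      "model": "AUTODETECT",
      "apiKey": "<GROQ_API_KEY>"
    }
  ]
}
```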
@@ -1120,6 +1129,27 @@
 }
 }
 }
 },
+{
+"if": {
+"properties": {
+"name": {
+"enum": ["share"]
+}
+}
+},
+"then": {
+"properties": {
+"params": {
+"properties": {
+"outputDir": {
+"type": "string",
+"markdownDescription": "If outputDir is set to `.` or begins with `./` or `.\\`, file will be saved to the current workspace or a subdirectory thereof, respectively. `~` can similarly be used to specify the user's home directory."
+}
+}
+}
+}
+}
+}
 ],
 "required": ["name", "description"]
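Putting the new schema branch to use, a sketch of a /share slash command that writes into the current workspace. The `name` and `outputDir` semantics come from the hunk above and the description string matches the DEFAULT_CONFIG change further down; the directory value itself is just an example:

```json
{
  "slashCommands": [
    {
      "name": "share",
      "description": "Export the current chat session to markdown",
      "params": { "outputDir": "./session-transcripts" }
    }
  ]
}
```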
@@ -1169,7 +1199,8 @@
 "outline",
 "postgres",
 "code",
-"system"
+"system",
+"url"
 ],
 "markdownEnumDescriptions": [
 "Reference the contents of the current changes as given by `git diff`",
@@ -1190,7 +1221,8 @@
 "Displays definition lines from the currently open files",
 "References Postgres table schema and sample rows",
 "Reference specific functions and classes from throughout your codebase",
-"Reference your operating system and cpu"
+"Reference your operating system and cpu",
+"Reference the contents of a page at a URL"
 ],
 "type": "string"
 },
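A sketch of enabling the new provider in config.json (the entry shape is assumed from the `contextProviders` array elsewhere in this schema). Once configured, typing @url in the sidebar should pull a page's contents into context, per the enum description above:

```json
{
  "contextProviders": [{ "name": "url" }]
}
```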
@@ -1571,13 +1603,13 @@
 "properties": {
 "allowAnonymousTelemetry": {
 "title": "Allow Anonymous Telemetry",
-"markdownDescription": "If this field is set to True, we will collect anonymous telemetry as described in the documentation page on telemetry. If set to `false`, we will not collect any data. Learn more in [the docs](https://continue.dev/docs/telemetry).",
+"markdownDescription": "If this field is set to True, we will collect anonymous telemetry as described in the documentation page on telemetry. If set to `false`, we will not collect any data. Learn more in [the docs](https://docs.continue.dev/telemetry).",
 "default": true,
 "type": "boolean"
 },
 "models": {
 "title": "Models",
-"markdownDescription": "Learn about setting up models in [the documentation](https://continue.dev/docs/setup/overview).",
+"markdownDescription": "Learn about setting up models in [the documentation](https://docs.continue.dev/model-setup/overview).",
 "default": [
 {
 "title": "GPT-4 (trial)",
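Per the description above, opting out of telemetry is a one-line config.json change:

```json
{
  "allowAnonymousTelemetry": false
}
```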
@@ -1614,9 +1646,18 @@
 }
 ]
 },
+"requestOptions": {
+"title": "Request Options",
+"description": "Default request options for all fetch requests from models and context providers. These will be overriden by any model-specific request options.",
+"allOf": [
+{
+"$ref": "#/definitions/RequestOptions"
+}
+]
+},
 "slashCommands": {
 "title": "Slash Commands",
-"markdownDescription": "An array of slash commands that let you take custom actions from the sidebar. Learn more in the [documentation](https://continue.dev/docs/customization/slash-commands).",
+"markdownDescription": "An array of slash commands that let you take custom actions from the sidebar. Learn more in the [documentation](https://docs.continue.dev/customization/slash-commands).",
 "default": [],
 "type": "array",
 "items": {
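A sketch of the new top-level property. The `headers` field name is an assumption about the RequestOptions definition, which this hunk only references via `$ref`; other options (proxy, SSL settings, and so on) live in that definition rather than here:

```json
{
  "requestOptions": {
    "headers": { "X-Example-Header": "value" }
  }
}
```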
@@ -1625,11 +1666,11 @@
 },
 "customCommands": {
 "title": "Custom Commands",
-"markdownDescription": "An array of custom commands that allow you to reuse prompts. Each has name, description, and prompt properties. When you enter /<name> in the text input, it will act as a shortcut to the prompt. Learn more in the [documentation](https://continue.dev/docs/customization/slash-commands#custom-commands-use-natural-language).",
+"markdownDescription": "An array of custom commands that allow you to reuse prompts. Each has name, description, and prompt properties. When you enter /<name> in the text input, it will act as a shortcut to the prompt. Learn more in the [documentation](https://docs.continue.dev/customization/slash-commands#custom-commands-use-natural-language).",
 "default": [
 {
 "name": "test",
-"prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
+"prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
 "description": "This is an example custom command. Open config.json to edit it and create more"
 }
 ],
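The notable change in the default is the `{{{ input }}}` prefix, which splices whatever you type after /<name> into the prompt. A sketch of another command built on the same pattern (the name and wording are illustrative):

```json
{
  "customCommands": [
    {
      "name": "explain",
      "prompt": "{{{ input }}}\n\nExplain the selected code step by step, noting any edge cases.",
      "description": "Explain the highlighted code"
    }
  ]
}
```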
@@ -1640,7 +1681,7 @@
 },
 "contextProviders": {
 "title": "Context Providers",
-"markdownDescription": "A list of ContextProvider objects that can be used to provide context to the LLM by typing '@'. Read more about ContextProviders in [the documentation](https://continue.dev/docs/customization/context-providers).",
+"markdownDescription": "A list of ContextProvider objects that can be used to provide context to the LLM by typing '@'. Read more about ContextProviders in [the documentation](https://docs.continue.dev/customization/context-providers).",
 "default": [],
 "type": "array",
 "items": {
@@ -1678,11 +1719,17 @@
 },
 "embeddingsProvider": {
 "title": "Embeddings Provider",
-"markdownDescription": "The method that will be used to generate codebase embeddings. The default is transformers.js, which will run locally in the browser. Learn about the other options [here](https://continue.dev/docs/walkthroughs/codebase-embeddings#embeddings-providers).",
+"markdownDescription": "The method that will be used to generate codebase embeddings. The default is transformers.js, which will run locally in the browser. Learn about the other options [here](https://docs.continue.dev/walkthroughs/codebase-embeddings#embeddings-providers).",
 "type": "object",
 "properties": {
 "provider": {
-"enum": ["transformers.js", "ollama", "openai", "free-trial"]
+"enum": [
+"transformers.js",
+"ollama",
+"openai",
+"cohere",
+"free-trial"
+]
 },
 "model": {
 "type": "string"
@@ -1692,6 +1739,11 @@
 },
 "apiBase": {
 "type": "string"
 },
+"requestOptions": {
+"title": "Request Options",
+"description": "Request options to be used in any fetch requests made by the embeddings provider",
+"$ref": "#/definitions/RequestOptions"
+}
 },
 "required": ["provider"],
@@ -1708,6 +1760,19 @@
 "then": {
 "required": ["model"]
 }
 },
+{
+"if": {
+"properties": {
+"provider": {
+"enum": ["cohere"]
+}
+},
+"required": ["provider"]
+},
+"then": {
+"required": ["apiKey"]
+}
+}
 ]
 },
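Per the new branch above, choosing the cohere embeddings provider makes `apiKey` mandatory. A sketch of a valid entry; the model name is an example Cohere embedding model, not a value taken from this diff:

```json
{
  "embeddingsProvider": {
    "provider": "cohere",
    "model": "embed-english-v3.0",
    "apiKey": "<COHERE_API_KEY>"
  }
}
```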
@@ -1717,7 +1782,7 @@
 "type": "object",
 "properties": {
 "name": {
-"enum": ["voyage", "llm", "free-trial"]
+"enum": ["cohere", "voyage", "llm", "free-trial"]
 },
 "params": {
 "type": "object"
@@ -1725,6 +1790,40 @@
 },
 "required": ["name"],
 "allOf": [
+{
+"if": {
+"properties": {
+"name": {
+"enum": ["cohere"]
+}
+},
+"required": ["name"]
+},
+"then": {
+"properties": {
+"params": {
+"type": "object",
+"properties": {
+"model": {
+"enum": [
+"rerank-english-v3.0",
+"rerank-multilingual-v3.0",
+"rerank-english-v2.0",
+"rerank-multilingual-v2.0"
+]
+},
+"apiBase": {
+"type": "string"
+},
+"apiKey": {
+"type": "string"
+}
+},
+"required": ["apiKey"]
+}
+}
+}
+},
 {
 "if": {
 "properties": {
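A sketch of wiring up the new Cohere reranker (see also the "customizable rerankers" change in the log above), using a model name from the enum; the top-level `reranker` key is assumed from the surrounding schema rather than shown in this hunk:

```json
{
  "reranker": {
    "name": "cohere",
    "params": {
      "model": "rerank-english-v3.0",
      "apiKey": "<COHERE_API_KEY>"
    }
  }
}
```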
@@ -1789,7 +1888,7 @@
 "tabAutocompleteOptions": {
 "title": "TabAutocompleteOptions",
 "type": "object",
-"markdownDescription": "These options let you customize your tab-autocomplete experience. Read about all options in [the docs](https://continue.dev/docs/walkthroughs/tab-autocomplete#configuration-options).",
+"markdownDescription": "These options let you customize your tab-autocomplete experience. Read about all options in [the docs](https://docs.continue.dev/walkthroughs/tab-autocomplete#configuration-options).",
 "properties": {
 "disable": {
 "type": "boolean",
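For example, the `disable` flag shown above turns tab-autocomplete off entirely:

```json
{
  "tabAutocompleteOptions": {
    "disable": true
  }
}
```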
@@ -1865,6 +1964,14 @@
 "title": "Experimental",
 "description": "Experimental properties are subject to change.",
 "properties": {
+"modelRoles": {
+"type": "object",
+"properties": {
+"inlineEdit": {
+"type": "string"
+}
+}
+},
 "contextMenuPrompts": {
 "type": "object",
 "properties": {
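A sketch of pointing the new inlineEdit role at a specific model. The top-level `experimental` key and the use of a model title as the value are assumptions from context, since the hunk only defines the string-typed property:

```json
{
  "experimental": {
    "modelRoles": {
      "inlineEdit": "GPT-4 (trial)"
    }
  }
}
```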
@@ -40,7 +40,7 @@

 ## Getting Started

-You can try out Continue with our free trial models before configuring your setup.
+You can try out Continue with our free trial models before configuring your setup.

 Learn more about the models and providers [here](https://continue.dev/docs/setup/overview).

@@ -4,7 +4,7 @@ pluginGroup = com.github.continuedev.continueintellijextension
 pluginName = continue-intellij-extension
 pluginRepositoryUrl = https://github.com/continuedev/continue
 # SemVer format -> https://semver.org
-pluginVersion = 0.0.45
+pluginVersion = 0.0.46

 # Supported build number ranges and IntelliJ Platform versions -> https://plugins.jetbrains.com/docs/intellij/build-number-ranges.html
 pluginSinceBuild = 223
@@ -1 +0,0 @@
-continuedev
@@ -1,4 +0,0 @@
-from continuedev.server.main import run_server
-
-if __name__ == "__main__":
-    run_server()
@@ -26,7 +26,7 @@ class ContinueCustomElementRenderer (
 protected val font: Font
 get() {
 val editorFont = editor.colorsScheme.getFont(EditorFontType.PLAIN)
-return editorFont.deriveFont(Font.ITALIC) ?: editorFont
+return editorFont.deriveFont(Font.PLAIN) ?: editorFont
 }

 private fun offsetY(): Int {
@@ -39,7 +39,7 @@ class ContinueMultilineCustomElementRenderer (
 protected val font: Font
 get() {
 val editorFont = editor.colorsScheme.getFont(EditorFontType.PLAIN)
-return editorFont.deriveFont(Font.ITALIC) ?: editorFont
+return editorFont.deriveFont(Font.PLAIN) ?: editorFont
 }

 private fun offsetY(): Int {
@@ -41,7 +41,7 @@ const val DEFAULT_CONFIG = """
 },
 {
 "name": "share",
-"description": "Download and share this session",
+"description": "Export the current chat session to markdown",
 "step": "ShareSessionStep"
 },
 {
@@ -53,7 +53,7 @@ const val DEFAULT_CONFIG = """
 "customCommands": [
 {
 "name": "test",
-"prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
+"prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
 "description": "Write unit tests for highlighted code"
 }
 ],
@@ -12,7 +12,7 @@

 <extensions defaultExtensionNs="com.intellij">
 <editorFactoryListener implementation="com.github.continuedev.continueintellijextension.autocomplete.AutocompleteEditorListener"/>
-<toolWindow id="Continue" anchor="right" icon="/tool-window-icon.png"
+<toolWindow id="Continue" anchor="right" icon="/tool-window-icon.svg"
 factoryClass="com.github.continuedev.continueintellijextension.toolWindow.ContinuePluginToolWindowFactory"/>
 <projectService id="ContinuePluginService"
 serviceImplementation="com.github.continuedev.continueintellijextension.services.ContinuePluginService"/>
@@ -1,6 +1,11 @@
-<svg width="168" height="168" viewBox="0 0 168 168" fill="none" xmlns="http://www.w3.org/2000/svg">
-<path d="M13.875 70.625V61.25H23.25V51.875H32.625V42.5H42V33.125H32.625V23.75H23.25V14.375H13.875V5H32.625V14.375H42V23.75H51.375V33.125H60.75V42.5H51.375V51.875H42V61.25H32.625V70.625H13.875Z" fill="black"/>
-<path d="M4.5 154.625V89H51.375V98.375H60.75V107.75H70.125V135.875H60.75V145.25H51.375V154.625H4.5ZM23.25 145.25H42V135.875H51.375V107.75H42V98.375H23.25V145.25Z" fill="black"/>
-<path d="M107.25 70.625V61.25H97.875V51.875H88.5V23.75H97.875V14.375H107.25V5H144.75V14.375H154.125V23.75H135.375V14.375H116.625V23.75H107.25V51.875H116.625V61.25H135.375V51.875H154.125V61.25H144.75V70.625H107.25Z" fill="black"/>
-<path d="M88.5 164V154.625H154.125V164H88.5Z" fill="black"/>
+<svg width="300" height="300" viewBox="0 0 300 300" fill="none" xmlns="http://www.w3.org/2000/svg">
+<g clip-path="url(#clip0_86_9)">
+<path d="M300 150C300 67.1573 232.843 0 150 0C67.1573 0 0 67.1573 0 150C0 232.843 67.1573 300 150 300C232.843 300 300 232.843 300 150Z" fill="#F3F3F3"/>
+<path d="M206.97 92.7966L197.78 108.729L221.009 148.932C221.179 149.237 221.281 149.61 221.281 149.949C221.281 150.288 221.179 150.661 221.009 150.966L197.78 191.203L206.97 207.136L240 149.949L206.97 92.7627V92.7966ZM194.219 106.661L203.409 90.7288H185.029L175.839 106.661H194.253H194.219ZM175.805 110.797L197.271 147.915H215.651L194.219 110.797H175.805ZM194.219 189.169L215.651 152.017H197.271L175.805 189.169H194.219ZM175.805 193.305L184.995 209.169H203.375L194.185 193.305H175.771H175.805ZM113.509 213.102C113.136 213.102 112.797 213 112.492 212.83C112.186 212.661 111.915 212.39 111.745 212.085L88.4819 171.847H70.1017L103.132 229H169.158L159.968 213.102H113.543H113.509ZM163.529 211.034L172.719 226.932L181.909 211L172.719 195.068L163.529 211V211.034ZM169.158 193.034H126.294L117.104 208.966H159.968L169.158 193.034ZM122.699 191L101.233 153.847L92.0427 169.78L113.509 206.932L122.699 191ZM70.0678 167.712H88.448L97.6381 151.78H79.2918L70.0678 167.712ZM111.644 87.9491C111.813 87.6441 112.085 87.3729 112.39 87.2034C112.695 87.0339 113.068 86.9322 113.407 86.9322H159.9L169.09 71H103.03L70 128.186H88.3802L111.576 87.9831L111.644 87.9491ZM97.6381 148.22L88.448 132.288H70.0678L79.2579 148.22H97.6381ZM113.441 93.1017L92.0088 130.22L101.199 146.153L122.631 109.034L113.441 93.1017ZM159.934 91.0339H117.002L126.192 106.966H169.124L159.934 91.0339ZM172.719 104.898L181.875 89L172.719 73.0678L163.529 88.9661L172.719 104.898Z" fill="#33333B"/>
+</g>
+<defs>
+<clipPath id="clip0_86_9">
+<rect width="300" height="300" fill="white"/>
+</clipPath>
+</defs>
 </svg>
@@ -1,6 +1,11 @@
-<svg width="168" height="168" viewBox="0 0 168 168" fill="none" xmlns="http://www.w3.org/2000/svg">
-<path d="M13.875 70.625V61.25H23.25V51.875H32.625V42.5H42V33.125H32.625V23.75H23.25V14.375H13.875V5H32.625V14.375H42V23.75H51.375V33.125H60.75V42.5H51.375V51.875H42V61.25H32.625V70.625H13.875Z" fill="white"/>
-<path d="M4.5 154.625V89H51.375V98.375H60.75V107.75H70.125V135.875H60.75V145.25H51.375V154.625H4.5ZM23.25 145.25H42V135.875H51.375V107.75H42V98.375H23.25V145.25Z" fill="white"/>
-<path d="M107.25 70.625V61.25H97.875V51.875H88.5V23.75H97.875V14.375H107.25V5H144.75V14.375H154.125V23.75H135.375V14.375H116.625V23.75H107.25V51.875H116.625V61.25H135.375V51.875H154.125V61.25H144.75V70.625H107.25Z" fill="white"/>
-<path d="M88.5 164V154.625H154.125V164H88.5Z" fill="white"/>
+<svg width="300" height="300" viewBox="0 0 300 300" fill="none" xmlns="http://www.w3.org/2000/svg">
+<g clip-path="url(#clip0_86_9)">
+<path d="M300 150C300 67.1573 232.843 0 150 0C67.1573 0 0 67.1573 0 150C0 232.843 67.1573 300 150 300C232.843 300 300 232.843 300 150Z" fill="#F3F3F3"/>
+<path d="M206.97 92.7966L197.78 108.729L221.009 148.932C221.179 149.237 221.281 149.61 221.281 149.949C221.281 150.288 221.179 150.661 221.009 150.966L197.78 191.203L206.97 207.136L240 149.949L206.97 92.7627V92.7966ZM194.219 106.661L203.409 90.7288H185.029L175.839 106.661H194.253H194.219ZM175.805 110.797L197.271 147.915H215.651L194.219 110.797H175.805ZM194.219 189.169L215.651 152.017H197.271L175.805 189.169H194.219ZM175.805 193.305L184.995 209.169H203.375L194.185 193.305H175.771H175.805ZM113.509 213.102C113.136 213.102 112.797 213 112.492 212.83C112.186 212.661 111.915 212.39 111.745 212.085L88.4819 171.847H70.1017L103.132 229H169.158L159.968 213.102H113.543H113.509ZM163.529 211.034L172.719 226.932L181.909 211L172.719 195.068L163.529 211V211.034ZM169.158 193.034H126.294L117.104 208.966H159.968L169.158 193.034ZM122.699 191L101.233 153.847L92.0427 169.78L113.509 206.932L122.699 191ZM70.0678 167.712H88.448L97.6381 151.78H79.2918L70.0678 167.712ZM111.644 87.9491C111.813 87.6441 112.085 87.3729 112.39 87.2034C112.695 87.0339 113.068 86.9322 113.407 86.9322H159.9L169.09 71H103.03L70 128.186H88.3802L111.576 87.9831L111.644 87.9491ZM97.6381 148.22L88.448 132.288H70.0678L79.2579 148.22H97.6381ZM113.441 93.1017L92.0088 130.22L101.199 146.153L122.631 109.034L113.441 93.1017ZM159.934 91.0339H117.002L126.192 106.966H169.124L159.934 91.0339ZM172.719 104.898L181.875 89L172.719 73.0678L163.529 88.9661L172.719 104.898Z" fill="#33333B"/>
+</g>
+<defs>
+<clipPath id="clip0_86_9">
+<rect width="300" height="300" fill="white"/>
+</clipPath>
+</defs>
 </svg>
@@ -66,4 +66,4 @@ accept [⌥ ⇧ Y] or reject [⌥ ⇧ N] the edit"""

 # endregion

-# Ready to learn more? Check out the Continue documentation: https://continue.dev/docs
+# Ready to learn more? Check out the Continue documentation: https://docs.continue.dev
@@ -0,0 +1,3 @@
+<svg width="13" height="13" viewBox="0 0 13 13" fill="none" xmlns="http://www.w3.org/2000/svg">
+<path d="M10.4742 1.6668L9.77141 2.88516L11.5477 5.95951C11.5607 5.98283 11.5685 6.01135 11.5685 6.03728C11.5685 6.0632 11.5607 6.09172 11.5477 6.11505L9.77141 9.19199L10.4742 10.4104L13 6.03728L10.4742 1.66421V1.6668ZM9.4991 2.72702L10.2019 1.50867H8.79634L8.09357 2.72702H9.5017H9.4991ZM8.09097 3.0433L9.73249 5.88173H11.138L9.4991 3.0433H8.09097ZM9.4991 9.03645L11.138 6.19542H9.73249L8.09097 9.03645H9.4991ZM8.09097 9.35273L8.79374 10.5659H10.1993L9.4965 9.35273H8.08837H8.09097ZM3.32716 10.8666C3.29864 10.8666 3.27271 10.8588 3.24939 10.8458C3.22599 10.8329 3.20527 10.8122 3.19227 10.7889L1.41332 7.71183H0.00777705L2.53362 12.0824H7.58267L6.87991 10.8666H3.32976H3.32716ZM7.15222 10.7085L7.85498 11.9242L8.55775 10.7059L7.85498 9.48755L7.15222 10.7059V10.7085ZM7.58267 9.33201H4.30484L3.60207 10.5503H6.87991L7.58267 9.33201ZM4.02992 9.17647L2.38841 6.33536L1.68562 7.55376L3.32716 10.3948L4.02992 9.17647ZM0.00518489 7.39562H1.41073L2.1135 6.17729H0.71055L0.00518489 7.39562ZM3.18454 1.29611C3.19746 1.27278 3.21826 1.25205 3.24159 1.23908C3.26491 1.22612 3.29344 1.21834 3.31936 1.21834H6.87471L7.57747 0H2.52582L0 4.37305H1.40555L3.17934 1.29871L3.18454 1.29611ZM2.1135 5.90506L1.41073 4.68673H0.00518489L0.707957 5.90506H2.1135ZM3.32196 1.69013L1.68303 4.52859L2.38581 5.74699L4.02472 2.90848L3.32196 1.69013ZM6.87731 1.532H3.59427L4.29704 2.75034H7.58007L6.87731 1.532ZM7.85498 2.5922L8.55515 1.37647L7.85498 0.158126L7.15222 1.37388L7.85498 2.5922Z" fill="#000000"/>
+</svg>
@@ -0,0 +1,3 @@
+<svg width="13" height="13" viewBox="0 0 13 13" fill="none" xmlns="http://www.w3.org/2000/svg">
+<path d="M10.4742 1.6668L9.77141 2.88516L11.5477 5.95951C11.5607 5.98283 11.5685 6.01135 11.5685 6.03728C11.5685 6.0632 11.5607 6.09172 11.5477 6.11505L9.77141 9.19199L10.4742 10.4104L13 6.03728L10.4742 1.66421V1.6668ZM9.4991 2.72702L10.2019 1.50867H8.79634L8.09357 2.72702H9.5017H9.4991ZM8.09097 3.0433L9.73249 5.88173H11.138L9.4991 3.0433H8.09097ZM9.4991 9.03645L11.138 6.19542H9.73249L8.09097 9.03645H9.4991ZM8.09097 9.35273L8.79374 10.5659H10.1993L9.4965 9.35273H8.08837H8.09097ZM3.32716 10.8666C3.29864 10.8666 3.27271 10.8588 3.24939 10.8458C3.22599 10.8329 3.20527 10.8122 3.19227 10.7889L1.41332 7.71183H0.00777705L2.53362 12.0824H7.58267L6.87991 10.8666H3.32976H3.32716ZM7.15222 10.7085L7.85498 11.9242L8.55775 10.7059L7.85498 9.48755L7.15222 10.7059V10.7085ZM7.58267 9.33201H4.30484L3.60207 10.5503H6.87991L7.58267 9.33201ZM4.02992 9.17647L2.38841 6.33536L1.68562 7.55376L3.32716 10.3948L4.02992 9.17647ZM0.00518489 7.39562H1.41073L2.1135 6.17729H0.71055L0.00518489 7.39562ZM3.18454 1.29611C3.19746 1.27278 3.21826 1.25205 3.24159 1.23908C3.26491 1.22612 3.29344 1.21834 3.31936 1.21834H6.87471L7.57747 0H2.52582L0 4.37305H1.40555L3.17934 1.29871L3.18454 1.29611ZM2.1135 5.90506L1.41073 4.68673H0.00518489L0.707957 5.90506H2.1135ZM3.32196 1.69013L1.68303 4.52859L2.38581 5.74699L4.02472 2.90848L3.32196 1.69013ZM6.87731 1.532H3.59427L4.29704 2.75034H7.58007L6.87731 1.532ZM7.85498 2.5922L8.55515 1.37647L7.85498 0.158126L7.15222 1.37388L7.85498 2.5922Z" fill="#ffffff"/>
+</svg>
@@ -2,7 +2,7 @@
 "customCommands": [
 {
 "name": "hello",
-"prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
+"prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
 "description": "This is an example custom command. Use /config to edit it and create more"
 }
 ]
@@ -4,7 +4,7 @@

 <div align="center">

-**[Continue](https://continue.dev/docs) keeps developers in flow. Our open-source [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue-extension) extensions enable you to easily create your own modular AI software development system that you can improve.**
+**[Continue](https://docs.continue.dev) keeps developers in flow. Our open-source [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue-extension) extensions enable you to easily create your own modular AI software development system that you can improve.**

 </div>

|
||||
"### Gemini API\nTo get started with Google Makersuite, obtain your API key from [here](https://makersuite.google.com) and paste it below.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/googlepalmapi)",
|
||||
"### LMStudio\nLMStudio provides a professional and well-designed GUI for exploring, configuring, and serving LLMs. It is available on both Mac and Windows. To get started:\n1. Download from [lmstudio.ai](https://lmstudio.ai/) and open the application\n2. Search for and download the desired model from the home screen of LMStudio.\n3. In the left-bar, click the '<->' icon to open the Local Inference Server and press 'Start Server'.\n4. Once your model is loaded and the server has started, you can begin using Continue.\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/lmstudio)",
|
||||
"### Llamafile\nTo get started with llamafiles, find and download a binary on their [GitHub repo](https://github.com/Mozilla-Ocho/llamafile#binary-instructions). Then run it with the following command:\n\n```shell\nchmod +x ./llamafile\n./llamafile\n```\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/llamafile)",
|
||||
"### Mistral API\n\nTo get access to the Mistral API, obtain your API key from the [Mistral platform](https://docs.mistral.ai/)",
|
||||
"### DeepInfra\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/deepinfra)"
|
||||
"### DeepInfra\n\n> [Reference](https://docs.continue.dev/reference/Model%20Providers/deepinfra)"
|
||||
],
|
||||
"type": "string"
|
||||
},
|
||||
|
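Most of these provider descriptions walk through the same flow: obtain a key or start a local server, then point Continue at a model. As a minimal sketch only (the `title` and `model` values here are placeholders, not from the diff), a `models` entry in config.json for the Ollama setup described above might look like:

```json
{
  "models": [
    {
      "title": "CodeLlama 7b (local)",
      "provider": "ollama",
      "model": "codellama:7b-instruct"
    }
  ]
}
```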
@@ -209,13 +209,14 @@
 "neural-chat",
 "codellama-70b",
 "llava",
-"gemma"
+"gemma",
+"llama3"
 ],
 "type": "string"
 },
"promptTemplates": {
|
||||
"title": "Prompt Templates",
|
||||
"markdownDescription": "A mapping of prompt template name ('edit' is currently the only one used in Continue) to a string giving the prompt template. See [here](https://continue.dev/docs/setup/configuration#customizing-the-edit-prompt) for an example.",
|
||||
"markdownDescription": "A mapping of prompt template name ('edit' is currently the only one used in Continue) to a string giving the prompt template. See [here](https://docs.continue.dev/model-setup/configuration#customizing-the-edit-prompt) for an example.",
|
||||
"type": "object",
|
||||
"additionalProperties": {
|
||||
"type": "string"
|
||||
|
@@ -785,7 +786,14 @@
 "then": {
 "properties": {
 "model": {
-"enum": ["mistral-tiny", "mistral-small", "mistral-medium"]
+"enum": [
+"open-mistral-7b",
+"open-mixtral-8x7b",
+"open-mixtral-8x22b",
+"mistral-small-latest",
+"mistral-medium-latest",
+"mistral-large-latest"
+]
 }
 }
 }
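This hunk swaps the deprecated `mistral-tiny`/`mistral-small`/`mistral-medium` names for Mistral's current API identifiers. A hedged sketch of a config entry using one of the new names (the `provider` value and placeholder key are assumptions, not shown in this hunk):

```json
{
  "title": "Mistral Large",
  "provider": "mistral",
  "model": "mistral-large-latest",
  "apiKey": "<MISTRAL_API_KEY>"
}
```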
@@ -807,7 +815,8 @@
 "mistral-8x7b",
 "gemma",
 "llama3-8b",
-"llama3-70b"
+"llama3-70b",
+"AUTODETECT"
 ]
 }
 }
@@ -1120,6 +1129,27 @@
 }
 }
 }
 },
+{
+"if": {
+"properties": {
+"name": {
+"enum": ["share"]
+}
+}
+},
+"then": {
+"properties": {
+"params": {
+"properties": {
+"outputDir": {
+"type": "string",
+"markdownDescription": "If outputDir is set to `.` or begins with `./` or `.\\`, file will be saved to the current workspace or a subdirectory thereof, respectively. `~` can similarly be used to specify the user's home directory."
+}
+}
+}
+}
+}
 ],
 "required": ["name", "description"]
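The new `outputDir` param on the built-in share command follows the path rules in the markdownDescription above (`.`, `./`, `.\` relative to the workspace; `~` for home). A minimal sketch, assuming the standard slashCommands shape and a placeholder description:

```json
{
  "slashCommands": [
    {
      "name": "share",
      "description": "Export the current session",
      "params": { "outputDir": "~/continue-sessions" }
    }
  ]
}
```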
@@ -1169,7 +1199,8 @@
 "outline",
 "postgres",
 "code",
-"system"
+"system",
+"url"
 ],
 "markdownEnumDescriptions": [
 "Reference the contents of the current changes as given by `git diff`",
@@ -1190,7 +1221,8 @@
 "Displays definition lines from the currently open files",
 "References Postgres table schema and sample rows",
 "Reference specific functions and classes from throughout your codebase",
-"Reference your operating system and cpu"
+"Reference your operating system and cpu",
+"Reference the contents of a page at a URL"
 ],
 "type": "string"
 },
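With `url` added to the context provider enum (and its description, "Reference the contents of a page at a URL"), enabling it is a one-line config addition. A minimal sketch:

```json
{
  "contextProviders": [
    { "name": "url" }
  ]
}
```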
@ -1571,13 +1603,13 @@
|
|||
"properties": {
|
||||
"allowAnonymousTelemetry": {
|
||||
"title": "Allow Anonymous Telemetry",
|
||||
"markdownDescription": "If this field is set to True, we will collect anonymous telemetry as described in the documentation page on telemetry. If set to `false`, we will not collect any data. Learn more in [the docs](https://continue.dev/docs/telemetry).",
|
||||
"markdownDescription": "If this field is set to True, we will collect anonymous telemetry as described in the documentation page on telemetry. If set to `false`, we will not collect any data. Learn more in [the docs](https://docs.continue.dev/telemetry).",
|
||||
"default": true,
|
||||
"type": "boolean"
|
||||
},
|
||||
"models": {
|
||||
"title": "Models",
|
||||
"markdownDescription": "Learn about setting up models in [the documentation](https://continue.dev/docs/setup/overview).",
|
||||
"markdownDescription": "Learn about setting up models in [the documentation](https://docs.continue.dev/model-setup/overview).",
|
||||
"default": [
|
||||
{
|
||||
"title": "GPT-4 (trial)",
|
||||
|
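Per the schema above, telemetry defaults to on and is disabled with a single boolean:

```json
{
  "allowAnonymousTelemetry": false
}
```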
@@ -1614,9 +1646,18 @@
 }
 ]
 },
+"requestOptions": {
+"title": "Request Options",
+"description": "Default request options for all fetch requests from models and context providers. These will be overridden by any model-specific request options.",
+"allOf": [
+{
+"$ref": "#/definitions/RequestOptions"
+}
+]
+},
 "slashCommands": {
 "title": "Slash Commands",
-"markdownDescription": "An array of slash commands that let you take custom actions from the sidebar. Learn more in the [documentation](https://continue.dev/docs/customization/slash-commands).",
+"markdownDescription": "An array of slash commands that let you take custom actions from the sidebar. Learn more in the [documentation](https://docs.continue.dev/customization/slash-commands).",
 "default": [],
 "type": "array",
 "items": {
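The new top-level `requestOptions` sets defaults for every fetch made by models and context providers, overridden by any model-specific options. A hedged sketch: the `RequestOptions` definition itself isn't shown in this hunk, so the field names below (`verifySsl`, `headers`) are assumptions based on the request-options work referenced in this PR:

```json
{
  "requestOptions": {
    "verifySsl": false,
    "headers": { "X-Proxy-Auth": "<TOKEN>" }
  }
}
```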
@ -1625,11 +1666,11 @@
|
|||
},
|
||||
"customCommands": {
|
||||
"title": "Custom Commands",
|
||||
"markdownDescription": "An array of custom commands that allow you to reuse prompts. Each has name, description, and prompt properties. When you enter /<name> in the text input, it will act as a shortcut to the prompt. Learn more in the [documentation](https://continue.dev/docs/customization/slash-commands#custom-commands-use-natural-language).",
|
||||
"markdownDescription": "An array of custom commands that allow you to reuse prompts. Each has name, description, and prompt properties. When you enter /<name> in the text input, it will act as a shortcut to the prompt. Learn more in the [documentation](https://docs.continue.dev/customization/slash-commands#custom-commands-use-natural-language).",
|
||||
"default": [
|
||||
{
|
||||
"name": "test",
|
||||
"prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
|
||||
"prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
|
||||
"description": "This is an example custom command. Open config.json to edit it and create more"
|
||||
}
|
||||
],
|
||||
|
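The updated default prompt now leads with the `{{{ input }}}` placeholder, so whatever the user types after the slash command is interpolated ahead of the fixed instructions. A sketch of a custom command following the same pattern (the name, description, and prompt here are illustrative):

```json
{
  "customCommands": [
    {
      "name": "check",
      "description": "Review the selected code",
      "prompt": "{{{ input }}}\n\nReview the selected code and list any bugs or unhandled edge cases."
    }
  ]
}
```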
@@ -1640,7 +1681,7 @@
 },
 "contextProviders": {
 "title": "Context Providers",
-"markdownDescription": "A list of ContextProvider objects that can be used to provide context to the LLM by typing '@'. Read more about ContextProviders in [the documentation](https://continue.dev/docs/customization/context-providers).",
+"markdownDescription": "A list of ContextProvider objects that can be used to provide context to the LLM by typing '@'. Read more about ContextProviders in [the documentation](https://docs.continue.dev/customization/context-providers).",
 "default": [],
 "type": "array",
 "items": {
@@ -1678,11 +1719,17 @@
 },
 "embeddingsProvider": {
 "title": "Embeddings Provider",
-"markdownDescription": "The method that will be used to generate codebase embeddings. The default is transformers.js, which will run locally in the browser. Learn about the other options [here](https://continue.dev/docs/walkthroughs/codebase-embeddings#embeddings-providers).",
+"markdownDescription": "The method that will be used to generate codebase embeddings. The default is transformers.js, which will run locally in the browser. Learn about the other options [here](https://docs.continue.dev/walkthroughs/codebase-embeddings#embeddings-providers).",
 "type": "object",
 "properties": {
 "provider": {
-"enum": ["transformers.js", "ollama", "openai", "free-trial"]
+"enum": [
+"transformers.js",
+"ollama",
+"openai",
+"cohere",
+"free-trial"
+]
 },
 "model": {
 "type": "string"
@@ -1692,6 +1739,11 @@
 },
 "apiBase": {
 "type": "string"
 },
+"requestOptions": {
+"title": "Request Options",
+"description": "Request options to be used in any fetch requests made by the embeddings provider",
+"$ref": "#/definitions/RequestOptions"
+}
 },
 "required": ["provider"],
@@ -1708,6 +1760,19 @@
 "then": {
 "required": ["model"]
 }
 },
+{
+"if": {
+"properties": {
+"provider": {
+"enum": ["cohere"]
+}
+},
+"required": ["provider"]
+},
+"then": {
+"required": ["apiKey"]
+}
+}
 ]
 },
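Putting the new branches together: choosing `cohere` as the embeddings provider now requires `apiKey`, and `requestOptions` is accepted alongside `apiBase`. A sketch of a valid config under this schema (the model name is an assumption; the hunks above only constrain `provider` and the required keys):

```json
{
  "embeddingsProvider": {
    "provider": "cohere",
    "model": "embed-english-v3.0",
    "apiKey": "<COHERE_API_KEY>"
  }
}
```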
@@ -1717,7 +1782,7 @@
 "type": "object",
 "properties": {
 "name": {
-"enum": ["voyage", "llm", "free-trial"]
+"enum": ["cohere", "voyage", "llm", "free-trial"]
 },
 "params": {
 "type": "object"
@@ -1725,6 +1790,40 @@
 },
 "required": ["name"],
 "allOf": [
+{
+"if": {
+"properties": {
+"name": {
+"enum": ["cohere"]
+}
+},
+"required": ["name"]
+},
+"then": {
+"properties": {
+"params": {
+"type": "object",
+"properties": {
+"model": {
+"enum": [
+"rerank-english-v3.0",
+"rerank-multilingual-v3.0",
+"rerank-english-v2.0",
+"rerank-multilingual-v2.0"
+]
+},
+"apiBase": {
+"type": "string"
+},
+"apiKey": {
+"type": "string"
+}
+},
+"required": ["apiKey"]
+}
+}
+}
+},
 {
 "if": {
 "properties": {
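This branch validates the new Cohere reranker: `params.apiKey` is required and `params.model`, if given, must be one of the four rerank models. A sketch, assuming the parent property is `reranker` (the key isn't visible in this hunk, only the name/params shape):

```json
{
  "reranker": {
    "name": "cohere",
    "params": {
      "model": "rerank-english-v3.0",
      "apiKey": "<COHERE_API_KEY>"
    }
  }
}
```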
@@ -1789,7 +1888,7 @@
 "tabAutocompleteOptions": {
 "title": "TabAutocompleteOptions",
 "type": "object",
-"markdownDescription": "These options let you customize your tab-autocomplete experience. Read about all options in [the docs](https://continue.dev/docs/walkthroughs/tab-autocomplete#configuration-options).",
+"markdownDescription": "These options let you customize your tab-autocomplete experience. Read about all options in [the docs](https://docs.continue.dev/walkthroughs/tab-autocomplete#configuration-options).",
 "properties": {
 "disable": {
 "type": "boolean",
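Only the `disable` flag is visible in this hunk; turning tab-autocomplete off from config.json is a minimal sketch:

```json
{
  "tabAutocompleteOptions": {
    "disable": true
  }
}
```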
@@ -1865,6 +1964,14 @@
 "title": "Experimental",
 "description": "Experimental properties are subject to change.",
 "properties": {
+"modelRoles": {
+"type": "object",
+"properties": {
+"inlineEdit": {
+"type": "string"
+}
+}
+},
 "contextMenuPrompts": {
 "type": "object",
 "properties": {
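The experimental `modelRoles.inlineEdit` field takes a string, presumably the title of a model defined under `models` (that interpretation is an assumption; the schema only constrains the type). A sketch:

```json
{
  "experimental": {
    "modelRoles": {
      "inlineEdit": "GPT-4"
    }
  }
}
```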
@@ -76,4 +76,4 @@ print_sum(["a", "b", "c"])
 
 # endregion
 
-# Ready to learn more? Check out the Continue documentation: https://continue.dev/docs
+# Ready to learn more? Check out the Continue documentation: https://docs.continue.dev
@@ -0,0 +1,34 @@
+{
+"cells": [
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"def sort(array):\n",
+" return sorted(array)\n",
+"\n",
+"def reverse(array):\n",
+" return array[::-1]\n",
+"\n",
+"for i in range(10):\n",
+" print(i)"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": []
+}
+],
+"metadata": {
+"language_info": {
+"name": "python"
+}
+},
+"nbformat": 4,
+"nbformat_minor": 2
+}
[Binary image files changed: before 25 KiB and 14 KiB; after 210 KiB]