Initial implementation of AI quick fixes in the Monaco editor #127
- add UI
- add OpenAI integration
- introduce ChatResponseParts
- LanguageModelProvider can be used in both the backend and the frontend
  - frontend access is implemented generically, independent of the actual LanguageModelProvider implementation
- split the code into four packages:
  - ai-agent: contains the AgentDispatcher; at the moment it just delegates to the LanguageModelProvider and can run in both frontend and backend
  - ai-chat: contains only the UI part of the chat
  - ai-model-provider: contains the LanguageModelProvider infrastructure and its frontend bridge
  - ai-openai: contains only the OpenAI LanguageModelProvider
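A rough sketch of how these pieces could relate; only the names AgentDispatcher, LanguageModelProvider and ChatResponseParts are taken from the commit, the member shapes below are assumptions for illustration.

```ts
// Illustrative sketch only: the concrete shapes are assumed, not the actual API.
export interface ChatResponsePart {
    kind: 'text' | 'code' | 'markdown';
    content: string;
}

export interface LanguageModelProvider {
    request(prompt: string): Promise<ChatResponsePart[]>;
}

// Can run in both frontend and backend; at the moment it just delegates to the provider.
export class AgentDispatcher {
    constructor(protected readonly provider: LanguageModelProvider) {}

    sendRequest(prompt: string): Promise<ChatResponsePart[]> {
        return this.provider.request(prompt);
    }
}
```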
Implements the LanguageModelProviderRegistry, which is able to handle an arbitrary number of LanguageModelProviders. Refactors the LanguageModelProvider to only return simple text or a stream of text; it is now the agent's responsibility to convert this into response parts, so the corresponding interfaces are moved to the agent package. The LanguageModelProviderRegistry implementation for the frontend handles all LanguageModelProviders registered in the frontend as well as in the backend. Also fixes the StreamNode in the tree widget to update itself correctly when new tokens arrive.
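A hedged sketch of the registry described above; the text/stream return type follows the commit, while the method names and the aggregation logic are assumptions.

```ts
// Providers now return plain text or a stream of text; agents turn this into response parts.
export type LanguageModelResponse = string | AsyncIterable<string>;

export interface LanguageModelProvider {
    readonly id: string;
    request(prompt: string): Promise<LanguageModelResponse>;
}

export class LanguageModelProviderRegistry {
    private readonly providers = new Map<string, LanguageModelProvider>();

    // A frontend implementation would register frontend providers as well as proxies to backend ones.
    register(provider: LanguageModelProvider): void {
        this.providers.set(provider.id, provider);
    }

    getProviders(): LanguageModelProvider[] {
        return Array.from(this.providers.values());
    }

    getProvider(id: string): LanguageModelProvider | undefined {
        return this.providers.get(id);
    }
}
```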
Introduces the ChatModel, including the nested ChatRequestModel and ChatResponseModel, to represent chat sessions. The chat models allow inspecting and tracking requests and their responses. Also introduces the ChatService, which can be used to manage chat sessions and send requests. The architecture is inspired by the VS Code implementation but aims to be more generic.
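A sketch of the session model; ChatModel, ChatRequestModel, ChatResponseModel and ChatService are names from the commit, all member signatures here are assumed.

```ts
export interface ChatResponseModel {
    readonly isComplete: boolean;
    readonly parts: ReadonlyArray<{ kind: string; content: string }>;
}

export interface ChatRequestModel {
    readonly id: string;
    readonly text: string;
    readonly response: ChatResponseModel;
}

export interface ChatModel {
    readonly id: string;
    readonly requests: ReadonlyArray<ChatRequestModel>;
}

export interface ChatService {
    createSession(): ChatModel;
    getSession(id: string): ChatModel | undefined;
    sendRequest(sessionId: string, text: string): Promise<ChatRequestModel | undefined>;
}
```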
…e-theia#13936) fixes eclipse-theia#13800 contributed on behalf of STMicroelectronics Signed-off-by: Remi Schnekenburger <[email protected]> Co-authored-by: Philip Langer <[email protected]>
…#13912) fixes eclipse-theia#13886 contributed on behalf of STMicroelectronics Signed-off-by: Remi Schnekenburger <[email protected]>
Change-Id: I179432698332ff52b33aba7b1f7e203f2bee9c77
fixes eclipse-theia#13848 contributed on behalf of STMicroelectronics Signed-off-by: Remi Schnekenburger <[email protected]>
Change-Id: I80d33303ceadf940f17265b7d910a5c13b59ec89
eclipsesource/osweek-2024#47 Change-Id: Ib9dd82e3ba062990f5642883bc9439aca52931ad
Change-Id: I186190dede14d729992977c2805e2c07100c2d17
Change-Id: Ia257c9a65b5f2bb9aa3e9ccc506f6394d744ff8f
…a#13900) fixes eclipse-theia#13846 contributed on behalf of STMicroelectronics
Co-authored-by: sgraband <[email protected]>
Logs LanguageModel requests and their results to a separate output channel per LanguageModel.
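Conceptually this can be a thin decorator around a provider; OutputChannelManager is Theia's output API, everything else in this sketch is an assumed shape.

```ts
import { OutputChannelManager } from '@theia/output/lib/browser/output-channel';

// Assumed, simplified provider shape for the sake of the example.
interface LanguageModelProvider {
    readonly id: string;
    request(prompt: string): Promise<string>;
}

export class LoggingLanguageModelProvider implements LanguageModelProvider {
    constructor(
        protected readonly delegate: LanguageModelProvider,
        protected readonly outputChannels: OutputChannelManager
    ) {}

    get id(): string {
        return this.delegate.id;
    }

    async request(prompt: string): Promise<string> {
        // One output channel per language model, named after the delegate's id.
        const channel = this.outputChannels.getChannel(`AI: ${this.delegate.id}`);
        channel.appendLine(`[request] ${prompt}`);
        const result = await this.delegate.request(prompt);
        channel.appendLine(`[result] ${result}`);
        return result;
    }
}
```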
- rename the open button to 'Select Folder'
- set the default folder name to 'prompt-templates'
- check if a template with a given id was overridden
- adapt calls to return the overridden template if so
- add temporary test command
- implement initial customization service reading the templates on preferences change
- no file watching yet
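A hypothetical sketch of the override lookup described in the items above; only the PromptService and the template id concept come from the commits, the method names are made up for illustration.

```ts
export interface PromptTemplate {
    id: string;
    template: string;
}

export class PromptServiceImpl {
    protected readonly builtIn = new Map<string, PromptTemplate>();
    protected readonly customized = new Map<string, PromptTemplate>();

    storePrompt(template: PromptTemplate): void {
        this.builtIn.set(template.id, template);
    }

    // Called by the customization service whenever the template folder is (re)read.
    updateCustomizations(templates: PromptTemplate[]): void {
        this.customized.clear();
        templates.forEach(t => this.customized.set(t.id, t));
    }

    // Returns the overridden template if one exists for the id, the built-in one otherwise.
    getPrompt(id: string): PromptTemplate | undefined {
        return this.customized.get(id) ?? this.builtIn.get(id);
    }
}
```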
Review and adapt prompt templates
Fixes an issue with circular injections when using the PromptService in an agent.
Fixes a circular dependency by removing the prompt collection. Instead, the PromptService is filled programmatically on start.
Co-authored-by: Alexandra Buzila <[email protected]>
Co-authored-by: Olaf Lessenich <[email protected]>
Adds a new ai-code-completion Theia extension which provides the CodeCompletionAgent. The agent is integrated via a CompletionItemProvider into Monaco for all files. The extension offers two preferences to enable/disable the feature as well as control its behavior. Co-authored-by: Stefan Dirix <[email protected]>
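A minimal sketch of how such an agent can be hooked into Monaco as a completion item provider; the name CodeCompletionAgent comes from the commit, while the agent API and the wiring below are assumptions.

```ts
import * as monaco from '@theia/monaco-editor-core';

// Assumed agent API: returns a single completion string for the given document text and offset.
interface CodeCompletionAgent {
    complete(text: string, offset: number): Promise<string | undefined>;
}

export function registerAiCompletions(agent: CodeCompletionAgent): monaco.IDisposable {
    // '*' registers the provider for all languages / files.
    return monaco.languages.registerCompletionItemProvider('*', {
        async provideCompletionItems(model, position, _context, token) {
            const suggestion = await agent.complete(model.getValue(), model.getOffsetAt(position));
            if (!suggestion || token.isCancellationRequested) {
                return { suggestions: [] };
            }
            const word = model.getWordUntilPosition(position);
            return {
                suggestions: [{
                    label: suggestion,
                    kind: monaco.languages.CompletionItemKind.Text,
                    insertText: suggestion,
                    range: new monaco.Range(
                        position.lineNumber, word.startColumn,
                        position.lineNumber, position.column
                    )
                }]
            };
        }
    });
}
```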
Change-Id: Ie7f4cfc1923db5afbaef6455089ad5cb21107db7
The language model selection value should be initialized with the value from settings if it exists
Implements:
- Copy
- Insert at cursor
- Monaco Editor
- Navigating to the location of the file (if provided)

Co-authored-by: Lucas Koehler <[email protected]>
- Ensure we always create a variable part even for undefined variables
  - Prompt text will then default to user text (including '#')
- Allow adopters to register resolvers with priority
  - Given a particular variable name, argument and context
- Automatically resolve all variable parts in a chat request
  - Ensure parts always provide a matching prompt text
- Make sure variable service is part of core
  - Generic variable handling for all agents and UI layers
  - Chat-specific variable handling only in the chat layer
  - Provide example of 'today' variable

Fixes eclipsesource/osweek-2024#46

Co-authored-by: Christian W. Damus <[email protected]>
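A sketch of the resolution scheme outlined above; only the 'today' example is taken from the commit, the service and resolver shapes are assumptions.

```ts
export interface ResolvedVariable {
    name: string;
    value: string;
}

export interface VariableResolver {
    // Higher priority wins when several resolvers can handle the same variable.
    readonly priority: number;
    canResolve(name: string, arg?: string): boolean;
    resolve(name: string, arg?: string): Promise<ResolvedVariable | undefined>;
}

export class VariableService {
    protected readonly resolvers: VariableResolver[] = [];

    registerResolver(resolver: VariableResolver): void {
        this.resolvers.push(resolver);
        this.resolvers.sort((a, b) => b.priority - a.priority);
    }

    async resolve(name: string, arg?: string): Promise<ResolvedVariable | undefined> {
        const resolver = this.resolvers.find(r => r.canResolve(name, arg));
        return resolver?.resolve(name, arg);
    }
}

// Example resolver for the 'today' variable mentioned above.
export const todayResolver: VariableResolver = {
    priority: 0,
    canResolve: name => name === 'today',
    resolve: async name => ({ name, value: new Date().toDateString() })
};
```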
Added a view for displaying all configured llamafiles. Configured llamafiles can be started and killed. One llamafile can be set as active, which is then used in the chat. The chat integration is currently hardcoded to use the active llamafile language model; this should be changed as soon as the chat integration has a dropdown to select the language model (#42). A follow-up will be created to describe the next steps.
- Extend `ChatServiceImpl` to extract the selected agent from the parsed request
- Extend `ChatModel` to ensure the request and response model keep a reference to their agent
- Update the rendering of the chat view to render the icon and label of the chat agent if set
- Update the agent definitions. Since we reuse the request parsing from VS Code, the following requirements have to be met:
  - No whitespace is allowed in the agent name.
  - The locations property needs to be set.
- Extract the type of `Agent.languageModelRequirements` into a dedicated definition
Allow selecting an Agent in the Chat
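A simplified illustration of extracting the mentioned agent from a request; the actual implementation reuses the request parser from VS Code rather than a regular expression, and all names below are placeholders.

```ts
export interface Agent {
    readonly id: string;
    // No whitespace allowed, so '@my-agent' always parses as a single token.
    readonly name: string;
}

export function extractAgent(request: string, agents: Agent[]): { agent?: Agent; prompt: string } {
    const match = request.match(/^@(\S+)\s*([\s\S]*)$/);
    if (!match) {
        return { prompt: request };
    }
    const agent = agents.find(a => a.name === match[1]);
    return agent ? { agent, prompt: match[2] } : { prompt: request };
}
```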
What it does
A very basic prototype of a Monaco `CodeActionProvider` for AI quick fixes. Addresses parts of eclipsesource/osweek-2024#29.
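A minimal sketch of what such a provider registration can look like; the "AI Quick Fix" title comes from this PR, while the `AiFixProvider` callback and its wiring to a language model are placeholder assumptions.

```ts
import * as monaco from '@theia/monaco-editor-core';

// Placeholder: the real implementation would ask a language model for a fix and turn
// its answer into a WorkspaceEdit for the marker's range.
export type AiFixProvider = (
    model: monaco.editor.ITextModel,
    marker: monaco.editor.IMarkerData
) => Promise<monaco.languages.WorkspaceEdit | undefined>;

export function registerAiQuickFixProvider(getAiFix: AiFixProvider): monaco.IDisposable {
    // '*' registers the provider for all languages.
    return monaco.languages.registerCodeActionProvider('*', {
        async provideCodeActions(model, _range, context, token) {
            const actions: monaco.languages.CodeAction[] = [];
            for (const marker of context.markers) {
                if (token.isCancellationRequested) {
                    break;
                }
                const edit = await getAiFix(model, marker);
                if (edit) {
                    actions.push({
                        title: 'AI Quick Fix',
                        kind: 'quickfix',
                        diagnostics: [marker],
                        edit
                    });
                }
            }
            return { actions, dispose: () => { /* nothing to dispose in this sketch */ } };
        }
    });
}
```

In practice the expensive language model call would likely be deferred until the action is actually applied (for example via a command or `resolveCodeAction`) instead of being computed eagerly for every marker.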
How to test
Manifest some kind of problem, identified by a problem marker in the Monaco editor, in a source file of your choice in a language of your choice, so long as that language is supported in the Monaco editor by a suitable language server.
Summon quick fixes on the problem using Ctrl+. (or Cmd+. on Mac). Select the "AI Quick Fix" option.
Bask in the glory of the corrected source code.
Follow-ups
Review checklist
Reminder for reviewers