feat: bot message continuation #5440

Open
skymkmk wants to merge 2 commits into main

Conversation


@skymkmk skymkmk commented Sep 15, 2024

💻 Change Type

  • feat
  • fix
  • refactor
  • perf
  • style
  • test
  • docs
  • ci
  • chore
  • build

🔀 Description of Change

Adds a continue-generation option for assistant messages that stopped because they hit the max_token limit (currently only available for the OpenAI API) or were stopped manually.

📝 Additional Information

#5439

For GPT and Claude, when the last message in the request has the assistant role, the model will try to continue that reply. (However, since I do not have Claude API access to test the finish reason, only the OpenAI API part is implemented.)
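
To illustrate the mechanism, here is a minimal sketch of how such a continuation request might be assembled against the public OpenAI chat-completions endpoint. The model name and helper function are assumptions for the example, not code from this PR:

// Hypothetical illustration: resend the history with the partial assistant
// reply as the last message, so the model picks up where it stopped.
type Role = "system" | "user" | "assistant";

interface Message {
  role: Role;
  content: string;
}

async function continueAssistantMessage(
  apiKey: string,
  history: Message[], // must already end with the partial assistant message
): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // assumed model name for the example
      messages: history,    // last element has role "assistant"
    }),
  });
  const data = await res.json();
  // The returned text is the continuation, to be appended to the partial message.
  return data.choices[0].message.content as string;
}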

Because a network failure can corrupt the message content, which in turn very likely causes the model to regenerate the whole message, the Continue button is not offered based on isError. That should be added later, once message content and error messages are stored separately.

Summary by CodeRabbit

Release Notes

  • New Features

    • Enhanced chat message management with new options to stop and continue messages.
    • Added support for tracking message completion reasons.
    • Introduced new methods for retrieving pending message IDs and continuing bot messages.
  • Localization

    • Added translations for "Continue" in multiple languages, improving accessibility for users.
  • Bug Fixes

    • Improved handling of chat streaming responses, allowing for more detailed updates and completion reasons.

These updates enhance user experience, providing greater control and flexibility within the chat interface.


vercel bot commented Sep 15, 2024

@skymkmk is attempting to deploy a commit to the NextChat Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai bot commented Sep 15, 2024

Walkthrough

This pull request introduces several modifications across multiple files, primarily enhancing the chat functionality. Key changes include updates to the ChatOptions interface to accommodate a new optional finishedReason parameter in the onFinish callback, the introduction of new methods for managing message states in the chat controller, and adjustments to localization files to include translations for the term "Continue" in various languages. Additionally, changes to the return types of certain API methods improve the structure of the data returned.
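
A rough sketch of the widened callback contract described above; only onFinish and finishedReason come from the PR, the other fields and values are illustrative:

// Illustrative shape: the second onFinish argument is optional, so existing
// callers that pass a one-argument handler keep compiling unchanged.
interface ChatOptions {
  onUpdate?: (message: string, chunk: string) => void;
  onFinish: (message: string, finishedReason?: string) => void;
  onError?: (err: Error) => void;
}

const example: ChatOptions = {
  onFinish(message, finishedReason) {
    // "length" (token limit) is the case the continue feature targets.
    console.log(message, finishedReason ?? "no reason reported");
  },
};

example.onFinish("hello", "stop");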

Changes

File | Change Summary
app/client/api.ts | Updated onFinish callback in ChatOptions to include an optional finishedReason parameter.
app/client/controller.ts | Added getPendingMessageId method to ChatControllerPool for retrieving pending message IDs.
app/client/platforms/anthropic.ts | Modified return type of a function in ClaudeApi from string to an object containing delta.
app/client/platforms/moonshot.ts | Modified return type of a function in MoonshotApi from string to an object containing delta.
app/client/platforms/openai.ts | Added finish_reason to the API response structure and updated return type to include finishReason.
app/components/chat.tsx | Enhanced stopAll and added an onContinue function for managing message states in chat.
app/locales/* | Added translations for "Continue" in various languages (e.g., Arabic, Spanish, French).
app/store/chat.ts | Introduced finishedReason in ChatMessage, modified onFinish, and added onContinueBotMessage.
app/utils/chat.ts | Updated stream function to handle new return types and include finishedReason in options.onFinish.

Possibly related PRs

  • feat: summarize model customization #5426: This PR modifies the onFinish method in useChatStore to accept an optional finishedReason parameter, which directly relates to the changes made in the main PR to the onFinish callback in the ChatOptions interface.

🐰 In the meadow where the chatbots play,
New features hop in, brightening the day.
With reasons to finish, and messages to send,
Localization blooms, making chats transcend.
So let’s celebrate with a joyful cheer,
For every new language brings friends near! 🌼



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Outside diff range and nitpick comments (2)
app/client/controller.ts (1)

29-31: LGTM! Consider adding error handling for unexpected key formats.

The getPendingMessageId function looks good and provides a useful way to access pending message IDs. The implementation is straightforward and relies on the key format being sessionId,messageId.

Consider adding error handling for cases where the key format is unexpected. This could help catch potential bugs early and make the function more robust.

Here's an example of how you could add error handling:

  getPendingMessageId() {
-   return Object.keys(this.controllers).map((v) => v.split(",").at(-1));
+   return Object.keys(this.controllers).map((v) => {
+     const parts = v.split(",");
+     if (parts.length !== 2) {
+       console.warn(`Unexpected key format: ${v}`);
+       return null;
+     }
+     return parts.at(-1);
+   }).filter((v): v is string => v !== null);
  },

This will log a warning for unexpected key formats and return null for those cases. The filter at the end removes any null values from the resulting array.
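
For context, a self-contained sketch of how the sessionId,messageId key convention and this getter fit together; the pool below is reduced to just what the example needs and is not the repository's actual implementation:

// Minimal stand-in for the controller pool, assuming keys of the form
// `${sessionId},${messageId}` as described above.
const ChatControllerPool = {
  controllers: {} as Record<string, AbortController>,

  addController(sessionId: string, messageId: string, controller: AbortController) {
    this.controllers[`${sessionId},${messageId}`] = controller;
  },

  getPendingMessageId(): string[] {
    return Object.keys(this.controllers)
      .map((key) => key.split(",").at(-1))
      .filter((id): id is string => id !== undefined);
  },
};

ChatControllerPool.addController("session-1", "msg-42", new AbortController());
console.log(ChatControllerPool.getPendingMessageId()); // ["msg-42"]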

app/store/chat.ts (1)

477-487: The tool update logic looks fine, but let's address the static analysis hint.

The code segment correctly updates the corresponding tool in the botMessage.tools array with the latest information provided by the tool parameter. It iterates over the tools array, finds the matching tool based on the id, and updates its properties using the spread operator.

However, the static analysis tool suggests that the assignment should not be in an expression. To improve clarity and avoid potential confusion, consider extracting the assignment into a separate statement:

botMessage?.tools?.forEach((t, i, tools) => {
  if (tool.id == t.id) {
-   tools[i] = { ...tool };
+   const updatedTool = { ...tool };
+   tools[i] = updatedTool;
  }
});

This change makes the assignment explicit and separates it from the expression.

Tools
Biome

[error] 477-477: The assignment should not be in an expression.

The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.

(lint/suspicious/noAssignInExpressions)

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 9e5d92d and 429e8ab.

Files ignored due to path filters (1)
  • app/icons/continue.svg is excluded by !**/*.svg
Files selected for processing (27)
  • app/client/api.ts (1 hunks)
  • app/client/controller.ts (1 hunks)
  • app/client/platforms/anthropic.ts (1 hunks)
  • app/client/platforms/moonshot.ts (1 hunks)
  • app/client/platforms/openai.ts (2 hunks)
  • app/components/chat.tsx (5 hunks)
  • app/locales/ar.ts (1 hunks)
  • app/locales/bn.ts (1 hunks)
  • app/locales/cn.ts (1 hunks)
  • app/locales/cs.ts (1 hunks)
  • app/locales/de.ts (1 hunks)
  • app/locales/en.ts (1 hunks)
  • app/locales/es.ts (1 hunks)
  • app/locales/fr.ts (1 hunks)
  • app/locales/id.ts (1 hunks)
  • app/locales/it.ts (1 hunks)
  • app/locales/jp.ts (1 hunks)
  • app/locales/ko.ts (1 hunks)
  • app/locales/no.ts (1 hunks)
  • app/locales/pt.ts (1 hunks)
  • app/locales/ru.ts (1 hunks)
  • app/locales/sk.ts (1 hunks)
  • app/locales/tr.ts (1 hunks)
  • app/locales/tw.ts (1 hunks)
  • app/locales/vi.ts (1 hunks)
  • app/store/chat.ts (4 hunks)
  • app/utils/chat.ts (4 hunks)
Files skipped from review due to trivial changes (16)
  • app/locales/ar.ts
  • app/locales/bn.ts
  • app/locales/cn.ts
  • app/locales/cs.ts
  • app/locales/de.ts
  • app/locales/en.ts
  • app/locales/es.ts
  • app/locales/fr.ts
  • app/locales/it.ts
  • app/locales/jp.ts
  • app/locales/ko.ts
  • app/locales/no.ts
  • app/locales/pt.ts
  • app/locales/ru.ts
  • app/locales/tr.ts
  • app/locales/vi.ts
Additional context used
Biome
app/store/chat.ts

[error] 477-477: The assignment should not be in an expression.

The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.

(lint/suspicious/noAssignInExpressions)

app/components/chat.tsx

[error] 470-472: The assignment should not be in an expression.

The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.

(lint/suspicious/noAssignInExpressions)


[error] 1060-1063: The assignment should not be in an expression.

The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.

(lint/suspicious/noAssignInExpressions)


[error] 1193-1196: The assignment should not be in an expression.

The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.

(lint/suspicious/noAssignInExpressions)

Additional comments not posted (19)
app/client/platforms/moonshot.ts (1)

166-166: Review and harmonize parseSSE implementations across files

The parseSSE function in app/utils/chat.ts already returns an object with both delta and finishReason properties. This implementation is more flexible than the proposed change to { delta: string }. The usage of parseSSE in this file also handles both string and object return types.

However, there seem to be multiple implementations of parseSSE across different files:

  • app/utils/chat.ts
  • app/client/platforms/anthropic.ts
  • app/client/platforms/openai.ts
  • app/client/platforms/moonshot.ts

To ensure consistency and prevent potential issues:

  1. Review all parseSSE implementations to ensure they return the same type.
  2. Consider standardizing on the more flexible return type: { delta?: string; finishReason?: string }, as sketched after this list.
  3. Verify that all callers of parseSSE handle the object return type correctly.
  4. Maintain the flexible type checking in app/utils/chat.ts as it provides better backwards compatibility.
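
A hedged sketch of what that standardized shape and a tolerant caller could look like (illustrative names, not the repository's actual code):

// Assumed unified return type for all platform-specific SSE parsers.
interface SSEChunk {
  delta?: string;
  finishReason?: string;
}

// A caller can accept either the legacy string or the new object without breaking.
function handleChunk(chunk: string | SSEChunk): { text: string; reason?: string } {
  if (typeof chunk === "string") {
    return { text: chunk }; // legacy parsers
  }
  return { text: chunk.delta ?? "", reason: chunk.finishReason };
}

console.log(handleChunk("hello"));                               // { text: "hello" }
console.log(handleChunk({ delta: "hi", finishReason: "stop" })); // { text: "hi", reason: "stop" }
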
Analysis chain

Verify the impact of the return type change.

The change in the return type of the parseSSE function from string to { delta: string } is logically correct. However, it's important to ensure that all the callers of this function are updated to handle the new object format instead of expecting a direct string value.

Run the following script to verify the usage of the parseSSE function:

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify all callers of the `parseSSE` function are updated to handle the new return type.

# Test: Search for the function usage. Expect: Callers should access the `delta` property of the returned object.
rg --type typescript -A 5 $'parseSSE'

Length of output: 71


Script:

#!/bin/bash
# Description: Verify all callers of the `parseSSE` function are updated to handle the new return type.

# Search for .ts files containing 'parseSSE' and show 5 lines after the match
echo "Searching with ripgrep:"
rg -t ts -A 5 'parseSSE'

echo -e "\nSearching with fd and grep:"
fd -e ts -x grep -n -A 5 'parseSSE' {}

echo -e "\nSearching with ast-grep:"
ast-grep --lang typescript --pattern 'parseSSE($_)'

Length of output: 4249

app/client/api.ts (1)

62-62: LGTM!

The addition of the optional finishedReason parameter to the onFinish callback enhances the flexibility and detail in handling chat completion events. This change is backwards compatible, as the new parameter is optional, ensuring existing implementations of the onFinish callback will continue to work without modification.

Developers can choose to leverage the new parameter to implement more nuanced logic based on the reason for the completion, such as handling user actions, errors, or natural conclusions of the conversation differently.
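
To make the backwards-compatibility point concrete, a minimal sketch; only the names onFinish and finishedReason are taken from the PR, the handlers are assumed:

type OnFinish = (message: string, finishedReason?: string) => void;

// An existing single-argument callback still satisfies the widened type,
// because TypeScript allows handlers to ignore trailing parameters.
const legacyHandler: OnFinish = (message) => {
  console.log("done:", message);
};

// A new handler can branch on why generation ended.
const newHandler: OnFinish = (message, finishedReason) => {
  const hitTokenLimit = finishedReason === "length";
  console.log("done:", message, hitTokenLimit ? "(truncated)" : "");
};

legacyHandler("partial answer");
newHandler("partial answer", "length");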

app/utils/chat.ts (4)

172-172: LGTM!

Explicitly typing the options parameter as ChatOptions improves code readability, maintainability, and type safety. It helps catch type-related issues at compile-time and provides better IDE support for the available options.


177-177: LGTM!

The introduction of the finishedReason variable aligns with the changes made to the parseSSE function return type. It allows for storing and utilizing the reason for the stream completion, providing more context about the completion status.


339-343: LGTM!

The updated handling of the chunk returned from parseSSE aligns with the changes made to the function's return type. By checking the type of chunk and handling the delta and finishReason properties accordingly, the code ensures that the message content is accumulated correctly and the completion reason is stored for later use.
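
A reduced sketch of that accumulation logic; the chunk shape mirrors the review discussion, everything else is assumed:

interface SSEChunk {
  delta?: string;
  finishReason?: string;
}

// Accumulate streamed text and remember why the stream ended.
function consumeStream(chunks: Array<string | SSEChunk>): {
  text: string;
  finishedReason?: string;
} {
  let text = "";
  let finishedReason: string | undefined;

  for (const chunk of chunks) {
    if (typeof chunk === "string") {
      text += chunk; // legacy string chunks
    } else {
      if (chunk.delta) text += chunk.delta;
      if (chunk.finishReason) finishedReason = chunk.finishReason;
    }
  }
  return { text, finishedReason };
}

console.log(consumeStream([{ delta: "Hel" }, { delta: "lo" }, { finishReason: "stop" }]));
// { text: "Hello", finishedReason: "stop" }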


163-166: Verify the parseSSE function usage across the codebase.

The updated return type of the parseSSE function provides more granular information about the server-sent events (SSE) response, which is a positive change. It allows for better handling of the streaming response and provides more context about the completion status.

Please ensure that all the code segments consuming the response from parseSSE have been updated to handle the new return type. You can use the following script to verify the usage:

Verification successful

parseSSE function usage verified and updated across the codebase

The verification process confirms that all occurrences of the parseSSE function have been updated to use the new return type { delta?: string; finishReason?: string }. No instances of the old usage pattern (returning just a string) were found.

However, multiple implementations or usages of parseSSE were detected in different contexts. While this doesn't affect the correctness of the current changes, it might be worth considering:

  1. Consolidating these implementations if they serve the same purpose to improve maintainability.
  2. Clearly documenting the differences if these are intentionally separate implementations for different use cases.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify all usages of the `parseSSE` function have been updated to handle the new return type.

# Test: Search for the function usage. Expect: No occurrences of the old usage.
rg --type typescript -A 5 $'parseSSE\\(.*\\):\\s*string'

Length of output: 89


Script:

#!/bin/bash
# Description: Verify all usages of the `parseSSE` function have been updated to handle the new return type.

# Test 1: Search for the old function usage in .ts files
echo "Searching for old parseSSE usage in .ts files:"
fd -e ts -x rg -A 5 'parseSSE\(.*\):\s*string' {}

# Test 2: Search for the old function usage in .tsx files
echo "Searching for old parseSSE usage in .tsx files:"
fd -e tsx -x rg -A 5 'parseSSE\(.*\):\s*string' {}

# Test 3: Search for all occurrences of parseSSE to verify its usage
echo "Searching for all parseSSE occurrences:"
fd -e ts -e tsx -x rg -A 5 'parseSSE' {}

Length of output: 1840

app/client/platforms/anthropic.ts (1)

265-265: LGTM! Verify the impact on the consuming code.

The change in the return value format from a string to an object allows for future extensibility. However, ensure that the code consuming the output of this function is updated to handle the new format.

To verify the impact on the consuming code, run the following script:

Verification successful

Verification successful: The change is safe and properly handled.

The usage of parseSSE in app/utils/chat.ts correctly handles both the old (string) and new (object) return value formats. The code checks for both string and object with delta property, ensuring backward compatibility while supporting the new format. There are no other active usages of parseSSE in the codebase that need to be updated.

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of the `parseSSE` function and its return value.

# Test: Search for the usage of the `parseSSE` function. Expect: The consuming code handles the new return value format.
rg --type typescript -A 5 $'parseSSE'

Length of output: 71


Script:

#!/bin/bash
# Description: Verify the existence of TypeScript files and search for parseSSE usage

# Check for TypeScript files
echo "Checking for TypeScript files:"
fd -e ts

# Search for parseSSE definition and usage
echo -e "\nSearching for parseSSE definition and usage:"
rg -n "parseSSE" --type-add 'ts:*.ts' --type ts

Length of output: 2770


Script:

#!/bin/bash
# Description: Examine parseSSE usage and search for additional active usages

# Check the usage of parseSSE in app/utils/chat.ts
echo "Examining parseSSE usage in app/utils/chat.ts:"
sed -n '/const chunk = parseSSE/,+5p' app/utils/chat.ts

# Search for any other active (non-commented) usages of parseSSE
echo -e "\nSearching for other active parseSSE usages:"
rg -n "parseSSE\(" --type-add 'ts:*.ts' --type ts | grep -v "\/\/"

Length of output: 677

app/locales/tw.ts (1)

48-48: LGTM!

The addition of the "Continue": "繼續" key-value pair in the Messages object is a valid and helpful enhancement to the Traditional Chinese localization.

app/client/platforms/openai.ts (2)

269-269: LGTM!

The addition of the optional finish_reason property to the choices array in the response structure is a valid enhancement. It provides more detailed information about the completion status of the API response, which can be useful for handling different response scenarios, such as message continuation.


290-293: Looks good!

The modification to the return value of the method that processes the API response is a logical change. Returning an object containing both the delta (content) and the finishReason properties provides a more structured way to access the response data. This change aligns with the addition of the finish_reason property to the response structure.
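
For reference, a hedged sketch of turning one OpenAI streaming chunk into that { delta, finishReason } shape. The choices/delta/finish_reason field names follow OpenAI's documented streaming format; the function itself is illustrative:

// Shape of one streamed chat-completion chunk (only the fields used here).
interface OpenAIStreamChunk {
  choices: Array<{
    delta?: { content?: string };
    finish_reason?: string | null;
  }>;
}

// Turn one SSE "data:" payload into the structured result the review describes.
function parseOpenAIChunk(json: string): { delta: string; finishReason?: string } {
  const chunk = JSON.parse(json) as OpenAIStreamChunk;
  const choice = chunk.choices[0];
  return {
    delta: choice?.delta?.content ?? "",
    finishReason: choice?.finish_reason ?? undefined,
  };
}

console.log(parseOpenAIChunk('{"choices":[{"delta":{"content":"Hi"},"finish_reason":null}]}'));
// { delta: "Hi", finishReason: undefined }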

app/locales/sk.ts (1)

50-50: LGTM!

The addition of the Slovak translation for "Continue" is correctly formatted and enhances the user interface for Slovak-speaking users.

app/locales/id.ts (1)

48-48: LGTM!

The addition of the Continue key and its Indonesian translation "Lanjutkan" is consistent with the existing localization file structure and style.

app/store/chat.ts (5)

49-49: LGTM!

The new optional property finishedReason is a clear and appropriate addition to the ChatMessage interface. It won't break existing code.


377-380: Looks good!

The onFinish method is updated appropriately to handle the new optional finishedReason parameter and store it in the botMessage object when provided.


435-521: Great work on implementing the message continuation feature!

The onContinueBotMessage method is well-structured and handles the continuation process effectively. It retrieves the necessary messages, finds the corresponding botMessage, and makes an API request with appropriate callbacks to handle updates, completion, tool usage, errors, and controller management.

The method follows a logical flow and includes error handling for cases when the botMessage is not found.

Overall, the implementation looks solid and adds a valuable feature to the chat functionality.
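
As a rough, non-authoritative sketch of the flow the review describes; the store shape, client interface, and helper names below are stand-ins rather than the PR's actual identifiers:

interface ChatMessage {
  id: string;
  role: "user" | "assistant";
  content: string;
  streaming?: boolean;
  finishedReason?: string;
}

interface LLMClient {
  chat(opts: {
    messages: ChatMessage[];
    onUpdate: (partial: string) => void;
    onFinish: (full: string, finishedReason?: string) => void;
    onError: (err: Error) => void;
  }): void;
}

// Continue a bot message: resend the history ending with the partial assistant
// reply, then append whatever the model streams back to the existing content.
function continueBotMessage(messages: ChatMessage[], messageId: string, api: LLMClient) {
  const botMessage = messages.find((m) => m.id === messageId && m.role === "assistant");
  if (!botMessage) {
    console.error("[Chat] could not find the bot message to continue:", messageId);
    return;
  }

  const history = messages.slice(0, messages.indexOf(botMessage) + 1);
  const base = botMessage.content; // keep the partial text so updates append to it
  botMessage.streaming = true;

  api.chat({
    messages: history,
    onUpdate(partial) {
      botMessage.content = base + partial;
    },
    onFinish(full, finishedReason) {
      botMessage.content = base + full;
      botMessage.streaming = false;
      botMessage.finishedReason = finishedReason;
    },
    onError(err) {
      botMessage.streaming = false;
      console.error("[Chat] continuation failed:", err);
    },
  });
}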

Tools
Biome

[error] 477-477: The assignment should not be in an expression.

The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.

(lint/suspicious/noAssignInExpressions)


535-545: The updates to getMessagesWithMemory look good!

The changes allow the method to accept an optional messageID parameter and retrieve messages based on the provided ID. It finds the index of the message with the matching ID and sets the appropriate starting point for message retrieval.

If the message is not found, it handles the case by considering all messages. The calculation of totalMessageCount ensures the correct number of messages are retrieved based on the messageIdx and the length of session.messages.

These updates enhance the flexibility of the getMessagesWithMemory method and support the message continuation feature.
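
A simplified sketch of the slicing behaviour described above. This is illustrative only; the real method presumably also handles session memory, which is omitted here:

interface ChatMessage {
  id: string;
  content: string;
}

// Return at most `maxCount` messages, optionally cutting the history off at
// (and including) a specific message id; unknown ids fall back to all messages.
function getRecentMessages(messages: ChatMessage[], maxCount: number, messageId?: string): ChatMessage[] {
  const messageIdx = messageId ? messages.findIndex((m) => m.id === messageId) : -1;
  const end = messageIdx >= 0 ? messageIdx + 1 : messages.length;
  return messages.slice(Math.max(0, end - maxCount), end);
}

const history = [
  { id: "a", content: "hi" },
  { id: "b", content: "hello" },
  { id: "c", content: "please continue" },
];
console.log(getRecentMessages(history, 2, "b").map((m) => m.id)); // ["a", "b"]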


492-511: The error handling logic in the onError callback looks solid.

The code segment appropriately handles errors that occur during the continuation process. It checks if the error is due to an aborted request by looking for the "aborted" keyword in the error message.

It appends the error details to the botMessage.content property to inform the user about the error. Setting botMessage.streaming to false correctly indicates that the streaming has stopped, and setting botMessage.isError based on the nature of the error provides additional information.

Updating the session.messages array with the modified botMessage ensures that the UI reflects the error state. Removing the associated controller using ChatControllerPool.remove is a good practice to clean up resources and prevent further actions on the aborted or failed continuation.

Logging the error to the console is helpful for debugging and monitoring purposes.

Overall, the error handling logic is comprehensive and handles different scenarios appropriately.

app/components/chat.tsx (2)

465-474: LGTM!

The stopAll function correctly stops all pending messages and updates the session state by marking the stopped messages as "aborted".

Regarding the static analysis hint about the assignment in the expression at lines 470-472:

This is a common and idiomatic JavaScript pattern for mapping over an array and conditionally updating its elements. It does not introduce any bugs or confusion in this context; the static analysis tool has raised a false positive here, which can be safely ignored.
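
For comparison, a sketch of an equivalent immutable variant of that mapping. Field values and function names are assumptions, and the abort-controller handling is omitted; the PR itself assigns inside the map callback, which is what the linter flags:

interface ChatMessage {
  id: string;
  content: string;
  streaming?: boolean;
  finishedReason?: string;
}

// Stop everything that is still streaming: mark each pending message as
// aborted and clear its streaming flag, without mutating the originals.
function stopAll(messages: ChatMessage[], pendingIds: string[]): ChatMessage[] {
  return messages.map((m) =>
    pendingIds.includes(m.id) && m.streaming
      ? { ...m, streaming: false, finishedReason: "aborted" } // assumed field values
      : m,
  );
}

const updated = stopAll([{ id: "x", content: "partial...", streaming: true }], ["x"]);
console.log(updated[0].finishedReason); // "aborted"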

Tools
Biome

[error] 470-472: The assignment should not be in an expression.

The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.

(lint/suspicious/noAssignInExpressions)


1190-1200: Looks good!

The onContinue function properly updates the session state to mark the message as streaming and triggers the continuation of the bot's response by calling chatStore.onContinueBotMessage with the message ID.

Regarding the static analysis hint about the assignment in the expression at lines 1193-1196:

As mentioned in the previous comment, this is a common and acceptable pattern in JavaScript for conditionally updating array elements during mapping. The static analysis tool has incorrectly flagged this as an issue. It's safe to disregard this hint.

Tools
Biome

[error] 1193-1196: The assignment should not be in an expression.

The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.

(lint/suspicious/noAssignInExpressions)


Your build has completed!

Preview deployment
