
Proposal of Personalized LLM Agent based on KubeEdge-Ianvs Cloud-Edge Collaboration #138

Open · wants to merge 12 commits into main

Conversation

Frank-lilinjie
Contributor

Personalized LLM Agent based on KubeEdge-Ianvs Cloud-Edge Collaboration

@kubeedge-bot kubeedge-bot added the size/L (Denotes a PR that changes 100-499 lines, ignoring generated files.) label Aug 22, 2024
Collaborator

@MooreZheng MooreZheng left a comment


This PR is related to #128. As discussed in the routine meeting, it is no longer related to lifelong learning, so the current version is fine as it is.

Here are some suggestions:

  1. The personalized agent is designed by fine-tuning multiple agents with domain-specific data. This should be indicated in the detailed design.
  2. The demo section shows a user flow indicating how to use this technique, which is great. It would be appreciated if the user flow were enriched, since that could directly lead to further API designs.
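[Editor's illustration, not part of the reviewer's comment] To make the second suggestion concrete, the sketch below shows one possible shape such an API could take, assuming the per-domain fine-tuned edge agents with a cloud fallback described in the proposal. All names here (`EdgeAgent`, `PersonalizedAgentRouter`, `route`) are hypothetical and do not exist in Ianvs.

```python
# Hypothetical sketch, not an Ianvs API: one possible user flow for a
# personalized LLM agent with cloud-edge collaboration. Several agents are
# fine-tuned on domain-specific data and served at the edge; queries that no
# edge agent handles confidently fall back to a general cloud model.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class EdgeAgent:
    """An agent fine-tuned on one domain's data (e.g. medical, legal)."""
    domain: str
    generate: Callable[[str], str]      # wraps the fine-tuned edge model
    confidence: Callable[[str], float]  # how well this agent fits a query


class PersonalizedAgentRouter:
    """Routes each query to the best edge agent, or to the cloud model."""

    def __init__(self, edge_agents: Dict[str, EdgeAgent],
                 cloud_generate: Callable[[str], str],
                 threshold: float = 0.7):
        self.edge_agents = edge_agents
        self.cloud_generate = cloud_generate
        self.threshold = threshold

    def route(self, prompt: str) -> str:
        # Pick the edge agent that is most confident about this prompt.
        best = max(self.edge_agents.values(),
                   key=lambda agent: agent.confidence(prompt))
        if best.confidence(prompt) >= self.threshold:
            return best.generate(prompt)    # answered locally on the edge
        return self.cloud_generate(prompt)  # fall back to the cloud LLM
```

Under these assumptions, the user flow reduces to constructing the router with the available domain agents and calling `route(prompt)` for each user query; the threshold controls how aggressively traffic stays on the edge.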

@MooreZheng MooreZheng requested a review from hsj576 August 29, 2024 10:43
@MooreZheng MooreZheng added the documentation PR and kind/design (Categorizes issue or PR as related to design.) labels and removed the documentation PR label Aug 29, 2024
Member

@hsj576 hsj576 left a comment


/lgtm

@kubeedge-bot kubeedge-bot added the lgtm (Indicates that a PR is ready to be merged.) label Aug 30, 2024
@kubeedge-bot
Collaborator

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: hsj576
To complete the pull request process, please assign moorezheng after the PR has been reviewed.
You can assign the PR to them by writing /assign @moorezheng in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Signed-off-by: Frank-lilinjie <[email protected]>
@kubeedge-bot kubeedge-bot removed the lgtm (Indicates that a PR is ready to be merged.) label Sep 11, 2024
@Frank-lilinjie
Contributor Author

> This PR is related to #128. As discussed in the routine meeting, it is no longer related to lifelong learning, so the current version is fine as it is.
>
> Here are some suggestions:
>
>   1. The personalized agent is designed by fine-tuning multiple agents with domain-specific data. This should be indicated in the detailed design.
>   2. The demo section shows a user flow indicating how to use this technique, which is great. It would be appreciated if the user flow were enriched, since that could directly lead to further API designs.

I will write up your suggestions about the APIs in the README of the example case.

Collaborator

@MooreZheng MooreZheng left a comment


Overall it looks great! As discussed in the meeting, the comparison example can be improved to make the motivation more vivid.

Signed-off-by: Frank-lilinjie <[email protected]>
@MooreZheng
Collaborator

/lgtm

@kubeedge-bot kubeedge-bot added the lgtm (Indicates that a PR is ready to be merged.) label Sep 26, 2024
@kubeedge-bot kubeedge-bot removed the lgtm (Indicates that a PR is ready to be merged.) label Sep 26, 2024
@kubeedge-bot
Collaborator

New changes are detected. LGTM label has been removed.

Signed-off-by: QiaoZheyu <[email protected]>

Update README.md

Update CI badge

modify proposal of LLM Agent

Signed-off-by: Frank-lilinjie <[email protected]>

modify the demo image

Signed-off-by: Frank-lilinjie <[email protected]>
Collaborator

@MooreZheng MooreZheng left a comment


There is a CI issue that remains to be resolved:
```
Run pylint '/home/runner/work/ianvs/ianvs/core'
************* Module core.testcasecontroller.algorithm.paradigm.singletask_learning.singletask_learning_active_boost
core/testcasecontroller/algorithm/paradigm/singletask_learning/singletask_learning_active_boost.py:66:4: R0917: Too many positional arguments (7/5) (too-many-positional-arguments)
************* Module core.testenvmanager.dataset.dataset
core/testenvmanager/dataset/dataset.py:119:4: R0917: Too many positional arguments (8/5) (too-many-positional-arguments)
core/testenvmanager/dataset/dataset.py:206:4: R0917: Too many positional arguments (6/5) (too-many-positional-arguments)
core/testenvmanager/dataset/dataset.py:213:4: R0917: Too many positional arguments (7/5) (too-many-positional-arguments)
core/testenvmanager/dataset/dataset.py:246:4: R0917: Too many positional arguments (7/5) (too-many-positional-arguments)
core/testenvmanager/dataset/dataset.py:285:4: R0917: Too many positional arguments (7/5) (too-many-positional-arguments)
core/testenvmanager/dataset/dataset.py:329:4: R0917: Too many positional arguments (7/5) (too-many-positional-arguments)
core/testenvmanager/dataset/dataset.py:368:4: R0917: Too many positional arguments (6/5) (too-many-positional-arguments)

Your code has been rated at 9.95/10

Error: Process completed with exit code 8.
```
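[Editor's note, not from the thread] Pylint's R0917 warning can usually be cleared by making the trailing parameters keyword-only, by raising `max-positional-arguments` in `.pylintrc`, or by suppressing it per function with `# pylint: disable=too-many-positional-arguments`. The function below is an illustrative stand-in with made-up parameter names, not the real signature from `dataset.py`.

```python
# Illustrative sketch only: parameter names are invented and do not match
# core/testenvmanager/dataset/dataset.py. It shows the general fix for
# pylint R0917 (too-many-positional-arguments): keep a couple of positional
# parameters and make the rest keyword-only with a bare `*`.

# Before (would trigger R0917, 7 positional parameters > default limit of 5):
# def split_dataset(data_url, data_format, ratio, method, types, out_dir, times):
#     ...

def split_dataset(data_url, data_format, *,
                  ratio=0.8, method="index", types=("train", "eval"),
                  out_dir=".", times=1):
    """Split a dataset file into train/eval parts (stub for illustration)."""
    # Callers must now name the trailing arguments, e.g.:
    #   split_dataset("data.csv", "csv", ratio=0.7, out_dir="/tmp")
    return {"url": data_url, "format": data_format, "ratio": ratio,
            "method": method, "types": types, "out_dir": out_dir,
            "times": times}
```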

Labels
kind/design (Categorizes issue or PR as related to design.) · size/L (Denotes a PR that changes 100-499 lines, ignoring generated files.)

4 participants