Lite #3
base: main
Conversation
# Conflicts: # tests/core/default/test_steps.py
…nting-for-python-code Format uploading files with black linting tool
Correct path for debug logs
Making black a main dependency
Small fixes to cli interface of gpte and bench applications
- Avoid installing packages without version
- Define permissions for workflows with external actions
- Avoid running CI related actions when no source code has changed
- Stop running workflows when there is a newer commit in PR
Fix potential github action smells
…o installed packages
feat(README.md): enhance Docker setup instructions and add debugging section
fix(entrypoint.sh): change shebang from sh to bash
Hotfix/docker image
…ting-to-filestore extract linting process from file_selector
Preparation for our new release v0.3.1
Update pyproject.toml
Reviewer's Guide

This pull request overhauls the Docker setup and documentation, embeds linting support in the file selector workflow, enriches the CLI and benchmarking interfaces with new options and YAML exports, updates dependencies and entry points, fortifies core utilities (token usage, BenchConfig, prompt formatting, diff validation), refactors improve-mode error handling, integrates a Black-based linting utility, refines CI workflows, introduces a comprehensive feature-CLI module, and aligns the test suite with these enhancements.

Sequence diagram for file selection and linting in improve mode

```mermaid
sequenceDiagram
actor User
participant CLI as CLI
participant FileSelector
participant FileStore
participant Linting
User->>CLI: Start improve mode
CLI->>FileSelector: ask_for_files()
FileSelector-->>CLI: (FilesDict, is_linting)
CLI->>FileStore: linting(FilesDict) (if is_linting)
FileStore->>Linting: lint_files(FilesDict)
Linting-->>FileStore: FilesDict (linted)
FileStore-->>CLI: FilesDict (linted)
    Note over CLI: Continue with improved files
```
Class diagram for new Feature CLI module

```mermaid
classDiagram
class Feature {
+__init__(project_path, repository)
+clear_feature()
+clear_task()
+get_description() str
+set_description(feature_description)
+has_description() bool
+get_progress() dict
+update_progress(task)
+set_task(task)
+get_task() str
+has_task() bool
+complete_task()
+open_feature_in_editor()
+open_task_in_editor()
}
class Task {
+__init__(project_path)
+delete()
+set_task(task)
+get_task() str
+open_task_in_editor()
}
class FileSelection {
+included_files: List[str]
+excluded_files: List[str]
}
class Repository {
+__init__(repo_path)
+get_tracked_files() List[str]
+find_most_recent_merge_base()
+get_feature_branch_diff()
+get_unstaged_changes()
+get_git_context()
+create_branch(branch_name)
+stage_all_changes()
+undo_unstaged_changes()
}
class GitContext {
+commits: List[Commit]
+branch_changes: str
+staged_changes: str
+unstaged_changes: str
+tracked_files: List[str]
}
class Commit {
+description: str
+diff: str
}
class FileSelector {
+__init__(project_path, repository, name)
+set_to_yaml(file_selection)
+update_yaml_from_tracked_files()
+get_from_yaml() FileSelection
+get_pretty_selected_from_yaml() str
+open_yaml_in_editor()
+get_included_as_file_repository()
}
class Files {
+__init__(project_path, selected_files)
+write_to_disk(files)
}
Feature --> Repository
Feature --> FileSelection
Task --> FileSelection
Repository --> GitContext
GitContext --> Commit
FileSelector --> FileSelection
Files --> FileSelection
```

Class diagram for new Linting utility

```mermaid
classDiagram
class Linting {
+__init__()
+lint_python(content, config)
+lint_files(files_dict, config) FilesDict
}
Linting ..> FilesDict : uses
```
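The diagram above only shows the interface. For orientation, here is a minimal sketch of what a Black-based implementation of that interface could look like; the method bodies, the `linters` registry contents, and the fallback on unparsable input are assumptions, not the PR's actual code.

```python
import black


class Linting:
    def __init__(self):
        # Map file extensions to linter callables; only Python is handled here.
        self.linters = {".py": self.lint_python}

    def lint_python(self, content, config):
        # Format with Black; if the file does not parse, keep it unchanged.
        try:
            return black.format_str(content, mode=black.Mode(**config))
        except black.InvalidInput:
            return content

    def lint_files(self, files_dict, config=None):
        config = config or {}
        for filename, content in files_dict.items():
            extension = filename[filename.rfind(".") :].lower()
            if extension in self.linters:
                files_dict[filename] = self.linters[extension](content, config)
        return files_dict
```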
Class diagram for updated BenchConfig and benchmark types

```mermaid
classDiagram
class BenchConfig {
+apps: AppsConfig
+mbpp: MbppConfig
+gptme: GptmeConfig
+from_toml(config_file)
+from_dict(config_dict)
+recursive_resolve(data_dict)
+to_dict()
}
class AppsConfig {
+examples_per_problem: int
}
class TaskResult {
+success_rate() float
+to_dict() dict
}
BenchConfig --> AppsConfig
BenchConfig --> GptmeConfig
BenchConfig --> MbppConfig
```

Class diagram for Prompt class update

```mermaid
classDiagram
class Prompt {
+__init__(text, image_urls, entrypoint_prompt, prefix)
+to_langchain_content() Dict[str, str]
}
```

File-Level Changes
Hey @TheoMcCabe - I've reviewed your changes and they look great!
Blocking issues:
- By not specifying a USER, a program in the container may run as 'root'. This is a security hazard. If an attacker can control a process running as root, they may have control over the container. Ensure that the last USER in a Dockerfile is a USER other than 'root'. (link)
- Detected subprocess function 'run' without a static string. If this data can be controlled by a malicious actor, it may be an instance of command injection. Audit the use of this call to ensure it is not controllable by an external resource. You may consider using 'shlex.escape()'. (link)
- Detected subprocess function 'run' with user controlled data. A malicious actor could leverage this to perform command injection. You may consider using 'shlex.quote()'. (link)
- Found dynamic content when spawning a process. This is dangerous if external data can reach this function call because it allows a malicious actor to execute commands. Ensure no external data reaches here. (link)
- Found dynamic content when spawning a process. This is dangerous if external data can reach this function call because it allows a malicious actor to execute commands. Ensure no external data reaches here. (link)
- Found dynamic content when spawning a process. This is dangerous if external data can reach this function call because it allows a malicious actor to execute commands. Ensure no external data reaches here. (link)
- The Python documentation recommends using `defusedxml` instead of `xml` because the native Python `xml` library is vulnerable to XML External Entity (XXE) attacks. These attacks can leak confidential data and "XML bombs" can cause denial of service. (link)
- Found dynamic content when spawning a process. This is dangerous if external data can reach this function call because it allows a malicious actor to execute commands. Ensure no external data reaches here. (link)
Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments
### Comment 1
<location> `gpt_engineer/applications/cli/main.py:456` </location>
<code_context>
files = FileStore(project_path)
if not no_execution:
if improve_mode:
- files_dict_before = FileSelector(project_path).ask_for_files()
+ # lint the code
+ if is_linting:
</code_context>
<issue_to_address>
Potential undefined variable: is_linting is used before assignment.
`is_linting` is not defined in this scope, which will cause a NameError. Please initialize it before use.
</issue_to_address>
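One possible shape for the fix, assuming `ask_for_files()` returns the `(FilesDict, is_linting)` pair shown in the sequence diagram above; the names and call sites below are placeholders taken from the diagram, not the PR's actual code:

```python
# Hedged sketch: bind is_linting from the selector so it is always defined
# before the lint step checks it.
files_dict_before, is_linting = FileSelector(project_path).ask_for_files()
if is_linting:
    files_dict_before = files.linting(files_dict_before)
```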
### Comment 2
<location> `gpt_engineer/core/linting.py:51` </location>
<code_context>
+ config = {}
+
+ for filename, content in files_dict.items():
+ extension = filename[
+ filename.rfind(".") :
+ ].lower() # Ensure case insensitivity
+ if extension in self.linters:
+ original_content = content
</code_context>
<issue_to_address>
Extension extraction may fail for files without an extension.
If a filename lacks a dot, this logic returns the whole filename as the extension, which may not match any linter. Please add handling for files without extensions.
</issue_to_address>
<suggested_fix>
<<<<<<< SEARCH
for filename, content in files_dict.items():
extension = filename[
filename.rfind(".") :
].lower() # Ensure case insensitivity
if extension in self.linters:
original_content = content
=======
for filename, content in files_dict.items():
if "." in filename:
extension = filename[filename.rfind("."):].lower() # Ensure case insensitivity
else:
extension = ""
if extension in self.linters:
original_content = content
>>>>>>> REPLACE
</suggested_fix>
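An alternative to the explicit dot check, if the extra branch feels noisy: `os.path.splitext` already returns an empty string for names without a dot, so extensionless files simply never match a linter. A self-contained sketch of that variant:

```python
import os


def get_extension(filename: str) -> str:
    # splitext yields "" when there is no dot, so no special-casing is needed.
    return os.path.splitext(filename)[1].lower()


assert get_extension("main.PY") == ".py"
assert get_extension("Dockerfile") == ""
assert get_extension("archive.tar.gz") == ".gz"
```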
### Comment 3
<location> `gpt_engineer/benchmark/bench_config.py:54` </location>
<code_context>
- gpteng=GptengConfig(**config_dict.get("gpteng", {})),
)
+
+ @staticmethod
+ def recursive_resolve(data_dict):
+ for key, value in data_dict.items():
+ if isinstance(value, Integer):
+ data_dict[key] = int(value)
</code_context>
<issue_to_address>
Use of Integer type in recursive_resolve may cause NameError.
`Integer` is not defined or imported, which will cause a NameError. Use `int` or import the appropriate type.
</issue_to_address>
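A self-contained sketch of what `recursive_resolve` could look like without the undefined `Integer` name; the intent (coercing TOML integer wrappers to plain `int` while walking nested dicts) is an assumption based on the surrounding `from_toml` code:

```python
def recursive_resolve(data_dict):
    # Walk nested dicts and coerce int-like leaf values (e.g. TOML integer
    # wrappers) to plain Python ints, leaving booleans untouched.
    for key, value in data_dict.items():
        if isinstance(value, dict):
            recursive_resolve(value)
        elif isinstance(value, int) and not isinstance(value, bool):
            data_dict[key] = int(value)


config = {"apps": {"examples_per_problem": 10, "active": True}}
recursive_resolve(config)
print(config)  # {'apps': {'examples_per_problem': 10, 'active': True}}
```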
### Comment 4
<location> `tests/applications/feature_cli/test_file_selection.py:176` </location>
<code_context>
+
+
[email protected](reason="Skipping as test requires AI")
+def test_yaml_to_file_selection_fuzzy():
+
+ load_dotenv()
+
+ commented_yaml = """# gpt_engineer:
+# applications:
+# cli:
+ - __init__.py
+ - cli_agent.py
+# - collect.py
+ - file_selector.py
+ - learning.py
+ - main.py"""
+
+ file_selction = fuzzy_parse_file_selection(AI(), commented_yaml)
+
+ assert file_selction == FileSelection(
+ [
+ "gpt_engineer/applications/cli/__init__.py",
</code_context>
<issue_to_address>
Test for fuzzy YAML parsing is skipped and requires AI.
Consider using a mock or fixture for the AI dependency so this test can run in CI, or clearly document the requirements for running it locally.
Suggested implementation:
```python
def test_yaml_to_file_selection_fuzzy(mocker):
"""
Test fuzzy_parse_file_selection with a mock AI dependency.
"""
load_dotenv()
commented_yaml = """# gpt_engineer:
# applications:
# cli:
- __init__.py
- cli_agent.py
# - collect.py
- file_selector.py
- learning.py
- main.py"""
# Mock the AI dependency's behavior
mock_ai = mocker.Mock()
# Adjust the return value to match what fuzzy_parse_file_selection expects
mock_ai.some_method.return_value = FileSelection(
[
"gpt_engineer/applications/cli/__init__.py",
"gpt_engineer/applications/cli/cli_agent.py",
"gpt_engineer/applications/cli/file_selector.py",
"gpt_engineer/applications/cli/learning.py",
"gpt_engineer/applications/cli/main.py",
],
[
"gpt_engineer/applications/cli/collect.py",
],
)
# Patch fuzzy_parse_file_selection to use the mock AI
# If fuzzy_parse_file_selection calls a method on AI, ensure the mock matches
# If it just passes through, you may need to adjust this accordingly
file_selction = fuzzy_parse_file_selection(mock_ai, commented_yaml)
assert file_selction == FileSelection(
[
"gpt_engineer/applications/cli/__init__.py",
"gpt_engineer/applications/cli/cli_agent.py",
"gpt_engineer/applications/cli/file_selector.py",
"gpt_engineer/applications/cli/learning.py",
"gpt_engineer/applications/cli/main.py",
],
[
"gpt_engineer/applications/cli/collect.py",
],
)
```
- This change assumes you are using pytest and pytest-mock (which provides the `mocker` fixture). If not, you may need to import `unittest.mock` and patch manually.
- Adjust `mock_ai.some_method.return_value` to match the actual method called by `fuzzy_parse_file_selection` on the AI object. If the function expects a different interface, update the mock accordingly.
- If `fuzzy_parse_file_selection` expects a specific method or attribute on the AI object, ensure the mock provides it.
</issue_to_address>
### Comment 5
<location> `tests/applications/feature_cli/test_file_selection.py:81` </location>
<code_context>
+def test_file_selection_to_yaml():
</code_context>
<issue_to_address>
Test for file_selection_to_commented_yaml uses a hardcoded expected string.
Comparing parsed YAML structures instead of raw strings will make the test less brittle to formatting changes.
Suggested implementation:
```python
def test_file_selection_to_yaml():
import yaml
included_files = [
"docker/Dockerfile",
"docker/README.md",
"docker/entrypoint.sh",
]
excluded_files = [
".github/ISSUE_TEMPLATE/bug-report.md",
".github/ISSUE_TEMPLATE/documentation-clarification.md",
".github/ISSUE_TEMPLATE/feature-request.md",
```
```python
]
# ... (rest of the test setup)
yaml_str = file_selection_to_commented_yaml(included_files, excluded_files)
# Parse the YAML output and compare the resulting structure
parsed_yaml = yaml.safe_load(yaml_str)
expected_structure = {
"docker": {
"Dockerfile": "included",
"README.md": "included",
"entrypoint.sh": "included",
},
".github": {
"ISSUE_TEMPLATE": {
"bug-report.md": "excluded",
"documentation-clarification.md": "excluded",
"feature-request.md": "excluded",
}
}
}
assert parsed_yaml == expected_structure
```
</issue_to_address>
### Comment 6
<location> `tests/core/test_salvage_correct_hunks.py:83` </location>
<code_context>
salvage_correct_hunks(message_builder("create_two_new_files_chat"), files, memory)
+def test_theo_case():
+ files = FilesDict({"dockerfile": get_file_content("theo_case_code")})
+ updated_files, _ = salvage_correct_hunks(
+ message_builder("theo_case_chat"), files, memory
+ )
+ print(updated_files["dockerfile"])
+ print(updated_files["run.py"])
+
+
</code_context>
<issue_to_address>
New test 'test_theo_case' added for salvage_correct_hunks.
Please add assertions to verify that the outputs are correct, rather than just printing them.
</issue_to_address>
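One possible shape for the requested assertions, reusing the helpers already imported in the quoted test module; the specific expectations are assumptions about what the `theo_case` fixtures produce:

```python
def test_theo_case():
    files = FilesDict({"dockerfile": get_file_content("theo_case_code")})
    updated_files, _ = salvage_correct_hunks(
        message_builder("theo_case_chat"), files, memory
    )
    # Assert on the outcome instead of printing it.
    assert "dockerfile" in updated_files
    assert "run.py" in updated_files
    assert updated_files["dockerfile"] != files["dockerfile"]
```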
### Comment 7
<location> `docs/open_models.md:120` </location>
<code_context>
+Using Open Router models
+==================
+
+In case you don't posses the hardware to run local LLM's yourself you can use the hosting on [Open Router](https://openrouter.ai) and pay as you go for the tokens.
+
+To set it up you need to Sign In and load purchase 💰 the LLM credits. Pricing per token is different for (each model](https://openrouter.ai/models), but mostly cheaper then Open AI.
</code_context>
<issue_to_address>
Typo: 'posses' should be 'possess'.
</issue_to_address>
<suggested_fix>
<<<<<<< SEARCH
In case you don't posses the hardware to run local LLM's yourself you can use the hosting on [Open Router](https://openrouter.ai) and pay as you go for the tokens.
=======
In case you don't possess the hardware to run local LLM's yourself you can use the hosting on [Open Router](https://openrouter.ai) and pay as you go for the tokens.
>>>>>>> REPLACE
</suggested_fix>
## Security Issues
### Issue 1
<location> `docker/Dockerfile:29` </location>
<issue_to_address>
**security (dockerfile.security.missing-user-entrypoint):** By not specifying a USER, a program in the container may run as 'root'. This is a security hazard. If an attacker can control a process running as root, they may have control over the container. Ensure that the last USER in a Dockerfile is a USER other than 'root'.
```suggestion
USER non-root
ENTRYPOINT ["bash", "/app/entrypoint.sh"]
```
*Source: opengrep*
</issue_to_address>
### Issue 2
<location> `gpt_engineer/applications/cli/file_selector.py:227` </location>
<issue_to_address>
**security (python.lang.security.audit.dangerous-subprocess-use-audit):** Detected subprocess function 'run' without a static string. If this data can be controlled by a malicious actor, it may be an instance of command injection. Audit the use of this call to ensure it is not controllable by an external resource. You may consider using 'shlex.escape()'.
*Source: opengrep*
</issue_to_address>
### Issue 3
<location> `gpt_engineer/applications/cli/file_selector.py:227` </location>
<issue_to_address>
**security (python.lang.security.audit.dangerous-subprocess-use-tainted-env-args):** Detected subprocess function 'run' with user controlled data. A malicious actor could leverage this to perform command injection. You may consider using 'shlex.quote()'.
*Source: opengrep*
</issue_to_address>
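For the subprocess findings, the usual remediation is to avoid shell strings entirely and pass an argument list, quoting with `shlex.quote` only when a shell is unavoidable. A generic sketch; the command and path below are placeholders, not the file selector's actual values:

```python
import shlex
import subprocess

path = "user supplied; name.yaml"

# Preferred: argument list with shell=False, so the path is never interpreted
# by a shell and cannot inject extra commands.
subprocess.run(["echo", path], check=True)

# If a shell string really is required, quote every user-controlled piece.
subprocess.run(f"echo {shlex.quote(path)}", shell=True, check=True)
```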
### Issue 4
<location> `gpt_engineer/applications/cli/file_selector.py:234` </location>
<issue_to_address>
**security (python.lang.security.audit.dangerous-spawn-process-audit):** Found dynamic content when spawning a process. This is dangerous if external data can reach this function call because it allows a malicious actor to execute commands. Ensure no external data reaches here.
*Source: opengrep*
</issue_to_address>
### Issue 5
<location> `gpt_engineer/applications/feature_cli/feature.py:178` </location>
<issue_to_address>
**security (python.lang.security.audit.dangerous-spawn-process-audit):** Found dynamic content when spawning a process. This is dangerous if external data can reach this function call because it allows a malicious actor to execute commands. Ensure no external data reaches here.
*Source: opengrep*
</issue_to_address>
### Issue 6
<location> `gpt_engineer/applications/feature_cli/file_selection.py:313` </location>
<issue_to_address>
**security (python.lang.security.audit.dangerous-spawn-process-audit):** Found dynamic content when spawning a process. This is dangerous if external data can reach this function call because it allows a malicious actor to execute commands. Ensure no external data reaches here.
*Source: opengrep*
</issue_to_address>
### Issue 7
<location> `gpt_engineer/applications/feature_cli/generation_tools.py:1` </location>
<issue_to_address>
**security (python.lang.security.use-defused-xml):** The Python documentation recommends using `defusedxml` instead of `xml` because the native Python `xml` library is vulnerable to XML External Entity (XXE) attacks. These attacks can leak confidential data and "XML bombs" can cause denial of service.
*Source: opengrep*
</issue_to_address>
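Since `defusedxml` mirrors the `xml.etree.ElementTree` API, the swap is usually just the import line; a minimal sketch (the XML payload here is made up for illustration):

```python
# pip install defusedxml
import defusedxml.ElementTree as ET

doc = ET.fromstring("<tasks><task>lint files</task><task>update docs</task></tasks>")
print([task.text for task in doc.findall("task")])  # ['lint files', 'update docs']
```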
### Issue 8
<location> `gpt_engineer/applications/feature_cli/task.py:65` </location>
<issue_to_address>
**security (python.lang.security.audit.dangerous-spawn-process-audit):** Found dynamic content when spawning a process. This is dangerous if external data can reach this function call because it allows a malicious actor to execute commands. Ensure no external data reaches here.
*Source: opengrep*
</issue_to_address>
```python
files = FileStore(project_path)
if not no_execution:
    if improve_mode:
        files_dict_before = FileSelector(project_path).ask_for_files()
```
issue (bug_risk): Potential undefined variable: is_linting is used before assignment.
is_linting is not defined in this scope, which will cause a NameError. Please initialize it before use.
```python
for filename, content in files_dict.items():
    extension = filename[
        filename.rfind(".") :
    ].lower()  # Ensure case insensitivity
    if extension in self.linters:
        original_content = content
```
suggestion: Extension extraction may fail for files without an extension.
If a filename lacks a dot, this logic returns the whole filename as the extension, which may not match any linter. Please add handling for files without extensions.
```suggestion
for filename, content in files_dict.items():
    if "." in filename:
        extension = filename[filename.rfind("."):].lower()  # Ensure case insensitivity
    else:
        extension = ""
    if extension in self.linters:
        original_content = content
```
```python
@staticmethod
def recursive_resolve(data_dict):
    for key, value in data_dict.items():
```
issue (bug_risk): Use of Integer type in recursive_resolve may cause NameError.
Integer is not defined or imported, which will cause a NameError. Use int or import the appropriate type.
```python
def test_yaml_to_file_selection_fuzzy():

    load_dotenv()

    commented_yaml = """# gpt_engineer:
# applications:
# cli:
  - __init__.py
  - cli_agent.py
# - collect.py
```
suggestion (testing): Test for fuzzy YAML parsing is skipped and requires AI.
Consider using a mock or fixture for the AI dependency so this test can run in CI, or clearly document the requirements for running it locally.
```python
    paths_to_tree,
    file_selection_to_commented_yaml,
    commented_yaml_to_file_selection,
)
```
suggestion (testing): Test for file_selection_to_commented_yaml uses a hardcoded expected string.
Comparing parsed YAML structures instead of raw strings will make the test less brittle to formatting changes.
```python
included_files = tree_to_paths(yaml.safe_load(commented_content))
try:
    all_files = tree_to_paths(yaml.safe_load(uncommented_content_1))
except:
```
suggestion (code-quality): Use except Exception: rather than bare except: (do-not-use-bare-except)
```suggestion
except Exception:
```
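For context on why the narrower catch matters: a bare `except:` also swallows `KeyboardInterrupt` and `SystemExit`, so Ctrl-C and `sys.exit()` can be silently ignored. A small standalone illustration:

```python
def parse_or_none(text):
    # `except Exception` still catches ValueError and friends, but lets
    # KeyboardInterrupt and SystemExit propagate to the caller.
    try:
        return int(text)
    except Exception:
        return None


print(parse_or_none("42"))    # 42
print(parse_or_none("oops"))  # None
```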
```python
if stripped_line.startswith("- ") and (last_key != "(./):"):
    # add 2 spaces at the begining of line or after any #

    new_lines.append(" " + line)  # Add extra indentation
```
suggestion (code-quality): Use f-string instead of string concatenation (use-fstring-for-concatenation)
```suggestion
new_lines.append(f" {line}")
```
```python

try:
    file_selection = commented_yaml_to_file_selection(yaml_content)
except:
```
suggestion (code-quality): Use except Exception: rather than bare except: (do-not-use-bare-except)
```suggestion
except Exception:
```
```python
    lines.append(prefix + "└── " + key)
    extension = " "
else:
    lines.append(prefix + "├── " + key)
```
issue (code-quality): Use f-string instead of string concatenation [×4] (use-fstring-for-concatenation)
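A small illustration of the ×4 f-string rewrite for the tree-drawing lines; `prefix`, `key`, and `is_last` are stand-ins for the real loop variables:

```python
prefix, key, is_last = "│   ", "main.py", True

lines = []
if is_last:
    lines.append(f"{prefix}└── {key}")
else:
    lines.append(f"{prefix}├── {key}")
print(lines)  # ['│   └── main.py']
```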
```python
# Create an instance of the response class
response = TaskResponse(planning_thoughts, tasks, closing_remarks)

return response
```
suggestion (code-quality): Inline variable that is immediately returned (inline-immediately-returned-variable)
```suggestion
return TaskResponse(planning_thoughts, tasks, closing_remarks)
```
Summary by Sourcery
Add interactive feature CLI and code linting, enhance Docker and benchmark workflows, refactor core components for more robust error handling and configurability
New Features:
Bug Fixes:
Enhancements:
CI:
Documentation:
Tests: