Conversation


@Acquent0 Acquent0 commented Sep 22, 2025

LLM4AD is an open-source, modular, and user-friendly platform for automatic algorithm design with large language models (LLMs). With DeepSeek models as the default LLMs, the platform helps users streamline the algorithm design process.

Website: http://www.llm4ad.com/
Repository: https://github.com/Optima-CityU/llm4ad
Documentation: https://llm4ad-doc.readthedocs.io/en/latest/

Explore LLM4AD to harness the power of LLMs for innovative algorithm design!

Summary by CodeRabbit

  • New Features
    • Streamlit-based Nebius chat and image generation app.
    • Nebius LLM client integration.
    • Ready-to-run workflows (EoH/FunSearch/HillClimb/ReEvo) for online bin packing, CVRP, TSP, VRPTW, MOEA/D, and circle packing.
    • New evaluation tasks and templates for circle packing and online bin packing.
    • Fake LLM utilities for offline/debug runs.
  • Documentation
    • Comprehensive README with setup, usage, tutorials, and references.
    • Added online bin packing tutorial notebook.
  • Chores
    • Example environment file for API keys.
    • Added requirements.txt with dependencies.
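As background for the Nebius client item above, a chat-completions request in the OpenAI-compatible style could be sketched as below. The base URL, model id, and payload shape are assumptions for illustration, not taken from this PR's NebuisLLM implementation; the request is built but deliberately not sent.

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build (but do not send) an OpenAI-style chat-completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "https://example.invalid/v1",   # placeholder base URL, not the real Nebius endpoint
    "NEBIUS_API_KEY_HERE",          # placeholder key; load from .env in practice
    "deepseek-ai/DeepSeek-V3",      # hypothetical model id
    "Design a bin-packing priority heuristic.",
)
```

A real client would send the request and parse `choices[0].message.content` from the JSON response.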


coderabbitai bot commented Sep 25, 2025

Walkthrough

Adds LLM4AD v1 assets: environment example, requirements, a comprehensive README, Nebius LLM clients (API and Streamlit UI), multiple evaluation tasks (circle packing, online bin packing variants, local dataset gen), and runnable scripts for EoH/FunSearch/HillClimb/ReEvo across tasks (CVRP, TSP, VRPTW, MOEA/D). Includes an OBP tutorial notebook.

Changes

  • Configuration & Docs (starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/.env.example, .../requirements.txt, .../README.md): Adds env template with LLM4AD and Nebius API keys, dependency list, and a detailed README with installation, usage, and reference content.
  • Nebius LLM Clients (.../llm_nebuis.py, .../llm_nebius_temp.py): Introduces the NebuisLLM HTTP client (chat completions) and a Streamlit-based NebiusStudioChat app supporting chat, image generation, history, and exports.
  • Circle Packing (EoH) (.../example_questions/circle_packing/EoH_settings&logs/evaluation.py, .../EoH_settings&logs/run_eoh.py, .../EoH_settings&logs/template.py): Adds CirclePackingEvaluation, a grid-based template program/description, and a runnable EoH script with NebuisLLM and profiler configuration.
  • Online Bin Packing (OBP) – Core (.../example_questions/online_bin_packing/run_eoh.py, .../online_bin_packing/run_funsearch.py, .../online_bin_packing/online_bin_packing_tutorial.ipynb): Adds EoH and FunSearch runners for OBP and a tutorial notebook covering sampler/evaluator/template usage and demo runs.
  • OBP – Fake LLM Pipelines (.../online_bin_packing_fake/fake_eoh.py, .../fake_funsearch.py, .../fake_hillclimb.py, .../fake_randsample.py, .../fake_reevo.py): Provides FakeLLM implementations loading pickled functions; runnable scripts for EoH, FunSearch, HillClimb, RandSample, and ReEvo with simple profiler configs.
  • OBP – Local Dataset & Eval (.../online_bin_packing_local/evaluation.py, .../online_bin_packing_local/generate_weibull_instances.py, .../online_bin_packing_local/run_eoh.py, .../online_bin_packing_local/template.py): Adds a local OBP evaluator with helper functions, a Weibull dataset generator, an EoH runner, and template strings (program and task description).
  • Other Example Runners (.../example_questions/cvrp_construct/run_eoh.py, .../pymoo_moead/run_eoh.py, .../tsp_construct/run_eoh.py, .../vrptw_construct/run_eoh.py): Adds EoH runners for CVRP, MOEA/D (pymoo), TSP, and VRPTW using NebuisLLM and profiler setups.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User as User/Script
  participant E as EoH Orchestrator
  participant S as Sampler(s)
  participant L as NebuisLLM
  participant V as Evaluator(s)
  participant T as Task Evaluation
  participant P as Profiler

  User->>E: run()
  E->>S: request programs (pop_size × generations)
  loop per sample
    S->>L: draw_sample(prompt)
    L-->>S: program string
    S-->>E: candidate program
  end
  E->>V: dispatch evaluations
  par evaluate candidates
    V->>T: evaluate_program(program)
    T-->>V: score or error
  and
    V->>P: log metrics/events
  end
  V-->>E: aggregated results
  E->>P: record generation summary
  E-->>User: best program and scores
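The EoH orchestration shown in the diagram can be sketched as a plain Python loop under stated assumptions. The names draw_sample, evaluate_program, and run_eoh below are hypothetical stand-ins for illustration, not the real llm4ad API:

```python
import random

def draw_sample(prompt: str) -> str:
    # A real sampler would call the LLM; here we return a canned heuristic.
    return "def priority(item, bins): return -(bins - item)"

def evaluate_program(program: str) -> float:
    # A real evaluator would execute the program against task instances;
    # here a random score stands in for the task evaluation.
    return random.random()

def run_eoh(pop_size: int, generations: int, prompt: str):
    best_program, best_score = None, float('-inf')
    for _ in range(generations):
        # Sampler step: draw pop_size candidate programs from the LLM.
        candidates = [draw_sample(prompt) for _ in range(pop_size)]
        # Evaluator step: score each candidate and keep the best.
        for program in candidates:
            score = evaluate_program(program)
            if score > best_score:
                best_program, best_score = program, score
        # A profiler would record the generation summary here.
    return best_program, best_score
```

This mirrors the diagram's flow: pop_size × generations samples drawn, each evaluated, and the best program and score returned to the caller.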
sequenceDiagram
  autonumber
  participant CE as CirclePackingEvaluation
  participant EVA as Program Evaluator
  participant NP as NumPy

  CE->>EVA: call with n=26
  EVA-->>CE: circles [(x,y,r)] or error
  CE->>NP: convert to ndarray
  CE->>CE: verify_circles(disjoint & in [0,1]^2)
  alt valid
    CE-->>CE: compute sum(r) as score
  else invalid
    CE-->>CE: return -inf
  end
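For concreteness, the verify_circles step in the diagram (pairwise-disjoint circles inside the unit square) could look roughly like the following. This is a minimal sketch assuming (x, y, r) rows, not the code under review:

```python
import numpy as np

def verify_circles(circles: np.ndarray, eps: float = 1e-9) -> bool:
    """Rows are (x, y, r); circles must lie in [0,1]^2 and be pairwise disjoint."""
    x, y, r = circles[:, 0], circles[:, 1], circles[:, 2]
    if np.any(r <= 0):
        return False
    # Containment: each circle fits inside the unit square.
    if np.any(x - r < -eps) or np.any(x + r > 1 + eps):
        return False
    if np.any(y - r < -eps) or np.any(y + r > 1 + eps):
        return False
    # Disjointness: center distance at least the sum of radii.
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            if np.hypot(x[i] - x[j], y[i] - y[j]) + eps < r[i] + r[j]:
                return False
    return True
```

If the check passes, the score is simply `np.sum(circles[:, 2])`; otherwise the evaluation returns -inf, as the diagram shows.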

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues

  • [FEATURE] #72 — Adds LLM4AD v1 code, docs, Nebius integration, and examples matching this PR’s introduced modules and workflows.

Possibly related PRs

Poem

A rabbit taps keys with a coder’s delight,
Packing circles, bins, and routes through the night.
Nebius whispers, samplers take flight—
EoH drums softly: score, log, rewrite.
From fake LLM burrows to real API light,
Hop, hop—new heuristics in sight! 🐇✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 48.89%, below the required 80.00% threshold. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check (✅ Passed): The title succinctly captures the primary change, the introduction of the LLM4AD v1.0 platform for algorithm design, and matches the extensive additions of files and features in this PR. It is specific and free of generic terms or unnecessary detail, making it clear to reviewers what the pull request delivers.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 18

🧹 Nitpick comments (1)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/generate_weibull_instances.py (1)

33-34: Drop the ineffective guard when populating the dataset.

num_items is an integer while the dictionary keys are strings ('instance_{i}'), so this condition is always true and the guard never deduplicates anything. Removing it keeps the mapping logic simple and intention‑revealing.

-        if num_items not in dataset:
-            dataset[f'instance_{i}'] = instance
+        dataset[f'instance_{i}'] = instance
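To see why the guard can never fire, here is a minimal repro with hypothetical values (num_items and the instance contents are placeholders):

```python
dataset = {}
num_items = 5000  # an int; the dict keys below are all strings

for i in range(3):
    instance = [0.3, 0.7, 0.5]  # placeholder instance data
    # 5000 is never a key of a dict keyed by 'instance_0', 'instance_1', ...
    # so this condition is True on every iteration.
    if num_items not in dataset:
        dataset[f'instance_{i}'] = instance

print(len(dataset))  # → 3; the guard never skipped an insert
```

Because the membership test compares an int against string keys, it deduplicates nothing, which is why dropping it leaves behavior unchanged.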
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 69ce332 and 30056ef.

⛔ Files ignored due to path filters (4)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/EoH_sample_history.zip is excluded by !**/*.zip
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/_data/rand_function.pkl is excluded by !**/*.pkl
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/_data/rand_function_and_score.pkl is excluded by !**/*.pkl
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/_data/weibull_train.pkl is excluded by !**/*.pkl
📒 Files selected for processing (22)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/.env.example (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/README.md (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/evaluation.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/template.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/cvrp_construct/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/online_bin_packing_tutorial.ipynb (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_funsearch.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_funsearch.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_hillclimb.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_randsample.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_reevo.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/generate_weibull_instances.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/template.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/pymoo_moead/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/tsp_construct/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/vrptw_construct/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/requirements.txt (1 hunks)
👮 Files not reviewed due to content moderation or server errors (3)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/requirements.txt
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/template.py
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/README.md
🧰 Additional context used
🧬 Code graph analysis (15)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/cvrp_construct/run_eoh.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/run_eoh.py (1)
  • main (11-29)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/pymoo_moead/run_eoh.py (1)
  • main (14-31)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_funsearch.py (1)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (66-90)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_randsample.py (5)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (66-90)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_eoh.py (2)
  • FakeLLM (14-27)
  • draw_sample (24-27)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_funsearch.py (2)
  • FakeLLM (16-27)
  • draw_sample (26-27)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_reevo.py (2)
  • FakeLLM (15-28)
  • draw_sample (25-28)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_hillclimb.py (2)
  • FakeLLM (16-27)
  • draw_sample (26-27)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/run_eoh.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/evaluation.py (1)
  • CirclePackingEvaluation (12-106)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/cvrp_construct/run_eoh.py (1)
  • main (14-31)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/tsp_construct/run_eoh.py (3)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/cvrp_construct/run_eoh.py (1)
  • main (14-31)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (1)
  • main (12-30)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/vrptw_construct/run_eoh.py (1)
  • main (12-29)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (66-90)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (1)
  • main (12-31)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_eoh.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (66-90)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_reevo.py (2)
  • FakeLLM (15-28)
  • draw_sample (25-28)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_reevo.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (66-90)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_eoh.py (2)
  • FakeLLM (14-27)
  • draw_sample (24-27)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/vrptw_construct/run_eoh.py (4)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/cvrp_construct/run_eoh.py (1)
  • main (14-31)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (1)
  • main (12-31)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (1)
  • main (12-30)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/tsp_construct/run_eoh.py (1)
  • main (12-29)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_funsearch.py (5)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (66-90)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_eoh.py (2)
  • FakeLLM (14-27)
  • draw_sample (24-27)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_reevo.py (2)
  • FakeLLM (15-28)
  • draw_sample (25-28)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_hillclimb.py (2)
  • FakeLLM (16-27)
  • draw_sample (26-27)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_randsample.py (2)
  • FakeLLM (16-27)
  • draw_sample (26-27)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_hillclimb.py (1)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (66-90)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/pymoo_moead/run_eoh.py (6)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/run_eoh.py (1)
  • main (11-29)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/cvrp_construct/run_eoh.py (1)
  • main (14-31)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (1)
  • main (12-31)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (1)
  • main (12-30)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/tsp_construct/run_eoh.py (1)
  • main (12-29)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/vrptw_construct/run_eoh.py (1)
  • main (12-29)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/generate_weibull_instances.py (1)
  • generate_weibull_dataset (3-36)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (66-90)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (1)
  • main (12-30)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/evaluation.py (1)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (2)
  • evaluate_program (89-90)
  • evaluate (43-63)
🪛 dotenv-linter (3.3.0)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/.env.example

[warning] 1-1: [EndingBlankLine] No blank line at the end of the file

(EndingBlankLine)

🪛 markdownlint-cli2 (0.18.1)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/README.md

73-73: Blank line inside blockquote

(MD028, no-blanks-blockquote)


99-99: Dollar signs used before commands without showing output

(MD014, commands-show-output)


100-100: Dollar signs used before commands without showing output

(MD014, commands-show-output)


108-108: Dollar signs used before commands without showing output

(MD014, commands-show-output)


163-163: Dollar signs used before commands without showing output

(MD014, commands-show-output)


164-164: Dollar signs used before commands without showing output

(MD014, commands-show-output)


227-227: Link text should be descriptive

(MD059, descriptive-link-text)


281-281: Bare URL used

(MD034, no-bare-urls)

🪛 Gitleaks (8.28.0)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py

[high] 69-69: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

🔇 Additional comments (1)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/.env.example (1)

1-1: Template variable looks good.

The placeholder is clear and ready for user-specific values.

Comment on lines +93 to +106
        circles = eva(self.n)

        #self.plot_circles(circles)
        # Convert to numpy array if not already
        circles = np.array(circles, dtype=np.float64)

        # Verify the solution
        if not self.verify_circles(circles) or len(circles) != self.n:
            return -float('inf')

        # Sum of radii is our score
        score = np.sum(circles[:, 2])

        return score

⚠️ Potential issue | 🟠 Major

Guard evaluation against faulty candidate outputs

If a generated heuristic throws (e.g., returns None, ragged lists, or non-numeric values), the np.array(..., dtype=np.float64) cast will raise, bubbling up through the runner and killing the whole evaluation loop instead of yielding a safe failure score. We need to trap both the candidate invocation and the conversion to numpy, reject malformed shapes, and fall back to -inf. Otherwise a single bad sample can derail the entire workflow.

-        circles = eva(self.n)
-
-        #self.plot_circles(circles)
-        # Convert to numpy array if not already
-        circles = np.array(circles, dtype=np.float64)
-
-        # Verify the solution
-        if not self.verify_circles(circles) or len(circles) != self.n:
-            return -float('inf')
-
-        # Sum of radii is our score
-        score = np.sum(circles[:, 2])
+        try:
+            raw_circles = eva(self.n)
+        except Exception:
+            return -float('inf')
+
+        try:
+            circles = np.array(raw_circles, dtype=np.float64)
+        except (TypeError, ValueError):
+            return -float('inf')
+
+        if circles.ndim != 2 or circles.shape[0] != self.n or circles.shape[1] < 3:
+            return -float('inf')
+
+        circles = circles[:, :3]
+
+        # Verify the solution
+        if not self.verify_circles(circles):
+            return -float('inf')
+
+        # Sum of radii is our score
+        score = float(np.sum(circles[:, 2]))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-        circles = eva(self.n)
-
-        #self.plot_circles(circles)
-        # Convert to numpy array if not already
-        circles = np.array(circles, dtype=np.float64)
-
-        # Verify the solution
-        if not self.verify_circles(circles) or len(circles) != self.n:
-            return -float('inf')
-
-        # Sum of radii is our score
-        score = np.sum(circles[:, 2])
-        return score
+        try:
+            raw_circles = eva(self.n)
+        except Exception:
+            return -float('inf')
+
+        try:
+            circles = np.array(raw_circles, dtype=np.float64)
+        except (TypeError, ValueError):
+            return -float('inf')
+
+        if circles.ndim != 2 or circles.shape[0] != self.n or circles.shape[1] < 3:
+            return -float('inf')
+
+        circles = circles[:, :3]
+
+        # Verify the solution
+        if not self.verify_circles(circles):
+            return -float('inf')
+
+        # Sum of radii is our score
+        score = float(np.sum(circles[:, 2]))
+        return score
🤖 Prompt for AI Agents
In
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/evaluation.py
around lines 93-106, guard the candidate invocation and numpy conversion so
malformed outputs don't crash the runner: wrap circles = eva(self.n) in a
try/except and return -float('inf') on any exception or if it returns None; then
attempt conversion with np.asarray inside a try/except (or try np.array with
dtype=np.float64 and catch TypeError/ValueError), reject ragged or wrong-shaped
results by checking the array.ndim == 2 and shape[0] == self.n and shape[1] ==
3, ensure all values are finite and numeric (e.g., np.isfinite), and only then
call self.verify_circles and compute the score; on any failure or validation
error return -float('inf').

Comment on lines 8 to 27
sys.path.append('../../../')  # This is for finding all the modules

from llm4ad.task.optimization.online_bin_packing import OBPEvaluation
from llm4ad.base import LLM
from llm4ad.method.eoh import EoH, EoHProfiler, EoHTensorboardProfiler

class FakeLLM(LLM):
    """We select random functions from rand_function.pkl
    This sampler can help you debug your method even if you don't have an LLM API / deployed local LLM.
    """

    def __init__(self):
        super().__init__()
        with open('_data/rand_function.pkl', 'rb') as f:
            self._functions = pickle.load(f)

    def draw_sample(self, prompt: str | Any, *args, **kwargs) -> str:
        fake_thought = '{This is a fake thought for the code}\n'
        rand_func = random.choice(self._functions)
        return fake_thought + rand_func

⚠️ Potential issue | 🟠 Major

Fix relative path handling for imports and data loads.

sys.path.append('../../../') and open('_data/rand_function.pkl', ...) resolve against the current working directory, so running the script from the repo root (or any other directory) breaks with ModuleNotFoundError/FileNotFoundError. Please anchor both paths to the file location via Path(__file__).resolve().

-from __future__ import annotations
-
-import pickle
-import random
-from typing import Any
-import sys
-
-sys.path.append('../../../')  # This is for finding all the modules
+from __future__ import annotations
+
+import pickle
+import random
+import sys
+from pathlib import Path
+from typing import Any
+
+ROOT = Path(__file__).resolve().parents[2]
+if str(ROOT) not in sys.path:
+    sys.path.append(str(ROOT))
@@
-        with open('_data/rand_function.pkl', 'rb') as f:
+        data_file = Path(__file__).resolve().parent / '_data' / 'rand_function.pkl'
+        with data_file.open('rb') as f:
             self._functions = pickle.load(f)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-sys.path.append('../../../')  # This is for finding all the modules
-
-from llm4ad.task.optimization.online_bin_packing import OBPEvaluation
-from llm4ad.base import LLM
-from llm4ad.method.eoh import EoH, EoHProfiler, EoHTensorboardProfiler
-
-class FakeLLM(LLM):
-    """We select random functions from rand_function.pkl
-    This sampler can help you debug your method even if you don't have an LLM API / deployed local LLM.
-    """
-
-    def __init__(self):
-        super().__init__()
-        with open('_data/rand_function.pkl', 'rb') as f:
-            self._functions = pickle.load(f)
-
-    def draw_sample(self, prompt: str | Any, *args, **kwargs) -> str:
-        fake_thought = '{This is a fake thought for the code}\n'
-        rand_func = random.choice(self._functions)
-        return fake_thought + rand_func
+from __future__ import annotations
+
+import pickle
+import random
+import sys
+from pathlib import Path
+from typing import Any
+
+ROOT = Path(__file__).resolve().parents[2]
+if str(ROOT) not in sys.path:
+    sys.path.append(str(ROOT))
+
+from llm4ad.task.optimization.online_bin_packing import OBPEvaluation
+from llm4ad.base import LLM
+from llm4ad.method.eoh import EoH, EoHProfiler, EoHTensorboardProfiler
+
+class FakeLLM(LLM):
+    """We select random functions from rand_function.pkl
+    This sampler can help you debug your method even if you don't have an LLM API / deployed local LLM.
+    """
+
+    def __init__(self):
+        super().__init__()
+        data_file = Path(__file__).resolve().parent / '_data' / 'rand_function.pkl'
+        with data_file.open('rb') as f:
+            self._functions = pickle.load(f)
+
+    def draw_sample(self, prompt: str | Any, *args, **kwargs) -> str:
+        fake_thought = '{This is a fake thought for the code}\n'
+        rand_func = random.choice(self._functions)
+        return fake_thought + rand_func
🤖 Prompt for AI Agents
In
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_eoh.py
around lines 8-27, the code uses sys.path.append('../../../') and
open('_data/rand_function.pkl', ...) which are relative to the current working
directory and break when the script is run from elsewhere; update the file to
compute paths from the file location (use Path(__file__).resolve().parent to get
the module directory), then: (1) derive the project/repo root or package root
from that path and append it to sys.path (or preferably ensure imports are
package-relative) instead of a hardcoded '../../../', and (2) open the pickle
file via the resolved path (Path(__file__).resolve().parent / '_data' /
'rand_function.pkl') so file loading is deterministic regardless of CWD; also
add necessary imports (from pathlib import Path) at the top.

Comment on lines 3 to 34
import pickle
import random
from typing import Any
import sys

from llm4ad.method.funsearch.profiler import FunSearchProfiler

sys.path.append('../../../')  # This is for finding all the modules

from llm4ad.task.optimization.online_bin_packing import OBPEvaluation
from llm4ad.base import LLM
from llm4ad.method.funsearch import FunSearch

class FakeLLM(LLM):
    """We select random functions from rand_function.pkl
    This sampler can help you debug your method even if you don't have an LLM API / deployed local LLM.
    """

    def __init__(self):
        super().__init__()
        with open('_data/rand_function.pkl', 'rb') as f:
            self._functions = pickle.load(f)

    def draw_sample(self, prompt: str | Any, *args, **kwargs) -> str:
        return random.choice(self._functions)


if __name__ == '__main__':
    llm = FakeLLM()
    task = OBPEvaluation()
    method = FunSearch(
        llm=FakeLLM(),

⚠️ Potential issue | 🟠 Major

Import order and resource paths break offline debugging

Right now you import llm4ad modules before tweaking sys.path, so the script fails unless llm4ad is already installed. On top of that, both the module hack and _data/rand_function.pkl rely on the caller’s CWD, so invoking the sample from elsewhere explodes with ModuleNotFoundError/FileNotFoundError. Move the path setup ahead of the imports and derive all paths from __file__.

-from __future__ import annotations
-
-import pickle
-import random
-from typing import Any
-import sys
-
-from llm4ad.method.funsearch.profiler import FunSearchProfiler
-
-sys.path.append('../../../')  # This is for finding all the modules
-
-from llm4ad.task.optimization.online_bin_packing import OBPEvaluation
+from __future__ import annotations
+
+import pickle
+import random
+import sys
+from pathlib import Path
+from typing import Any
+
+BASE_DIR = Path(__file__).resolve().parent
+PROJECT_ROOT = BASE_DIR.parents[2]
+if str(PROJECT_ROOT) not in sys.path:
+    sys.path.append(str(PROJECT_ROOT))
+
+from llm4ad.method.funsearch.profiler import FunSearchProfiler
+from llm4ad.task.optimization.online_bin_packing import OBPEvaluation
@@
     def __init__(self):
         super().__init__()
-        with open('_data/rand_function.pkl', 'rb') as f:
+        with open(BASE_DIR / '_data' / 'rand_function.pkl', 'rb') as f:
             self._functions = pickle.load(f)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-import pickle
-import random
-from typing import Any
-import sys
-
-from llm4ad.method.funsearch.profiler import FunSearchProfiler
-
-sys.path.append('../../../')  # This is for finding all the modules
-
-from llm4ad.task.optimization.online_bin_packing import OBPEvaluation
-from llm4ad.base import LLM
-from llm4ad.method.funsearch import FunSearch
-
-class FakeLLM(LLM):
-    """We select random functions from rand_function.pkl
-    This sampler can help you debug your method even if you don't have an LLM API / deployed local LLM.
-    """
-
-    def __init__(self):
-        super().__init__()
-        with open('_data/rand_function.pkl', 'rb') as f:
-            self._functions = pickle.load(f)
-
-    def draw_sample(self, prompt: str | Any, *args, **kwargs) -> str:
-        return random.choice(self._functions)
-
-
-if __name__ == '__main__':
-    llm = FakeLLM()
-    task = OBPEvaluation()
-    method = FunSearch(
-        llm=FakeLLM(),
+from __future__ import annotations
+
+import pickle
+import random
+import sys
+from pathlib import Path
+from typing import Any
+
+# Derive project root and ensure it's on sys.path before any llm4ad imports
+BASE_DIR = Path(__file__).resolve().parent
+PROJECT_ROOT = BASE_DIR.parents[2]
+if str(PROJECT_ROOT) not in sys.path:
+    sys.path.append(str(PROJECT_ROOT))
+
+from llm4ad.method.funsearch.profiler import FunSearchProfiler
+from llm4ad.task.optimization.online_bin_packing import OBPEvaluation
+from llm4ad.base import LLM
+from llm4ad.method.funsearch import FunSearch
+
+class FakeLLM(LLM):
+    """We select random functions from rand_function.pkl
+    This sampler can help you debug your method even if you don't have an LLM API / deployed local LLM.
+    """
+
+    def __init__(self):
+        super().__init__()
+        # Load functions using an absolute path based on this file’s location
+        with open(BASE_DIR / '_data' / 'rand_function.pkl', 'rb') as f:
+            self._functions = pickle.load(f)
+
+    def draw_sample(self, prompt: str | Any, *args, **kwargs) -> str:
+        return random.choice(self._functions)
+
+
+if __name__ == '__main__':
+    llm = FakeLLM()
+    task = OBPEvaluation()
+    method = FunSearch(
+        llm=llm,
+        # …
+    )
🤖 Prompt for AI Agents
In
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_funsearch.py
around lines 3-34, the script mutates sys.path and opens _data/rand_function.pkl
using the caller's CWD and it currently performs those actions after importing
llm4ad modules which can cause ModuleNotFoundError/FileNotFoundError when run
from elsewhere; move the sys.path.append call to the top of the file before any
llm4ad imports, compute an absolute project/module root using
Path(__file__).resolve().parent (or parent.parent as needed) and append that
path to sys.path, then open the pickle using an absolute path derived from
__file__ (e.g., Path(__file__).resolve().parent / '_data' / 'rand_function.pkl')
so imports and file loading work regardless of the current working directory.

import os
import sys

sys.path.append('../../../') # This is for finding all the modules
⚠️ Potential issue | 🔴 Critical

Fix the relative import path handling

sys.path.append('../../../') is resolved relative to the current working directory, not the script’s location. Running this script from the repo root (a common scenario) jumps three levels above the repo, so the llm4ad imports fail with ModuleNotFoundError. Compute the path from __file__ instead.

-import sys
-
-sys.path.append('../../../')  # This is for finding all the modules
+import sys
+
+sys.path.append(
+    os.path.abspath(os.path.join(os.path.dirname(__file__), '../../../'))
+)  # This is for finding all the modules

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/pymoo_moead/run_eoh.py
around line 7, the script appends a relative path with
sys.path.append('../../../') which is resolved against the current working
directory and can point outside the repo when run from the repo root; replace
this with computing the repo path relative to the script file by using __file__:
determine the script directory with os.path.dirname(__file__), climb the
required number of parent directories with os.path.abspath/os.path.join or
os.path.normpath, convert that to an absolute path, and insert it into sys.path
(preferably at index 0) only if it’s not already present so imports resolve
correctly regardless of the CWD.
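The prompt above describes the fix only in prose. A minimal sketch of the intended pattern (the three-level climb mirrors the original '../../../'; adjust the depth to your layout):

```python
import os
import sys

# Resolve the repo root relative to this file, not the current working
# directory, so imports work no matter where the script is launched from.
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
REPO_ROOT = os.path.normpath(os.path.join(SCRIPT_DIR, '..', '..', '..'))
if REPO_ROOT not in sys.path:
    sys.path.insert(0, REPO_ROOT)
```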

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (1)

12-13: Remove the duplicate import os

os is already imported on Line 2; the second import is redundant.

-import os
 from evaluation import OBPEvaluation
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (1)

20-23: Validate API key and make endpoint/model configurable via env

Guard against missing API key and allow host/model/timeout to be configured without code edits.

-    llm = HttpsApi(host='xxx',  # your host endpoint, e.g., 'api.openai.com', 'api.deepseek.com'
-                   key=os.getenv("LLM4AD_API_KEY"),  # your key, e.g., 'sk-abcdefghijklmn'
-                   model='xxx',  # your llm, e.g., 'gpt-3.5-turbo'
-                   timeout=60)
+    host = os.getenv("LLM4AD_API_HOST", "api.openai.com")
+    model = os.getenv("LLM4AD_MODEL", "gpt-3.5-turbo")
+    timeout = int(os.getenv("LLM4AD_TIMEOUT", "60"))
+    key = os.environ.get("LLM4AD_API_KEY")
+    if not key:
+        raise EnvironmentError("LLM4AD_API_KEY is not set")
+    llm = HttpsApi(host=host, key=key, model=model, timeout=timeout)
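The env-driven configuration above can also live in a standalone helper. The `LLM4AD_*` variable names follow the diff; the function name `load_llm_config` is ours, and the defaults are illustrative:

```python
import os

def load_llm_config() -> dict:
    """Collect LLM connection settings from the environment; fail fast if the key is missing."""
    key = os.environ.get("LLM4AD_API_KEY")
    if not key:
        raise EnvironmentError("LLM4AD_API_KEY is not set")
    return {
        "host": os.getenv("LLM4AD_API_HOST", "api.openai.com"),
        "model": os.getenv("LLM4AD_MODEL", "gpt-3.5-turbo"),
        "timeout": int(os.getenv("LLM4AD_TIMEOUT", "60")),
        "key": key,
    }
```

The resulting dict can be splatted straight into the client, e.g. `HttpsApi(**load_llm_config())`.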
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 477dfb5 and c5e33c5.

📒 Files selected for processing (13)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/cvrp_construct/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_funsearch.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_funsearch.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_hillclimb.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_randsample.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_reevo.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/pymoo_moead/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/tsp_construct/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/vrptw_construct/run_eoh.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_randsample.py
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/tsp_construct/run_eoh.py
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_reevo.py
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/vrptw_construct/run_eoh.py
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_funsearch.py
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/cvrp_construct/run_eoh.py
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/run_eoh.py
🧰 Additional context used
🧬 Code graph analysis (6)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (66-90)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (1)
  • main (19-37)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_eoh.py (5)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (66-90)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_funsearch.py (2)
  • FakeLLM (19-30)
  • draw_sample (29-30)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_hillclimb.py (2)
  • FakeLLM (20-31)
  • draw_sample (30-31)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_randsample.py (2)
  • FakeLLM (20-31)
  • draw_sample (30-31)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_reevo.py (2)
  • FakeLLM (19-32)
  • draw_sample (29-32)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_funsearch.py (5)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (66-90)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_eoh.py (2)
  • FakeLLM (16-29)
  • draw_sample (26-29)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_hillclimb.py (2)
  • FakeLLM (20-31)
  • draw_sample (30-31)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_randsample.py (2)
  • FakeLLM (20-31)
  • draw_sample (30-31)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_reevo.py (2)
  • FakeLLM (19-32)
  • draw_sample (29-32)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (66-90)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (1)
  • main (18-37)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/pymoo_moead/run_eoh.py (6)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/run_eoh.py (1)
  • main (17-35)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/cvrp_construct/run_eoh.py (1)
  • main (17-34)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (1)
  • main (18-37)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (1)
  • main (19-37)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/tsp_construct/run_eoh.py (1)
  • main (18-35)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/vrptw_construct/run_eoh.py (1)
  • main (18-35)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_hillclimb.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (66-90)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_funsearch.py (2)
  • FakeLLM (19-30)
  • draw_sample (29-30)
🔇 Additional comments (4)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_hillclimb.py (1)

25-28: Use BASE_DIR when loading the bundled pickle

You already computed BASE_DIR, but the open('_data/…') call still relies on the caller’s CWD. Running python …/fake_hillclimb.py from the repo root blows up with FileNotFoundError. Please resolve the path from BASE_DIR so the sample works regardless of where it’s launched.

     def __init__(self):
         super().__init__()
-        with open('_data/rand_function.pkl', 'rb') as f:
-            self._functions = pickle.load(f)
+        data_path = BASE_DIR / '_data' / 'rand_function.pkl'
+        with data_path.open('rb') as f:
+            self._functions = pickle.load(f)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_eoh.py (1)

8-24: Normalize the pickle path to this file’s directory

open('_data/rand_function.pkl', …) is still relative to the process CWD, so invoking this script from the repo root raises FileNotFoundError. Capture the module directory (just like you do for the project root) and build the absolute path before opening.

-ROOT = Path(__file__).resolve().parents[2]
+BASE_DIR = Path(__file__).resolve().parent
+ROOT = BASE_DIR.parents[1]
 if str(ROOT) not in sys.path:
     sys.path.append(str(ROOT))
@@
     def __init__(self):
         super().__init__()
-        with open('_data/rand_function.pkl', 'rb') as f:
-            self._functions = pickle.load(f)
+        data_path = BASE_DIR / '_data' / 'rand_function.pkl'
+        with data_path.open('rb') as f:
+            self._functions = pickle.load(f)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_fake/fake_funsearch.py (1)

24-27: Anchor _data to BASE_DIR

Despite wiring up BASE_DIR, the code still opens _data/rand_function.pkl via a relative path. Launching the script from anywhere but online_bin_packing_fake/ fails. Please read the pickle through BASE_DIR so the offline harness is usable.

     def __init__(self):
         super().__init__()
-        with open('_data/rand_function.pkl', 'rb') as f:
-            self._functions = pickle.load(f)
+        data_path = BASE_DIR / '_data' / 'rand_function.pkl'
+        with data_path.open('rb') as f:
+            self._functions = pickle.load(f)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (1)

40-41: LGTM: proper entry-point guard

Clean main-guard; keeps module import-safe.

Comment on lines +7 to +10
BASE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = BASE_DIR.parents[2]
if str(PROJECT_ROOT) not in sys.path:
    sys.path.append(str(PROJECT_ROOT))
⚠️ Potential issue | 🔴 Critical

Fix sys.path root so llm4ad can be imported

BASE_DIR.parents[2] resolves to starter_ai_agents/llm_for_algorithm_design, which does not contain the llm4ad package. Running the script from the repo root (python starter_ai_agents/.../run_eoh.py) still raises ModuleNotFoundError: No module named 'llm4ad'. We need the path that actually contains the llm4ad directory—llm4ad_v1.0, i.e. BASE_DIR.parents[1].

-PROJECT_ROOT = BASE_DIR.parents[2]
+PROJECT_ROOT = BASE_DIR.parents[1]
📝 Committable suggestion

Suggested change

BASE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = BASE_DIR.parents[2]
if str(PROJECT_ROOT) not in sys.path:
    sys.path.append(str(PROJECT_ROOT))

BASE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = BASE_DIR.parents[1]
if str(PROJECT_ROOT) not in sys.path:
    sys.path.append(str(PROJECT_ROOT))
🤖 Prompt for AI Agents
In
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py
around lines 7 to 10, the script adds BASE_DIR.parents[2] to sys.path which
points to starter_ai_agents/llm_for_algorithm_design (missing the llm4ad_v1.0
package); change the root to the directory that actually contains the llm4ad
package by using BASE_DIR.parents[1] (i.e., the llm4ad_v1.0 directory) before
appending to sys.path, keeping the existing guard that only appends if the path
is not already present.

Comment on lines +6 to +11
# Derive project root and ensure it's on sys.path before any llm4ad imports
BASE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = BASE_DIR.parents[2]
if str(PROJECT_ROOT) not in sys.path:
    sys.path.append(str(PROJECT_ROOT))

⚠️ Potential issue | 🔴 Critical

Fix package root detection and prepend to sys.path

parents[2] resolves to the llm_for_algorithm_design folder, not the package root that actually contains the llm4ad package (llm4ad_v1.0). This will cause ModuleNotFoundError unless cwd happens to make it work. Make the detection robust and use insert(0) to prioritize local sources.

-# Derive project root and ensure it's on sys.path before any llm4ad imports
-BASE_DIR = Path(__file__).resolve().parent
-PROJECT_ROOT = BASE_DIR.parents[2]
-if str(PROJECT_ROOT) not in sys.path:
-    sys.path.append(str(PROJECT_ROOT))
+# Derive package root containing "llm4ad" and ensure it's on sys.path before any llm4ad imports
+BASE_DIR = Path(__file__).resolve().parent
+# Search upwards to find a directory that contains the llm4ad package
+PACKAGE_ROOT = None
+for candidate in (BASE_DIR, *BASE_DIR.parents):
+    if (candidate / 'llm4ad' / '__init__.py').exists():
+        PACKAGE_ROOT = candidate
+        break
+# Fallback to the expected llm4ad_v1.0 root if discovery fails
+if PACKAGE_ROOT is None:
+    PACKAGE_ROOT = BASE_DIR.parents[1]
+if str(PACKAGE_ROOT) not in sys.path:
+    sys.path.insert(0, str(PACKAGE_ROOT))
📝 Committable suggestion

Suggested change

# Derive project root and ensure it's on sys.path before any llm4ad imports
BASE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = BASE_DIR.parents[2]
if str(PROJECT_ROOT) not in sys.path:
    sys.path.append(str(PROJECT_ROOT))

# Derive package root containing "llm4ad" and ensure it's on sys.path before any llm4ad imports
BASE_DIR = Path(__file__).resolve().parent
# Search upwards to find a directory that contains the llm4ad package
PACKAGE_ROOT = None
for candidate in (BASE_DIR, *BASE_DIR.parents):
    if (candidate / 'llm4ad' / '__init__.py').exists():
        PACKAGE_ROOT = candidate
        break
# Fallback to the expected llm4ad_v1.0 root if discovery fails
if PACKAGE_ROOT is None:
    PACKAGE_ROOT = BASE_DIR.parents[1]
if str(PACKAGE_ROOT) not in sys.path:
    sys.path.insert(0, str(PACKAGE_ROOT))
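The upward-search pattern in this suggestion can be exercised on its own. `find_package_root` is a hypothetical helper name, not part of llm4ad:

```python
from pathlib import Path
from typing import Optional

def find_package_root(start: Path, package: str) -> Optional[Path]:
    """Walk upward from `start` and return the first directory that
    contains `<package>/__init__.py`, or None if no ancestor does."""
    for candidate in (start, *start.parents):
        if (candidate / package / '__init__.py').exists():
            return candidate
    return None
```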

Comment on lines +27 to +35
    method = EoH(llm=llm,
                 profiler=ProfilerBase(log_dir='logs/eoh', log_style='simple'),
                 evaluation=task,
                 max_sample_nums=20000,
                 max_generations=10,
                 pop_size=4,
                 num_samplers=1,
                 num_evaluators=1,
                 debug_mode=False)
⚠️ Potential issue | 🟠 Major

Avoid accidental high-cost runs; ensure logs directory exists

  • max_sample_nums=20000 risks long runtime and high API cost; default to a modest value and allow env override.
  • Ensure logs directory exists before constructing the profiler.
-    method = EoH(llm=llm,
-                 profiler=ProfilerBase(log_dir='logs/eoh', log_style='simple'),
-                 evaluation=task,
-                 max_sample_nums=20000,
+    log_dir = Path('logs/eoh')
+    log_dir.mkdir(parents=True, exist_ok=True)
+    method = EoH(llm=llm,
+                 profiler=ProfilerBase(log_dir=str(log_dir), log_style='simple'),
+                 evaluation=task,
+                 max_sample_nums=int(os.getenv("LLM4AD_MAX_SAMPLES", "20")),
                  max_generations=10,
                  pop_size=4,
                  num_samplers=1,
                  num_evaluators=1,
                  debug_mode=False)

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py
around lines 27 to 35, the default max_sample_nums is set very high (20000) and
the ProfilerBase is constructed without ensuring the logs directory exists;
change the code to read max_sample_nums from an environment variable (e.g.,
MAX_SAMPLE_NUMS) with a modest default like 1000 and safe int parsing/fallback,
and before creating ProfilerBase ensure the log directory ('logs/eoh') exists by
creating it (os.makedirs(..., exist_ok=True) or
pathlib.Path(...).mkdir(parents=True, exist_ok=True)); then pass the
parsed/validated max_sample_nums into EoH and construct ProfilerBase after the
directory is created.
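The "safe int parsing/fallback" the prompt asks for can be captured in a small helper. The name `env_int` and the variable names are illustrative:

```python
import os

def env_int(name: str, default: int) -> int:
    """Read an integer from the environment, falling back to `default`
    when the variable is unset or not a valid integer."""
    raw = os.getenv(name)
    if raw is None:
        return default
    try:
        return int(raw)
    except ValueError:
        return default
```

Usage would then look like `max_sample_nums=env_int("MAX_SAMPLE_NUMS", 1000)`.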

Comment on lines +7 to +10
BASE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = BASE_DIR.parents[2]
if str(PROJECT_ROOT) not in sys.path:
    sys.path.append(str(PROJECT_ROOT))
@coderabbitai coderabbitai bot Sep 25, 2025

⚠️ Potential issue | 🔴 Critical

Point sys.path at the actual llm4ad package root

Line 8 adds starter_ai_agents/llm_for_algorithm_design to sys.path, but the llm4ad package lives under llm4ad_v1.0. Running the script will still raise ModuleNotFoundError: No module named 'llm4ad'. Please append the directory that actually contains the llm4ad folder.

-BASE_DIR = Path(__file__).resolve().parent
-PROJECT_ROOT = BASE_DIR.parents[2]
+BASE_DIR = Path(__file__).resolve().parent
+PROJECT_ROOT = BASE_DIR.parents[1]
📝 Committable suggestion

Suggested change

BASE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = BASE_DIR.parents[2]
if str(PROJECT_ROOT) not in sys.path:
    sys.path.append(str(PROJECT_ROOT))

BASE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = BASE_DIR.parents[1]
if str(PROJECT_ROOT) not in sys.path:
    sys.path.append(str(PROJECT_ROOT))

Author (@Acquent0):
To use the llm4ad package, you must first install it with pip install llm4ad. After installation, you can import it using import llm4ad. Importantly, the package name llm4ad is unrelated to any directory in your code that also happens to be named llm4ad. This naming collision is likely the reason you're encountering the "llm4ad module not found" error.
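One quick way to check which llm4ad (if any) the interpreter will actually import — a generic sketch, not part of the PR:

```python
import importlib.util

# find_spec returns None when the package is not importable at all;
# otherwise spec.origin shows the file that would be loaded, revealing
# whether a local directory shadows the pip-installed package.
spec = importlib.util.find_spec("llm4ad")
if spec is None:
    print("llm4ad is not importable; run `pip install llm4ad` first")
else:
    print("llm4ad resolves to", spec.origin)
```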

🧩 Analysis chain

@Acquent0, thank you for that clarification! You're absolutely right - I misunderstood the package structure. If llm4ad is an external package installed via pip install llm4ad, then my original comment about the sys.path manipulation was based on incorrect assumptions.

Since llm4ad is an external package, you might want to add it to the requirements.txt file to ensure consistent installation across environments. Also, if the sys.path manipulation in the script isn't needed for importing the external llm4ad package, it could potentially be removed unless it's serving another purpose for local imports.


🏁 Script executed:

#!/bin/bash
# Check if llm4ad is mentioned in requirements.txt
cat starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/requirements.txt | grep -i llm4ad

Length of output: 0


The grep result confirms that llm4ad is indeed missing from the requirements.txt file. Let me check what specific components are being imported from the llm4ad package in the example scripts to better understand the dependencies.


🏁 Script executed:

#!/bin/bash
# Check what imports are being used from llm4ad in the example scripts
rg -n "from llm4ad" starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/ -A2 -B1

# Also check for direct llm4ad imports
rg -n "import llm4ad" starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/ -A2 -B1

Length of output: 15097


Perfect! Now I can see the complete picture. The scripts are importing extensively from the external llm4ad package (components like llm4ad.task.optimization, llm4ad.tools.llm.llm_api_https, llm4ad.method.eoh, etc.), but some scripts also import local modules like from evaluation import CirclePackingEvaluation.

You're absolutely right about my original comment being incorrect. The sys.path manipulation in the script is likely needed for importing local Python files within the example directories (like evaluation.py, template.py), not for the external llm4ad package.

However, I notice that llm4ad is missing from the requirements.txt file despite being heavily used throughout all the example scripts. You might want to consider adding it to ensure users can easily install all dependencies:

This would help users who clone the repository understand they need to install llm4ad as a prerequisite for running the examples.


🏁 Script executed:

#!/bin/bash
# Check the current requirements.txt content to see where llm4ad could be added
echo "Current requirements.txt:"
cat starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/requirements.txt

Length of output: 216


Thank you for the clarification! You're absolutely right, and I apologize for my initial misunderstanding.

Looking at all the example scripts, they extensively import from the external llm4ad package (components like llm4ad.task.optimization, llm4ad.tools.llm, llm4ad.method.eoh, etc.). The sys.path manipulation in the scripts is for importing local files like evaluation.py and template.py, not for the external llm4ad package itself.

Since llm4ad is used throughout all the example scripts but isn't listed in requirements.txt, I'd recommend adding it to make the setup process smoother for users:

numpy<2.0.0
scipy
numba
requests
openai
+ llm4ad
torch
tensorboard
wandb
pytz
pandas
matplotlib
ttkbootstrap
codebleu
tree-sitter-python==0.21
gym
datasets
huggingface_hub
networkx
# langchain_ollama

This way, users can install all dependencies with a single pip install -r requirements.txt command rather than needing to separately install llm4ad.


✏️ Learnings added
Learnt from: Acquent0
PR: Arindam200/awesome-ai-apps#71
File: starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/pymoo_moead/run_eoh.py:7-10
Timestamp: 2025-09-25T07:54:05.919Z
Learning: The `llm4ad` package is an external Python package that needs to be installed via `pip install llm4ad`, not a local package within the repository structure. The package name `llm4ad` is unrelated to any local directory names that might also be called `llm4ad`.

Learnt from: Acquent0
PR: Arindam200/awesome-ai-apps#71
File: starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/pymoo_moead/run_eoh.py:7-10
Timestamp: 2025-09-25T07:54:05.919Z
Learning: When reviewing scripts that manipulate sys.path, check what imports are actually being used. The sys.path manipulation may be for local module imports rather than external packages that should be installed via pip.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (6)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_funsearch.py (3)

6-10: Make sys.path derivation robust and prioritize the local package

Detect the llm4ad package root by presence and insert it at the front of sys.path to avoid resolving to an installed/global package.

-# Derive project root and ensure it's on sys.path before any llm4ad imports
-BASE_DIR = Path(__file__).resolve().parent
-PROJECT_ROOT = BASE_DIR.parents[2]
-if str(PROJECT_ROOT) not in sys.path:
-    sys.path.append(str(PROJECT_ROOT))
+# Derive package root containing "llm4ad" and ensure it's on sys.path before any llm4ad imports
+BASE_DIR = Path(__file__).resolve().parent
+PACKAGE_ROOT = None
+for candidate in (BASE_DIR, *BASE_DIR.parents):
+    if (candidate / 'llm4ad' / '__init__.py').exists():
+        PACKAGE_ROOT = candidate
+        break
+if PACKAGE_ROOT is None:
+    PACKAGE_ROOT = BASE_DIR.parents[2]
+if str(PACKAGE_ROOT) not in sys.path:
+    sys.path.insert(0, str(PACKAGE_ROOT))

28-35: Ensure logs directory exists before initializing ProfilerBase

Create the directory to avoid sporadic failures on first run.

-    method = FunSearch(
+    log_dir = Path('logs/funsearch')
+    log_dir.mkdir(parents=True, exist_ok=True)
+    method = FunSearch(
         llm=llm,
-        profiler=ProfilerBase(log_dir='logs/funsearch', log_style='simple'),
+        profiler=ProfilerBase(log_dir=str(log_dir), log_style='simple'),
         evaluation=task,
         max_sample_nums=20,
         num_samplers=1,
         num_evaluators=1,
     )

21-24: Read host/model/timeout from environment to avoid hardcoded placeholders

Allows configuration without code edits and prevents accidental calls to invalid hosts.

-    llm = HttpsApi(host='xxx',  # your host endpoint, e.g., 'api.openai.com', 'api.deepseek.com'
-                   key=os.getenv("LLM4AD_API_KEY"),  # your key, e.g., 'sk-abcdefghijklmn'
-                   model='xxx',  # your llm, e.g., 'gpt-3.5-turbo'
-                   timeout=60)
+    llm = HttpsApi(
+        host=os.getenv("LLM4AD_HOST", "xxx"),  # e.g., 'api.openai.com', 'api.deepseek.com'
+        key=os.getenv("LLM4AD_API_KEY"),
+        model=os.getenv("LLM4AD_MODEL", "xxx"),
+        timeout=int(os.getenv("LLM4AD_TIMEOUT", "60")),
+    )
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (1)

30-38: Ensure logs directory exists and make max_sample_nums configurable

Create logs directory proactively and allow tuning via env var.

-    method = EoH(llm=llm,
-                 profiler=ProfilerBase(log_dir='logs/eoh', log_style='simple'),
-                 evaluation=task,
-                 max_sample_nums=20,
+    log_dir = Path('logs/eoh')
+    log_dir.mkdir(parents=True, exist_ok=True)
+    method = EoH(llm=llm,
+                 profiler=ProfilerBase(log_dir=str(log_dir), log_style='simple'),
+                 evaluation=task,
+                 max_sample_nums=int(os.getenv("LLM4AD_MAX_SAMPLES", "20")),
                  max_generations=10,
                  pop_size=4,
                  num_samplers=1,
                  num_evaluators=1,
                  debug_mode=False)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/generate_weibull_instances.py (1)

3-41: Vectorize item generation, clarify Weibull params, and remove no-op guard

  • Use RNG and vectorization; drop per-item loop.
  • Fix misleading comment (shape vs scale).
  • Remove the ineffective guard when storing instances.
-def generate_weibull_dataset(num_instances, num_items, capacity_limit):
+def generate_weibull_dataset(num_instances, num_items, capacity_limit, seed=2024):
@@
-    np.random.seed(2024)
+    rng = np.random.default_rng(seed)
@@
-        items = []
-
-        # Generate random samples from Weibull(45, 3) distribution
-        samples = np.random.weibull(3, num_items) * 45
+        # Generate random samples from Weibull(shape=3, scale=45)
+        samples = rng.weibull(3, num_items) * 45
@@
-        # Add the items to the instance
-        for size in sizes:
-            items.append(size)
-
-        instance['items'] = np.array(items)
+        # Add the items to the instance
+        instance['items'] = sizes
@@
-        if num_items not in dataset:
-            dataset[f'instance_{i}'] = instance
+        dataset[f'instance_{i}'] = instance
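End-to-end, the vectorized generator suggested above behaves like this standalone sketch (the dict layout with a `'capacity'` field is an assumption for illustration, not the repo's exact schema):

```python
import numpy as np

def generate_weibull_dataset(num_instances, num_items, capacity_limit, seed=2024):
    """Build bin-packing instances whose item sizes follow Weibull(shape=3, scale=45),
    clipped to [1, capacity_limit] and rounded to integers."""
    rng = np.random.default_rng(seed)
    dataset = {}
    for i in range(num_instances):
        samples = rng.weibull(3, num_items) * 45  # shape k=3, scale 45
        sizes = np.clip(samples, 1, capacity_limit).round().astype(int)
        dataset[f'instance_{i}'] = {'capacity': capacity_limit, 'items': sizes}
    return dataset

data = generate_weibull_dataset(2, 1000, 100)
assert sorted(data) == ['instance_0', 'instance_1']
assert int(data['instance_0']['items'].min()) >= 1
assert int(data['instance_0']['items'].max()) <= 100
```

Passing the seed through `default_rng` keeps runs reproducible without mutating global NumPy state.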
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (1)

22-26: Make LLM client configurable via environment

Read host/model/timeout from env to avoid editing the script and reduce misconfiguration.

-    llm = HttpsApi(host='xxx',  # your host endpoint, e.g., 'api.openai.com', 'api.deepseek.com'
-                   key=os.getenv("LLM4AD_API_KEY"),  # your key, e.g., 'sk-abcdefghijklmn'
-                   model='xxx',  # your llm, e.g., 'gpt-3.5-turbo'
-                   timeout=60)
+    llm = HttpsApi(
+        host=os.getenv("LLM4AD_HOST", "xxx"),
+        key=os.getenv("LLM4AD_API_KEY"),
+        model=os.getenv("LLM4AD_MODEL", "xxx"),
+        timeout=int(os.getenv("LLM4AD_TIMEOUT", "60")),
+    )
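The env-with-fallback pattern used in these suggestions can be factored into one helper; variable names and defaults below are illustrative, not mandated by llm4ad:

```python
import os

def env_llm_config():
    """Collect LLM connection settings from the environment with explicit fallbacks.
    The API key deliberately has no fallback, so a missing key fails visibly."""
    return {
        "host": os.getenv("LLM4AD_HOST", "api.deepseek.com"),
        "key": os.getenv("LLM4AD_API_KEY"),
        "model": os.getenv("LLM4AD_MODEL", "deepseek-chat"),
        "timeout": int(os.getenv("LLM4AD_TIMEOUT", "60")),
    }

os.environ["LLM4AD_TIMEOUT"] = "120"
cfg = env_llm_config()
assert cfg["timeout"] == 120
assert isinstance(cfg["host"], str)
```

A single helper keeps every `run_*.py` example in sync when a variable name changes.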
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c5e33c5 and 7d6d8fc.

📒 Files selected for processing (11)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/cvrp_construct/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/online_bin_packing_tutorial.ipynb (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_funsearch.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/generate_weibull_instances.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/pymoo_moead/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/tsp_construct/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/vrptw_construct/run_eoh.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/cvrp_construct/run_eoh.py
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/tsp_construct/run_eoh.py
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/online_bin_packing_tutorial.ipynb
🧰 Additional context used
🧬 Code graph analysis (7)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_funsearch.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (86-110)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (1)
  • main (18-39)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/generate_weibull_instances.py (1)
  • generate_weibull_dataset (3-42)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/evaluation.py (2)
  • evaluate (91-106)
  • evaluate_program (36-37)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/pymoo_moead/run_eoh.py (3)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/run_eoh.py (1)
  • main (17-38)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/cvrp_construct/run_eoh.py (1)
  • main (17-37)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (1)
  • main (18-39)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/vrptw_construct/run_eoh.py (3)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/cvrp_construct/run_eoh.py (1)
  • main (17-37)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (1)
  • main (19-40)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/tsp_construct/run_eoh.py (1)
  • main (18-38)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (86-110)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (1)
  • main (19-40)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/run_eoh.py (1)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/evaluation.py (1)
  • CirclePackingEvaluation (12-106)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (86-110)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (1)
  • main (18-39)
🪛 Gitleaks (8.28.0)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py

[high] 89-89: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

🔇 Additional comments (6)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/pymoo_moead/run_eoh.py (1)

7-10: Fix sys.path to include the actual llm4ad package root

BASE_DIR.parents[2] resolves to .../llm_for_algorithm_design, which does not contain the top-level llm4ad/ package. As a result, import llm4ad... still fails with ModuleNotFoundError. You need to append the llm4ad_v1.0 directory instead.

 BASE_DIR = Path(__file__).resolve().parent
-PROJECT_ROOT = BASE_DIR.parents[2]
+PROJECT_ROOT = BASE_DIR.parents[1]
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/evaluation.py (2)

12-13: Fix package-relative imports so the module loads predictably

Importing this module via its package path currently raises ModuleNotFoundError: No module named 'template'. Switch to explicit relative imports to keep it importable regardless of the caller’s cwd.

-from template import template_program, task_description
-from generate_weibull_instances import generate_weibull_dataset
+from .template import template_program, task_description
+from .generate_weibull_instances import generate_weibull_dataset

89-107: Honor the documented dataset parameters

data_file and data_key are advertised but never used; the constructor always regenerates a fixed dataset, silently ignoring caller input. Please load the requested dataset (raising the documented exceptions) before falling back to synthetic generation.

-        self._datasets = generate_weibull_dataset(5, 5000, 100)
+        data_path = Path(data_file)
+        if not data_path.is_absolute():
+            data_path = Path(__file__).with_name(data_file)
+
+        if data_path.exists():
+            with data_path.open('rb') as fh:
+                store = pickle.load(fh)
+            try:
+                self._datasets = store[data_key]
+            except KeyError as exc:
+                raise AttributeError(
+                    f"Dataset key '{data_key}' not found in '{data_path.name}'."
+                ) from exc
+        else:
+            if data_file != 'weibull_train.pkl':
+                raise FileNotFoundError(f"Dataset file '{data_file}' not found.")
+            self._datasets = generate_weibull_dataset(5, 5000, 100)
# Add near the top of the file
import pickle
from pathlib import Path
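The load-or-generate fallback proposed above reduces to a small reusable pattern; this is a standalone sketch, with file and key names invented for the demo rather than taken from the repo:

```python
import pickle
import tempfile
from pathlib import Path

def load_or_generate(data_file, data_key, generate):
    """Load store[data_key] from a pickle if the file exists; otherwise call generate()."""
    data_path = Path(data_file)
    if data_path.exists():
        with data_path.open('rb') as fh:
            store = pickle.load(fh)
        if data_key not in store:
            raise KeyError(f"Dataset key '{data_key}' not found in '{data_path.name}'.")
        return store[data_key]
    return generate()

# Round-trip: write a store, then load it back through the helper.
tmp = Path(tempfile.mkdtemp()) / 'weibull_train.pkl'
with tmp.open('wb') as fh:
    pickle.dump({'weibull_5k_train': {'instance_0': {'items': [1, 2, 3]}}}, fh)
data = load_or_generate(tmp, 'weibull_5k_train', generate=dict)
assert data['instance_0']['items'] == [1, 2, 3]
```

The explicit `KeyError` (or whichever exception the class documents) surfaces misconfiguration instead of silently regenerating synthetic data.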
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing_local/run_eoh.py (1)

6-10: Robustly locate the llm4ad package and prepend to sys.path

Search upward for the llm4ad package; insert at index 0 to prefer the local copy.

-# Derive project root and ensure it's on sys.path before any llm4ad imports
-BASE_DIR = Path(__file__).resolve().parent
-PROJECT_ROOT = BASE_DIR.parents[2]
-if str(PROJECT_ROOT) not in sys.path:
-    sys.path.append(str(PROJECT_ROOT))
+# Derive package root containing "llm4ad" and ensure it's on sys.path before any llm4ad imports
+BASE_DIR = Path(__file__).resolve().parent
+PACKAGE_ROOT = None
+for candidate in (BASE_DIR, *BASE_DIR.parents):
+    if (candidate / 'llm4ad' / '__init__.py').exists():
+        PACKAGE_ROOT = candidate
+        break
+if PACKAGE_ROOT is None:
+    PACKAGE_ROOT = BASE_DIR.parents[2]
+if str(PACKAGE_ROOT) not in sys.path:
+    sys.path.insert(0, str(PACKAGE_ROOT))
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/online_bin_packing/run_eoh.py (2)

6-10: Robustly find the package root and prepend to sys.path

Prefer discovery over fixed parent index; insert at 0 to prioritize local sources.

-# Derive project root and ensure it's on sys.path before any llm4ad imports
-BASE_DIR = Path(__file__).resolve().parent
-PROJECT_ROOT = BASE_DIR.parents[2]
-if str(PROJECT_ROOT) not in sys.path:
-    sys.path.append(str(PROJECT_ROOT))
+# Derive package root containing "llm4ad" and ensure it's on sys.path before any llm4ad imports
+BASE_DIR = Path(__file__).resolve().parent
+PACKAGE_ROOT = None
+for candidate in (BASE_DIR, *BASE_DIR.parents):
+    if (candidate / 'llm4ad' / '__init__.py').exists():
+        PACKAGE_ROOT = candidate
+        break
+if PACKAGE_ROOT is None:
+    PACKAGE_ROOT = BASE_DIR.parents[2]
+if str(PACKAGE_ROOT) not in sys.path:
+    sys.path.insert(0, str(PACKAGE_ROOT))
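The upward search is generic and easy to verify in isolation; here is a sketch against a throwaway directory tree (the `llm4ad` package name is the only repo-specific assumption):

```python
import sys
import tempfile
from pathlib import Path
from typing import Optional

def find_package_root(start: Path, package: str = "llm4ad") -> Optional[Path]:
    """Walk upward from `start` until a directory containing `package/__init__.py` is found."""
    for candidate in (start, *start.parents):
        if (candidate / package / "__init__.py").exists():
            return candidate
    return None

# Demo: root/llm4ad/__init__.py with a script location nested below it.
root = Path(tempfile.mkdtemp())
(root / "llm4ad").mkdir()
(root / "llm4ad" / "__init__.py").touch()
found = find_package_root(root / "example_questions" / "tsp_construct")
assert found == root
# Prepend so the local checkout shadows any site-packages install.
if str(found) not in sys.path:
    sys.path.insert(0, str(found))
```

Note that `Path.parents` works lexically, so the search is safe even when the start path does not exist on disk yet.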

29-37: Avoid accidental high-cost runs and ensure logs dir exists

20000 samples can be very expensive; make it env-configurable and create logs dir beforehand.

-    method = EoH(llm=llm,
-                 profiler=ProfilerBase(log_dir='logs/eoh', log_style='simple'),
-                 evaluation=task,
-                 max_sample_nums=20000,
+    log_dir = Path('logs/eoh')
+    log_dir.mkdir(parents=True, exist_ok=True)
+    method = EoH(llm=llm,
+                 profiler=ProfilerBase(log_dir=str(log_dir), log_style='simple'),
+                 evaluation=task,
+                 max_sample_nums=int(os.getenv("LLM4AD_MAX_SAMPLES", "20")),
                  max_generations=10,
                  pop_size=4,
                  num_samplers=1,
                  num_evaluators=1,
                  debug_mode=False)

Comment on lines +6 to +10
# Derive project root and ensure it's on sys.path before any llm4ad imports
BASE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = BASE_DIR.parents[2]
if str(PROJECT_ROOT) not in sys.path:
sys.path.append(str(PROJECT_ROOT))

⚠️ Potential issue | 🟠 Major

Ensure the in-repo llm4ad takes precedence on sys.path.

Using sys.path.append lets any site-packages installation of llm4ad shadow the freshly added v1.0 code, which can break the script or load mismatched APIs. Insert the project root at the front so the local checkout always wins.

 if str(PROJECT_ROOT) not in sys.path:
-    sys.path.append(str(PROJECT_ROOT))
+    sys.path.insert(0, str(PROJECT_ROOT))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# Derive project root and ensure it's on sys.path before any llm4ad imports
BASE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = BASE_DIR.parents[2]
if str(PROJECT_ROOT) not in sys.path:
sys.path.append(str(PROJECT_ROOT))
# Derive project root and ensure it's on sys.path before any llm4ad imports
BASE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = BASE_DIR.parents[2]
if str(PROJECT_ROOT) not in sys.path:
sys.path.insert(0, str(PROJECT_ROOT))
🤖 Prompt for AI Agents
In
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/circle_packing/EoH_settings&logs/run_eoh.py
around lines 6 to 10, sys.path.append is used which lets installed packages
shadow the local checkout; change this to ensure the project root is placed at
the front of sys.path (use sys.path.insert(0, str(PROJECT_ROOT))). If the path
may already exist, remove any existing occurrence and then insert it at index 0
so the in-repo llm4ad always takes precedence.

Comment on lines +7 to +10
BASE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = BASE_DIR.parents[2]
if str(PROJECT_ROOT) not in sys.path:
sys.path.append(str(PROJECT_ROOT))

⚠️ Potential issue | 🔴 Critical

Fix the package root: current path resolution breaks imports.

Line 8 resolves to starter_ai_agents/llm_for_algorithm_design, but the llm4ad package actually lives under starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0. As written, running this script raises ModuleNotFoundError: No module named 'llm4ad'. Please point PROJECT_ROOT to the directory that directly contains the llm4ad/ package (one level higher than BASE_DIR’s parent) before mutating sys.path.

Apply this fix:

-BASE_DIR = Path(__file__).resolve().parent
-PROJECT_ROOT = BASE_DIR.parents[2]
+BASE_DIR = Path(__file__).resolve().parent
+PROJECT_ROOT = BASE_DIR.parents[1]
📝 Committable suggestion


Suggested change
BASE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = BASE_DIR.parents[2]
if str(PROJECT_ROOT) not in sys.path:
sys.path.append(str(PROJECT_ROOT))
BASE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = BASE_DIR.parents[1]
if str(PROJECT_ROOT) not in sys.path:
sys.path.append(str(PROJECT_ROOT))
🤖 Prompt for AI Agents
In
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/vrptw_construct/run_eoh.py
around lines 7 to 10, the PROJECT_ROOT is computed incorrectly (points to
starter_ai_agents/llm_for_algorithm_design) which breaks imports; change
PROJECT_ROOT to the directory one level higher than BASE_DIR.parent (i.e.,
BASE_DIR.parent.parent) so it points at the folder that directly contains the
llm4ad package, then append that path to sys.path if missing.

@shivaylamba shivaylamba left a comment

U need to use Nebius models

@Acquent0
Author

U need to use Nebius models

Thanks! I‘ve updated the code to use the Nebius models as suggested.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 27

🧹 Nitpick comments (37)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/requirements.txt (1)

1-19: Pin the mission-critical packages before merging.

Right now most entries are unpinned, so any upstream release (e.g., openai, torch, scipy, datasets) can subtly break the workflows you’re shipping. Since LLM4AD relies on deterministic evaluation loops, we need to lock the versions you validated locally in this PR to keep CI and users from chasing moving targets. Please freeze the tested versions (or adopt a constraints file) before we merge.
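One low-effort way to do this is a constraints file captured from the validated environment; the package names and versions below are placeholders, not the versions actually tested in this PR:

```text
# constraints.txt, captured from the environment that was actually validated:
#   pip freeze > constraints.txt
# Installs then stay reproducible without editing requirements.txt itself:
#   pip install -r requirements.txt -c constraints.txt
numpy==1.26.4
openai==1.40.0
```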

starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebius_temp.py (4)

189-189: Move chat input inside the Chat branch

Top-level st.chat_input runs regardless of mode and couples with sidebar logic.

-user_input = st.chat_input("Ask your Questions.")
+# Moved into Chat branch to avoid cross-mode side effects

382-409: Create chat input where it’s used

Keeps UI scoped; prevents accidental reuse elsewhere.

-        # Handle new user input
-        if user_input and user_input.strip():
+        # Handle new user input
+        user_input = st.chat_input("Ask your Questions.")
+        if user_input and user_input.strip():
             # Add user message
             with st.chat_message("user"):
                 st.markdown(user_input.strip())
             # Generate response
             with st.chat_message("assistant"):
                 with st.spinner("Nebius is thinking..."):
                     response, error, usage = chat.send_message(
                         user_input.strip(),
                         model_id,
                         temperature,
                         max_tokens,
                         top_p,
                         # presence_penalty and top_k use defaults
                     )

418-441: Decouple image prompt from chat input

Image generation should have its own prompt field.

-        prompt = user_input or ""
+        prompt = st.text_input("Image prompt", value="", placeholder="Describe the image to generate...")
@@
-        if user_input and user_input.strip():
+        if prompt and prompt.strip():
             with st.spinner("Generating image..."):
                 image_b64, error = chat.generate_image(
-                    prompt=prompt.strip(),
+                    prompt=prompt.strip(),
                     model=image_model_id,
                     response_format=response_format,
                     response_extension=response_extension,

47-63: Optional: expose presence_penalty and top_k in UI or remove from payload

Hidden parameters reduce transparency; either surface them or omit if not supported.

Please confirm Nebius accepts presence_penalty/top_k in this shape. If not, drop them from the payload.

starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/template.py (1)

4-13: Optional: coerce bins to ndarray to handle list inputs
Verified numpy<2.0.0 is declared in starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/requirements.txt.

 def priority(item: float, bins: np.ndarray) -> np.ndarray:
@@
-    return item - bins
+    bins = np.asarray(bins, dtype=float)
+    return item - bins
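The coercion matters because an LLM-generated caller may pass a plain list instead of an ndarray; a quick standalone check (the function body is the template's trivial baseline):

```python
import numpy as np

def priority(item: float, bins) -> np.ndarray:
    """Score each bin for `item`; higher is better. Accepts lists or arrays."""
    bins = np.asarray(bins, dtype=float)  # tolerate list inputs from generated code
    return item - bins

out = priority(4.0, [10, 5, 4])  # plain list works after coercion
assert isinstance(out, np.ndarray)
assert out.tolist() == [-6.0, -1.0, 0.0]
```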
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/.env.example (1)

1-2: Update llm4ad_v1/.env.example: add newline, fix placeholder casing, and document optional base URL

  • Add missing final newline
  • Standardize placeholder casing (“Api key”, “API KEY” → “API key”)
  • Document optional NEBIUS_BASE_URL override

Apply:

-LLM4AD_API_KEY="Your LLM4AD Api key"
-NEBIUS_API_KEY="Your Nebius API KEY"
+LLM4AD_API_KEY="Your LLM4AD API key"
+NEBIUS_API_KEY="Your Nebius API key"
+# Optional: override Nebius base URL (defaults in code)
+# NEBIUS_BASE_URL="https://api.studio.nebius.com/v1"
+
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/README.md (7)

71-77: Fix callout formatting (GitHub style) and remove blank line in blockquotes

Use GitHub callouts with uppercase keywords and no blank line after the marker.

- > [!Important]
- > The Python version **MUST** be larger or equal to Python 3.9, and less than Python 3.13.
+ > [!IMPORTANT]
+ > The Python version must be >= 3.9 and < 3.13.

- > [!Important]
- > If you are testing machine learning tasks or using GUI, please install gym via `pip install gym`. 
- > Please note that the gym version may be conflict with your own Python environment, please refer to gym's docs to obtain appropriate version.
+ > [!IMPORTANT]
+ > If you are testing machine learning tasks or using GUI, please install gym via `pip install gym`.
 + > Refer to Gym’s docs for a version compatible with your Python environment.

98-101: Drop leading “$” prompts in command examples

Lint flagged MD014. Remove “$” so users can copy/paste easily, or annotate with “console” and include expected output separately.

 ```bash
-$ cd LLM4AD
-$ pip install .
+cd LLM4AD
+pip install .
 ```

 ```bash
-$ pip install llm4ad
+pip install llm4ad
 ```

 ```shell
-$ cd GUI
-$ python run_gui.py
+cd GUI
+python run_gui.py
 ```

Also applies to: 107-110, 162-165

---

`225-228`: **Replace “here” with descriptive link text**

MD059: use meaningful link text.


-A Step-by-step Tutorial on using LLM4AD to solve your algorithm design task is provided [here](https://llm4ad-doc.readthedocs.io/en/latest/dev/run_new_task.html#)
+A step‑by‑step tutorial on using LLM4AD to solve your algorithm design task is provided in the [Run a new task guide](https://llm4ad-doc.readthedocs.io/en/latest/dev/run_new_task.html#).

233-236: Potentially broken LICENSE link

This path is relative to this README; if no local LICENSE exists, link to the upstream license.

-This project is licensed under the **MIT License** - see the [LICENSE](./LICENSE) file for details.
+This project is licensed under the **MIT License** — see the upstream [LICENSE](https://github.com/Optima-CityU/LLM4AD/blob/main/LICENSE).

183-185: Remove duplicated “Others — Coming soon” row

There are two identical rows in the methods table.

-| Others                            | Coming soon                                                  |
-| Others                            | Coming soon                                                  |
+| Others                            | Coming soon                                                  |

17-17: Prefer HTTPS where possible

If https is supported, switch http links to https to avoid mixed‑content issues.

-[Website](http://www.llm4ad.com/)
+[Website](https://www.llm4ad.com/)
-1. Visit our website [LLM4AD Web](http://www.llm4ad.com)
+1. Visit our website [LLM4AD Web](https://www.llm4ad.com)

Also applies to: 271-271


281-281: Make email a mailto link (and avoid bare text)

This also resolves the MD034 lint.

-5. Contact us through email [email protected]
+5. Contact us via email: <mailto:[email protected]>
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/tsp_construct/run_eoh.py (1)

18-27: Load .env and read host/model from env instead of hardcoding placeholders

You already import dotenv; call it and wire env vars to reduce code edits across examples.

-from dotenv import load_dotenv
+from dotenv import load_dotenv
@@
-def main():
+def main():
@@
-    llm = NebuisLLM(host='xxx',  # your host endpoint, e.g., 'api.openai.com', 'api.deepseek.com'
-                   key=os.getenv("LLM4AD_API_KEY"),  # your key, e.g., 'sk-abcdefghijklmn'
-                   model='xxx',  # your llm, e.g., 'gpt-3.5-turbo'
-                   timeout=60)
+    load_dotenv()
+    llm = NebuisLLM(
+        host=os.getenv("LLM4AD_HOST", "api.deepseek.com"),
+        key=os.getenv("LLM4AD_API_KEY"),
+        model=os.getenv("LLM4AD_MODEL", "deepseek-chat"),
+        timeout=int(os.getenv("LLM4AD_TIMEOUT", "60")),
+    )
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/evaluation.py (3)

57-64: Tighten evaluate signature

Clarify the callable type for better editor/type‑checker help (this also requires `from typing import Callable` at the top of the file).

-def evaluate(instances: dict, priority: callable) -> float:
+def evaluate(instances: dict, priority: Callable[[float, np.ndarray], np.ndarray]) -> float:

109-110: Return type should be float; parameter program_str is unused

Align annotation with actual return; consider logging/using program_str in the base evaluator if needed.

-    def evaluate_program(self, program_str: str, callable_func: callable) -> Any | None:
-        return evaluate(self._datasets, callable_func)
+    def evaluate_program(self, program_str: str, callable_func: Callable[[float, np.ndarray], np.ndarray]) -> float:
+        return float(evaluate(self._datasets, callable_func))

45-52: Optional guard: handle case with no valid bin

Given the current bin allocation, this shouldn’t occur, but a defensive check avoids hard crashes if inputs change.

-        priorities = priority(item, bins[valid_bin_indices])
-        # Add item to bin with highest priority.
-        best_bin = valid_bin_indices[np.argmax(priorities)]
+        if valid_bin_indices.size == 0:
+            # Should not happen with preallocated bins; fall back to the emptiest bin
+            best_bin = int(np.argmax(bins))
+        else:
+            priorities = priority(item, bins[valid_bin_indices])
+            # Add item to bin with highest priority.
+            best_bin = int(valid_bin_indices[np.argmax(priorities)])
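For context, one full online-packing step with the guard in place looks like this; it mirrors the evaluator's loop as a sketch rather than reproducing its exact code:

```python
import numpy as np

def pack_one(item, bins, priority):
    """Place `item` into the feasible bin with the highest priority; return its index."""
    valid = np.nonzero(bins >= item)[0]  # bins with enough remaining capacity
    if valid.size == 0:
        raise ValueError(f"no bin can fit item of size {item}")
    scores = priority(item, bins[valid])
    best = int(valid[np.argmax(scores)])
    bins[best] -= item  # consume capacity in place
    return best

bins = np.array([10.0, 4.0, 7.0])
best_fit = lambda item, b: item - b  # prefer the tightest fit (least leftover)
idx = pack_one(5.0, bins, best_fit)
assert idx == 2                        # 7.0 is the tightest feasible bin
assert bins.tolist() == [10.0, 4.0, 2.0]
```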
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/pymoo_moead/run_eoh.py (1)

17-26: Call load_dotenv and use env for host/model

Keep examples configurable without code edits; mirror the TSP script suggestion.

-from dotenv import load_dotenv
+from dotenv import load_dotenv
@@
-def main():
+def main():
@@
-    llm = NebuisLLM(host='xxx',  # your host endpoint, e.g., 'api.openai.com', 'api.deepseek.com'
-                   key=os.getenv("LLM4AD_API_KEY"),  # your key, e.g., 'sk-abcdefghijklmn'
-                   model='xxx',  # your llm, e.g., 'gpt-3.5-turbo'
-                   timeout=100)
+    load_dotenv()
+    llm = NebuisLLM(
+        host=os.getenv("LLM4AD_HOST", "api.deepseek.com"),
+        key=os.getenv("LLM4AD_API_KEY"),
+        model=os.getenv("LLM4AD_MODEL", "deepseek-chat"),
+        timeout=int(os.getenv("LLM4AD_TIMEOUT", "100")),
+    )
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing/run_funsearch.py (1)

31-38: Create logs directory before constructing Profiler

Avoid runtime errors when the logs path doesn’t exist.

-    method = FunSearch(
+    log_dir = Path('logs/funsearch')
+    log_dir.mkdir(parents=True, exist_ok=True)
+    method = FunSearch(
         llm=llm,
-        profiler=ProfilerBase(log_dir='logs/funsearch', log_style='simple'),
+        profiler=ProfilerBase(log_dir=str(log_dir), log_style='simple'),
         evaluation=task,
         max_sample_nums=20,
         num_samplers=1,
         num_evaluators=1,
     )
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/run_eoh.py (1)

32-40: Ensure logs directory exists before profiler

Pre-create logs/eoh.

-    method = EoH(llm=llm,
-                 profiler=ProfilerBase(log_dir='logs/eoh', log_style='simple'),
+    log_dir = Path('logs/eoh')
+    log_dir.mkdir(parents=True, exist_ok=True)
+    method = EoH(llm=llm,
+                 profiler=ProfilerBase(log_dir=str(log_dir), log_style='simple'),
                  evaluation=task,
                  max_sample_nums=20,
                  max_generations=10,
                  pop_size=4,
                  num_samplers=1,
                  num_evaluators=1,
                  debug_mode=False)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/cvrp_construct/run_eoh.py (1)

30-37: Create logs dir before profiler

Ensure logs/ exists for EoHProfiler.

-    method = EoH(llm=llm,
-                 profiler=EoHProfiler(log_dir='logs', log_style='complex'),
+    log_dir = Path('logs')
+    log_dir.mkdir(parents=True, exist_ok=True)
+    method = EoH(llm=llm,
+                 profiler=EoHProfiler(log_dir=str(log_dir), log_style='complex'),
                  evaluation=task,
                  max_sample_nums=100,
                  max_generations=10,
                  pop_size=20,
                  num_samplers=4,
                  num_evaluators=4)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/generate_weibull_instances.py (2)

22-38: Vectorize and simplify item generation

Avoid the loop and assign the numpy array directly; also correct the distribution comment.

-        items = []
-
-        # Generate random samples from Weibull(45, 3) distribution
-        samples = np.random.weibull(3, num_items) * 45
-
-        # Clip the samples at the specified limit
-        samples = np.clip(samples, 1, capacity_limit)
-
-        # Round the item sizes to the nearest integer
-        sizes = np.round(samples).astype(int)
-
-        # Add the items to the instance
-        for size in sizes:
-            items.append(size)
-
-        instance['items'] = np.array(items)
+        # Generate Weibull(k=3, scale=45) samples
+        samples = np.random.weibull(3, num_items) * 45
+        sizes = np.clip(samples, 1, capacity_limit).round().astype(int)
+        instance['items'] = sizes

3-11: Optional: Allow seeding via parameter and update docstring

Expose seed for reproducibility without hardcoding; keep default 2024.

-def generate_weibull_dataset(num_instances, num_items, capacity_limit):
+def generate_weibull_dataset(num_instances, num_items, capacity_limit, seed: int = 2024):
@@
-    np.random.seed(2024)
+    np.random.seed(seed)
@@
-        # Generate random samples from Weibull(45, 3) distribution
+        # Generate random samples from Weibull(k=3, scale=45)

Also applies to: 24-26

starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/circle_packing/EoH_settings&logs/run_eoh.py (3)

17-18: Load environment variables before reading the API key.

load_dotenv is imported but never called.

-from dotenv import load_dotenv
+from dotenv import load_dotenv
+
+load_dotenv()

23-26: Parameterize host/model via env and fail fast on missing API key.

Avoid hard‑coding and give clearer error if the key is absent.

-    llm = NebuisLLM(host='api.bltcy.ai',  # your host endpoint, e.g., 'api.openai.com', 'api.deepseek.com'
-                   key=os.getenv("LLM4AD_API_KEY"),  # your key
-                   model='deepseek-v3',  # your llm, e.g., 'gpt-3.5-turbo'
-                   timeout=120)
+    host = os.getenv("LLM4AD_HOST", "api.bltcy.ai")
+    model = os.getenv("LLM4AD_MODEL", "deepseek-v3")
+    key = os.getenv("LLM4AD_API_KEY")
+    if not key:
+        raise RuntimeError("LLM4AD_API_KEY is not set")
+    llm = NebuisLLM(host=host, key=key, model=model, timeout=120)

30-38: Consider saner defaults for local runs.

max_sample_nums=15000 and max_generations=10000 can lead to very long runs. Expose these via env/CLI or reduce defaults.

starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/vrptw_construct/run_eoh.py (1)

18-19: Load .env early.

load_dotenv is imported but not called.

-from dotenv import load_dotenv
+from dotenv import load_dotenv
+load_dotenv()
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/circle_packing/EoH_settings&logs/template.py (1)

1-31: Tighten the template: drop unused import and honor seed note.

math isn’t used. Since the docstring asks to set all seeds, add an explicit np.random.seed call so the baseline stays deterministic if randomness is introduced later.

 template_program = '''
 import numpy as np
-import math
 def pack_circles(n: int) -> np.ndarray:
     """
     Pack n circles in a unit square to maximize sum of radii.
@@
-    Important: Set "all" random seeds to 2025, including the packages (such as scipy sub-packages) involving random seeds.
+    Important: Set "all" random seeds to 2025, including the packages (such as scipy sub-packages) involving random seeds.
     """
 
+    # Determinism (kept even if not used now)
+    try:
+        np.random.seed(2025)
+    except Exception:
+        pass
+
     grid_size = int(np.ceil(np.sqrt(n)))
     radius = 0.5 / grid_size
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_reevo.py (1)

50-51: Avoid reaching into private attributes.

method._evaluator._debug_mode = False breaks encapsulation and may change across versions. Prefer a public setter/flag if available; otherwise add a small helper in the framework to toggle evaluator debug mode.
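Such a helper could be as small as the following; the class names and `set_debug_mode` are hypothetical stand-ins, not the framework's real API:

```python
class Evaluator:
    """Stand-in for the framework evaluator; only the debug flag matters here."""

    def __init__(self) -> None:
        self._debug_mode = True


class Method:
    """Stand-in for a ReEvo-style method that owns an evaluator."""

    def __init__(self) -> None:
        self._evaluator = Evaluator()

    def set_debug_mode(self, enabled: bool) -> None:
        # Public seam: callers flip the flag here instead of reaching
        # into the private _evaluator attribute from outside.
        self._evaluator._debug_mode = enabled
```

With a seam like this, `method.set_debug_mode(False)` replaces `method._evaluator._debug_mode = False` at the call site.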

starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_funsearch.py (1)

33-41: Reuse the created FakeLLM instance.

Minor nit: you instantiate FakeLLM() twice.

 if __name__ == '__main__':
     llm = FakeLLM()
     task = OBPEvaluation()
     method = FunSearch(
-        llm=FakeLLM(),
+        llm=llm,
         profiler=FunSearchProfiler(log_dir='logs/funsearch', log_style='simple', program_db_register_interval=50),
         evaluation=task,
         max_sample_nums=100,
         num_samplers=4,
         num_evaluators=4,
     )
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_eoh.py (3)

21-24: Make data file path robust (avoid CWD fragility).

Opening _data/rand_function.pkl via a bare relative path will break when the script is executed from a different working directory. Prefer resolving relative to this file.

Apply this diff:

-        with open('_data/rand_function.pkl', 'rb') as f:
-            self._functions = pickle.load(f)
+        # Resolve data file relative to repository root derived above.
+        data_path = (ROOT / '_data' / 'rand_function.pkl')
+        with data_path.open('rb') as f:
+            self._functions = pickle.load(f)

8-11: Avoid sys.path mutation in importable modules.

Directly appending to sys.path is brittle and can mask packaging issues. If this is meant to be run as a script, keep it behind the main guard or switch to package-relative imports (recommended).

If you intend to run via python -m starter_ai_agents...fake_eoh, we can remove the sys.path hack and fix imports accordingly. Confirm your run mode and I’ll provide a patch.


33-44: Reuse the constructed FakeLLM instance (avoid duplicate instantiation).

llm = FakeLLM() is created but never used; a second instance is passed into EoH. This is a minor inefficiency and confusing to readers.

Apply this diff:

     llm = FakeLLM()
     task = OBPEvaluation()
     method = EoH(
-        llm=FakeLLM(),
+        llm=llm,
         profiler=EoHTensorboardProfiler(log_dir='logs/eoh', log_style='simple'),
         evaluation=task,
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/circle_packing/EoH_settings&logs/evaluation.py (2)

49-56: Clarify “disjoint” policy (touching allowed?).

Current check allows tangent circles (center_distance == radii_sum). If “disjoint” means strictly no contact, switch to <= in the invalid condition.

-                if center_distance < radii_sum:
+                if center_distance <= radii_sum:
                     return False

57-62: Add basic radius and bounds validation.

Safeguard against non-positive radii and out-of-bounds due to NaNs/Infs.

-            # Check all circles lie inside the unit square [0,1]x[0,1]
+            # Radii must be strictly positive
+            if np.any(circles[:, 2] <= 0) or not np.isfinite(circles).all():
+                return False
+
+            # Check all circles lie inside the unit square [0,1]x[0,1]
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebuis.py (1)

3-9: Remove unused imports.

http.client is never used.

-import http.client
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7d6d8fc and 3e93fe8.

⛔ Files ignored due to path filters (4)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/circle_packing/EoH_settings&logs/EoH_sample_history.zip is excluded by !**/*.zip
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/_data/rand_function.pkl is excluded by !**/*.pkl
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/_data/rand_function_and_score.pkl is excluded by !**/*.pkl
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/_data/weibull_train.pkl is excluded by !**/*.pkl
📒 Files selected for processing (24)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/.env.example (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/README.md (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/circle_packing/EoH_settings&logs/evaluation.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/circle_packing/EoH_settings&logs/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/circle_packing/EoH_settings&logs/template.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/cvrp_construct/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing/online_bin_packing_tutorial.ipynb (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing/run_funsearch.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_funsearch.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_hillclimb.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_randsample.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_reevo.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/evaluation.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/generate_weibull_instances.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/template.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/pymoo_moead/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/tsp_construct/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/vrptw_construct/run_eoh.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebius_temp.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebuis.py (1 hunks)
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/requirements.txt (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-09-25T07:54:05.919Z
Learnt from: Acquent0
PR: Arindam200/awesome-ai-apps#71
File: starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/pymoo_moead/run_eoh.py:7-10
Timestamp: 2025-09-25T07:54:05.919Z
Learning: When reviewing scripts that manipulate sys.path, check what imports are actually being used. The sys.path manipulation may be for local module imports rather than external packages that should be installed via pip.

Applied to files:

  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/circle_packing/EoH_settings&logs/run_eoh.py
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/pymoo_moead/run_eoh.py
  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing/run_eoh.py
📚 Learning: 2025-09-25T07:54:05.919Z
Learnt from: Acquent0
PR: Arindam200/awesome-ai-apps#71
File: starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/pymoo_moead/run_eoh.py:7-10
Timestamp: 2025-09-25T07:54:05.919Z
Learning: The `llm4ad` package is an external Python package that needs to be installed via `pip install llm4ad`, not a local package within the repository structure. The package name `llm4ad` is unrelated to any local directory names that might also be called `llm4ad`.

Applied to files:

  • starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/pymoo_moead/run_eoh.py
🧬 Code graph analysis (17)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/tsp_construct/run_eoh.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebuis.py (1)
  • NebuisLLM (11-85)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/circle_packing/EoH_settings&logs/run_eoh.py (1)
  • main (19-40)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/vrptw_construct/run_eoh.py (1)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebuis.py (1)
  • NebuisLLM (11-85)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_eoh.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (86-110)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_reevo.py (2)
  • FakeLLM (19-32)
  • draw_sample (29-32)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebuis.py (5)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_eoh.py (1)
  • draw_sample (26-29)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_funsearch.py (1)
  • draw_sample (29-30)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_hillclimb.py (1)
  • draw_sample (30-31)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_randsample.py (1)
  • draw_sample (30-31)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_reevo.py (1)
  • draw_sample (29-32)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_reevo.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (86-110)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_eoh.py (2)
  • FakeLLM (16-29)
  • draw_sample (26-29)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/circle_packing/EoH_settings&logs/run_eoh.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/circle_packing/EoH_settings&logs/evaluation.py (1)
  • CirclePackingEvaluation (12-106)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebuis.py (1)
  • NebuisLLM (11-85)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/cvrp_construct/run_eoh.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebuis.py (1)
  • NebuisLLM (11-85)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/circle_packing/EoH_settings&logs/run_eoh.py (1)
  • main (19-40)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/pymoo_moead/run_eoh.py (1)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebuis.py (1)
  • NebuisLLM (11-85)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/evaluation.py (1)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/generate_weibull_instances.py (1)
  • generate_weibull_dataset (3-42)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/run_eoh.py (3)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (86-110)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebuis.py (1)
  • NebuisLLM (11-85)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing/run_eoh.py (1)
  • main (20-41)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_randsample.py (3)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (86-110)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_eoh.py (2)
  • FakeLLM (16-29)
  • draw_sample (26-29)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_funsearch.py (2)
  • FakeLLM (19-30)
  • draw_sample (29-30)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing/run_eoh.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (86-110)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebuis.py (1)
  • NebuisLLM (11-85)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_funsearch.py (5)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (86-110)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_eoh.py (2)
  • FakeLLM (16-29)
  • draw_sample (26-29)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_hillclimb.py (2)
  • FakeLLM (20-31)
  • draw_sample (30-31)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_randsample.py (2)
  • FakeLLM (20-31)
  • draw_sample (30-31)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_reevo.py (2)
  • FakeLLM (19-32)
  • draw_sample (29-32)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/circle_packing/EoH_settings&logs/evaluation.py (1)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/evaluation.py (2)
  • evaluate_program (109-110)
  • evaluate (57-83)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_hillclimb.py (1)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (86-110)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing/run_funsearch.py (2)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/evaluation.py (1)
  • OBPEvaluation (86-110)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebuis.py (1)
  • NebuisLLM (11-85)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebius_temp.py (2)
advance_ai_agents/finance_service_agent/controllers/ask.py (1)
  • nebius_chat (24-34)
advance_ai_agents/finance_service_agent/routes/agentRoutes.py (1)
  • chat (92-133)
🪛 Gitleaks (8.28.0)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/evaluation.py

[high] 89-89: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

🪛 markdownlint-cli2 (0.18.1)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/README.md

73-73: Blank line inside blockquote

(MD028, no-blanks-blockquote)


99-99: Dollar signs used before commands without showing output

(MD014, commands-show-output)


100-100: Dollar signs used before commands without showing output

(MD014, commands-show-output)


108-108: Dollar signs used before commands without showing output

(MD014, commands-show-output)


163-163: Dollar signs used before commands without showing output

(MD014, commands-show-output)


164-164: Dollar signs used before commands without showing output

(MD014, commands-show-output)


227-227: Link text should be descriptive

(MD059, descriptive-link-text)


281-281: Bare URL used

(MD034, no-bare-urls)

🪛 dotenv-linter (3.3.0)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/.env.example

[warning] 2-2: [EndingBlankLine] No blank line at the end of the file

(EndingBlankLine)

🔇 Additional comments (15)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebius_temp.py (1)

67-84: Verification complete: Nebius chat completions response matches OpenAI schema
Confirmed that POST https://api.studio.nebius.ai/v1/chat/completions returns an OpenAI-compatible object with choices[0].message.content and usage.{prompt_tokens,completion_tokens,total_tokens}. No changes needed.
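For downstream code, pulling the fields out of such an OpenAI-compatible payload is straightforward; this sketch operates on a plain dict and assumes no particular client library:

```python
def parse_chat_response(payload: dict) -> tuple[str, dict]:
    """Extract message content and token usage from an OpenAI-style response."""
    content = payload["choices"][0]["message"]["content"]
    usage = payload.get("usage", {})
    return content, {
        "prompt_tokens": usage.get("prompt_tokens", 0),
        "completion_tokens": usage.get("completion_tokens", 0),
        "total_tokens": usage.get("total_tokens", 0),
    }
```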

starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/pymoo_moead/run_eoh.py (1)

6-11: Same sys.path issue as other scripts — point to repo root or use module execution

Use the repository root on sys.path so absolute starter_ai_agents.* imports resolve.

-# Derive project root and ensure it's on sys.path before any llm4ad imports
-BASE_DIR = Path(__file__).resolve().parent
-PROJECT_ROOT = BASE_DIR.parents[2]
-if str(PROJECT_ROOT) not in sys.path:
-    sys.path.append(str(PROJECT_ROOT))
+# Ensure repository root is on sys.path
+BASE_DIR = Path(__file__).resolve().parent
+REPO_ROOT = BASE_DIR.parents[4]
+if str(REPO_ROOT) not in sys.path:
+    sys.path.insert(0, str(REPO_ROOT))
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_local/run_eoh.py (2)

12-12: Remove duplicate import

os is already imported on Line 2.

-import os

6-11: Make sys.path discovery robust and prioritize local sources

Replace hardcoded parents[2] with discovery; use insert(0). Based on learnings.

-# Derive project root and ensure it's on sys.path before any llm4ad imports
-BASE_DIR = Path(__file__).resolve().parent
-PROJECT_ROOT = BASE_DIR.parents[2]
-if str(PROJECT_ROOT) not in sys.path:
-    sys.path.append(str(PROJECT_ROOT))
+# Derive package root containing "llm4ad" and ensure it's on sys.path
+BASE_DIR = Path(__file__).resolve().parent
+PACKAGE_ROOT = None
+for candidate in (BASE_DIR, *BASE_DIR.parents):
+    if (candidate / 'llm4ad' / '__init__.py').exists():
+        PACKAGE_ROOT = candidate
+        break
+if PACKAGE_ROOT is None:
+    PACKAGE_ROOT = BASE_DIR.parents[1]
+if str(PACKAGE_ROOT) not in sys.path:
+    sys.path.insert(0, str(PACKAGE_ROOT))
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing/run_eoh.py (3)

31-39: Avoid accidental high-cost runs; ensure logs dir exists

max_sample_nums=20000 is risky. Read from env with a safe default and create the logs dir first.

-    method = EoH(llm=llm,
-                 profiler=ProfilerBase(log_dir='logs/eoh', log_style='simple'),
-                 evaluation=task,
-                 max_sample_nums=20000,
+    log_dir = Path('logs/eoh'); log_dir.mkdir(parents=True, exist_ok=True)
+    method = EoH(llm=llm,
+                 profiler=ProfilerBase(log_dir=str(log_dir), log_style='simple'),
+                 evaluation=task,
+                 max_sample_nums=int(os.getenv("LLM4AD_MAX_SAMPLES", "20")),
                  max_generations=10,
                  pop_size=4,
                  num_samplers=1,
                  num_evaluators=1,
                  debug_mode=False)

6-11: Fix package root detection and precedence

Use discovery and insert(0) to avoid import errors. Based on learnings.

-# Derive project root and ensure it's on sys.path before any llm4ad imports
-BASE_DIR = Path(__file__).resolve().parent
-PROJECT_ROOT = BASE_DIR.parents[2]
-if str(PROJECT_ROOT) not in sys.path:
-    sys.path.append(str(PROJECT_ROOT))
+# Derive package root containing "llm4ad" and ensure it's on sys.path
+BASE_DIR = Path(__file__).resolve().parent
+PACKAGE_ROOT = None
+for candidate in (BASE_DIR, *BASE_DIR.parents):
+    if (candidate / 'llm4ad' / '__init__.py').exists():
+        PACKAGE_ROOT = candidate
+        break
+if PACKAGE_ROOT is None:
+    PACKAGE_ROOT = BASE_DIR.parents[1]
+if str(PACKAGE_ROOT) not in sys.path:
+    sys.path.insert(0, str(PACKAGE_ROOT))

12-16: Verify that OBPEvaluation is defined in the imported module; searching for its definitions confirms the import resolves.

starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing/run_funsearch.py (1)

6-11: Fix package root detection and prepend to sys.path

parents[2] may not contain the llm4ad package; use discovery and insert(0) to prefer local code. Based on learnings.

-# Derive project root and ensure it's on sys.path before any llm4ad imports
-BASE_DIR = Path(__file__).resolve().parent
-PROJECT_ROOT = BASE_DIR.parents[2]
-if str(PROJECT_ROOT) not in sys.path:
-    sys.path.append(str(PROJECT_ROOT))
+# Derive package root containing "llm4ad" and ensure it's on sys.path before any llm4ad imports
+BASE_DIR = Path(__file__).resolve().parent
+PACKAGE_ROOT = None
+for candidate in (BASE_DIR, *BASE_DIR.parents):
+    if (candidate / 'llm4ad' / '__init__.py').exists():
+        PACKAGE_ROOT = candidate
+        break
+if PACKAGE_ROOT is None:
+    PACKAGE_ROOT = BASE_DIR.parents[1]
+if str(PACKAGE_ROOT) not in sys.path:
+    sys.path.insert(0, str(PACKAGE_ROOT))
⛔ Skipped due to learnings
Learnt from: Acquent0
PR: Arindam200/awesome-ai-apps#71
File: starter_ai_agents/llm_for_algorithm_design/llm4ad_v1.0/example_questions/pymoo_moead/run_eoh.py:7-10
Timestamp: 2025-09-25T07:54:05.919Z
Learning: The `llm4ad` package is an external Python package that needs to be installed via `pip install llm4ad`, not a local package within the repository structure. The package name `llm4ad` is unrelated to any local directory names that might also be called `llm4ad`.
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/circle_packing/EoH_settings&logs/run_eoh.py (1)

6-11: Ensure local llm4ad wins on import; also add repo root for starter_ai_agents import.

Using sys.path.append risks site-packages shadowing local code, and the absolute import starter_ai_agents... won’t resolve when running this file directly unless the repo root is on sys.path. Insert both paths at the front and dedupe. Based on learnings.

 # Derive project root and ensure it's on sys.path before any llm4ad imports
 BASE_DIR = Path(__file__).resolve().parent
 PROJECT_ROOT = BASE_DIR.parents[2]
-if str(PROJECT_ROOT) not in sys.path:
-    sys.path.append(str(PROJECT_ROOT))
+REPO_ROOT = PROJECT_ROOT.parents[2]
+for p in [str(REPO_ROOT), str(PROJECT_ROOT)]:
+    if p in sys.path:
+        sys.path.remove(p)
+    sys.path.insert(0, p)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/vrptw_construct/run_eoh.py (1)

6-11: Make imports robust: prefer local package and repo root precedence.

Same sys.path concern as the circle_packing script; ensure local code isn’t shadowed and absolute imports work when run from this folder. Based on learnings.

 # Derive project root and ensure it's on sys.path before any llm4ad imports
 BASE_DIR = Path(__file__).resolve().parent
 PROJECT_ROOT = BASE_DIR.parents[2]
-if str(PROJECT_ROOT) not in sys.path:
-    sys.path.append(str(PROJECT_ROOT))
+REPO_ROOT = PROJECT_ROOT.parents[2]
+for p in [str(REPO_ROOT), str(PROJECT_ROOT)]:
+    if p in sys.path:
+        sys.path.remove(p)
+    sys.path.insert(0, p)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_reevo.py (1)

8-13: Ensure local package precedence and absolute imports work.

Same sys.path guidance as other scripts; insert at front and include repo root. Based on learnings.

 BASE_DIR = Path(__file__).resolve().parent
 PROJECT_ROOT = BASE_DIR.parents[2]
-if str(PROJECT_ROOT) not in sys.path:
-    sys.path.append(str(PROJECT_ROOT))
+REPO_ROOT = PROJECT_ROOT.parents[2]
+for p in [str(REPO_ROOT), str(PROJECT_ROOT)]:
+    if p in sys.path:
+        sys.path.remove(p)
+    sys.path.insert(0, p)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_funsearch.py (1)

8-13: Harden sys.path handling.

Insert at front and add repo root so absolute imports work when running from this folder. Based on learnings.

 BASE_DIR = Path(__file__).resolve().parent
 PROJECT_ROOT = BASE_DIR.parents[2]
-if str(PROJECT_ROOT) not in sys.path:
-    sys.path.append(str(PROJECT_ROOT))
+REPO_ROOT = PROJECT_ROOT.parents[2]
+for p in [str(REPO_ROOT), str(PROJECT_ROOT)]:
+    if p in sys.path:
+        sys.path.remove(p)
+    sys.path.insert(0, p)
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_eoh.py (1)

21-25: Note: pickle trust boundary.

Loading pickle is unsafe if the file can be tampered with. Ensure _data/rand_function.pkl is repo-controlled and never user-supplied.

starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebuis.py (1)

28-36: Propagate errors from batch method and keep typing accurate.

draw_samples should preserve failures (raise) or return consistent list[str]. Current implementation is fine once draw_sample is fixed; just flagging to verify downstream expectations.
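A minimal sketch of that contract: delegate each prompt to draw_sample and let exceptions propagate, so a successful call always yields list[str]. The class here is a stand-in, not the real NebuisLLM:

```python
class SketchLLM:
    """Stand-in illustrating the draw_samples contract, not the real client."""

    def draw_sample(self, prompt: str) -> str:
        if not prompt:
            # Fail fast instead of returning None and polluting the batch.
            raise ValueError("empty prompt")
        return f"response to: {prompt}"

    def draw_samples(self, prompts: list[str]) -> list[str]:
        # Any per-prompt failure propagates; on success the result is
        # always a list[str] of the same length as `prompts`.
        return [self.draw_sample(p) for p in prompts]
```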

starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/circle_packing/EoH_settings&logs/evaluation.py (1)

91-106: Guard evaluation against malformed candidate outputs to prevent runner crashes.

eva(self.n) and the numpy conversion/indexing can raise and crash the loop. Return -inf on any malformed output instead. This was flagged previously and remains.

Apply this diff:

-    def evaluate(self, eva: callable) -> float:
-        """Evaluate the circle packing solution."""
-        circles = eva(self.n)
-
-        #self.plot_circles(circles)
-        # Convert to numpy array if not already
-        circles = np.array(circles, dtype=np.float64)
-
-        # Verify the solution
-        if not self.verify_circles(circles) or len(circles) != self.n:
-            return -float('inf')
-
-        # Sum of radii is our score
-        score = np.sum(circles[:, 2])
-
-        return score
+    def evaluate(self, eva: callable) -> float:
+        """Evaluate the circle packing solution."""
+        try:
+            raw = eva(self.n)
+        except Exception:
+            return -float('inf')
+
+        try:
+            circles = np.asarray(raw, dtype=np.float64)
+        except (TypeError, ValueError):
+            return -float('inf')
+
+        if circles is None or circles.ndim != 2 or circles.shape[0] != self.n or circles.shape[1] < 3:
+            return -float('inf')
+
+        circles = circles[:, :3]
+        if not np.isfinite(circles).all():
+            return -float('inf')
+
+        if not self.verify_circles(circles):
+            return -float('inf')
+
+        return float(np.sum(circles[:, 2]))

Based on learnings

Comment on lines +6 to +11
# Derive project root and ensure it's on sys.path before any llm4ad imports
BASE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = BASE_DIR.parents[2]
if str(PROJECT_ROOT) not in sys.path:
    sys.path.append(str(PROJECT_ROOT))

🛠️ Refactor suggestion | 🟠 Major

Robust package root discovery and sys.path precedence

Same path issue; use discovery + insert(0). Based on learnings.

-# Derive project root and ensure it's on sys.path before any llm4ad imports
-BASE_DIR = Path(__file__).resolve().parent
-PROJECT_ROOT = BASE_DIR.parents[2]
-if str(PROJECT_ROOT) not in sys.path:
-    sys.path.append(str(PROJECT_ROOT))
+# Derive package root containing "llm4ad" and ensure it's on sys.path
+BASE_DIR = Path(__file__).resolve().parent
+PACKAGE_ROOT = None
+for candidate in (BASE_DIR, *BASE_DIR.parents):
+    if (candidate / 'llm4ad' / '__init__.py').exists():
+        PACKAGE_ROOT = candidate
+        break
+if PACKAGE_ROOT is None:
+    PACKAGE_ROOT = BASE_DIR.parents[1]
+if str(PACKAGE_ROOT) not in sys.path:
+    sys.path.insert(0, str(PACKAGE_ROOT))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# Derive project root and ensure it's on sys.path before any llm4ad imports
BASE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = BASE_DIR.parents[2]
if str(PROJECT_ROOT) not in sys.path:
    sys.path.append(str(PROJECT_ROOT))
# Derive package root containing "llm4ad" and ensure it's on sys.path
BASE_DIR = Path(__file__).resolve().parent
PACKAGE_ROOT = None
for candidate in (BASE_DIR, *BASE_DIR.parents):
    if (candidate / 'llm4ad' / '__init__.py').exists():
        PACKAGE_ROOT = candidate
        break
if PACKAGE_ROOT is None:
    PACKAGE_ROOT = BASE_DIR.parents[1]
if str(PACKAGE_ROOT) not in sys.path:
    sys.path.insert(0, str(PACKAGE_ROOT))
🤖 Prompt for AI Agents
In
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/cvrp_construct/run_eoh.py
around lines 6 to 11, the current PROJECT_ROOT discovery and sys.path.append
approach can pick the wrong directory and places the project at the end of
sys.path; change discovery to find the repository/project root robustly (e.g.,
walk up parents until you find a project marker like pyproject.toml, setup.cfg,
or .git) and then insert that path at the front of sys.path using
sys.path.insert(0, project_root_str) so imports from the project take
precedence.
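The marker walk described in the prompt above could be sketched like this (the marker filenames are common conventions; adjust them to the repository):

```python
import sys
from pathlib import Path

MARKERS = ("pyproject.toml", "setup.cfg", ".git")

def find_repo_root(start: Path) -> Path:
    """Walk up from `start` until a directory holds a known project marker."""
    for candidate in (start, *start.parents):
        if any((candidate / m).exists() for m in MARKERS):
            return candidate
    return start  # no marker found: fall back to the starting directory

def prepend_to_sys_path(root: Path) -> None:
    """Put `root` first so local sources win over site-packages copies."""
    p = str(root)
    if p in sys.path:
        sys.path.remove(p)
    sys.path.insert(0, p)
```

In a run script this would be invoked as `prepend_to_sys_path(find_repo_root(Path(__file__).resolve().parent))` before any llm4ad imports.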

from starter_ai_agents.llm_for_algorithm_design.llm4ad_v1.llm_nebuis import NebuisLLM
from llm4ad.method.eoh import EoH, EoHProfiler

from dotenv import load_dotenv
🛠️ Refactor suggestion | 🟠 Major

Load .env and remove placeholders; validate config

Avoid 'xxx' defaults and ensure required env vars.

 from dotenv import load_dotenv
@@
 def main():
@@
-    llm = NebuisLLM(host='xxx',  # your host endpoint, e.g., 'api.openai.com', 'api.deepseek.com'
-                   key=os.getenv("LLM4AD_API_KEY"),  # your key, e.g., 'sk-abcdefghijklmn'
-                   model='xxx',  # your llm, e.g., 'gpt-3.5-turbo'
-                   timeout=60)
+    load_dotenv()
+    host = (os.getenv("LLM4AD_LLM_HOST") or "").rstrip("/")
+    api_key = os.getenv("LLM4AD_API_KEY")
+    model = os.getenv("LLM4AD_LLM_MODEL") or ""
+    if not host or not api_key or not model:
+        raise RuntimeError("Missing LLM config: set LLM4AD_LLM_HOST, LLM4AD_API_KEY, LLM4AD_LLM_MODEL")
+    llm = NebuisLLM(host=host, key=api_key, model=model, timeout=int(os.getenv("LLM4AD_LLM_TIMEOUT", "60")))

Also applies to: 23-26

Comment on lines +24 to +27
def __init__(self):
super().__init__()
with open('_data/rand_function.pkl', 'rb') as f:
self._functions = pickle.load(f)
⚠️ Potential issue | 🟠 Major

Use file-relative path for the pickle.

Avoid CWD dependence.

-        with open('_data/rand_function.pkl', 'rb') as f:
+        with open(BASE_DIR / '_data' / 'rand_function.pkl', 'rb') as f:
             self._functions = pickle.load(f)

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_funsearch.py
around lines 24 to 27, the code opens '_data/rand_function.pkl' using a
relative-to-CWD path which can break when the working directory differs; change
it to compute a file-relative path using the current module file location (e.g.,
base = os.path.dirname(__file__); path = os.path.join(base, '_data',
'rand_function.pkl')) and open that path instead, and add an import for os at
the top if not already present.

Comment on lines +25 to +31
    def __init__(self):
        super().__init__()
        with open('_data/rand_function.pkl', 'rb') as f:
            self._functions = pickle.load(f)

    def draw_sample(self, prompt: str | Any, *args, **kwargs) -> str:
        return random.choice(self._functions)

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

Fix relative path to pickle; add clear error if missing; avoid CWD dependency

Current path '_data/rand_function.pkl' will fail when run outside the file’s directory.

     def __init__(self):
         super().__init__()
-        with open('_data/rand_function.pkl', 'rb') as f:
-            self._functions = pickle.load(f)
+        pkl_path = BASE_DIR / '_data' / 'rand_function.pkl'
+        if not pkl_path.exists():
+            raise FileNotFoundError(f"Missing rand_function.pkl at {pkl_path}. "
+                                    "Ensure the file exists relative to this script.")
+        with open(pkl_path, 'rb') as f:
+            self._functions = pickle.load(f)

Also note that unpickling arbitrary files is unsafe. If any part of this path is user-controlled, switch to a safe format (e.g., JSON of function strings) or gate with a trusted source check.

Confirm import paths resolve at runtime:


🏁 Script executed:

#!/bin/bash
# Check that the llm4ad package exists under PROJECT_ROOT and OBPEvaluation is importable
fd -a 'llm4ad/__init__.py'
rg -nP 'class\s+OBPEvaluation\b' -g '**/online_bin_packing*/**/*.py' -C2

Length of output: 652


Use absolute Path for rand_function.pkl and add error handling
In fake_hillclimb.py::__init__, replace the hard-coded open('_data/rand_function.pkl', 'rb') with a Path-based lookup from the module’s location (e.g. pkl_path = Path(__file__).resolve().parent.parent / '_data' / 'rand_function.pkl'), raise a clear FileNotFoundError if it doesn’t exist, and avoid relying on the CWD. Ensure your BASE_DIR (or equivalent) is defined, or use Path(__file__).resolve().parent directly. Unpickling arbitrary files is unsafe—validate the source or switch to a safer format.

🤖 Prompt for AI Agents
In
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_hillclimb.py
around lines 25–31, replace the hard-coded open('_data/rand_function.pkl', 'rb')
with a Path-based absolute lookup using the module location (e.g. pkl_path =
Path(__file__).resolve().parent.parent / '_data' / 'rand_function.pkl'), check
pkl_path.exists() and raise a clear FileNotFoundError with a descriptive message
if missing, then open and unpickle inside a try/except to catch and rewrap
pickle errors (provide context) to avoid silent failures and not rely on CWD;
additionally validate the pickle source or prefer loading a safer format
(JSON/JSONL) and document that unpickling arbitrary files is unsafe.

Comment on lines +25 to +28
    def __init__(self):
        super().__init__()
        with open('_data/rand_function.pkl', 'rb') as f:
            self._functions = pickle.load(f)

⚠️ Potential issue | 🔴 Critical

Fix data file path resolution.

Opening _data/rand_function.pkl with a bare relative path breaks as soon as this script is launched from the repo root (the common pattern: python starter_ai_agents/.../fake_randsample.py). The working directory no longer matches the script’s folder, so the file lookup raises FileNotFoundError and the runner dies before any fake sampling happens. Resolve the path relative to BASE_DIR so the data loads regardless of where the command is executed.

     def __init__(self):
         super().__init__()
-        with open('_data/rand_function.pkl', 'rb') as f:
+        data_path = BASE_DIR / '_data' / 'rand_function.pkl'
+        with open(data_path, 'rb') as f:
             self._functions = pickle.load(f)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
     def __init__(self):
         super().__init__()
-        with open('_data/rand_function.pkl', 'rb') as f:
-            self._functions = pickle.load(f)
+        data_path = BASE_DIR / '_data' / 'rand_function.pkl'
+        with open(data_path, 'rb') as f:
+            self._functions = pickle.load(f)
🤖 Prompt for AI Agents
In
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/example_questions/online_bin_packing_fake/fake_randsample.py
around lines 25 to 28, the code opens '_data/rand_function.pkl' with a bare
relative path which fails when the script is run from the repo root; change this
to resolve the file against BASE_DIR (or compute BASE_DIR if not available) and
open the file using the absolute path. Specifically, build the full path by
joining BASE_DIR (or Path(__file__).resolve().parent if you must) with
'_data/rand_function.pkl', ensure you import os or pathlib as needed, and use
that resolved path in the open(...) call so the pickle loads regardless of
current working directory.
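Several of the comments above also note that unpickling arbitrary files is unsafe and suggest a JSON list of function strings instead. A minimal sketch of that alternative, with hypothetical helper names (the data file and schema are assumptions, not part of the PR):

```python
import json
from pathlib import Path

def save_functions_json(path: Path, functions: list[str]) -> None:
    """Store function source strings as JSON; loading JSON never executes code."""
    path.write_text(json.dumps(functions))

def load_functions_json(path: Path) -> list[str]:
    """Load and validate the schema, so malformed files fail with a clear error."""
    data = json.loads(path.read_text())
    if not isinstance(data, list) or not all(isinstance(s, str) for s in data):
        raise ValueError(f"{path} must contain a JSON list of strings")
    return data
```

Unlike `pickle.load`, a tampered JSON file can at worst produce bad strings, which the schema check rejects; it cannot run attacker-controlled code at load time.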

Comment on lines +44 to +49
        if not self.api_key:
            return None, "API key not configured", {}

        if isinstance(prompt, str):
            prompt = [{'role': 'user', 'content': prompt.strip()}]


⚠️ Potential issue | 🔴 Critical

Fix API key check and return type.

self.api_key doesn’t exist (should be self._key), and returning a (None, msg, {}) tuple violates the declared -> str type and downstream expectations.

Apply this diff:

-        if not self.api_key:
-            return None, "API key not configured", {}
+        if not self._key:
+            raise RuntimeError("NebuisLLM: API key not configured")
📝 Committable suggestion


Suggested change
-        if not self.api_key:
-            return None, "API key not configured", {}
+        if not self._key:
+            raise RuntimeError("NebuisLLM: API key not configured")
 
         if isinstance(prompt, str):
             prompt = [{'role': 'user', 'content': prompt.strip()}]

Comment on lines +50 to +67
        while True:
            try:
                url = f"{self._host}/chat/completions"
                payload = {
                    "model": self._model,
                    "messages": prompt,
                    "max_tokens": self._kwargs.get('max_tokens', 4096),
                    "temperature": self._kwargs.get('temperature', 1.0),
                    "top_p": self._kwargs.get('top_p', None),
                    "presence_penalty": self._kwargs.get('presence_penalty', None),
                    "extra_body": {"top_k": self._kwargs.get('top_k', None)},
                }
                headers = {
                    'Authorization': f'Bearer {self._key}',
                    'User-Agent': 'Apifox/1.0.0 (https://apifox.com)',
                    'Content-Type': 'application/json'
                }
                response = requests.post(url, json=payload, headers=headers)

⚠️ Potential issue | 🟠 Major

Build a valid HTTPS URL and honor request timeout.

Docs say host excludes scheme, but requests requires one. Also, the per‑client timeout is not passed.

Apply this diff:

-                url = f"{self._host}/chat/completions"
+                base = self._host.strip()
+                if not (base.startswith("http://") or base.startswith("https://")):
+                    base = f"https://{base}"
+                url = f"{base.rstrip('/')}/chat/completions"
@@
-                response = requests.post(url, json=payload, headers=headers)
+                response = requests.post(url, json=payload, headers=headers, timeout=self._timeout)
📝 Committable suggestion


Suggested change
         while True:
             try:
-                url = f"{self._host}/chat/completions"
+                # Ensure host has a scheme and no trailing slash
+                base = self._host.strip()
+                if not (base.startswith("http://") or base.startswith("https://")):
+                    base = f"https://{base}"
+                url = f"{base.rstrip('/')}/chat/completions"
                 payload = {
                     "model": self._model,
                     "messages": prompt,
                     "max_tokens": self._kwargs.get('max_tokens', 4096),
                     "temperature": self._kwargs.get('temperature', 1.0),
                     "top_p": self._kwargs.get('top_p', None),
                     "presence_penalty": self._kwargs.get('presence_penalty', None),
                     "extra_body": {"top_k": self._kwargs.get('top_k', None)},
                 }
                 headers = {
                     'Authorization': f'Bearer {self._key}',
                     'User-Agent': 'Apifox/1.0.0 (https://apifox.com)',
                     'Content-Type': 'application/json'
                 }
-                response = requests.post(url, json=payload, headers=headers)
+                # Honor the client-configured timeout
+                response = requests.post(
+                    url,
+                    json=payload,
+                    headers=headers,
+                    timeout=self._timeout,
+                )
🤖 Prompt for AI Agents
In starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/llm_nebuis.py around
lines 50 to 67, the code builds the request URL from self._host without a scheme
and calls requests.post without honoring the client's timeout; update the URL
construction to ensure it includes the HTTPS scheme (e.g., if self._host does
not start with "http://" or "https://", prepend "https://") so requests receives
a valid URL, and pass the configured timeout (e.g., timeout=self._timeout or a
sensible default) into requests.post so the per-client timeout is honored.

Comment on lines +50 to +85
        while True:
            try:
                url = f"{self._host}/chat/completions"
                payload = {
                    "model": self._model,
                    "messages": prompt,
                    "max_tokens": self._kwargs.get('max_tokens', 4096),
                    "temperature": self._kwargs.get('temperature', 1.0),
                    "top_p": self._kwargs.get('top_p', None),
                    "presence_penalty": self._kwargs.get('presence_penalty', None),
                    "extra_body": {"top_k": self._kwargs.get('top_k', None)},
                }
                headers = {
                    'Authorization': f'Bearer {self._key}',
                    'User-Agent': 'Apifox/1.0.0 (https://apifox.com)',
                    'Content-Type': 'application/json'
                }
                response = requests.post(url, json=payload, headers=headers)

                if response.status_code == 200:
                    result = response.json()
                    result_response = result["choices"][0]["message"]["content"].strip()
                    if self.debug_mode:
                        self._cumulative_error = 0
                    return result_response
            except Exception as e:
                self._cumulative_error += 1
                if self.debug_mode:
                    if self._cumulative_error == 10:
                        raise RuntimeError(f'{self.__class__.__name__} error: {traceback.format_exc()}.'
                                           f'You may check your API host and API key.')
                else:
                    print(f'{self.__class__.__name__} error: {traceback.format_exc()}.'
                          f'You may check your API host and API key.')
                    time.sleep(2)
                continue

⚠️ Potential issue | 🔴 Critical

Avoid infinite retries, handle non-200 responses, and use backoff.

while True loops forever on non-200 without sleeping. Add bounded retries with exponential backoff, and surface API error messages.

Apply this diff to rewrite the retry loop:

-        while True:
+        max_retries = int(self._kwargs.get('max_retries', 5))
+        for attempt in range(max_retries):
             try:
@@
-                if response.status_code == 200:
+                if response.status_code == 200:
                     result = response.json()
                     result_response = result["choices"][0]["message"]["content"].strip()
-                    if self.debug_mode:
-                        self._cumulative_error = 0
+                    if self.debug_mode:
+                        self._cumulative_error = 0
                     return result_response
+                else:
+                    # Try to extract error details
+                    err_text = None
+                    try:
+                        err = response.json()
+                        err_text = err.get("error") or err
+                    except Exception:
+                        err_text = response.text
+                    msg = f'NebuisLLM HTTP {response.status_code}: {err_text}'
+                    if self.debug_mode and attempt == max_retries - 1:
+                        raise RuntimeError(msg)
+                    time.sleep(min(2 ** attempt, 8))
             except Exception as e:
                 self._cumulative_error += 1
-                if self.debug_mode:
-                    if self._cumulative_error == 10:
-                        raise RuntimeError(f'{self.__class__.__name__} error: {traceback.format_exc()}.'
-                                           f'You may check your API host and API key.')
-                else:
-                    print(f'{self.__class__.__name__} error: {traceback.format_exc()}.'
-                          f'You may check your API host and API key.')
-                    time.sleep(2)
-                continue
+                msg = f'{self.__class__.__name__} error: {traceback.format_exc()}'
+                if self.debug_mode and attempt == max_retries - 1:
+                    raise RuntimeError(msg)
+                time.sleep(min(2 ** attempt, 8))
+        raise RuntimeError('NebuisLLM: exceeded max retries')
📝 Committable suggestion


Suggested change
-        while True:
+        max_retries = int(self._kwargs.get('max_retries', 5))
+        for attempt in range(max_retries):
             try:
                 url = f"{self._host}/chat/completions"
                 payload = {
                     "model": self._model,
                     "messages": prompt,
                     "max_tokens": self._kwargs.get('max_tokens', 4096),
                     "temperature": self._kwargs.get('temperature', 1.0),
                     "top_p": self._kwargs.get('top_p', None),
                     "presence_penalty": self._kwargs.get('presence_penalty', None),
                     "extra_body": {"top_k": self._kwargs.get('top_k', None)},
                 }
                 headers = {
                     'Authorization': f'Bearer {self._key}',
                     'User-Agent': 'Apifox/1.0.0 (https://apifox.com)',
                     'Content-Type': 'application/json'
                 }
                 response = requests.post(url, json=payload, headers=headers)
 
                 if response.status_code == 200:
                     result = response.json()
                     result_response = result["choices"][0]["message"]["content"].strip()
                     if self.debug_mode:
                         self._cumulative_error = 0
                     return result_response
+                else:
+                    # Try to extract error details
+                    err_text = None
+                    try:
+                        err = response.json()
+                        err_text = err.get("error") or err
+                    except Exception:
+                        err_text = response.text
+                    msg = f'NebuisLLM HTTP {response.status_code}: {err_text}'
+                    if self.debug_mode and attempt == max_retries - 1:
+                        raise RuntimeError(msg)
+                    time.sleep(min(2 ** attempt, 8))
             except Exception as e:
                 self._cumulative_error += 1
-                if self.debug_mode:
-                    if self._cumulative_error == 10:
-                        raise RuntimeError(f'{self.__class__.__name__} error: {traceback.format_exc()}.'
-                                           f'You may check your API host and API key.')
-                else:
-                    print(f'{self.__class__.__name__} error: {traceback.format_exc()}.'
-                          f'You may check your API host and API key.')
-                    time.sleep(2)
-                continue
+                msg = f'{self.__class__.__name__} error: {traceback.format_exc()}'
+                if self.debug_mode and attempt == max_retries - 1:
+                    raise RuntimeError(msg)
+                time.sleep(min(2 ** attempt, 8))
+        raise RuntimeError('NebuisLLM: exceeded max retries')
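The bounded-retry-with-backoff pattern in the suggestion above generalizes beyond this client. A minimal sketch, decoupled from HTTP so it can be tested without a network; the helper name and injectable `sleep` parameter are assumptions for illustration:

```python
import time

def with_backoff(fn, max_retries: int = 5, cap: float = 8.0, sleep=time.sleep):
    """Run fn() with bounded retries and exponential backoff (1s, 2s, 4s, ... capped at `cap`)."""
    last_exc = None
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception as e:
            last_exc = e
            if attempt < max_retries - 1:
                sleep(min(2 ** attempt, cap))
    # Surface the final failure instead of looping forever
    raise RuntimeError(f"exceeded {max_retries} retries") from last_exc
```

Injecting `sleep` keeps unit tests fast: the test can record the requested delays instead of actually waiting.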

@@ -0,0 +1,313 @@
<div align="center">
<h1 align="center">
<img src="./assets/figs/logo.png" alt="LLM4AD Logo" style="width: 90%; height: auto;">

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify image asset paths (risk of broken images)

The README references local images under ./assets/figs. Confirm they exist in this repo path; otherwise switch to absolute URLs in the upstream repo.

Also applies to: 39-39, 169-169, 279-279


🏁 Script executed:

#!/bin/bash
# Verify assets relative to this README’s directory
base="starter_ai_agents/llm_for_algorithm_design/llm4ad_v1"
missing=0
for p in assets/figs/logo.png assets/figs/overview.png assets/figs/gui.gif assets/figs/qq.png; do
  if [ ! -f "$base/$p" ]; then echo "Missing: $p"; missing=1; fi
done
exit $missing

Length of output: 120


Include missing image assets or update image paths in README
All referenced images under starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/assets/figs (logo.png, overview.png, gui.gif, qq.png) are missing—links at lines 3, 39, 169, and 279 will be broken. Add those files to the repo or replace the paths with valid URLs.

🤖 Prompt for AI Agents
In starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/README.md (refs at lines
3, 39, 169, 279), the image references (assets/figs/logo.png, overview.png,
gui.gif, qq.png) are missing and will break the README; fix by either adding the
missing image files into
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/assets/figs with those
exact filenames or updating each <img> tag to point to valid existing relative
paths or absolute URLs, then verify the links render locally and commit the
added files or path changes.

Comment on lines +103 to +109
### Install LLM4AD using PiPy

We suggest to install and run LLM4AD in [conda](https://conda.io/projects/conda/en/latest/index.html) env with python>=3.9, <3.13

```bash
$ pip install llm4ad
```

⚠️ Potential issue | 🟡 Minor

Typo: “PiPy” → “PyPI”

Also avoid repeating the same conda sentence twice.

-### Install LLM4AD using PiPy
+### Install LLM4AD from PyPI
📝 Committable suggestion


Suggested change
-### Install LLM4AD using PiPy
+### Install LLM4AD from PyPI
 
 We suggest to install and run LLM4AD in [conda](https://conda.io/projects/conda/en/latest/index.html) env with python>=3.9, <3.13
 
 ```bash
 $ pip install llm4ad
 ```
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

108-108: Dollar signs used before commands without showing output

(MD014, commands-show-output)

🤖 Prompt for AI Agents
starter_ai_agents/llm_for_algorithm_design/llm4ad_v1/README.md around lines 103
to 109: fix the typo "PiPy" to "PyPI" and remove the duplicated conda sentence
so the installation section reads concisely; keep a single mention that we
recommend using a conda environment with python>=3.9,<3.13 and then show the pip
install command.

@Acquent0 changed the title from "add llm4ad v1.0" to "Add Large Language Models for Algorithm Design v1.0" on Oct 11, 2025
@Acquent0
Author

Acquent0 commented Oct 11, 2025
