🔍 LLM4AD: Frequently Asked Questions
Q1: How effective is LLM4AD in solving specific problems in algorithm design?
For specific problems, the effectiveness of LLM4AD depends on various factors, and abstracting a concise problem description from a complex problem remains challenging. Beyond template design, what matters most is how you distill the key components of the algorithm design, such as heuristic rules or formulas, into a form that large language models can work with. Our experience shows that extracting these key elements and handing them to the LLM typically yields better results than asking the model to design the entire algorithm framework.
Q2: Why might the deepseek-v3 interface run normally while the r1 interface fails?
The r1 interface may fail because its response time exceeds the platform's default time limit for large model responses. We recommend increasing the response time limit on the platform to resolve this issue. Note that response times vary depending on the specific task design and model type, so for r1, it's advisable to set a larger timeout value.
Q3: How can I modify the timeout setting in HttpsApi?
You can modify the timeout setting in HttpsApi by setting it to a sufficiently large value. If you're running run_gui or a program directly from GitHub, you should make this modification in the corresponding configuration section.
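For reference, below is a minimal sketch of setting a large timeout when constructing the LLM interface in a script. The import path and exact parameter names are assumptions based on the current repository layout and may differ in your version; the host, key, and model values are placeholders.

```python
from llm4ad.tools.llm.llm_api_https import HttpsApi  # import path may differ between versions

# Placeholder values; the key point is the timeout argument (in seconds).
llm = HttpsApi(
    host='your.llm.host',      # provider host
    key='sk-xxxxxxxx',         # your API key
    model='your-model-name',   # model name
    timeout=300,               # raise this for slow models such as r1
)
```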
Q4: What are some possible reasons for OpenAI API request failures?
Common reasons for OpenAI API request failures include an incorrect URL address and an unstable network connection. According to user feedback, network instability is frequently the culprit, and such problems are usually resolved once the connection is fixed.
Q5: How can I write testing functionality for my own problems?
You need to create two primary files: evaluation.py and template.py. Implement a class that inherits from llm4ad.base.Evaluation. In this class, override the evaluate_program function, which defines how algorithms are scored (higher scores are better). When initializing the class, pass template_program to its parent class to standardize the function style, ensuring generated algorithms conform to a specific format (function name, parameter list, docstring, etc.). You can refer to video tutorials on Bilibili for detailed guidance.
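As an illustration, a minimal sketch of such an evaluation class is given below. The problem, the template string, and the toy scoring logic are hypothetical, and the exact constructor arguments of llm4ad.base.Evaluation may vary between versions; the sketch only shows the overall shape: inherit from Evaluation, pass template_program to the parent, and override evaluate_program so that higher scores are better.

```python
from typing import Any, Callable

from llm4ad.base import Evaluation

# Hypothetical template: fixes the function name, parameter list, and docstring
# that every generated algorithm must follow.
template_program = '''
def my_heuristic(state: list) -> float:
    """Return a score for the given state (higher is better)."""
    return 0.0
'''


class MyEvaluation(Evaluation):
    def __init__(self, **kwargs):
        super().__init__(
            template_program=template_program,  # standardizes the generated function's format
            timeout_seconds=60,                 # algorithms running longer than this are discarded
            **kwargs,
        )

    def evaluate_program(self, program_str: str, callable_func: Callable) -> Any:
        # Score the generated algorithm on a few toy instances; higher is better.
        # Replace this with your own benchmark.
        test_states = [[1.0, 2.0, 3.0], [0.5, 0.5]]
        try:
            return sum(callable_func(s) for s in test_states)
        except Exception:
            return None  # invalid or crashing algorithms are discarded
```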
Q6: Why might the GUI show empty results despite normal run logs after setting up the LLM?
This likely occurs because the LLM's response time is long and results haven't returned yet. Check if sample files exist in the samples/samples_1~200.json path. If they don't exist, it indicates that the LLM hasn't returned a response yet. Note that the samples folder and run_log.txt file are typically located in the same directory.
Q7: Where in randsample.py should the settings for the run example be configured?
The documentation may be outdated and has not yet caught up with some recent adjustments in the repository. If you want to try search methods, you can refer to the run_eoh.py file. If you prefer the random sampling method, you can try our public tutorial notebook on Colab: online_bin_packing_tutorial.ipynb.
Q8: Can API keys generated in OpenAI and DeepSeek be used directly for testing on the LLM4AD platform?
If you've purchased DeepSeek API quota, you may need to modify the corresponding host address to api.deepseek.com and set the API key and model name (e.g., deepseek-chat). Whether you need to add funds depends on the specific policies of the large model platform you're using. If you're running the tutorial notebook on Colab, testing will be fast because the default sample size is usually 10.
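As a sketch, these DeepSeek-specific values plug into the same HttpsApi constructor shown under Q3 when running from a script; the key is a placeholder, and the import path and argument names may vary by version.

```python
from llm4ad.tools.llm.llm_api_https import HttpsApi  # import path may differ between versions

llm = HttpsApi(
    host='api.deepseek.com',  # DeepSeek host
    key='sk-xxxxxxxx',        # your DeepSeek API key
    model='deepseek-chat',    # model name
    timeout=60,
)
```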
Q9: What are the advantages of LLMs in algorithm design?
LLMs are not always superior to neural networks, simulated annealing, or expert experience in algorithm design. However, compared to neural networks, LLMs offer these advantages:
Generally higher degree of automation
No need for model training
Algorithms presented in language and code form, providing better interpretability compared to black-box neural networks
Q10: Are there cases or work applying LLMs to solve practical problems in real-world scenarios?
There are numerous practical application cases. You can search based on your areas of interest to find relevant examples. Additionally, we've written a survey on applications in algorithm design that you might find interesting.
Q11: Why does the GUI display "please configure the settings of LLM" even after setting host, key, and model in run_gui.py?
You need to configure the host, key, and model in the GUI interface itself, not in run_gui.py.
Q12: How does the EoH implementation in LLM4AD differ from the original EoH?
The implementations in the EoH code and the LLM4AD platform are fundamentally the same, with two minor differences:
Operators used: Our experiments show that the four operators (m1, m2, e1, and e2) are critical for performance, while the remaining one has negligible impact on the results.
Algorithm sampling method: Whether you sample N algorithms from one operator or iterate four times to sample N/4 algorithms per operator, the outcome is the same—as long as the total number of evaluations is fixed, all four operators are used with equal probability. The frequency of updating the population has limited influence on final performance once you have enough samples. The updated implementation in LLM4AD simplifies parallel sampling and evaluation.
We recommend using the LLM4AD implementation, as it is more robust and modular.
Q13: How do I set whether to maximize or minimize the objective function?
The LLM4AD platform defaults to maximizing objectives. If you want to minimize, you can add a negative sign to the value returned in the evaluation.
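For instance, continuing the hypothetical MyEvaluation sketch from Q5, a minimization problem can be handled by negating the cost inside evaluate_program (the toy instances and the cost computation are placeholders):

```python
    def evaluate_program(self, program_str: str, callable_func) -> float:
        # Hypothetical cost computed on toy instances; for the problem itself, lower is better.
        cost = sum(callable_func(s) for s in ([1.0, 2.0, 3.0], [0.5, 0.5]))
        # The platform maximizes the returned value, so negate the cost to effectively minimize it.
        return -cost
```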
Q14: Are all comments in template.py helpful for EoH to generate prompts?
Comments are helpful for EoH, but the main purpose is to clearly explain what the inputs and outputs are. While clarity is important, simpler is better once the essentials are explained.
Q15: Why are there different pack_circles() function definitions in template.py and evaluation.py?
EoH uses the function definition in template.py. The content under if __name__ == '__main__': in evaluation.py serves no purpose except for debugging.
Q16: What principles guide the selection or design of the initial function definition in template.py?
The principle for initial definitions is to keep them as simple as possible, or to use common simple algorithms as initial values if available. EoH doesn't actually need this initial value; it just needs a template. Regarding complexity, there's a time limit for running algorithms, and those exceeding this limit are discarded (e.g., task = CirclePackingEvaluation(timeout_seconds=1200) means algorithms running longer than 1200 seconds will be discarded; you can modify this timeout_seconds to limit the maximum runtime of algorithms).
Q17: Where are the results and the algorithms saved after EoH optimization?
They're saved in the log_dir. For example, with profiler=EoHProfiler(log_dir='logs/eoh', log_style='simple'), they'll be saved in logs/eoh.
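Putting the pieces together, a hedged sketch of launching EoH with this profiler looks roughly like the following. MyEvaluation is the hypothetical class from Q5, and the import paths and EoH constructor arguments are assumptions that may differ slightly between versions.

```python
from llm4ad.method.eoh import EoH, EoHProfiler       # import paths may differ between versions
from llm4ad.tools.llm.llm_api_https import HttpsApi

llm = HttpsApi(host='api.deepseek.com', key='sk-xxxxxxxx',
               model='deepseek-chat', timeout=60)     # placeholder credentials
task = MyEvaluation()                                 # hypothetical evaluation class from Q5

method = EoH(
    llm=llm,
    evaluation=task,
    profiler=EoHProfiler(log_dir='logs/eoh', log_style='simple'),  # results are written under logs/eoh
    max_sample_nums=20,                                            # assumed parameter: total sampling budget
)
method.run()

# After the run, the results and the generated algorithms are saved under logs/eoh.
```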
Q18: How do I add existing or manually designed algorithms into the population?
We provide an example of adding seed algorithms to the initial population in this path: https://github.com/Optima-CityU/LLM4AD/tree/main/example/method_add_seeds
Q19: How do I resume computation from half-completed or existing results?
We provide an example of resuming computation in this path: https://github.com/Optima-CityU/LLM4AD/tree/main/example/method_Resume