GNN-based Meta-Learning for Sparse Portfolio Optimization #52
Comments
Hi @kayuksel! I think I saw your post on Reddit; very cool stuff :) We'd always love to integrate new techniques into EvoTorch, so if you have time, maybe you'd consider reformatting your code to work with the EvoTorch SearchAlgorithm API and opening a pull request? Otherwise, I'll see if I can get to doing that myself when I find time.
Thank you very much, Timothy. Would it help if I provided a highly simplified version (~75 lines), as below? FYI, I also created a game-theoretic (adversarial) version of the generative model for global optimization that utilizes a positive-surprise mechanism obtained through a surrogate model (critic) trained simultaneously with Adaptive Gradient Clipping. It is quite competitive against the best optimizers in Nevergrad on large-scale non-convex optimization problems, especially with noisy rewards. Note: the only bottleneck seems to be random-seed sensitivity, for which GradInit (arxiv.org/abs/2102.08098) seems to be a solution.
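The ~75-line version itself is not reproduced here; below is only a minimal illustrative sketch of the core idea as described (a generator network produces the population at each iteration and is trained directly with gradients of the loss). The architecture, population size, and objective are placeholder assumptions, and the critic/AGC machinery of the adversarial variant is omitted:

```python
import torch
import torch.nn as nn

DIM, POP, LATENT = 30, 256, 32  # solution size, population size, noise size (assumed)

# Generator: maps latent noise to a population of candidate solutions.
generator = nn.Sequential(
    nn.Linear(LATENT, 128), nn.ReLU(),
    nn.Linear(128, DIM),
)
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

def objective(x: torch.Tensor) -> torch.Tensor:
    # Placeholder differentiable objective (sphere); any torch-differentiable loss works.
    return (x ** 2).sum(dim=-1)

for step in range(1000):
    z = torch.randn(POP, LATENT)
    candidates = generator(z)            # (POP, DIM) population for this iteration
    loss = objective(candidates).mean()  # mean loss over the generated population
    opt.zero_grad()
    loss.backward()                      # gradients of the loss flow into the generator
    opt.step()
```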
It is also well suited to combinatorial optimization: simply sample binary solutions using the sigmoid of the logits from the generator.
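A hedged sketch of what that sampling could look like, continuing the generator sketch above; the straight-through gradient trick and the binary_objective function are my assumptions for illustration, not taken from the repository:

```python
import torch

z = torch.randn(POP, LATENT)
logits = generator(z)                      # (POP, DIM) unnormalized scores
probs = torch.sigmoid(logits)              # per-bit inclusion probabilities
hard = torch.bernoulli(probs)              # sampled 0/1 candidate solutions

# Straight-through estimator (an assumption; the original may route gradients
# differently): hard samples in the forward pass, sigmoid gradients backward.
samples = hard.detach() + probs - probs.detach()
loss = binary_objective(samples).mean()    # binary_objective is hypothetical
loss.backward()
```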
Hello again, I've made a quick comparison on 30-dim Schwefel (arguably one of the hardest standard benchmark functions) against Nevergrad here. Sincerely, K
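The linked comparison itself is not reproduced here; for reference, a Nevergrad baseline on 30-dim Schwefel can be set up with its public API like this (the choice of NGOpt and the budget are assumptions):

```python
import numpy as np
import nevergrad as ng

def schwefel(x: np.ndarray) -> float:
    # Schwefel function: global minimum 0 at x_i ≈ 420.9687, domain [-500, 500].
    return float(418.9829 * x.size - np.sum(x * np.sin(np.sqrt(np.abs(x)))))

param = ng.p.Array(shape=(30,)).set_bounds(-500.0, 500.0)
optimizer = ng.optimizers.NGOpt(parametrization=param, budget=10_000)
recommendation = optimizer.minimize(schwefel)
print("best value:", schwefel(recommendation.value))
```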
FYI, I have also applied it to the MovieLens 1M matrix-factorization problem (with 500K parameters); the code is in the repository.
Hello,
Let me start by saying that I am a fan of your work here. I have recently open-sourced my GNN-based meta-learning method for optimization. I have applied it to the real-world sparse index-tracking problem (after an initial benchmarking on the Schwefel function), and it seems to significantly outperform Fast CMA-ES, both in producing robust solutions on the blind test set and in time (total duration and iterations) and space complexity. I include the link to my repository here, in case you would consider adding the method or the benchmarking problem to your repository. Note: the GNN, which learns how to generate populations of solutions at each iteration, is trained using gradients of the loss function, as opposed to the black-box algorithms here.
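A rough sketch of the kind of objective involved, for readers unfamiliar with index tracking (this is my formulation for illustration; the repository's exact loss, cardinality k, and sparsification scheme may differ):

```python
import torch

def sparse_weights(logits: torch.Tensor, k: int = 20) -> torch.Tensor:
    # Keep only the top-k logits per candidate portfolio and renormalize with
    # a softmax; k=20 and this top-k relaxation are assumptions.
    idx = logits.topk(k, dim=-1).indices
    mask = torch.zeros_like(logits).scatter(-1, idx, 1.0)
    return torch.softmax(logits.masked_fill(mask == 0, float("-inf")), dim=-1)

def tracking_error(weights, asset_returns, index_returns):
    # weights: (pop, n_assets); asset_returns: (T, n_assets); index_returns: (T,)
    portfolio = asset_returns @ weights.T            # (T, pop) portfolio returns
    return ((portfolio - index_returns[:, None]) ** 2).mean(dim=0)
```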
Sincerely, K