
Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers #118

Open
trigaten opened this issue Dec 21, 2022 · 0 comments

Comments

@trigaten
Owner

trigaten commented Dec 21, 2022

https://arxiv.org/abs/2212.10559

@trigaten trigaten added the topic label Dec 21, 2022
@trigaten trigaten self-assigned this Dec 21, 2022
@trigaten trigaten changed the title from https://arxiv.org/abs/2212.10559 to Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers Dec 22, 2022
@trigaten trigaten added the paper label Jan 10, 2023