Docs Content Part 1: Homepage and getting started #448
base: main
Conversation
…rehensive getting started with dependency details, and expanded concepts page with Monarch, Services, TorchStore, and RL workflows
Co-authored-by: Svetlana Karslioglu <[email protected]>
- Enhanced homepage with Monarch foundation emphasis, technology stack highlights, validated examples, and clear navigation paths
- Expanded getting started with detailed dependency explanations (Monarch, vLLM, TorchTitan, TorchStore, PyTorch Nightly)
- Converted installation and verification steps to numbered lists for better readability
- Removed FAQ references as FAQ page has been removed
- Fixed GPU/process terminology in code examples
5151f64 to f9b136a
Need to clean up those references to usage.md: https://github.com/meta-pytorch/forge/actions/runs/18579946858/job/52972683886?pr=448#step:11:103
conda activate forge
```

3. **Run Installation Script**
This may be OK for now, but possibly as soon as EOD today we may have different instructions. cc @joecummings
If we keep a script, what the script does will be different. I think we can ship this for now and update it once we're done.
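For reference, a minimal sketch of the getting-started flow under discussion, assuming a conda environment named `forge`; the Python version and the installation script path are assumptions, not confirmed in this PR, and may change per the comment above:

```bash
# Sketch only; environment name, Python version, and script path are assumptions.
conda create -n forge python=3.10 -y
conda activate forge
git clone https://github.com/meta-pytorch/forge.git
cd forge
./scripts/install.sh  # hypothetical location of the installation script discussed above
```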
Fine-tune Llama 3 8B on your data. **Requires: 2+ GPUs**

1. **Download the Model**
@ebsmothers @daniellepintz - could you two please review these commands for SFT and ensure this is what we want?
@AlannaBurke let's use this command:
hf download meta-llama/Meta-Llama-3.1-8B-Instruct --local-dir /tmp/Meta-Llama-3.1-8B-Instruct --exclude "original/consolidated.00.pth"
Actually there is no need to download the model anymore; we simplified that in the configs. Users just need to run this command once they have access to the model:
python -m apps.sft.main --config apps/sft/llama3_8b.yaml
This is an easier workflow for users.
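For completeness, a minimal sketch of that simplified SFT flow, assuming the user has access to the gated Llama 3.1 checkpoint on the Hugging Face Hub; the authentication step is an assumption and only applies if you are not already logged in:

```bash
# Simplified SFT flow described above: no manual model download step needed.
hf auth login  # assumption: only required if not already authenticated to the Hub
python -m apps.sft.main --config apps/sft/llama3_8b.yaml
```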
- Steps to reproduce
- Expected vs actual behavior

**Diagnostic command:**
This is a really good idea - let's keep this, and I think we should come up with a script for this in our issue templates. cc @joecummings @daniellepintz? Not sure who to tag here.
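As a starting point, a hedged sketch of what such a diagnostic script might collect for the issue template; the exact fields and package names are assumptions:

```bash
# Hypothetical diagnostic script for issue reports; all names below are assumptions.
echo "=== Python / PyTorch ==="
python -c "import sys, torch; print(sys.version.split()[0]); print('torch', torch.__version__, 'cuda', torch.version.cuda)"
echo "=== GPUs ==="
nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv,noheader 2>/dev/null || echo "nvidia-smi not available"
echo "=== Installed packages of interest ==="
pip list 2>/dev/null | grep -Ei "forge|torch|vllm|monarch" || true
```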
Thanks @AlannaBurke!
I split the docs work into two PRs; this is the first. Let's get this merged ASAP.