
New Notebook showcasing how to perform prompt tuning using the PEFT library. #33

Merged: 11 commits into huggingface:main on Feb 26, 2024

Conversation

@peremartra (Contributor)

What does this PR do?

This PR adds a new notebook showcasing how to use the PEFT library to perform prompt tuning. I also provide a brief explanation of what prompt tuning is and how it works.

Since this is my first PR, I hope I have created the notebook in the correct style. If not, I will adapt it and make any necessary modifications.

Thank you for reviewing it. @MKhalusova
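For readers, a minimal sketch of what prompt tuning with PEFT looks like. The model name and hyperparameters here are illustrative assumptions, not necessarily the notebook's exact choices:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

model_name = "bigscience/bloomz-560m"  # illustrative base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
foundational_model = AutoModelForCausalLM.from_pretrained(model_name)

# Prompt tuning freezes the base model and learns only a small set of
# "virtual token" embeddings that are prepended to every input.
config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.RANDOM,  # initialize the virtual tokens randomly
    num_virtual_tokens=8,                        # illustrative size
    tokenizer_name_or_path=model_name,
)
peft_model = get_peft_model(foundational_model, config)
peft_model.print_trainable_parameters()  # only the virtual-token embeddings are trainable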


@MKhalusova (Contributor)

@pacman100 Can you please review this notebook on prompt tuning with PEFT?

@MKhalusova (Contributor) commented Feb 19, 2024

@peremartra Thank you for contributing a notebook! This looks very useful! There are a few more steps required:

  • Add the notebook to the _toctree.yml
  • Add the notebook to the list of the latest notebooks in the index.md. As this is the most recent addition, put it at the top of the list.
  • Right after the first header (notebook title), add yourself as an author, like this: _Authored by: [Your Name](https://huggingface.co/your_profile)_ Feel free to use either your Hugging Face profile, or GitHub profile, it's up to you which one to link.

I also fixed the links to the other notebooks in index.md; all of them were missing the .ipynb extension, so the links were broken.
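For reference, the registration steps above amount to small edits like the following. The exact _toctree.yml entry format and file name are assumptions based on typical Hugging Face doc setups, not taken from this PR:

# _toctree.yml (assumed entry format and file name)
- title: Prompt Tuning With PEFT
  local: prompt_tuning_peft

And the author line, using the placeholder from the checklist above, goes right under the notebook's title header:

_Authored by: [Your Name](https://huggingface.co/your_profile)_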
@peremartra (Contributor, Author)


@MKhalusova Modifications done! Thanks for the feedback :-)

@peremartra (Contributor, Author)

Hi @MKhalusova, I just finished another notebook, about fine-tuning with QLoRA. I don't know how to proceed: should I open a second PR, or wait for this one to be closed before opening a new one?

@MKhalusova (Contributor)


Feel free to open a separate PR for a different notebook. We can review them in parallel, and it's better to keep one notebook per PR.

@pacman100 (Feb 23, 2024)

Nice overview!



@peremartra (Contributor, Author)

Thanks!

@pacman100 (Feb 23, 2024)

Using sampling with temperature=0.2 and top_p=0.95 would result in better generations.

def get_outputs(model, inputs, max_new_tokens=100):
    outputs = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_new_tokens=max_new_tokens,
        temperature=0.2,
        top_p=0.95,
        do_sample=True,
        repetition_penalty=1.2,  # discourage repeated phrases
        early_stopping=True,  # the model can stop before reaching max_new_tokens
        eos_token_id=tokenizer.eos_token_id
    )
    return outputs
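A minimal usage sketch (assuming `model` and `tokenizer` are already loaded in earlier cells; the prompt is illustrative):

inputs = tokenizer("I want you to act as a motivational coach.", return_tensors="pt")
outputs = get_outputs(model, inputs, max_new_tokens=50)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))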


@peremartra (Contributor, Author)

Thank you very much! I was aiming for responses that are as consistent as possible, so that the explanation in the notebook always aligns with the generated answer. With these values, a different response is obtained on each execution.

Would it be acceptable to leave it as is, with these parameters commented out, indicating that users can uncomment and adjust them for better but more varied responses?
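A sketch of what that compromise might look like (a hypothetical variant of get_outputs, not necessarily the notebook's final code):

def get_outputs(model, inputs, max_new_tokens=100):
    outputs = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_new_tokens=max_new_tokens,
        # Uncomment for better but more varied generations:
        # do_sample=True,
        # temperature=0.2,
        # top_p=0.95,
        repetition_penalty=1.2,  # discourage repeated phrases
        eos_token_id=tokenizer.eos_token_id
    )
    return outputs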

@MKhalusova (Contributor)

I think it's ok to leave as is, and add this context in markdown before the code. It's always great to see the reasoning behind parameter choices, and alternative options.

@peremartra (Contributor, Author)

Done

@pacman100 (Feb 23, 2024)

"I want you to act as an English translator," -> "I want you to act as a motivational coach. "



@peremartra (Contributor, Author)

Oops... modified!

@pacman100 (Feb 23, 2024)

Instead of this, you can do the following, which avoids loading a second copy of the base model into memory.

loaded_model_prompt.load_adapter(output_directory_sentences, adapter_name="quotes")
loaded_model_prompt.set_adapter("quotes")
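In context, the pattern looks roughly like this (a sketch assuming the adapters were saved earlier with save_pretrained; `foundational_model` and `output_directory_prompt` are assumed names for the base model and the first adapter's save path, following the snippet above):

from peft import PeftModel

# Load the base model once and attach the first adapter...
loaded_model_prompt = PeftModel.from_pretrained(
    foundational_model,       # assumed: the base model loaded in earlier cells
    output_directory_prompt,  # assumed: where the first adapter was saved
    adapter_name="prompt",
)
# ...then attach the second adapter to the same object instead of
# loading another copy of the base model.
loaded_model_prompt.load_adapter(output_directory_sentences, adapter_name="quotes")
loaded_model_prompt.set_adapter("quotes")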


@peremartra (Contributor, Author)

Amazing! Thanks, I changed the code!

@pacman100 (Feb 23, 2024)

This generation is NSFW.



@peremartra (Contributor, Author)

I changed the input sentence to avoid this generation.

@MKhalusova (Contributor)

Can you also update the output?

@peremartra (Contributor, Author)

Yes, for sure! I'm executing the notebook again and checking the responses obtained with different configurations. I will upload the new version in a few minutes.

@peremartra (Contributor, Author)

The generation is now: 'There are two nice things that should matter to you: the weather and your health.'

I checked with different configurations and all responses are SFW :-)

@pacman100 left a comment

Thank you @peremartra for the insightful notebook on prompt tuning! 🔥

Left a few comments.

@MKhalusova (Contributor, Feb 23, 2024)

Peft->PEFT



@peremartra (Contributor, Author)

Done

@MKhalusova (Contributor, Feb 23, 2024)

Let's remove the output of this cell.



@peremartra (Contributor, Author)

Done

@MKhalusova (Contributor, Feb 23, 2024)

Same here, let's remove the output



@peremartra (Contributor, Author)

Done

@MKhalusova (Contributor, Feb 23, 2024)

Again, we can remove the output of this cell



@peremartra (Contributor, Author)

Done

@MKhalusova (Contributor, Feb 23, 2024)

I would suggest placing the last line (display(train_sample_prompt)) into a separate cell, so that we can show its output but remove the output of all the downloads.



@peremartra (Contributor, Author)

Done

@MKhalusova (Contributor, Feb 23, 2024)

Same here: if you place the last line in a separate cell, we can show the output of the display and remove the downloading output.



@peremartra (Contributor, Author)

Done

@MKhalusova (Contributor, Feb 23, 2024)

Fine-tuning (capitalization)



@peremartra (Contributor, Author)

Done

@MKhalusova (Contributor)

I left a few comments to polish things, and we can aim to publish the notebook early next week :)

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@peremartra (Contributor, Author)

Thanks for your corrections! I think all is fixed now.

@MKhalusova (Contributor) left a comment

Great work! Let's merge :)

@MKhalusova merged commit 8e662a7 into huggingface:main on Feb 26, 2024 (1 check passed).