diff --git a/notebooks/en/rag_with_hugging_face_gemma_elasticsearch.ipynb b/notebooks/en/rag_with_hugging_face_gemma_elasticsearch.ipynb
index 8f1aad4..42bf562 100644
--- a/notebooks/en/rag_with_hugging_face_gemma_elasticsearch.ipynb
+++ b/notebooks/en/rag_with_hugging_face_gemma_elasticsearch.ipynb
@@ -10,7 +10,7 @@
     "\n",
     "Authored By: [lloydmeta](https://huggingface.co/lloydmeta)\n",
     "\n",
-    "This notebook walks you through building a Retrieve-Augmented Generation (RAG) powered by Elasticsearch (ES) and Hugging Face models, letting you toggle between ES-vectorising (your ES cluster vectorises for you when ingesting and querying) vs self-vectorising (you vectorise all your data before sending it to ES).\n",
+    "This notebook walks you through building a Retrieval-Augmented Generation (RAG) system powered by Elasticsearch (ES) and Hugging Face models, letting you toggle between ES-vectorising (your ES cluster vectorises for you when ingesting and querying) vs self-vectorising (you vectorise all your data before sending it to ES).\n",
     "\n",
     "What should you use for your use case? *It depends* 🤷‍♂️. ES-vectorising means your clients don't have to implement it, so that's the default here; however, if you don't have any ML nodes, or your own embedding setup is better/faster, feel free to set `USE_ELASTICSEARCH_VECTORISATION` to `False` in the `Choose data and query vectorisation options` section below!\n",
    "\n",