diff --git a/chapter3.md b/chapter3.md
index 8b13789..1c1055d 100644
--- a/chapter3.md
+++ b/chapter3.md
@@ -1 +1,24 @@
+## Chapter 3: Prompt Engineering
+Prompt engineering is central to working with LangChain: well-designed prompts draw more accurate and useful answers from the model.
+```
+from langchain.prompts import PromptTemplate
+
+
+template = """
+You are a helpful assistant that translates {input_language} to {output_language}.
+
+Translate the following text:
+{text}
+
+Translation:
+"""
+
+prompt = PromptTemplate(
+    input_variables=["input_language", "output_language", "text"],
+    template=template
+)
+
+
+print(prompt.format(input_language="English", output_language="French", text="Hello, how are you?"))
+```
\ No newline at end of file
diff --git a/chapter4.md b/chapter4.md
new file mode 100644
index 0000000..9a8ea2e
--- /dev/null
+++ b/chapter4.md
@@ -0,0 +1,29 @@
+## Chapter 4: Building the Chain
+Chains are one of LangChain's core concepts: multiple components can be linked in sequence to perform more complex tasks.
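+Stripped of LangChain, a sequential chain is just function composition: each step's output becomes the next step's input. Here is a minimal plain-Python sketch of the idea (the `fake_llm` function is a stand-in for a real model call, not part of any library):
+
+```
+def fake_llm(prompt):
+    # Stand-in for a real LLM call; echoes the prompt so the data flow is visible.
+    return f"<answer to: {prompt}>"
+
+
+def chain(*templates):
+    # Run the templates in order, feeding each output into the next template.
+    def run(text):
+        for template in templates:
+            text = fake_llm(template.format(input=text))
+        return text
+    return run
+
+
+naming_chain = chain(
+    "What is a good name for a company that makes {input}?",
+    "Write a catchphrase for the following company: {input}",
+)
+print(naming_chain("eco-friendly water bottles"))
+```
+
+The example that follows does the same thing with real components: each step is an `LLMChain`, and `SimpleSequentialChain` handles the plumbing between them.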
+
+```
+from langchain.chains import SimpleSequentialChain, LLMChain
+from langchain.llms import OpenAI
+from langchain.prompts import PromptTemplate
+
+llm = OpenAI(temperature=0.7)
+
+first_prompt = PromptTemplate(
+    input_variables=["product"],
+    template="What is a good name for a company that makes {product}?",
+)
+chain_one = LLMChain(llm=llm, prompt=first_prompt)
+
+second_prompt = PromptTemplate(
+    input_variables=["company_name"],
+    template="Write a catchphrase for the following company: {company_name}",
+)
+chain_two = LLMChain(llm=llm, prompt=second_prompt)
+
+
+overall_chain = SimpleSequentialChain(chains=[chain_one, chain_two], verbose=True)
+
+
+print(overall_chain.run("eco-friendly water bottles"))
+
+```
\ No newline at end of file
diff --git a/chapter5.md b/chapter5.md
new file mode 100644
index 0000000..cf149a5
--- /dev/null
+++ b/chapter5.md
@@ -0,0 +1,23 @@
+## Chapter 5: Utilizing Memory
+The memory component preserves conversational context across turns, making the dialogue more natural.
+
+```
+from langchain.memory import ConversationBufferMemory
+from langchain.llms import OpenAI
+from langchain.chains import ConversationChain
+
+
+memory = ConversationBufferMemory()
+
+
+conversation = ConversationChain(
+    llm=OpenAI(temperature=0),
+    memory=memory,
+    verbose=True
+)
+
+
+print(conversation.predict(input="Hi, my name is Alice"))
+print(conversation.predict(input="What's my name?"))
+print(conversation.predict(input="What have we talked about so far?"))
+```
\ No newline at end of file
diff --git a/chapter6.md b/chapter6.md
new file mode 100644
index 0000000..d205498
--- /dev/null
+++ b/chapter6.md
@@ -0,0 +1,29 @@
+## Chapter 6: Document Retrieval and Question Answering
+You can use LangChain to build a system that searches large document collections and answers questions about their contents.
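+Before the code, it helps to see the shape of the pipeline: split the document into chunks, embed each chunk as a vector, retrieve the chunks nearest the query, and pass them to the model. A toy sketch of the retrieval step alone, using word overlap in place of real embeddings (none of this is LangChain API):
+
+```
+def embed(text):
+    # Toy "embedding": the set of lowercase words. Real systems use dense vectors.
+    return set(text.lower().split())
+
+
+def retrieve(chunks, query, k=1):
+    # Rank chunks by word overlap with the query; vector stores use cosine similarity instead.
+    ranked = sorted(chunks, key=lambda c: len(embed(c) & embed(query)), reverse=True)
+    return ranked[:k]
+
+
+chunks = ["LangChain links LLM components into pipelines.", "Chroma stores embedding vectors on disk."]
+print(retrieve(chunks, "where are the embedding vectors stored?"))
+```
+
+In the real example below, `OpenAIEmbeddings` and `Chroma` play the roles of `embed` and `retrieve`, and `RetrievalQA` feeds the retrieved chunks to the model.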
+
+```
+from langchain.embeddings.openai import OpenAIEmbeddings
+from langchain.vectorstores import Chroma
+from langchain.text_splitter import CharacterTextSplitter
+from langchain.llms import OpenAI
+from langchain.chains import RetrievalQA
+
+
+with open('your_document.txt', 'r') as file:
+    raw_text = file.read()
+
+
+text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
+texts = text_splitter.split_text(raw_text)
+
+
+embeddings = OpenAIEmbeddings()
+docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))])
+
+
+qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever())
+
+
+query = "What is the main topic of this document?"
+print(qa.run(query))
+```
\ No newline at end of file
diff --git a/chapter7.md b/chapter7.md
new file mode 100644
index 0000000..e69de29
diff --git a/chapter8.md b/chapter8.md
new file mode 100644
index 0000000..216a7c5
--- /dev/null
+++ b/chapter8.md
@@ -0,0 +1 @@
+chapter4.md
\ No newline at end of file
diff --git a/chapter9.md b/chapter9.md
new file mode 100644
index 0000000..e69de29