
Commit

added
yogananda-muthaiah authored Dec 1, 2024
1 parent f45c7a2 commit 5138f82
Showing 7 changed files with 105 additions and 0 deletions.
23 changes: 23 additions & 0 deletions chapter3.md
@@ -1 +1,24 @@
## Chapter 3: Prompt Engineering
Prompt engineering is a key part of working with LangChain: designing the right prompts helps you get more accurate and useful answers from your model.

```
from langchain import PromptTemplate

template = """
You are a helpful assistant that translates {input_language} to {output_language}.
Translate the following text:
{text}
Translation:
"""

prompt = PromptTemplate(
    input_variables=["input_language", "output_language", "text"],
    template=template,
)

print(prompt.format(input_language="English", output_language="French", text="Hello, how are you?"))
```
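
Once formatted, the prompt is just a string and can be sent straight to a model. The snippet below is a minimal sketch, assuming the `prompt` object defined above, the legacy `langchain.llms.OpenAI` wrapper, and an `OPENAI_API_KEY` set in the environment.

```
from langchain.llms import OpenAI

# Assumes the `prompt` from the block above; the formatted string is
# passed to the model directly and the completion comes back as text.
llm = OpenAI(temperature=0)
completed_prompt = prompt.format(input_language="English", output_language="French", text="Hello, how are you?")
print(llm(completed_prompt))
```

Chapter 4 shows a tidier way to do the same thing with `LLMChain`.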
29 changes: 29 additions & 0 deletions chapter4.md
@@ -0,0 +1,29 @@
## Chapter 4: Building the Chain
Chains are one of LangChain's core concepts: multiple components are linked together, with the output of one step feeding the input of the next, so more complex tasks can be built from simple pieces.

```
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.7)

# First step: suggest a company name for a given product.
first_prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain_one = LLMChain(llm=llm, prompt=first_prompt)

# Second step: write a catchphrase for the name produced by the first step.
second_prompt = PromptTemplate(
    input_variables=["company_name"],
    template="Write a catchphrase for the following company: {company_name}",
)
chain_two = LLMChain(llm=llm, prompt=second_prompt)

# SimpleSequentialChain feeds each step's output into the next step's input.
overall_chain = SimpleSequentialChain(chains=[chain_one, chain_two], verbose=True)
print(overall_chain.run("eco-friendly water bottles"))
```
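
`SimpleSequentialChain` handles chains with exactly one input and one output, passing each step's text output straight into the next step. When steps need named variables, the more general `SequentialChain` can be used instead. The sketch below reuses `llm`, `first_prompt`, and `second_prompt` from the block above and assumes the same legacy LangChain API; the `output_key` names are illustrative.

```
from langchain.chains import LLMChain, SequentialChain

# Give each step a named output so later steps (and the caller) can refer to it.
name_chain = LLMChain(llm=llm, prompt=first_prompt, output_key="company_name")
slogan_chain = LLMChain(llm=llm, prompt=second_prompt, output_key="catchphrase")

overall = SequentialChain(
    chains=[name_chain, slogan_chain],
    input_variables=["product"],
    output_variables=["company_name", "catchphrase"],
    verbose=True,
)

# Returns a dict with both the intermediate company name and the final catchphrase.
print(overall({"product": "eco-friendly water bottles"}))
```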
23 changes: 23 additions & 0 deletions chapter5.md
@@ -0,0 +1,23 @@
## Chapter 5: Utilizing Memory
Memory components preserve the context of the conversation across turns, making for a more natural dialogue.

```
from langchain.memory import ConversationBufferMemory
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
memory = ConversationBufferMemory()
conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=memory,
    verbose=True,
)
print(conversation.predict(input="Hi, my name is Alice"))
print(conversation.predict(input="What's my name?"))
print(conversation.predict(input="What have we talked about so far?"))
```
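
It can be useful to check what the chain is actually remembering between turns. A minimal sketch, assuming the `memory` object from the block above:

```
# The buffer holds the running transcript that gets prepended to each new prompt.
print(memory.load_memory_variables({}))

# Clearing the memory starts the next turn from a blank context.
memory.clear()
```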
29 changes: 29 additions & 0 deletions chapter6.md
@@ -0,0 +1,29 @@
## Chapter 6: Document Retrieval and Question Answering
You can use LangChain to build a system that searches a large collection of documents and answers questions about their contents.

```
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA

# Load the raw document text.
with open('your_document.txt', 'r') as file:
    raw_text = file.read()

# Split the text into chunks small enough to embed and retrieve.
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(raw_text)

# Embed the chunks and index them in a Chroma vector store,
# tagging each chunk with its position as a source identifier.
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))])

# Build a question-answering chain that retrieves relevant chunks
# and "stuffs" them into the prompt.
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever())

query = "What is the main topic of this document?"
print(qa.run(query))
```
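
For question answering it often helps to see which chunks the answer was based on. A hedged sketch, assuming the same legacy `RetrievalQA` API as above: `return_source_documents=True` asks the chain to return the retrieved chunks alongside the answer.

```
qa_with_sources = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
    return_source_documents=True,
)

result = qa_with_sources({"query": "What is the main topic of this document?"})
print(result["result"])             # the generated answer
for doc in result["source_documents"]:
    print(doc.metadata["source"])   # which chunk(s) the answer drew on
```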
Empty file added chapter7.md
Empty file.
1 change: 1 addition & 0 deletions chapter8.md
@@ -0,0 +1 @@
chapter4.md
Empty file added chapter9.md
Empty file.
