A Simple RAG Example with Ollama in Python
RAG with Ollama
In the world of natural language processing (NLP), combining retrieval and generation capabilities has led to significant advancements.
Retrieval-Augmented Generation (RAG) enhances the quality of generated text by integrating external information sources.
This article demonstrates how to create a RAG system using a free Large Language Model (LLM).
We will be using Ollama and the LLaMA 3 model, providing a practical approach to leveraging cutting-edge NLP techniques without incurring costs.
Whether you're a developer, researcher, or enthusiast, this guide will help you implement a RAG system efficiently and effectively.
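Before diving into the LangChain implementation below, it helps to see the core RAG loop in plain Python. The `embed` and `generate` functions here are deliberately toy stand-ins for a real embedding model and LLM; only the control flow (embed → retrieve → augment prompt → generate) reflects how RAG works:

```python
# A minimal, illustrative RAG loop. embed() and generate() are toy stand-ins
# for a real embedding model and LLM, used only to show the control flow.

def embed(text: str) -> set:
    # Toy "embedding": the set of lowercase words in the text.
    return set(text.lower().split())

def generate(prompt: str) -> str:
    # Toy "LLM": simply echoes the prompt it was given.
    return f"Answer based on: {prompt}"

def retrieve(query: str, documents: list, k: int = 1) -> list:
    # Rank documents by word overlap with the query
    # (a stand-in for vector similarity search).
    q = embed(query)
    scored = sorted(documents, key=lambda d: len(q & embed(d)), reverse=True)
    return scored[:k]

def rag_answer(query: str, documents: list) -> str:
    # Augment the prompt with retrieved context, then generate.
    context = "\n".join(retrieve(query, documents))
    prompt = f"Context: {context}\nQuestion: {query}"
    return generate(prompt)

docs = ["Coco and Mango are monkeys in a rainforest.",
        "Orchids grow hidden among the foliage."]
print(rag_answer("Where do the monkeys Coco and Mango live?", docs))
```

In the real system below, LangChain, Ollama embeddings, and Chroma replace each of these toy pieces.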
Note: before proceeding, you need to download and run Ollama, which is available from the official site (ollama.com).
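Once Ollama is installed, the model used in this article can be pulled and the local server started from the command line. Note that on many desktop installs the server already runs in the background, in which case `ollama serve` is unnecessary:

```shell
# Download the LLaMA 3 model weights (run once)
ollama pull llama3

# Start the Ollama server if it is not already running;
# it listens on http://127.0.0.1:11434 by default
ollama serve
```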
The following example shows how to set up a basic yet intuitive RAG pipeline.
Import Libraries
```python
from langchain_community.llms import Ollama
from langchain_community.embeddings import OllamaEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain import hub
```
Loading The LLM (Language Model)
```python
llm = Ollama(model="llama3", base_url="http://127.0.0.1:11434")
```
Setting Ollama Embeddings
```python
embed_model = OllamaEmbeddings(model="llama3", base_url="http://127.0.0.1:11434")
```
Loading Text
```python
text = """
In the lush canopy of a tropical rainforest, two mischievous monkeys, Coco and Mango,
swung from branch to branch, their playful antics echoing through the trees. They were
inseparable companions, sharing everything from juicy fruits to secret hideouts high
above the forest floor. One day, while exploring a new part of the forest, Coco stumbled
upon a beautiful orchid hidden among the foliage. Entranced by its delicate petals, Coco
plucked it and presented it to Mango with a wide grin. Overwhelmed by Coco's gesture of
friendship, Mango hugged Coco tightly, cherishing the bond they shared. From that day on,
Coco and Mango ventured through the forest together, their friendship growing stronger
with each passing adventure. As they watched the sun dip below the horizon, casting a
golden glow over the treetops, they knew that no matter what challenges lay ahead, they
would always have each other, and their hearts brimmed with joy.
"""
```
Splitting Text into Chunks
```python
text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=128)
chunks = text_splitter.split_text(text)
```
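To build intuition for what `chunk_size` and `chunk_overlap` do, here is a simplified sliding-window splitter in plain Python. The real `RecursiveCharacterTextSplitter` is smarter (it prefers to break on separators such as paragraphs and sentences before falling back to raw characters), but the overlap idea is the same:

```python
def sliding_window_split(text: str, chunk_size: int, chunk_overlap: int) -> list:
    # Simplified character-level splitter: each chunk starts
    # (chunk_size - chunk_overlap) characters after the previous one,
    # so consecutive chunks share chunk_overlap characters of context.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

sample = "abcdefghij" * 10  # 100 characters
pieces = sliding_window_split(sample, chunk_size=40, chunk_overlap=10)
print(len(pieces), [len(p) for p in pieces])
```

The overlap means a sentence cut off at a chunk boundary still appears whole in the next chunk, which improves retrieval quality.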
Creating a Vector Store (Chroma) from Text
```python
vector_store = Chroma.from_texts(chunks, embed_model)
```
Creating a Retriever
```python
retriever = vector_store.as_retriever()
```
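Under the hood, a retriever embeds the query and returns the stored chunks whose embedding vectors are most similar, typically by cosine similarity. Here is a minimal sketch of that ranking step using hand-made vectors rather than real Ollama embeddings:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, indexed_chunks, k=2):
    # indexed_chunks: list of (embedding_vector, chunk_text) pairs.
    ranked = sorted(indexed_chunks,
                    key=lambda pair: cosine_similarity(query_vec, pair[0]),
                    reverse=True)
    return [text for _, text in ranked[:k]]

index = [([1.0, 0.0, 0.1], "monkeys in the rainforest"),
         ([0.0, 1.0, 0.0], "orchid among the foliage"),
         ([0.9, 0.1, 0.0], "Coco and Mango swing through trees")]
print(top_k([1.0, 0.0, 0.0], index, k=2))
```

Chroma performs this same nearest-neighbor search, just with real embedding vectors and an efficient index.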
Retrieval-QA Chat Prompt
```python
retrieval_qa_chat_prompt = hub.pull("langchain-ai/retrieval-qa-chat")
```
Combining Documents
```python
combine_docs_chain = create_stuff_documents_chain(llm, retrieval_qa_chat_prompt)
```
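"Stuffing" simply concatenates all retrieved chunks into a single prompt alongside the question, with no summarizing or map-reduce step. A plain-Python sketch of the kind of prompt a stuff chain builds (the actual template comes from the hub, so the wording here is illustrative only):

```python
def stuff_prompt(question: str, documents: list) -> str:
    # Join every retrieved chunk into one context block, then append the
    # question. This mirrors a "stuff" documents chain: everything is
    # stuffed into a single prompt for the LLM.
    context = "\n\n".join(documents)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = stuff_prompt("Where do Coco and Mango live?",
                      ["Coco and Mango are monkeys.",
                       "They live in a tropical rainforest."])
print(prompt)
```

Stuffing is the simplest strategy and works well here because the story is short; for larger corpora the combined chunks can exceed the model's context window.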
Final Retrieval Chain
```python
retrieval_chain = create_retrieval_chain(retriever, combine_docs_chain)
```
Invoking the Retrieval Chain
```python
response = retrieval_chain.invoke(
    {"input": "Tell me the names of the monkeys and where they live"}
)
print(response["answer"])
```