This project enables chatting with internal documents and extracting information and insights from them using a locally deployed LLM.
Key Features:
- Uses Retrieval-Augmented Generation (RAG) built with LangChain
- Supports various document formats
- Provides accurate and context-aware responses based on document content
💡 Goal: Facilitate efficient and intelligent information retrieval from internal documents.
Technologies Used: LLM, RAG, LangChain, Streamlit, Ollama
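To illustrate the RAG flow named above, here is a minimal, dependency-free sketch of the retrieval step: split documents into chunks, rank chunks against the query by bag-of-words cosine similarity, and assemble a grounded prompt for the LLM. The project itself uses LangChain embeddings and an Ollama-served model; the sample chunks and helper names (`retrieve`, `build_prompt`) here are illustrative only.

```python
import math
from collections import Counter

def tokenize(text: str) -> Counter:
    # Naive bag-of-words; a real pipeline would use embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Return the k chunks most similar to the query.
    q = tokenize(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, tokenize(c)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    # Ground the LLM's answer in the retrieved context only.
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

# Hypothetical document chunks, for demonstration.
chunks = [
    "The vacation policy allows 25 days of paid leave per year.",
    "Expense reports must be filed within 30 days of travel.",
    "The office is closed on national holidays.",
]
query = "How many vacation days do employees get?"
top = retrieve(query, chunks)
print(build_prompt(query, top))
```

In the full application, `retrieve` is replaced by a LangChain vector-store retriever over embedded document chunks, and the prompt is sent to the local model via Ollama, with Streamlit providing the chat interface.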
GitHub
Read on LinkedIn