The project enables chatting with internal documents and extracting information and insights from them using a locally deployed LLM.

Key Features

  • Uses Retrieval-Augmented Generation (RAG) built with LangChain
  • Supports various document formats
  • Provides accurate and context-aware responses based on document content
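The RAG flow behind these features can be sketched as: chunk the documents, retrieve the chunk most relevant to the question, and inject it into the prompt sent to the local LLM. The sketch below is a toy illustration, not the project's code: it substitutes a keyword-overlap scorer for a real vector store, and stops at the assembled prompt instead of calling an Ollama model. All names are illustrative.

```python
def retrieve(question: str, chunks: list[str]) -> str:
    """Return the chunk sharing the most words with the question (toy retriever)."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

def build_prompt(question: str, context: str) -> str:
    """Assemble the augmented prompt that would be sent to the local LLM."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Hypothetical internal-document chunks:
chunks = [
    "Invoices are archived monthly in the finance share.",
    "VPN access requires a ticket approved by IT security.",
]

question = "How do I get VPN access?"
prompt = build_prompt(question, retrieve(question, chunks))
```

In the actual stack, the retriever would be a LangChain vector-store retriever over embedded document chunks, and the prompt would go to an Ollama-served model so no data leaves the local machine.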

💡 Goal: Facilitate efficient and intelligent information retrieval from internal documents.

Technologies Used: LLM, RAG, LangChain, Streamlit, Ollama

🔗 GitHub

๐Ÿ“ Read on LinkedIn