Self-Fed Memory: A "Second Brain" AI

I try to document everything in my life: notes, journals, epiphanies, project ideas, etc. But like most humans, I forget things. To solve this, I built Self-Fed Memory, a personalized AI assistant designed to have a "perfect memory" of my life. Unlike generic LLMs, this system ingests my personal markdown notes and conversation history to provide responses deeply grounded in my specific context, preferences, and past experiences.

The core of the system is a Retrieval-Augmented Generation (RAG) pipeline built with LangChain. It converts my personal data into semantic embeddings stored in a Pinecone vector database. To ensure the AI doesn't just keyword-match but actually understands context, I engineered a multi-query retriever: it rewrites each user question into several semantic variations, retrieves against all of them, and applies time-decay scoring so that recent memories are weighted more heavily than older ones, loosely mimicking the recency bias of human memory.
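To make the retrieval idea concrete, here is a minimal, self-contained sketch of multi-query retrieval with exponential time decay. The tiny hand-made vectors, the `half_life_days` value, and the note contents are all illustrative assumptions; the real system would use Pinecone and LangChain rather than this in-memory toy.

```python
import math
from datetime import datetime, timedelta

# Toy in-memory "vector store": each note has an embedding and a timestamp.
# In the real system these would live in Pinecone; here we fake tiny vectors.
NOTES = [
    {"text": "Started learning Rust",       "vec": [0.9, 0.1], "ts": datetime.now() - timedelta(days=400)},
    {"text": "Rust project idea: CLI tool", "vec": [0.8, 0.2], "ts": datetime.now() - timedelta(days=10)},
    {"text": "Journal: went hiking",        "vec": [0.1, 0.9], "ts": datetime.now() - timedelta(days=3)},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def time_decay(ts, half_life_days=90.0):
    # Exponential decay: a note loses half its weight every `half_life_days`.
    age = (datetime.now() - ts).days
    return 0.5 ** (age / half_life_days)

def retrieve(query_vecs, top_k=2):
    # Multi-query: score each note against every query variation,
    # keep its best similarity, then damp that score by recency.
    scored = []
    for note in NOTES:
        best_sim = max(cosine(qv, note["vec"]) for qv in query_vecs)
        scored.append((best_sim * time_decay(note["ts"]), note["text"]))
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]

# Two "semantic variations" of the same question about Rust work.
print(retrieve([[1.0, 0.0], [0.85, 0.15]]))
```

Note what the decay does here: the 400-day-old Rust note is nearly a perfect semantic match but gets damped so hard that it falls below the recent, loosely related entries, which is exactly the recency bias the retriever is meant to encode.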

On the engineering side, I prioritized a clean, deployable architecture. The backend is a FastAPI service that handles memory ingestion, embedding, and preference extraction. A Streamlit frontend provides the chat UI, and Supabase stores persistent chat history. The entire stack is containerized with Docker Compose, making it easy to spin up a local instance that is completely isolated and privacy-focused.
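A compose file for a stack like this might look roughly as follows. The service names, ports, build paths, and environment variables here are assumptions for illustration, not the project's actual configuration:

```yaml
# Hypothetical docker-compose.yml sketch; names and ports are assumptions.
services:
  api:
    build: ./backend          # FastAPI service: ingestion, embedding, preferences
    ports:
      - "8000:8000"
    env_file: .env            # e.g. Pinecone and Supabase credentials
  ui:
    build: ./frontend         # Streamlit chat UI
    ports:
      - "8501:8501"
    environment:
      - API_URL=http://api:8000
    depends_on:
      - api
```

Keeping secrets in an `.env` file that Compose injects at runtime is what lets the whole stack stay local and isolated: nothing sensitive is baked into the images.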

This project bridges the gap between a static notebook and an active assistant. It features an intelligent "Preference Tracker" that automatically extracts and saves my likes and dislikes from conversations, allowing the model to continuously learn and adapt to me over time without manual updates.
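The preference-tracking idea can be sketched with a simple rule-based pass. The regex patterns and `extract_preferences` helper below are my own illustrative assumptions; a real tracker would more likely prompt an LLM to extract structured preferences, but the shape of the data flow is the same.

```python
import re

# Hypothetical rule-based preference extractor. The real system would
# presumably use an LLM for this; regexes just make the idea concrete.
LIKE_PAT = re.compile(r"\bI (?:really )?(?:like|love|enjoy) ([^.,!]+)", re.IGNORECASE)
DISLIKE_PAT = re.compile(r"\bI (?:really )?(?:dislike|hate|can't stand) ([^.,!]+)", re.IGNORECASE)

def extract_preferences(message, store):
    """Scan one chat message and fold any stated preferences into `store`."""
    for match in LIKE_PAT.findall(message):
        store.setdefault("likes", set()).add(match.strip().lower())
    for match in DISLIKE_PAT.findall(message):
        store.setdefault("dislikes", set()).add(match.strip().lower())
    return store

prefs = {}
extract_preferences("I love hiking, but I hate early meetings.", prefs)
print(prefs)  # {'likes': {'hiking'}, 'dislikes': {'early meetings'}}
```

Running this after every exchange and persisting `prefs` (in the real system, to Supabase) is what lets the assistant adapt over time without any manual note-taking.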

Code