Conversational AI expert for Tailwind CSS powered by Retrieval-Augmented Generation (RAG)
Tailwind GPT is an intelligent chatbot specialized in answering technical questions about Tailwind CSS. It leverages Retrieval-Augmented Generation (RAG) to enhance Large Language Model (LLM) responses with up-to-date, accurate information sourced directly from Tailwind CSS documentation.
RAG (Retrieval-Augmented Generation) addresses two major challenges faced by LLMs:
- Lack of sources when answering questions
- Outdated information as models aren't continuously updated
This framework enables LLMs to access the latest information from indexed documentation and provide referenced responses, delivering significant added value for users. It excels particularly in knowledge-intensive tasks.
- RAG-Powered Responses: Combines retrieval from indexed Tailwind documentation with GPT-3.5-turbo generation
- Conversational Interface: Interactive chat built with Streamlit
- Accurate & Referenced: Responses grounded in official Tailwind CSS documentation
- Vector Search: Fast semantic search using Pinecone vector database
- Comprehensive Evaluations: Multiple evaluation notebooks for quality assessment:
- Correctness evaluation
- Faithfulness evaluation
- Relevancy evaluation
- Similarity evaluation
- Production-Ready: Structured codebase with clear separation of concerns
The project implements a two-stage RAG architecture:
- Data Source: Tailwind CSS documentation (180+ text files)
- Embedding Model: HuggingFace `sentence-transformers/all-mpnet-base-v2`
- Vector Store: Pinecone cloud-based vector database
- Framework: LlamaIndex for document processing and indexing
- Query Processing: User questions processed through LlamaIndex
- Retrieval: Semantic search across indexed documentation
- Generation: OpenAI GPT-3.5-turbo generates contextual responses
- Chat Mode: ReAct-based conversational agent with memory
| Component | Technology |
|---|---|
| LLM | OpenAI GPT-3.5-turbo |
| Orchestration | LlamaIndex |
| Vector Database | Pinecone |
| Embeddings | HuggingFace Sentence Transformers |
| Frontend | Streamlit |
| Language | Python 3.8+ |
| NLP | NLTK |
- Python 3.8 or higher
- OpenAI API key
- Pinecone API key
```
git clone https://github.com/figlesias221/tailwind-gpt.git
cd tailwind-gpt
pip install -r requirements.txt
```

Dependencies include:
- `streamlit`: web interface
- `openai`: OpenAI API client
- `llama-index`: RAG orchestration framework
- `nltk`: natural language processing
- `pinecone-client`: vector database client
Create a .env file or configure Streamlit secrets:
```
# For local development
OPENAI_API_KEY=your_openai_api_key_here
PINECONE_API_KEY=your_pinecone_api_key_here
```

For Streamlit Cloud deployment, add to `.streamlit/secrets.toml`:

```
openai_key = "your_openai_api_key_here"
pinecone_key = "your_pinecone_api_key_here"
```

Run the ingestion script to index the Tailwind CSS documentation into Pinecone:

```
python ingestion.py
```

This will:
- Read all documentation files from the `data/` directory
- Generate embeddings with the HuggingFace model
- Store the vectors in a Pinecone index named "tailwind-hugging"
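The real pipeline delegates chunking and embedding to LlamaIndex and `sentence-transformers/all-mpnet-base-v2`. Purely as an intuition for what ingestion involves, here is a minimal pure-Python sketch; `chunk_text` and the hash-based `toy_embed` below are illustrative stand-ins, not the project's actual components:

```python
import hashlib
import math

def chunk_text(text, chunk_size=200, overlap=40):
    """Split a document into overlapping word chunks, as indexers typically do."""
    words = text.split()
    step = chunk_size - overlap
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def toy_embed(chunk, dim=16):
    """Toy stand-in for a sentence-transformer: hash words into a unit vector."""
    vec = [0.0] * dim
    for word in chunk.lower().split():
        vec[int(hashlib.md5(word.encode()).hexdigest(), 16) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

doc = "p-4 adds padding on all sides " * 100  # pretend documentation file
chunks = chunk_text(doc)
vectors = [toy_embed(c) for c in chunks]  # these would be upserted into the vector index
```

In the actual project, each vector is stored alongside its source chunk so retrieved context can be traced back to the documentation file it came from.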
```
streamlit run streamlit_app.py
```

The app will be available at http://localhost:8501.
```
tailwind-gpt/
├── streamlit_app.py      # Main Streamlit application
├── ingestion.py          # Document indexing pipeline
├── requirements.txt      # Python dependencies
├── .env.example          # Environment variables template
├── data/                 # Tailwind CSS documentation (180+ files)
│   ├── accent-color.txt
│   ├── animation.txt
│   ├── aspect-ratio.txt
│   └── ...
├── evals/                # Evaluation notebooks
│   ├── correctness_eval.ipynb
│   ├── faith_eval.ipynb
│   ├── relevancy_eval.ipynb
│   └── similarity_eval.ipynb
├── arch.png              # Architecture diagram
└── demo.png              # Demo screenshot
```
- Start the app: `streamlit run streamlit_app.py`
- Ask questions about Tailwind CSS in the chat interface
- Get accurate answers grounded in the official documentation
Example Questions:
- "What is the border-radius utility in Tailwind?"
- "How do I create a responsive grid layout?"
- "Explain the difference between padding and margin utilities"
- "What are the available color classes for backgrounds?"
The project includes comprehensive evaluation notebooks in the evals/ directory:
- **Correctness**: measures how accurately the model answers questions compared to ground truth.
- **Faithfulness**: assesses whether responses are grounded in the retrieved context, without hallucination.
- **Relevancy**: evaluates whether the retrieved documents are relevant to the user's query.
- **Similarity**: compares semantic similarity between generated and expected responses.
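The notebooks use LlamaIndex's embedding-based evaluators. Purely as an intuition for what a similarity score measures, here is a toy bag-of-words cosine sketch (an illustrative stand-in, not the notebooks' actual metric):

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two answers."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

expected = "use the rounded utilities to set border radius"
generated = "set border radius with the rounded utilities"
score = cosine_similarity(expected, generated)  # high overlap -> score near 1
```

Real embedding models go further: they score paraphrases as similar even when no words overlap, which is why the notebooks use them instead of token counting.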
Run evaluations:
```
jupyter notebook evals/correctness_eval.ipynb
```

Modify the `ServiceContext` in `streamlit_app.py`:

```python
service_context = ServiceContext.from_defaults(
    llm=OpenAI(
        model="gpt-4",  # change model
        temperature=0.5,  # adjust creativity
        system_prompt="Your custom system prompt",
    )
)
```

Change the chat mode:

```python
chat_engine = index.as_chat_engine(
    chat_mode="react",  # options: "simple", "react", "condense_question"
    verbose=True,
)
```

To re-index documentation after updates:
- Add or modify files in the `data/` directory
- Run `python ingestion.py`
- Restart the Streamlit app
- User Query: User asks a question about Tailwind CSS
- Embedding: Question is converted to a vector using the same embedding model
- Retrieval: Pinecone searches for semantically similar documentation chunks
- Context Formation: Retrieved chunks are formatted as context
- Generation: GPT-3.5-turbo generates a response using the context
- Response: User receives an accurate, referenced answer
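The steps above can be sketched end to end with a toy in-memory retriever. The `embed` and `cosine` helpers below are illustrative stand-ins for the real embedding model and the Pinecone query; the real app sends the assembled prompt to GPT-3.5-turbo:

```python
import math
from collections import Counter

def embed(text, vocab):
    """Toy embedding: term counts over a fixed vocabulary (stand-in for all-mpnet-base-v2)."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "p-4 adds padding of 1rem on all sides",
    "m-4 adds margin of 1rem on all sides",
    "rounded-lg applies a large border radius",
]
vocab = sorted({w for d in docs for w in d.lower().split()})

query = "how do I add padding"
qvec = embed(query, vocab)                      # 1-2: embed the user query
ranked = sorted(docs, key=lambda d: cosine(qvec, embed(d, vocab)), reverse=True)
context = "\n".join(ranked[:2])                 # 3-4: top-k chunks become the context
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# 5-6: the real app now sends `prompt` to GPT-3.5-turbo and returns the answer
```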
RAG combines the power of large language models with information retrieval. Instead of relying solely on the model's training data, RAG:
- Retrieves relevant information from a knowledge base
- Augments the prompt with retrieved context
- Generates more accurate, up-to-date responses
Pinecone stores document embeddings as high-dimensional vectors, enabling:
- Fast semantic search
- Scalable document retrieval
- Real-time updates
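Pinecone's client API is not reproduced here, but the core operation it provides is top-k nearest-neighbour search over stored vectors. A hypothetical brute-force in-memory sketch of that operation (not Pinecone client code):

```python
import heapq
import math

class ToyVectorIndex:
    """In-memory stand-in for a vector database: upsert vectors, query top-k by cosine."""

    def __init__(self):
        self.store = {}  # id -> vector

    def upsert(self, vec_id, vector):
        self.store[vec_id] = vector

    def query(self, vector, top_k=3):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0
        scored = ((cosine(vector, v), k) for k, v in self.store.items())
        return heapq.nlargest(top_k, scored)  # [(score, id), ...], best first

index = ToyVectorIndex()
index.upsert("padding", [1.0, 0.0, 0.0])
index.upsert("margin", [0.8, 0.6, 0.0])
index.upsert("color", [0.0, 0.0, 1.0])
results = index.query([1.0, 0.1, 0.0], top_k=2)
```

Production stores like Pinecone replace the brute-force scan with approximate nearest-neighbour indexes, which is what keeps search fast at scale.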
LlamaIndex (formerly GPT Index) provides:
- Document loading and parsing
- Embedding generation
- Index management
- Query engines and chat interfaces
- Federico Iglesias
- Francisco Rossi
- Francisco Decurnex
- LlamaIndex Documentation
- Pinecone Documentation
- OpenAI API Reference
- Streamlit Documentation
- Tailwind CSS Documentation
- RAG Paper (Lewis et al., 2020)
This project is available for educational and research purposes.
- Tailwind CSS Team for the excellent documentation
- LlamaIndex for the powerful RAG framework
- OpenAI for GPT-3.5-turbo
- Pinecone for vector database infrastructure
- Streamlit for the intuitive UI framework
Built with modern AI technologies to make Tailwind CSS expertise accessible through conversation

