A real-time AI chat application built with the Gemini API and Server-Sent Events (SSE) to stream responses dynamically.
- 🤖 Chat with an AI powered by Gemini API
- ⚡ Instant, streamed responses via Server-Sent Events (SSE)
- 🌐 Frontend + Backend architecture
- 🛠️ Deployed on AWS Amplify
- Frontend: React (TypeScript, TailwindCSS)
- Backend: Node.js (Express)
- Streaming: Server-Sent Events (SSE)
- Hosting: AWS Amplify
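
To make the streaming flow above concrete, here is a minimal, hypothetical sketch of what an Express SSE endpoint could look like using the official @google/generative-ai SDK. The route name (`/chat`), port (`3001`), and model name are assumptions for illustration, not the repository's actual code.

```ts
// Hypothetical SSE endpoint sketch (not the project's actual backend code).
import express from "express";
import { GoogleGenerativeAI } from "@google/generative-ai";

const app = express();
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" }); // assumed model

app.get("/chat", async (req, res) => {
  // Standard SSE headers: keep the HTTP response open and disable caching.
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");

  const prompt = String(req.query.prompt ?? "");
  const result = await model.generateContentStream(prompt);

  // Relay each generated chunk to the browser as a `data:` event.
  for await (const chunk of result.stream) {
    res.write(`data: ${JSON.stringify(chunk.text())}\n\n`);
  }

  // Tell the client the stream is finished so it can close the connection.
  res.write("event: done\ndata: [DONE]\n\n");
  res.end();
});

app.listen(3001, () => console.log("Backend listening on :3001"));
```

SSE frames each message as a `data:` line followed by a blank line over a single long-lived HTTP response, which is what lets the UI render the reply token by token instead of waiting for the full answer.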
- Node.js (v18+ recommended)
- Yarn or npm
```bash
# Clone the repository
git clone https://github.com/parkashay/gemini-chat.git
cd gemini-chat

# Install backend dependencies
cd backend
yarn install

# Install frontend dependencies
cd ../frontend
yarn install
```

To get the application running on your local machine, follow these steps:
Start the Backend:
- Open your terminal and navigate to the `backend/` directory: `cd backend`
- Start the backend server: `npm run dev`

Start the Frontend:
- Open a new terminal window or tab.
- Navigate to the `frontend/` directory: `cd frontend`
- Start the frontend development server: `npm run dev`

Access the Application:
- Once both the backend and frontend servers are running, open your web browser and visit http://localhost:3000
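
Once the app is open, the chat view renders the backend's SSE stream as tokens arrive. As a rough illustration only (the backend URL, port, and `done` event mirror the assumptions in the earlier sketch, not the actual client code), a React hook built on the browser's `EventSource` could look like this:

```ts
// Hypothetical streaming hook (illustrative; the real frontend code may differ).
import { useEffect, useState } from "react";

export function useGeminiStream(prompt: string | null): string {
  const [answer, setAnswer] = useState("");

  useEffect(() => {
    if (!prompt) return;
    setAnswer("");

    // EventSource handles SSE parsing and reconnection for GET endpoints.
    const source = new EventSource(
      `http://localhost:3001/chat?prompt=${encodeURIComponent(prompt)}`
    );

    // Each `data:` event carries one JSON-encoded text chunk; append it.
    source.onmessage = (event) => {
      setAnswer((prev) => prev + JSON.parse(event.data));
    };

    // Close the connection when the server signals completion or on error.
    source.addEventListener("done", () => source.close());
    source.onerror = () => source.close();

    return () => source.close();
  }, [prompt]);

  return answer;
}
```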
The backend requires an environment variable to access the Gemini API.
Create the `.env` File:
- Inside the `backend/` directory, create a file named `.env`.

Set the API Key:
- Open the `.env` file and add your Gemini API key: `GEMINI_API_KEY=your-gemini-api-key`
- Replace `your-gemini-api-key` with your actual API key. You can obtain this key from Google AI Studio.
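
A common way for an Express server to pick this variable up is the dotenv package; the snippet below is a sketch of that pattern under the assumption that dotenv is installed, not a description of the repository's actual startup code.

```ts
// Loads backend/.env into process.env at startup (assumes the dotenv package).
import "dotenv/config";

const apiKey = process.env.GEMINI_API_KEY;
if (!apiKey) {
  // Fail fast with a clear message rather than sending unauthenticated requests.
  throw new Error("GEMINI_API_KEY is missing; add it to backend/.env");
}
```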
The project has the following directory structure:
```
gemini-chat/
├── backend/    # Express server (handles API and SSE)
└── frontend/   # React client (chat interface)
```

Contributions, issues, and feature requests are welcome! Feel free to contribute by:
- Opening a pull request with your changes.
- Submitting an issue to report bugs or suggest new features.
This project is licensed under the MIT License.