A conversational AI assistant designed to help you sound smooth and charismatic in dating conversations. The assistant listens to your conversations, analyzes the context, and provides witty, thoughtful responses that make you sound impressive.
- Active Listening Mode: Automatically provides response suggestions when your date speaks
- Speech Recognition: Uses advanced speech recognition with speaker diarization to identify different speakers
- Smooth Responses: Generates charismatic, contextually relevant responses
- User Knowledge Base: Incorporates your personal information to make responses more authentic
- Voice Generation: Converts responses to speech with natural-sounding voices
- Facial Recognition: Optional feature to detect and identify faces during conversations
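One way the user knowledge base can make responses more authentic is simple embedding retrieval: each personal fact is embedded once, and the fact closest to the current topic is pulled into the prompt. The sketch below is purely illustrative and not the project's actual code; the toy vectors stand in for real OpenAI embeddings, and `most_relevant_fact` is a hypothetical helper name.

```python
# Hedged sketch: nearest-neighbor lookup over embedded personal facts.
# The 3-dimensional vectors are toy placeholders for real embeddings.
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

FACTS = {
    "I love hiking on weekends": [0.9, 0.1, 0.0],
    "I studied computer science": [0.1, 0.9, 0.1],
}

def most_relevant_fact(topic_vec):
    # Return the stored fact whose embedding best matches the topic.
    return max(FACTS, key=lambda fact: cosine(FACTS[fact], topic_vec))
```

In the real app, the topic vector would come from embedding the date's latest utterance, and the retrieved fact would be appended to the response-generation prompt.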
- Python 3.8+
- OpenAI API key (for voice generation and embeddings)
- Groq API key (for text generation)
- Microphone access
- Camera access (optional, for facial recognition)
openai>=1.0.0
groq>=0.4.0
tiktoken>=0.5.0
numpy>=1.22.0
nltk>=3.8.1
python-dotenv>=1.0.0
supabase>=1.0.0 (optional, for storage)
opencv-python>=4.6.0 (optional, for facial recognition)
pyautogui>=0.9.53 (optional, for screen capture)
- Clone the repository:

  ```bash
  git clone https://github.com/Ch3mson/RizzKhalifa.git
  cd meta_rizz
  ```

- Install required packages:

  ```bash
  pip install -r requirements.txt
  ```

- Set up environment variables:

  Create a `.env` file in the project root with the following variables:

  ```
  LANGSMITH_TRACING=true
  LANGSMITH_ENDPOINT=your_langsmith_endpoint
  LANGSMITH_API_KEY=your_langsmith_api_key
  LANGSMITH_PROJECT=your_langsmith_project
  OPENAI_API_KEY=your_openai_api_key
  GROQ_API_KEY=your_groq_api_key
  SUPABASE_URL=your_supabase_url (optional)
  SUPABASE_KEY=your_supabase_key (optional)
  ```
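A minimal sketch of how these variables can be loaded at startup, using `python-dotenv` from the requirements list. The fail-fast check for the two required keys is an illustrative addition, not the project's actual code.

```python
# Load .env into the process environment and verify the required keys.
import os

try:
    from dotenv import load_dotenv
    load_dotenv()  # reads .env from the current working directory
except ImportError:
    pass  # python-dotenv not installed; rely on the shell environment

REQUIRED = ["OPENAI_API_KEY", "GROQ_API_KEY"]
missing = [name for name in REQUIRED if not os.getenv(name)]
if missing:
    print("Missing required keys: " + ", ".join(missing))
```

Keys marked optional above (Supabase, LangSmith) can be checked the same way if those features are enabled.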
The application has two main components that need to run simultaneously:
- `cursor_main.py` - The conversation assistant that listens and generates responses
- `main.py` - The primary application (referred to in cursor_main.py)
To run the application:
- Start the main application:

  ```bash
  python ./api_server.py
  ```

- In a separate terminal, start the cursor assistant:

  ```bash
  python cursor_main.py
  ```
The cursor assistant supports several command-line options:
- `--diarization`: Enable speaker diarization (default: enabled)
- `--no-diarization`: Disable speaker diarization
- `--speakers N`: Set the expected number of speakers (default: 2)
- `--screen`: Enable screen capture for facial recognition (default: disabled)
- `--debug`: Enable debug mode with verbose logging
Example with options:
```bash
python cursor_main.py --no-diarization --screen --debug
```
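The flags above could be parsed with a standard `argparse` setup along these lines; the parser below is a sketch that mirrors the documented names and defaults, not the project's actual code.

```python
# Sketch of a CLI parser matching the documented cursor_main.py options.
import argparse

parser = argparse.ArgumentParser(description="Cursor conversation assistant")
parser.add_argument("--diarization", dest="diarization", action="store_true",
                    default=True, help="enable speaker diarization (default: enabled)")
parser.add_argument("--no-diarization", dest="diarization", action="store_false",
                    help="disable speaker diarization")
parser.add_argument("--speakers", type=int, default=2, metavar="N",
                    help="expected number of speakers (default: 2)")
parser.add_argument("--screen", action="store_true",
                    help="enable screen capture for facial recognition")
parser.add_argument("--debug", action="store_true",
                    help="enable verbose logging")

# Parse the example invocation shown above.
args = parser.parse_args(["--no-diarization", "--screen", "--debug"])
print(args.diarization, args.speakers, args.screen, args.debug)
# → False 2 True True
```

Note that `--diarization` and `--no-diarization` share one destination, so the last flag given wins.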
- Start both applications as described above
- Voice Reference: If diarization is enabled, you'll be prompted to provide a 10-second voice sample
- Activate Listening Mode: Say the trigger phrase (default: "hey cursor") to enter active listening mode
- Conversation Flow:
  - The assistant listens to the conversation
  - When your date speaks, the assistant analyzes their message
  - After a short cooldown period (15 seconds), the assistant suggests a smooth response
  - You can say "let me think" at any time to get an immediate response suggestion
- Deactivate Listening Mode: Say the stop phrase (default: "stop cursor") to exit active listening mode
- Facial Recognition (optional): If using Meta glasses, call via Messenger so faces appear on your computer screen; this lets the app capture them with the `--screen` option
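The listening flow above can be sketched as a small state machine: the trigger phrase activates the assistant, the stop phrase deactivates it, suggestions respect the 15-second cooldown, and "let me think" forces an immediate one. Class and method names here are hypothetical, not the project's actual API.

```python
# Illustrative state machine for the trigger/cooldown/stop flow.
from typing import Optional

TRIGGER, STOP, FORCE = "hey cursor", "stop cursor", "let me think"
COOLDOWN_SECS = 15.0

class ListeningState:
    def __init__(self) -> None:
        self.active = False
        self.last_suggestion = float("-inf")  # time of the last suggestion

    def handle(self, transcript: str, now: float) -> Optional[str]:
        text = transcript.lower()
        if TRIGGER in text:
            self.active = True       # enter active listening mode
            return None
        if STOP in text:
            self.active = False      # exit active listening mode
            return None
        if not self.active:
            return None
        if FORCE in text or now - self.last_suggestion >= COOLDOWN_SECS:
            self.last_suggestion = now
            return f"suggest a response to: {transcript!r}"
        return None  # still within the cooldown window
```

For example, after "hey cursor" the first utterance yields a suggestion, a second utterance five seconds later is suppressed by the cooldown, and an utterance containing "let me think" is answered immediately.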
- No Audio Detected: Check your microphone settings and permissions
- Speech Recognition Issues: Ensure you're in a quiet environment with clear speech
- API Key Errors: Verify your OpenAI and Groq API keys in the .env file
- Facial Recognition Issues: Check camera permissions and lighting conditions
- Activation Phrase Not Detected: Speak clearly and directly when saying the activation phrase
This project is licensed under the MIT License - see the LICENSE file for details.