The Cyberbullying Detection System is a web-based application that detects harmful language in real time. It validates user input with the Joi library and analyzes it through OpenAI's API to flag potential instances of cyberbullying. Users interact through a chat interface built with React.js, and inputs are processed by a Node.js backend. The application aims to promote safer online interactions by combining NLP techniques with input validation. It is built with:
- React.js: For building a responsive and interactive user interface.
- Node.js: To handle server-side logic and API requests.
- Joi Validation Library: To validate and filter user inputs in real time.
- Redis: For caching API responses, optimizing performance.
- gpt-3.5-turbo model: Provides advanced natural language processing capabilities to detect cyberbullying.
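As a rough sketch of how these pieces fit together, the backend can validate each incoming message before spending an API call, then ask the model for a verdict. The names below are hypothetical, and Joi and the OpenAI SDK are stubbed out with plain JavaScript so the sketch stays self-contained; in the real backend the checks would be a Joi schema such as `Joi.string().trim().min(1).max(500).required()`, and `classify` would wrap a call to the OpenAI chat completions endpoint:

```javascript
// Sketch of a validate-then-classify flow (hypothetical names; not the
// project's actual server.js). In the real app, validateMessage would be a
// Joi schema and `classify` would call the OpenAI API with gpt-3.5-turbo.

function validateMessage(body) {
  const message = typeof body?.message === 'string' ? body.message.trim() : '';
  if (message.length === 0) return { error: 'message is required' };
  if (message.length > 500) return { error: 'message exceeds 500 characters' };
  return { value: message };
}

async function detectCyberbullying(body, classify) {
  const { error, value } = validateMessage(body);
  if (error) return { ok: false, reason: error };
  // `classify(text)` resolves to true when the model flags the text.
  const flagged = await classify(value);
  return { ok: true, flagged };
}
```

Separating validation from classification this way keeps malformed input from ever reaching the API and makes the flow easy to unit-test, since `classify` can be stubbed out.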
Key features include:
- Real-time detection of cyberbullying using OpenAI's GPT models.
- Intuitive web-based chat interface for user interaction.
- Scalable backend with caching support to handle high request volumes.
- High accuracy in identifying various types of harmful content, including direct insults and offensive sarcasm.
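The caching point above can be sketched as a cache-aside wrapper around the model call. An in-memory `Map` stands in for Redis so the example is self-contained, and the key format and function names are illustrative; in the actual backend this would be a Redis `GET`/`SET` (with an expiry) keyed on the message:

```javascript
// Cache-aside sketch: look up the message in the cache first; only on a miss
// call the model and store the verdict. A Map stands in for Redis here.
const cache = new Map();

async function detectWithCache(message, classify) {
  const key = `detection:${message}`;
  if (cache.has(key)) {
    return cache.get(key); // cache hit: no API call
  }
  const flagged = await classify(message); // cache miss: ask the model
  cache.set(key, flagged); // with Redis: SET key value EX <ttl>
  return flagged;
}
```

Because identical messages hit the cache, repeated submissions cost only one API call, which is what lets the backend absorb higher request volumes.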
Ensure you have the following installed on your system:
- Node.js (v14 or higher)
- npm (Node Package Manager)
- Redis (for caching API responses)
- An OpenAI API key (sign up for an OpenAI account and generate a key)
To set up the project:

- Clone the repository:

  ```shell
  git clone https://github.com/ttmtu2003/SafeSpaceAI.git
  ```

- Navigate to the project directory:

  ```shell
  cd SafeSpaceAI
  ```

- Install dependencies:

  ```shell
  npm install
  ```

- Create a `.env` file with the following configuration:

  ```
  OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
  REDIS_HOST=localhost
  REDIS_PORT=6379
  CHAT_GPT_TRAINING_CONTENT_DIR=./dataFiles/gpt/
  ```

- Start the backend server:

  ```shell
  node server.js
  ```

- In another terminal, navigate to the frontend directory:

  ```shell
  cd SafeSpaceAI/frontend
  ```

- Install the frontend dependencies:

  ```shell
  npm install
  ```

- Start the frontend server:

  ```shell
  npm start
  ```

Contributors:

- Bianca Cervantes
- Ekansh Gupta
- Travis Lincoln
- Tu Tran
Special thanks to our advisor, Andrew Bond, for his invaluable guidance throughout the project.