Jusoor, a plural of "bridge" in Arabic, is a platform for managing mental health appointments, equipped with a dynamic survey builder and an AI-driven therapist support system, targeted at KFUPM students.
- Used Technologies
- Project Structure
- Application Structure
- How to run the project for development
- How to deploy for production
- Limitations
## Used Technologies

The Jusoor backend depends on a myriad of amazing open-source technologies that empower its system:
- `django`: main HTTP server handling and database ORM
- `django-rest-framework`: easy-to-use wrapper for building HTTP APIs on top of `django`
- `djangorestframework-simplejwt`: utility for creation and management of user JWTs in conjunction with `django-rest-framework`
- `drf-yasg`: OpenAPI Swagger/ReDoc API documentation generation
- `django-filter`: utility for Django ORM-based HTTP query parameter filtering
- `celery` / `django-celery-results`: background task execution handling with a `redis` backend
- `pydantic`: typesafe type hints for business logic and cross-app communication
- `langchain`: easy-to-use abstraction layer for third-party LLMs such as `openai`
- `faker`: utility used to easily generate mocked values for end-to-end testing
- `sagemaker`: AWS SDK to deploy AI models to your AWS account
- `channels`: async server and WebSocket support for Django
- `PostgreSQL`: database engine
## Project Structure

Jusoor follows the standard Django app-based directory structure, with the following main directories:
- `appointments`: patient appointment scheduling and therapist working-hour availability management
- `authentication`: user authentication and authorization, along with some hashing and encryption utilities
- `chat`: LangChain-based LLM chatbot on top of a `pgvector`-based RAG for interacting with student patients
- `core`: a collection of utilities for DRF class viewset enhancement, such as action-based permissions, dynamically scoped querysets, formatted HTTP responses, and soft-deleted querysets
- `sentiment_ai`: serving of sentiment and mental disorder reports using deployed AI models
- `django-extensions`: a utility set for easier development, such as the `shell_plus` interactive notebook terminal interface and many more
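To illustrate the action-based permissions provided by `core`, here is a minimal sketch of how such a mixin could work. The class and attribute names below are hypothetical, not the actual `core` implementation:

```python
# Hypothetical sketch of "action-based permissions": a DRF viewset mixin
# that selects permission classes based on the current action name.
class ActionBasedPermissionsMixin:
    # maps an action name (e.g. "list", "create") to permission classes;
    # "*" acts as a catch-all fallback
    action_permissions: dict = {}

    def get_permissions(self):
        classes = self.action_permissions.get(
            self.action, self.action_permissions.get("*", [])
        )
        return [cls() for cls in classes]
```

A viewset mixing this in would declare, for example, `action_permissions = {"create": [IsTherapist], "*": [IsAuthenticated]}` to restrict only the `create` action.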
## Application Structure

Each application is structured uniformly and may contain the following files:
- `views.py`: definition of HTTP endpooints along with serialization, permission, and queryset scoping configurations
- `models.py`: schema definition of the application using Django's ORM
- `tasks.py`: definition of Celery background tasks
- `serializers.py`: definition of DRF HTTP payload and response serialization
- `tests.py`: all the created unit tests for the app
- `types.py`: custom pydantic types used to ensure type safety and minimize bugs in business logic
- `mock.py`: utilities dedicated to mocking a group of related ORM objects for easier unit and end-to-end testing
- `enums.py`: constants to be used in business logic and as schema field validation choices
- `urls.py`: routing configuration for the HTTP endpoints defined in `views.py`
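As an example of this per-file layout, an `enums.py` module typically pairs constants with Django-style choice tuples so the same values drive both business logic and field validation. The names below are illustrative, not taken from the actual codebase:

```python
# Hypothetical example of an enums.py module: constants reused in business
# logic and as "choices" for schema field validation.
class AppointmentStatus:
    PENDING = "PENDING"
    CONFIRMED = "CONFIRMED"
    CANCELLED = "CANCELLED"

    # Django-style (stored value, human-readable label) choice tuples
    CHOICES = [
        (PENDING, "Pending"),
        (CONFIRMED, "Confirmed"),
        (CANCELLED, "Cancelled"),
    ]
```

A model field would then declare `status = models.CharField(choices=AppointmentStatus.CHOICES, ...)`, while business logic compares against `AppointmentStatus.PENDING` directly.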
## How to run the project for development

- Install the Python dependencies from the `Pipfile` (using `pipenv` is recommended to preserve the pinned versions, but you can use other dependency managers like `poetry`):

```bash
pipenv install
```

- Add the environment variables shown in `.env.example`.
> [!WARNING]
> You must add the populated environment variables in a new file named `.env` to be able to run the application.
- Run the migrations after configuring your database settings:

```bash
python manage.py migrate
```

- Upload the model weights in the `/model_weights` directory to AWS SageMaker and deploy them with the following command:

```bash
python manage.py deploy-model --model-data <s3-model-artifact-path> --role <configured-sagemaker-role>
```

> [!NOTE]
> You need a valid IAM role that can execute the SageMaker actions required to deploy the model. More info here. Deploying on other platforms is also possible, but it requires updating the interfacing classes in the `sentiment_ai/agents` directory.
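Swapping SageMaker for another serving platform means implementing the same interface that the rest of the app calls into. The sketch below shows the general pattern; the class and method names are assumptions for illustration, not the actual `sentiment_ai/agents` API:

```python
from abc import ABC, abstractmethod

# Hypothetical interface mirroring the role of the interfacing classes in
# sentiment_ai/agents: the rest of the app depends only on predict(), so an
# alternative platform only needs a new subclass.
class SentimentModelBackend(ABC):
    @abstractmethod
    def predict(self, text: str) -> dict:
        """Return a sentiment prediction for the given text."""

# Example alternative backend: a local stub instead of a SageMaker endpoint.
class LocalStubBackend(SentimentModelBackend):
    def predict(self, text: str) -> dict:
        return {"label": "neutral", "score": 0.0}
```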
- Run the main server process:

```bash
python manage.py runserver
```

- Spawn a Redis instance to be used for background task scheduling and caching:

```bash
docker run --rm -p 6379:6379 redis:7
```

> [!NOTE]
> Make sure that Docker is running on your device before running the command!
- Run a `celery` worker in a second terminal to start handling async tasks, using the spawned `redis` instance for cross-process communication (I use `watchdog` to restart the process on every application source code update):

```bash
watchmedo auto-restart --directory=./ --pattern=*.py --recursive -- celery -A jusoor_backend worker -l INFO -P solo
```

> [!NOTE]
> Make sure to create a new DB record with the desired system prompt and supply its ID to the chatbot wrapper class, as the current implementation relies on a dynamic DB-stored chatbot configuration.
After running the main process, you can find the Swagger API docs at `localhost:8000/swagger/`.
## How to deploy for production

- App server: the supplied deployment configuration works seamlessly with Heroku, using the `Pipfile` to specify the needed dependencies and a `Procfile` to define two processes: the main HTTP server and the background task worker.
- Database: you can use any database provider; make sure the DB dialect supports `JSON`/`JSONB` fields (I personally recommend `PostgreSQL`).
- AI models: I recommend AWS SageMaker, using the configured `deploy-model` command.
- Background task scheduling: you need to provision a Redis instance to be used for caching, WebSocket connection handling, and background task execution.
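For reference, the two Heroku processes mentioned above would be declared in a `Procfile` roughly like this. This is only a sketch: the module paths and the choice of ASGI server (e.g. `daphne`, which suits the `channels`-based WebSocket support) are assumptions and may differ from the repository's actual `Procfile`:

```
web: daphne jusoor_backend.asgi:application --port $PORT --bind 0.0.0.0
worker: celery -A jusoor_backend worker -l INFO
```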
## Limitations

- Both the emotion and mental disorder models work only on `English` text, and were trained on relatively long text passages (Reddit posts), which can hinder their performance on typical short chat messages.
- The reporting endpoint takes at most the first 15 chat messages from the specified time frame to avoid overwhelming the reporting LLM's context window.
- We implemented minimal measures to prevent abuse of the chatbot system prompt, but a dedicated text abuse classifier pipeline would be more suitable for handling this task.
- Unit tests are available only for the `/appointments` and `/surveys` apps.