abir2py/oralnetv1
Oral Cancer Detection (Deep Learning + ML Fusion)

This repository contains a Flask-based REST API that provides a combined diagnostic assessment for oral lesions by integrating deep learning and machine learning models. The system analyzes an uploaded oral image along with clinical tabular data to predict the likelihood of Oral Squamous Cell Carcinoma (OSCC) and the severity of dysplasia, and to produce a final fused interpretation that synthesizes both sources of information into a meaningful diagnostic message.

The backend relies on a fine-tuned PyTorch ResNet-50 model that performs image classification to distinguish between OSCC and Leukoplakia. The original network architecture is retained while the fully connected layers are replaced with a customized classifier designed specifically for this task, ending in a sigmoid activation that returns the probability of OSCC. In parallel, a Scikit-learn Random Forest model processes structured patient information such as lesion localization, lesion size, lifestyle factors, demographic attributes, and preliminary dysplasia grading. This model outputs a numerical dysplasia severity score ranging from 0 to 4.

Both models are loaded at server startup for fast inference, and the API is designed with robust error handling to manage missing files, malformed inputs, base64 decoding failures, and mismatched tabular feature formats. The feature columns are validated carefully to ensure exact alignment with the model’s training configuration, minimizing the risk of incorrect inference due to missing or inconsistent data.
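The tabular validation step could look something like the following sketch. The column names here are placeholders; the real list would come from the Random Forest's training configuration (for example, Scikit-learn's `feature_names_in_` attribute on the fitted model).

```python
import pandas as pd

# Hypothetical feature list; the real names must match the columns the
# Random Forest was trained on, in the same order.
EXPECTED_COLUMNS = [
    "lesion_site", "lesion_size_mm", "tobacco_use", "age", "dysplasia_grade",
]


def validate_tabular_input(payload: dict) -> pd.DataFrame:
    """Reject missing or unexpected fields, then return a one-row frame
    with columns ordered exactly as in the training configuration."""
    missing = [c for c in EXPECTED_COLUMNS if c not in payload]
    if missing:
        raise ValueError(f"Missing clinical fields: {missing}")
    extra = [k for k in payload if k not in EXPECTED_COLUMNS]
    if extra:
        raise ValueError(f"Unexpected clinical fields: {extra}")
    return pd.DataFrame([payload])[EXPECTED_COLUMNS]
```

Failing fast here, before calling the model, is what turns a silent mis-prediction into a clear 400-style error for the client.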

The core functionality is exposed through a single endpoint, /predict_diagnosis, which accepts a JSON body containing a base64-encoded image and a dictionary of clinical parameters. The endpoint returns three components: the image-based prediction with OSCC probabilities, the dysplasia severity score from the tabular model, and a final combined diagnostic interpretation. The fusion logic evaluates the confidence of the deep learning model and the severity of the clinical model to determine a medically meaningful message. For instance, if the image model shows high probability of OSCC and the clinical model indicates high-grade dysplasia, the system emphasizes an urgent recommendation. Conversely, if the image and tabular predictions disagree, the message highlights the discrepancy and encourages further examination.
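The fusion rule described above might be sketched as a simple threshold check; the cutoffs (0.8 for OSCC probability, 3 for severity) and the message wording are illustrative assumptions, not the repository's exact logic.

```python
def fuse_predictions(oscc_prob: float, severity: int) -> str:
    """Combine the image probability (0-1) and the dysplasia severity
    score (0-4) into a single diagnostic message. Thresholds are
    illustrative assumptions."""
    high_image = oscc_prob >= 0.8
    high_clinical = severity >= 3
    if high_image and high_clinical:
        return ("High risk: image and clinical findings both suggest OSCC. "
                "Urgent specialist referral recommended.")
    if high_image != high_clinical:
        return ("Discrepant findings: the image and clinical models disagree. "
                "Further examination advised.")
    return "Low combined risk: continue routine monitoring."
```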

Running the application only requires installing the dependencies listed in the requirements file, placing the model files (resnet_model.pth and random_forest_model.pkl) in the project directory, and executing python app.py. The server hosts the API at 0.0.0.0:5000, making it accessible both locally and across a network. The project is entirely modular, so the included models can be replaced with updated or custom-trained versions without modifying the API workflow.
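A client would then encode an image as base64 and POST it with the clinical fields to /predict_diagnosis. The sketch below builds such a request body; the JSON field names ("image", "clinical_data") are assumptions about the expected payload shape.

```python
import base64
import json


def build_payload(image_bytes: bytes, clinical: dict) -> str:
    """Assemble the JSON body for /predict_diagnosis.
    Field names are illustrative assumptions."""
    payload = {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "clinical_data": clinical,
    }
    return json.dumps(payload)


# The resulting string can be sent, for example, with
# requests.post("http://localhost:5000/predict_diagnosis",
#               data=build_payload(open("lesion.jpg", "rb").read(), clinical),
#               headers={"Content-Type": "application/json"})
```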
