Fake news detection
==============================

Description of the project

This repository contains the exam project for the DTU MLOps course. Following the intent of the Kaggle Fake News competition, we aim to build a system capable of classifying unreliable news articles.

Framework

We plan on using the Hugging Face Transformers framework and will fine-tune a pre-trained model provided by the framework.
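
As a minimal sketch, a pre-trained model and tokenizer can be loaded as shown below. The `bert-base-uncased` checkpoint is an assumption here, not necessarily the exact model used in this project.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint; the project's actual model choice may differ.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,  # 0 = reliable, 1 = unreliable
)

inputs = tokenizer(
    "Example news article text",
    return_tensors="pt",
    truncation=True,
    max_length=512,
)
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2])
```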

Data description

The data is made available through the competition. The dataset contains the following features for each news article: a unique ID, the title, the author, the textual content, and a binary label, where 0 denotes "reliable" and 1 denotes "unreliable". The model will be trained on the DTU HPC cluster.
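
A quick way to inspect the raw data is sketched below. The path `data/raw/train.csv` is an assumption about where the downloaded competition file is placed; see the project organization and `make data` for the actual data flow.

```python
import pandas as pd

# Assumed location of the raw competition file.
df = pd.read_csv("data/raw/train.csv")

print(df.columns.tolist())         # expected: ['id', 'title', 'author', 'text', 'label']
print(df["label"].value_counts())  # 0 = reliable, 1 = unreliable
```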

Models

We expect to use a pre-trained transformer model for natural language processing, specifically BERT. We use the base version, which has 12 layers, 12 self-attention heads, and a hidden size of 768.
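
These dimensions can be checked against the published configuration of the checkpoint (again assuming `bert-base-uncased`):

```python
from transformers import BertConfig

config = BertConfig.from_pretrained("bert-base-uncased")
print(config.num_hidden_layers)    # 12
print(config.num_attention_heads)  # 12
print(config.hidden_size)          # 768
```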

Usage

To use this project, you will need to install the required packages listed in requirements.txt. You can then run the scripts in the src directory to process the data, build features, train models, and make predictions. The Makefile includes commands for running these scripts, such as make data, make train, and make predict.
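
For illustration only, a prediction with a trained checkpoint might look like the sketch below. The checkpoint path and the Transformers-based loading are assumptions; `src/models/predict_model.py` is the actual entry point behind `make predict`.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical checkpoint directory produced by training.
checkpoint = "models/checkpoint"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

inputs = tokenizer(
    "Breaking news headline and body ...",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    logits = model(**inputs).logits

label = logits.argmax(dim=-1).item()
print("unreliable" if label == 1 else "reliable")
```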

License

This project is licensed under the terms of the MIT License.

Project Organization

├── LICENSE
├── Makefile           <- Makefile with commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
└── tox.ini            <- tox file with settings for running tox; see tox.readthedocs.io

Project based on the cookiecutter data science project template. #cookiecutterdatascience
