
Transformer architectures in RUL

This repository collects various transformer architectures for remaining useful life (RUL) prediction and predictive maintenance.

It includes implementations of models from the papers listed in the Bibtex section below.

Dataset

We use NASA's C-MAPSS turbofan degradation dataset. It consists of four sub-datasets of increasing complexity: each engine operates normally at the beginning of its series and is run to failure. The goal is to predict the RUL of each turbofan engine in the test set. See the table below for a short overview of the four challenges.

| Dataset | Operating conditions | Fault modes | Train size (engines) | Test size (engines) |
|---------|----------------------|-------------|----------------------|---------------------|
| FD001   | 1                    | 1           | 100                  | 100                 |
| FD002   | 6                    | 1           | 260                  | 259                 |
| FD003   | 1                    | 2           | 100                  | 100                 |
| FD004   | 6                    | 2           | 248                  | 249                 |
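Since each training engine is run to failure, RUL targets have to be derived from the cycle index. A common choice in the C-MAPSS literature is a piecewise-linear target: RUL is capped at a constant early in life and decreases linearly to zero at failure. The sketch below illustrates this labelling (the function name and the cap of 125 cycles are assumptions, not taken from this repository's code):

```python
def piecewise_rul(n_cycles, max_rul=125):
    """Piecewise-linear RUL target for one run-to-failure engine.

    Early cycles are capped at max_rul; the target then decreases
    linearly to 0 at the final (failure) cycle.
    """
    return [min(max_rul, n_cycles - cycle) for cycle in range(1, n_cycles + 1)]

labels = piecewise_rul(200)
print(labels[:3], labels[-3:])  # [125, 125, 125] [2, 1, 0]
```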

Methodology

The attention mechanisms below can be used individually or combined for better performance. Run each method with the following commands:

Vanilla Transformer

Set attention_layer_types = "full" and run:

python src/main.py

Log Sparse Transformer

Set attention_layer_types = "log" and run:

python src/main.py

Prob Sparse Transformer

Set attention_layer_types = "prob" and run:

python src/main.py

Autoformer

Set attention_layer_types = "auto" and run:

python src/main.py
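The combined variants in the Experiments table (e.g. Auto + Log) suggest that attention_layer_types can name more than one mechanism. The sketch below shows one way such a specification might be parsed and validated; the function name and the accepted separators are assumptions, not the repository's actual implementation:

```python
# Hypothetical parsing of an attention_layer_types specification; the
# real dispatch lives in this repository's model code.
ATTENTION_TYPES = {"full", "log", "prob", "auto"}

def parse_attention_layer_types(spec):
    """Split e.g. "auto+full" or "auto full" into validated type names."""
    types = spec.replace("+", " ").split()
    unknown = [t for t in types if t not in ATTENTION_TYPES]
    if unknown:
        raise ValueError(f"unknown attention type(s): {unknown}")
    return types

print(parse_attention_layer_types("auto+full"))  # ['auto', 'full']
```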

Experiments

We ran several Transformer-based architectures with default parameters from corresponding papers. Here are the results:

| Model | Dataset | RMSE | S-score | Link |
|-------|---------|------|---------|------|
| Transformer | FD001 | 13.49 | 359.75 | [TensorBoard_link] |
| Log Sparse Transformer | FD001 | - | - | [TensorBoard_link] |
| Prob Sparse Transformer | FD001 | - | - | [TensorBoard_link] |
| Full + Log Transformer | FD001 | - | - | [TensorBoard_link] |
| Prob + Log Transformer | FD001 | 13.11 | 311.91 | [http://127.0.0.1:8080/dashboard/studies/15] |
| Autoformer | FD001 | 13.10 | 292.57 | [http://127.0.0.1:8080/dashboard/studies/11] |
| Auto + Log Transformer | FD001 | 12.61 | 224.70 | [http://127.0.0.1:8080/dashboard/studies/13] |
| Auto + Full Transformer | FD001 | 12.37 | 234.32 | [http://127.0.0.1:8080/dashboard/studies/7] |
| Auto + Prob Transformer | FD001 | 12.21 | 263.15 | [http://127.0.0.1:8080/dashboard/studies/14] |
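The two metrics above are standard for C-MAPSS: root mean squared error, and the asymmetric scoring function from the original challenge, which penalises late predictions (overestimated RUL) more heavily than early ones. A minimal sketch, assuming the commonly used form of the score (this may differ in detail from this repository's evaluation code):

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error between true and predicted RUL."""
    n = len(y_true)
    return math.sqrt(sum((p - t) ** 2 for t, p in zip(y_true, y_pred)) / n)

def s_score(y_true, y_pred):
    """C-MAPSS scoring function: for d = predicted - true,
    early predictions (d < 0) use exp(-d/13) - 1,
    late predictions (d >= 0) use exp(d/10) - 1,
    so late predictions are penalised more heavily."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        d = p - t
        total += math.exp(-d / 13) - 1 if d < 0 else math.exp(d / 10) - 1
    return total
```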

Optimization with the Optuna framework

We can optimize the network's hyperparameters using the Optuna framework.

To install Optuna, follow the instructions in its repository.

Installation

Optuna is available at the Python Package Index.

# PyPI
$ pip install optuna

Run Optimization

Run the optimization script to find the best parameters:

python src/optimization.py

Web Dashboard

Optuna Dashboard is a real-time web dashboard for Optuna. Run the command in another terminal:

optuna-dashboard sqlite:///db.sqlite3

Docker

Run the Dockerfile (to be completed)
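As a starting point for the Dockerfile, a minimal sketch is given below. The base image tag, the requirements.txt file, and the default command are assumptions that should be adjusted to match this repository and your CUDA driver:

```dockerfile
# Hypothetical Dockerfile sketch; pick a pytorch/pytorch tag that matches
# your GPU driver and CUDA version.
FROM pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime

WORKDIR /app

# Install Python dependencies first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the repository.
COPY . .

CMD ["python", "src/main.py"]
```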

Step 1: Build the Docker image

You can build the Docker image by navigating to the directory containing the Dockerfile and running the following command:

docker build -t pytorch-gpu . -f Dockerfile

Step 2: Run the Docker Container

Once the image is built, run a container with:

docker run -it -v /home/ktp_user/Documents/Github_repo/transformers_RUL:/app --name hieuvt-container --gpus '"device=1"' pytorch-gpu

Step 3: Copy files out of the container

Copy the container's /app directory to the host machine:

docker cp <container_name>:/app /local/path/for/directory

Bibtex

@inproceedings{zerveas2021transformer,
  title={A transformer-based framework for multivariate time series representation learning},
  author={Zerveas, George and Jayaraman, Srideepika and Patel, Dhaval and Bhamidipaty, Anuradha and Eickhoff, Carsten},
  booktitle={Proceedings of the 27th ACM SIGKDD conference on knowledge discovery \& data mining},
  pages={2114--2124},
  year={2021}
}

@inproceedings{NEURIPS2019_6775a063,
 author = {Li, Shiyang and Jin, Xiaoyong and Xuan, Yao and Zhou, Xiyou and Chen, Wenhu and Wang, Yu-Xiang and Yan, Xifeng},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {H. Wallach and H. Larochelle and A. Beygelzimer and F. d\textquotesingle Alch\'{e}-Buc and E. Fox and R. Garnett},
 pages = {},
 publisher = {Curran Associates, Inc.},
 title = {Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting},
 url = {https://proceedings.neurips.cc/paper_files/paper/2019/file/6775a0635c302542da2c32aa19d86be0-Paper.pdf},
 volume = {32},
 year = {2019}
}

@inproceedings{zhou2021informer,
  title={Informer: Beyond efficient transformer for long sequence time-series forecasting},
  author={Zhou, Haoyi and Zhang, Shanghang and Peng, Jieqi and Zhang, Shuai and Li, Jianxin and Xiong, Hui and Zhang, Wancai},
  booktitle={Proceedings of the AAAI conference on artificial intelligence},
  volume={35},
  number={12},
  pages={11106--11115},
  year={2021}
}

@inproceedings{NEURIPS2021_bcc0d400,
 author = {Wu, Haixu and Xu, Jiehui and Wang, Jianmin and Long, Mingsheng},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {M. Ranzato and A. Beygelzimer and Y. Dauphin and P.S. Liang and J. Wortman Vaughan},
 pages = {22419--22430},
 publisher = {Curran Associates, Inc.},
 title = {Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting},
 url = {https://proceedings.neurips.cc/paper_files/paper/2021/file/bcc0d400288793e8bdcd7c19a8ac0c2b-Paper.pdf},
 volume = {34},
 year = {2021}
}


@InProceedings{pmlr-v162-zhou22g,
  title = 	 {{FED}former: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting},
  author =       {Zhou, Tian and Ma, Ziqing and Wen, Qingsong and Wang, Xue and Sun, Liang and Jin, Rong},
  booktitle = 	 {Proceedings of the 39th International Conference on Machine Learning},
  pages = 	 {27268--27286},
  year = 	 {2022},
  editor = 	 {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume = 	 {162},
  series = 	 {Proceedings of Machine Learning Research},
  month = 	 {17--23 Jul},
  publisher =    {PMLR},
  pdf = 	 {https://proceedings.mlr.press/v162/zhou22g/zhou22g.pdf},
  url = 	 {https://proceedings.mlr.press/v162/zhou22g.html},
}
