Inspired by The Irishman, MyTimeMachine performs personalized de-aging and aging with high fidelity and identity preservation from ~50 selfies, extendable to temporally consistent video editing.
For more visual results, please check out our project page.
- [2025.08] Release the training and inference code.
- [2025.03] Paper accepted to SIGGRAPH 2025 (TOG).
- Clone Repo:

  ```bash
  git clone git@github.com:luchaoqi/mytimemachine.git
  cd mytimemachine
  ```
- Create Conda Environment and Install Dependencies: we build our environment on top of SAM and use the same base setup, so you can directly set up that environment and install the following additional package:

  ```bash
  pip install lpips
  # you can also use the following environment directly
  conda env create -f environment/mytimemachine.yml
  ```
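Once the environment is created, a quick sanity check like the following (illustrative, not a script shipped with this repo) confirms that PyTorch and `lpips` resolve correctly:

```python
# Quick environment sanity check (illustrative -- not part of the repo)
import torch
import lpips

print('torch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())

# Instantiate the LPIPS metric once to confirm its weights download correctly
loss_fn = lpips.LPIPS(net='alex')
x = torch.rand(1, 3, 256, 256) * 2 - 1  # LPIPS expects inputs in [-1, 1]
print('LPIPS(x, x) =', loss_fn(x, x).item())  # should be ~0 for identical inputs
```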
Similar to SAM (and thanks to its authors), we provide all the checkpoints (including various auxiliary models) listed below. Please download them and save them in the `pretrained_models` folder:
| Path | Description |
|---|---|
| SAM | SAM trained on the FFHQ dataset for age transformation. This is the global aging prior used for MyTimeMachine. |
| pSp Encoder | pSp taken from pixel2style2pixel, trained on the FFHQ dataset for StyleGAN inversion. |
| FFHQ StyleGAN | StyleGAN model pretrained on FFHQ, taken from rosinality with 1024×1024 output resolution. |
| IR-SE50 Model | Pretrained IR-SE50 model from TreB1eN, used for our ID loss during training. |
| VGG Age Classifier | VGG age classifier from DEX, fine-tuned on the FFHQ-Aging dataset for use in our aging loss. |
Make sure that all model paths are correctly configured in `paths_config.py`.
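For reference, a minimal sketch of what the checkpoint entries in `paths_config.py` might look like; the key names and filenames below are illustrative assumptions, so match them to the actual dictionary in the repo:

```python
# paths_config.py (illustrative sketch -- key names and filenames are
# assumptions; align them with the actual dictionary used by the repo)
model_paths = {
    'sam_ffhq_aging': 'pretrained_models/sam_ffhq_aging.pt',          # SAM aging prior
    'psp_ffhq_encode': 'pretrained_models/psp_ffhq_encode.pt',        # pSp inversion encoder
    'stylegan_ffhq': 'pretrained_models/stylegan2-ffhq-config-f.pt',  # FFHQ StyleGAN generator
    'ir_se50': 'pretrained_models/model_ir_se50.pth',                 # IR-SE50 for ID loss
    'age_predictor': 'pretrained_models/dex_age_classifier.pth',      # VGG age classifier (DEX)
}
```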
Thank you all for your interest in our dataset! As the dataset contains celebrity images, we need to ensure proper copyright clearance before release, so it is not available at this stage. Please stay tuned for updates! 🙂
If you want to quickly verify the code setup without setting up the dataset, you can skip this step and use the dataset here for verification.
Otherwise, please collect your own dataset and follow the steps below.
We follow the steps below for data preprocessing:

- Image Enhancement

  We follow GFPGAN to enhance face quality. Other SOTA enhancers can also be used as alternatives.

- Quality Filtering & ID Deduplication (Optional)

  We follow MyStyle to filter out low-quality faces and remove duplicates or highly similar faces.

- Face Alignment

  We use dlib to align faces; you can use the script here for multi-processing. A simplified alignment sketch is shown after this list.
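For illustration, here is a simplified dlib-based alignment sketch. It is not the repo's multi-processing script (which follows the standard FFHQ-style alignment); the landmark-model path and the crop heuristics are assumptions:

```python
# Simplified face alignment with dlib (illustrative -- not the repo's exact
# script; landmark-model path and crop heuristics below are assumptions).
import dlib
import numpy as np
from PIL import Image

detector = dlib.get_frontal_face_detector()
# 68-point landmark model, downloadable from http://dlib.net/files/
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

def align_face(path, output_size=1024):
    img = np.array(Image.open(path).convert('RGB'))
    faces = detector(img, 1)
    if not faces:
        return None  # no face detected
    landmarks = predictor(img, faces[0])
    pts = np.array([[p.x, p.y] for p in landmarks.parts()])
    # Eye centers from the 68-point convention (points 36-41 and 42-47)
    left_eye, right_eye = pts[36:42].mean(0), pts[42:48].mean(0)
    eye_center = (left_eye + right_eye) / 2
    angle = np.degrees(np.arctan2(*(right_eye - left_eye)[::-1]))
    # Rotate so the eyes are horizontal, then crop a square around the face
    pil = Image.fromarray(img).rotate(angle, center=tuple(eye_center),
                                      resample=Image.BILINEAR)
    half = int(np.linalg.norm(right_eye - left_eye) * 2.2)  # heuristic crop size
    box = (int(eye_center[0] - half), int(eye_center[1] - half),
           int(eye_center[0] + half), int(eye_center[1] + half))
    return pil.crop(box).resize((output_size, output_size), Image.LANCZOS)
```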
The final dataset structure is as follows:

```
{img_folder_name}
├── {age}_{idx}.ext
```

`{age}` contains the age of the face in the image. `{idx}` can be any number for multiple images of the same age. `ext` can be any common image extension as defined in `IMG_EXTENSIONS` in `utils/data_utils.py`.
An example of the final dataset:
```
30_70
├── 30_11.jpeg
├── 30_14.jpeg
├── 30_5.jpeg
├── 30_8.jpeg
├── 31_19.jpeg
├── 31_1.jpeg
...
├── 68_69.jpeg
├── 69_29.jpeg
├── 69_53.jpeg
├── 70_42.jpeg
├── 70_43.jpeg
```
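Before training, a small script like the following can sanity-check your folder against the `{age}_{idx}.ext` convention. The extension set here is an assumption; the authoritative list is `IMG_EXTENSIONS` in `utils/data_utils.py`:

```python
# Sanity-check a dataset folder against the {age}_{idx}.ext naming convention.
import re
from pathlib import Path

# Illustrative extension set -- defer to IMG_EXTENSIONS in utils/data_utils.py
IMG_EXTENSIONS = {'.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.tiff', '.webp'}
NAME_RE = re.compile(r'^(\d+)_(\d+)$')  # {age}_{idx}

def check_dataset(folder):
    ages = []
    for path in sorted(Path(folder).iterdir()):
        if path.suffix.lower() not in IMG_EXTENSIONS:
            print(f'skipping non-image file: {path.name}')
            continue
        match = NAME_RE.match(path.stem)
        if match is None:
            print(f'bad filename (expected {{age}}_{{idx}}.ext): {path.name}')
            continue
        ages.append(int(match.group(1)))
    if ages:
        print(f'{len(ages)} images, age range {min(ages)}-{max(ages)}')

check_dataset('30_70')  # e.g., the example folder above
```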
Run the following command to train a personalized aging model:

```bash
python scripts/train.py \
--dataset_type=ffhq_aging \
--workers=6 \
--batch_size=4 \
--test_batch_size=4 \
--test_workers=6 \
--val_interval=1000 \
--save_interval=2000 \
--start_from_encoded_w_plus \
--id_lambda=0.1 \
--lpips_lambda=0.1 \
--lpips_lambda_aging=0.1 \
--lpips_lambda_crop=0.6 \
--l2_lambda=0.25 \
--l2_lambda_aging=0.25 \
--l2_lambda_crop=1 \
--w_norm_lambda=0.005 \
--aging_lambda=5 \
--cycle_lambda=1 \
--input_nc=4 \
--target_age=uniform_random \
--use_weighted_id_loss \
--checkpoint_path={path_to_sam_ffhq_aging.pt} \
--train_dataset={path_to_training_data} \
--exp_dir={path_to_experiment_saving_dir} \
--train_encoder \
--adaptive_w_norm_lambda=7
```

The `adaptive_w_norm_lambda` flag corresponds to Eqn. 7 in the paper. It controls how close the personalized aging stays to global aging, and its best value can therefore be sensitive and person-specific.
- If your results seem underfitted and too close to global aging, you can use higher values, e.g., 10–30.
- If your results seem overfitted, you can use lower values, e.g., 5.

We recommend starting with a value of 7. A small sweep sketch is shown below.
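As a hypothetical example, the sweep below launches one training run per candidate value; the paths and the value grid are placeholders, and all flags come from the command above:

```python
# Hypothetical sweep over adaptive_w_norm_lambda -- launches scripts/train.py
# once per candidate value; paths below are placeholders.
import subprocess

for lam in [5, 7, 10, 20, 30]:
    subprocess.run([
        'python', 'scripts/train.py',
        '--dataset_type=ffhq_aging',
        f'--adaptive_w_norm_lambda={lam}',
        '--checkpoint_path=pretrained_models/sam_ffhq_aging.pt',  # assumed location
        '--train_dataset=/path/to/training_data',
        f'--exp_dir=experiments/lambda_{lam}',
        # ... keep the remaining flags identical to the training command above
    ], check=True)
```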
We provide code to run inference after training the model:

```bash
python helper.py --img_dir={path_to_test_data} --model_path={path_to_experiment_saving_dir} --blender --output_dir={path_to_output_dir}
```

For example, you may use this checkpoint, trained on Al Pacino (ages 30–70) with `adaptive_w_norm_lambda=7`, and obtain the results shown below. The input is on the left, followed by the predicted appearance at every 10-year interval from age 0 to 100.

For pre-trained (SAM) results:

```bash
python helper.py --img_dir={path_to_test_data} --desc='elaine_sam_pretrained' --output_dir={path_to_output_dir}
```

If you find our repo useful for your research, please consider citing our paper:
```bibtex
@article{qiMyTimeMachinePersonalizedFacial2025,
  title = {{{MyTimeMachine}}: {{Personalized Facial Age Transformation}}},
  shorttitle = {{{MyTimeMachine}}},
  author = {Qi, Luchao and Wu, Jiaye and Gong, Bang and Wang, Annie N. and Jacobs, David W. and Sengupta, Roni},
  date = {2025-07-27},
  journaltitle = {ACM Trans. Graph.},
  volume = {44},
  number = {4},
  pages = {140:1--140:16},
  issn = {0730-0301},
  doi = {10.1145/3731172},
  url = {https://dl.acm.org/doi/10.1145/3731172}
}
```
