# Towards an Inclusive Mobile Web: A Dataset and Framework for Focusability in UI Accessibility (WWW'25 Web4Good)

📄 Read the full paper here: Link.
## Installation

- python>=3.8
- for the installation scripts, see `.ci/install-dev.sh` and `.ci/install.sh`

```bash
bash .ci/install-dev.sh
bash .ci/install.sh
```

## Dataset

Check the DOI URL or the following Google Drive URLs:
- extra-nos-raw-labeled
  - download-url
  - needed for train/predict on nos-raw-labeled; you can download it or generate it with the preprocess script
- extra-rico-labeled
  - download-url
  - needed for train/predict on rico-labeled; you can download it or generate it with the preprocess script
- nos-raw-labeled
  - download-url
  - needed for train/predict on nos-raw-labeled
- rico-labeled
  - download-url
  - needed for train/predict on rico-labeled
- mixed-split
  - download-url
  - needed for train/predict on mixed; the dataset split files for the mixed experiments
- weights
  - download-url
  - needed for predict; the trained model weights
Downloaded directory structure:

```
.
├── extra-nos-raw-labeled
│   ├── box
│   ├── box_feat
│   ├── graph
│   └── text_feat
├── extra-rico-labeled
│   ├── box
│   ├── box_feat
│   ├── graph
│   └── text_feat
├── nos-raw-labeled
│   ├── hierarchy
│   └── screenshot
├── rico-labeled
│   ├── hierarchy
│   └── screenshot
├── mixed-split
└── weights
    ├── mixed
    ├── nos-raw-labeled
    └── rico-labeled
```

Put the dataset into `NOS/dataset`:
```bash
cd NOS
mkdir dataset
mv ${nos-raw-labeled-path} ./dataset/nos-raw-labeled
mv ${rico-labeled-path} ./dataset/rico-labeled
```

If you have downloaded the `extra` part, run:

```bash
mv ${extra-nos-raw-labeled-path}/* ./dataset/nos-raw-labeled
mv ${extra-rico-labeled-path}/* ./dataset/rico-labeled
```

Otherwise, generate the `extra` part by:
```bash
# first edit the file `preprocess.py`:
#   line 14: dataset_dir = "./dataset/nos-raw-labeled"   <-- choose one dataset
#   line 15: # dataset_dir = "./dataset/rico-labeled"    <-- and comment out the other line
# then run the preprocess script to generate `box`, `box_feat`, `graph`, and `text_feat` (about 30 min per dataset)
python preprocess.py
# for the `nos-raw` and `rico` datasets, repeat the above steps twice (once per dataset)
```

Finally, the `NOS` workspace will be:
```
.
├── dataset
│   ├── nos-raw-labeled
│   │   ├── box
│   │   ├── box_feat
│   │   ├── graph
│   │   ├── hierarchy
│   │   ├── screenshot
│   │   ├── text_feat
│   │   └── dataset_split_{1,2,3}.json
│   ├── rico-labeled
│   │   ├── box
│   │   ├── box_feat
│   │   ├── graph
│   │   ├── hierarchy
│   │   ├── screenshot
│   │   ├── text_feat
│   │   └── dataset_split_{1,2,3}.json
│   └── mixed  # combine the contents of `nos-raw-labeled` and `rico-labeled`
│       ├── box
│       ├── box_feat
│       ├── graph
│       ├── hierarchy
│       ├── screenshot
│       ├── text_feat
│       └── dataset_split_{1,2,3}.json
├── main.py
├── config.py
├── data_loader.py
├── model.py
├── predict.py
├── utils.py
└── IGNN
```
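As a quick sanity check before training or prediction, the layout above can be verified programmatically. A minimal sketch (the `missing_subdirs` helper is hypothetical, not part of the repo):

```python
from pathlib import Path

# Subdirectories every prepared dataset folder should contain,
# per the workspace tree above.
EXPECTED = ("box", "box_feat", "graph", "hierarchy", "screenshot", "text_feat")

def missing_subdirs(dataset_dir):
    """Return the expected subdirectories that are absent under dataset_dir."""
    root = Path(dataset_dir)
    return [name for name in EXPECTED if not (root / name).is_dir()]

for ds in ("nos-raw-labeled", "rico-labeled", "mixed"):
    print(ds, "missing:", missing_subdirs(f"./dataset/{ds}"))
```

An empty list for each dataset means preprocessing completed and the files were moved into place.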
## Predict

Copy the checkpoint files:

```bash
mkdir -p ./model_checkpoint/ignn/nos-raw-labeled_split_1
mkdir -p ./model_checkpoint/ignn/rico-labeled_split_1
mv ${weights-path}/nos-raw-labeled/split_1.pt ./model_checkpoint/ignn/nos-raw-labeled_split_1
mv ${weights-path}/rico-labeled/split_1.pt ./model_checkpoint/ignn/rico-labeled_split_1
```

Then run:

```bash
source .env/bin/activate
python -u main.py --mode predict --checkpoint split_1 --dataset nos-raw --split 1 --gpu 0
python -u main.py --mode predict --checkpoint split_1 --dataset rico --split 1 --gpu 0
```

The results will be in `./predict_result/ignn`.
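To cover the remaining splits, the two commands above can be generated for every dataset/split pair. A sketch assuming splits 1-3 (matching the `dataset_split_{1,2,3}.json` files) and one checkpoint named `split_<n>` per split; the `predict_commands` helper is hypothetical:

```python
def predict_commands(datasets=("nos-raw", "rico"), splits=(1, 2, 3), gpu=0):
    """Build the main.py predict command for each dataset/split pair."""
    return [
        f"python -u main.py --mode predict --checkpoint split_{s} "
        f"--dataset {ds} --split {s} --gpu {gpu}"
        for ds in datasets
        for s in splits
    ]

# Print the commands; to execute them instead, split each with
# shlex.split and pass it to subprocess.run.
for cmd in predict_commands():
    print(cmd)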
## Train

Run:

```bash
source .env/bin/activate
python -u main.py --dataset nos-raw --split 1 --mode train --gpu 0
python -u main.py --dataset rico --split 1 --mode train --gpu 0
```

The checkpoints will be saved in `./model_checkpoint/ignn`.
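A trained checkpoint can then be fed back to `--mode predict`. A hypothetical helper for locating it, assuming the same `<dataset>-labeled_split_<n>/split_<n>.pt` layout used in the predict section:

```python
from pathlib import Path

def checkpoint_path(dataset, split, root="./model_checkpoint/ignn"):
    """Expected location of a checkpoint, assuming the predict-section layout."""
    return Path(root) / f"{dataset}-labeled_split_{split}" / f"split_{split}.pt"

print(checkpoint_path("nos-raw", 1))
```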
## Citation

```bibtex
@inproceedings{10.1145/3696410.3714523,
  title     = {Towards an Inclusive Mobile Web: A Dataset and Framework for Focusability in UI Accessibility},
  author    = {Gu, Ming and Pei, Lei and Zhou, Sheng and Shen, Ming and Wu, Yuxuan and Gao, Zirui and Wang, Ziwei and Shan, Shuo and Jiang, Wei and Li, Yong and Bu, Jiajun},
  booktitle = {Proceedings of the ACM on Web Conference 2025},
  pages     = {5096--5107},
  numpages  = {12},
  year      = {2025}
}
```