BinKit is a binary code similarity analysis (BCSA) benchmark. BinKit provides scripts for building a cross-compiling environment, as well as the compiled dataset. The original dataset includes 1,352 distinct combinations of compiler options, covering 8 architectures, 5 optimization levels, and 13 compilers. We have tested this code on Ubuntu 16.04.
For more details, please check our paper.
For a BCSA tool and ground truth building, please check TikNib.
You can download our dataset and toolchain as below. The link will be changed to
git-lfs soon.
- Normal dataset
- SizeOpt dataset
- Noinline dataset
- PIE dataset
- LTO dataset
- Obfus dataset
- Obfus 2-Loop dataset
The data below is used only for our evaluation.

Architectures:
- x86_32
- x86_64
- arm_32 (little endian)
- arm_64 (little endian)
- mips_32 (little endian)
- mips_64 (little endian)
- mipseb_32 (big endian)
- mipseb_64 (big endian)
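The dataset mixes 32/64-bit targets and both endiannesses. The target of any compiled binary can be recovered from its ELF header; the sketch below is a stand-alone illustrative helper (not part of BinKit's scripts), using e_machine values from the ELF specification:

```python
import struct

# ELF e_machine values (from the ELF specification)
ELF_MACHINES = {
    0x03: "x86_32",
    0x3E: "x86_64",
    0x28: "arm_32",
    0xB7: "arm_64",
    0x08: "mips",
}

def identify_elf(path):
    """Return (machine, bits, endianness) parsed from an ELF header."""
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    bits = 32 if header[4] == 1 else 64               # EI_CLASS byte
    endian = "little" if header[5] == 1 else "big"    # EI_DATA byte
    fmt = "<H" if endian == "little" else ">H"
    machine = struct.unpack_from(fmt, header, 18)[0]  # e_machine field
    return ELF_MACHINES.get(machine, hex(machine)), bits, endian
```

For example, a binary from the x86_64 portion of the dataset would report `("x86_64", 64, "little")`.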
Optimization levels:
- O0
- O1
- O2
- O3
- Os
Compilers:
- gcc-4.9.4
- gcc-5.5.0
- gcc-6.4.0
- gcc-7.3.0
- gcc-8.2.0
- clang-4.0
- clang-5.0
- clang-6.0
- clang-7.0
- clang-8.0
- clang-9.0
- clang-obfus-fla (Obfuscator-LLVM - FLA)
- clang-obfus-sub (Obfuscator-LLVM - SUB)
- clang-obfus-bcf (Obfuscator-LLVM - BCF)
- clang-obfus-all (Obfuscator-LLVM - FLA + SUB + BCF)
- NUM_JOBS: number of jobs for make, parallel, and python multiprocessing
- MAX_JOBS: maximum number of jobs for make
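As an illustration of how a NUM_JOBS-style setting is typically consumed on the Python side (this is a sketch, not BinKit's actual code; the `square` task is a stand-in for real per-package work):

```python
import multiprocessing as mp
import os

def num_jobs():
    """Read NUM_JOBS from the environment, defaulting to the core count."""
    return int(os.environ.get("NUM_JOBS", os.cpu_count() or 1))

def square(x):  # stand-in for a per-package compile task
    return x * x

if __name__ == "__main__":
    # Fan the tasks out over NUM_JOBS worker processes.
    with mp.Pool(processes=num_jobs()) as pool:
        print(pool.map(square, range(4)))
```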
We build the crosstool-ng and clang environments. If you downloaded the pre-compiled toolchain, please skip this step.
$ source scripts/env.sh
# We may have missed some packages here ... please check
$ scripts/install_default_deps.sh # install default packages for dataset compilation
$ scripts/setup_ctng.sh # setup crosstool-ng binaries
$ scripts/setup_gcc.sh # build ct-ng environment. Takes a lot of time
$ scripts/cleanup_ctng.sh # cleaning up ctng leftovers
$ scripts/setup_clang.sh # setup clang and llvm-obfuscator
$ scripts/link_toolchains.sh # link base toolchains
To undo the linking, please check scripts/unlink_toolchains.sh.
Please configure the variables in compile_packages.sh and run the command below. The script
automatically downloads the source code of the GNU packages and compiles them to
build the entire dataset. However, creating all of them may take a long time.
- NOTE that it takes SIGNIFICANT time.
- NOTE that some packages may not compile with certain compiler options.
$ scripts/install_gnu_deps.sh # install default packages for dataset compilation
$ ./compile_packages.sh
You can download the source code of the GNU packages of your interest as below.
- Please check step 1 before running the command.
- You must give an ABSOLUTE PATH for --base_dir.
$ source scripts/env.sh
$ python gnu_compile_script.py \
--base_dir "/home/dongkwan/binkit/dataset/gnu" \
--num_jobs 8 \
--whitelist "config/whitelist.txt" \
    --download
You can compile only the packages or compiler options of your interest as below.
$ source scripts/env.sh
$ python gnu_compile_script.py \
--base_dir "/home/dongkwan/binkit/dataset/gnu" \
--num_jobs 8 \
--config "config/normal.yml" \
    --whitelist "config/whitelist.txt"
You can check the compiled binaries as below.
$ source scripts/env.sh
$ python compile_checker.py \
--base_dir "/home/dongkwan/binkit/dataset/gnu" \
--num_jobs 8 \
    --config "config/normal.yml"
For more details, please check compile_packages.sh.
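compile_checker.py is the supported way to verify the build; as an independent sanity check, you can also count how many ELF files actually landed under --base_dir. A minimal sketch (the helper name is ours, not part of BinKit):

```python
import os

def count_elf_binaries(base_dir):
    """Count files under base_dir that start with the ELF magic bytes."""
    count = 0
    for root, _dirs, files in os.walk(base_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as f:
                    if f.read(4) == b"\x7fELF":
                        count += 1
            except OSError:
                continue  # skip unreadable files (broken symlinks, etc.)
    return count
```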
To build datasets with customized options, you can create your own configuration
file (.yml) and select the target compiler options. You can check the format in
the existing sample files in the /config directory. Please make sure that the
name of your config file is not included in the blacklist in the compilation
script.
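The blacklist itself lives in the compilation script; as a purely illustrative sketch of the check to perform on your config name (the helper and the example list are assumptions, not BinKit code):

```python
import os

def is_blacklisted(config_path, blacklist):
    """Return True if the config file's base name appears in the blacklist.

    `blacklist` is an illustrative stand-in here; the real blacklist is
    defined inside compile_packages.sh.
    """
    return os.path.basename(config_path) in blacklist
```

For instance, `is_blacklisted("config/myconfig.yml", ["normal.yml"])` returns False, so `myconfig.yml` would be safe to use under this example list.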
We ran all our experiments on a server equipped with four Intel Xeon E7-8867v4 2.40 GHz CPUs (144 cores in total), 896 GB DDR4 RAM, and a 4 TB SSD. We set up Ubuntu 16.04 on this server.
- Python 3.8.0
Running the script below took 7 hours on our machine.
$ python gnu_compile_script.py \
--base_dir "/home/dongkwan/binkit/dataset/gnu" \
--num_jobs 72 \
--config "config/normal.yml" \
    --whitelist "config/whitelist.txt"
If compilation fails, you may have to adjust the number of jobs for parallel processing in step 1, which is machine-dependent.
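Since --num_jobs is machine-dependent, a common heuristic is to start from the machine's core count and cap it (the cap value below mirrors the 72 used above, but is an illustrative assumption, not a BinKit requirement):

```python
import os

def pick_num_jobs(cap=72):
    """Heuristic: use all available cores, but never exceed a fixed cap.

    Lower the cap if compilation fails due to memory pressure.
    """
    cores = os.cpu_count() or 1
    return max(1, min(cores, cap))
```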
This project has been conducted by the authors below at KAIST.
We would appreciate it if you consider citing our paper when using BinKit.
@article{kim:2020:binkit,
  author = {Dongkwan Kim and Eunsoo Kim and Sang Kil Cha and Sooel Son and Yongdae Kim},
  title = {Revisiting Binary Code Similarity Analysis using Interpretable Feature Engineering and Lessons Learned},
  eprint = {2011.10749},
  archivePrefix = {arXiv},
  primaryClass = {cs.SE},
  year = {2020},
}