
Commit f39c01d

Myeongsoo Kim authored and committed
Add description
1 parent 4cfbe6f commit f39c01d

File tree

2 files changed: +21 / -104 lines


APIFuzzer/APIFuzzer

Lines changed: 0 additions & 101 deletions
This file was deleted.

README.md

Lines changed: 21 additions & 3 deletions
@@ -29,8 +29,10 @@ It will take around 20 minutes to finish setup.

You can test the service with a tool using `run_small.py`, a Python script that tests the service for six minutes.
To use this script, provide the tool's name and a port number. Possible tool names are `evomaster-whitebox`, `evomaster-blackbox`, `restler`, `restest`, `resttestgen`, `bboxrt`, `schemathesis`, `dredd`, `tcases`, and `apifuzzer`.
-You can use any available port number. The port number is used for collecting the achieved code coverage.
+You can use any available port number. We recommend using a different port number for each run; if you reuse a port number, the achieved code coverage is added on top of the previous run's. The port number is used for collecting the achieved code coverage.
Before running the script, make sure that you use `virtualenv`.
+Also, check whether a session is already running. You can list running sessions with the `tmux ls` command. If a session is running, you may want to kill it before starting a new experiment.
+You can kill a session with `tmux kill-session -t {session name}`; the session name appears in the `tmux ls` output.

```
source venv/bin/activate
@@ -60,6 +62,20 @@ Users can stop a service using the following command.
python3 stop_service.py {service name}
```

+### Example commands and result
+
+We show an example command set and its result.
+
+```
+cd REST_Go
+sh small_setup.sh
+source venv/bin/activate
+python run_small.py restler 10200
+python report_small.py 10200
+```
+
+If you check `data/project-tracking-system/res.csv`, you will see 35%, 7.3%, and 4.7% in the sixth row (1-hour coverage) and 35, 7, and 5 in the seventh row (found bugs).
+
## Detailed Description
@@ -86,8 +102,10 @@ Now you are ready to run the experiment!
### How to run the tool?

You can use the following tools `EvoMasterWB`, `EvoMasterBB`, `RESTler`, `RESTest`, `RestTestGen`, `bBOXRT`, `Schemathesis`, `Dredd`, `Tcases`, and `APIFuzzer` to test, using our Python script, the following services `cwa-verification`, `erc20-rest-service`, `features-service`, `genome-nexus`, `languagetool`, `market`, `ncs`, `news`, `ocvn`, `person-controller`, `problem-controller`, `project-tracking-system`, `proxyporint`, `rest-study`, `restcountries`, `scout-api`, `scs`, `spring-batch-rest`, `spring-boot-sample-app`, and `user-management`.
-For testing, you can use any free port number. The port number is for collecting code coverage.
+You can use any available port number. We recommend using a different port number for each run; if you reuse a port number, the achieved code coverage is added on top of the previous run's. The port number is used for collecting the achieved code coverage.
Before running the script, make sure that you use `virtualenv`.
+Also, check whether a session is already running. You can list running sessions with the `tmux ls` command. If a session is running, you may want to kill it before starting a new experiment.
+You can kill a session with `tmux kill-session -t {session name}`; the session name appears in the `tmux ls` output.
```
python3 run_tool.py {tool_name} {service_name} {time_limit}
```
@@ -114,7 +132,7 @@ python3 stop_service.py {service name}

### Result

-You can compare your result to our result below. Since the tools have randomness, you may have a slightly different result from us. We run each tool ten times and get their average.
+You can compare your result to ours below. This figure, although not shown in the paper, is generated from the same experiment results. Since the tools involve randomness, your result may differ slightly from ours. We ran each tool ten times and report the average.

![res](images/figure_all.png)

0 commit comments
