It will take around 20 minutes to finish setup.
You can test the service with a tool using `run_small.py`, a Python script that tests the service for six minutes.
To use this script, provide the tool's name and a port number. Possible tool names are `evomaster-whitebox`, `evomaster-blackbox`, `restler`, `restest`, `resttestgen`, `bboxrt`, `schemathesis`, `dredd`, `tcases`, and `apifuzzer`.
You can use any available port number, but we recommend a different port for each run: if you reuse a port, the newly achieved code coverage is added on top of the previous run's coverage. The port number is used for collecting the achieved code coverage.
Before running the script, make sure that you use `virtualenv`.
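If the virtual environment does not exist yet, you can create and activate one as follows (a sketch; the setup script may already create `venv` for you):

```
# Create a virtual environment named "venv" (the name matches the
# `source venv/bin/activate` command used below).
python3 -m venv venv

# Activate it for the current shell session.
source venv/bin/activate
```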
Also, check whether a tmux session is already running. You can list running sessions with the `tmux ls` command. If a session is running, you may want to kill it before starting a new experiment.
You can kill a session with `tmux kill-session -t {session name}`; the session name appears in the `tmux ls` output.
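The session check above can be sketched as follows (the session name `exp1` is only an example; use the name reported by `tmux ls`):

```
# List running tmux sessions; fall back to a message when the tmux
# server is not running (tmux ls exits non-zero in that case).
tmux ls 2>/dev/null || echo "no running sessions"

# Kill a leftover session before starting a new experiment, e.g. one
# named "exp1":
# tmux kill-session -t exp1
```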
```
source venv/bin/activate
```

Users can stop a service using the following command.

```
python3 stop_service.py {service name}
```
### Example commands and result
We show an example command set and its result.
```
cd REST_Go
sh small_setup.sh
source venv/bin/activate
python run_small.py restler 10200
python report_small.py 10200
```
If you check `data/project-tracking-system/res.csv`, you will see the sixth row (1-hour coverage) showing 35%, 7.3%, and 4.7%, and the seventh row (found bugs) showing 35, 7, and 5.
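As a sketch of how those rows could be checked programmatically (the file contents below are a hypothetical stand-in: only rows six and seven mirror the values quoted above, and the real layout of `res.csv` may differ):

```python
import csv
import io

# Hypothetical stand-in for data/project-tracking-system/res.csv; only
# rows 6 and 7 mirror the values quoted above.
sample_csv = "\n".join(
    [f"row-{i},0,0,0" for i in range(1, 6)]
    + ["1h-coverage,35%,7.3%,4.7%", "found-bugs,35,7,5"]
)

rows = list(csv.reader(io.StringIO(sample_csv)))
coverage = rows[5][1:]  # sixth row: 1-hour coverage
bugs = rows[6][1:]      # seventh row: found bugs
print(coverage)  # ['35%', '7.3%', '4.7%']
print(bugs)      # ['35', '7', '5']
```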
## Detailed Description
Now you are ready to run the experiment!
### How to run the tool?
Using our Python script, you can run any of the following tools: `EvoMasterWB`, `EvoMasterBB`, `RESTler`, `RESTest`, `RestTestGen`, `bBOXRT`, `Schemathesis`, `Dredd`, `Tcases`, and `APIFuzzer`. The tools can test any of the following services: `cwa-verification`, `erc20-rest-service`, `features-service`, `genome-nexus`, `languagetool`, `market`, `ncs`, `news`, `ocvn`, `person-controller`, `problem-controller`, `project-tracking-system`, `proxyporint`, `rest-study`, `restcountries`, `scout-api`, `scs`, `spring-batch-rest`, `spring-boot-sample-app`, and `user-management`.
You can use any available port number, but we recommend a different port for each run: if you reuse a port, the newly achieved code coverage is added on top of the previous run's coverage. The port number is used for collecting the achieved code coverage.
Before running the script, make sure that you use `virtualenv`.
Also, check whether a tmux session is already running. You can list running sessions with the `tmux ls` command. If a session is running, you may want to kill it before starting a new experiment.
You can kill a session with `tmux kill-session -t {session name}`; the session name appears in the `tmux ls` output.
You can compare your results to ours below. This figure, although not shown in the paper, is generated from the same experiment results. Since the tools involve randomness, your results may differ slightly from ours. We ran each tool ten times and report the average.