# Enhancing REST API Testing with NLP Techniques
## Getting Started
This section provides a comprehensive guide to setting up the artifact and validating its functionality with a simple example. The setup process was successfully tested on a Google Cloud EC2 machine with an Ubuntu 20.04 image.
1. Set up the environment:
Start by setting up your environment. This can be done by executing the setup script in your terminal as shown below:
```
sh setup.sh
```
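An optional sanity check once `setup.sh` finishes; it assumes the script installs Python 3 and Docker, which the later steps rely on:

```sh
# Verify that the tools the remaining steps depend on are on PATH.
command -v python3 >/dev/null && echo "python3: ok" || echo "python3: missing"
command -v docker  >/dev/null && echo "docker: ok"  || echo "docker: missing"
```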
2. Run Service Proxy:
Navigate to the services directory and run the Service Proxy using Python3:
```
cd services
python3 run_service.py rest-countries no_token
```
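Once the proxy is running, you can confirm that the rest-countries service is reachable. The port and path below are assumptions (`run_service.py` reports the actual address when it starts), so adjust them as needed:

```sh
# Hypothetical smoke test; replace 9005 with the port run_service.py reports.
curl -sf http://localhost:9005/v2/all >/dev/null \
  && echo "service proxy: up" \
  || echo "service proxy: not reachable"
```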
3. Run Rule Extractor:
Before running the Rule Extractor, download the required pretrained models and place them in the `rule_extractor` directory.
Then, navigate to the `rule_extractor` directory and build a Docker image tagged `rex`; the build takes around 5 minutes. Finally, run the image, mapping host port 4000 to the container's port 4000.
```
cd rule_extractor
docker build -t rex .
docker run -p 4000:4000 rex
```
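To check that the container came up, probe port 4000 (taken from the `docker run` mapping above). The exact endpoint the Rule Extractor exposes is not listed here, so the root path is only an assumption and this merely tests reachability:

```sh
# Reachability check for the Rule Extractor container on port 4000;
# the "/" path is an assumption, not a documented endpoint.
curl -s -o /dev/null http://localhost:4000/ \
  && echo "rule extractor: up" \
  || echo "rule extractor: not reachable"
```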
4. Run Rule Validator:
First, set `specificationFileName` in the `rtg_config.json` file to point to the rest-countries specification. Then, run the project with Gradle:
```
./gradlew run
```
This will generate an enhanced specification in the `output` directory.
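The generated file can then be inspected. Its name depends on the input specification, so the check below simply lists the `output` directory:

```sh
# List the generated enhanced specification(s), if the Gradle run succeeded.
if [ -d output ]; then
  ls output/
else
  echo "output/ not found (run ./gradlew run first)"
fi
```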
## Detailed Instructions
This repository contains the components necessary to run NLP2REST, an approach designed to automatically enhance OpenAPI specifications with rules extracted from natural language description fields. These rules include constraints and example values.
In addition, this repository provides all the components required to replicate the experiment described in our paper, including testing tools and benchmark APIs.
- Deployment of the Rule Extractor Service
- Running the Rule Validator
### Recommended Environment
This project has been tested and is known to work well on the following setup:
- Ubuntu 20.04
- Google Cloud EC2 machine with 24-core CPU and 128GB memory
### Setup
We provide a setup script to install the necessary packages and set up the environment for the project:
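The script is invoked exactly as in the Getting Started section above; the guard below is an addition so the command fails gracefully when run outside the repository root:

```sh
# Run the setup script from the repository root.
if [ -f setup.sh ]; then
  sh setup.sh
else
  echo "setup.sh not found; run this from the repo root"
fi
```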
It will take around 10 minutes.
### Steps to Use NLP2REST
1. **REST API(s):** NLP2REST is designed to be applied to one or more target REST APIs. The APIs should be accessible at the URLs detailed in their OpenAPI specifications. We provide several REST APIs in the [services](https://github.com/codingsoo/nlp2rest/tree/main/services) directory, along with instructions for running these services. Alternatively, you can use any public API accessible online that has an OpenAPI specification. For a quick trial of our approach, we suggest the [FDIC REST API](https://banks.data.fdic.gov/). This is an online API, and its specification is available in our `specifications` directory.
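Before pointing NLP2REST at the FDIC API, you can confirm it is reachable from your machine (the URL comes from the text above; the check tolerates offline environments):

```sh
# Reachability check for the FDIC REST API.
curl -s -o /dev/null https://banks.data.fdic.gov/ \
  && echo "FDIC API: reachable" \
  || echo "FDIC API: not reachable (check your network)"
```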