24 changes: 7 additions & 17 deletions .github/workflows/run_pr_tests.yml
@@ -1,14 +1,9 @@
name: Run PR tests, graded and discrim

on:
pull_request:
# only run when PRs target the main branch
push:
branches:
- main
types:
- opened
- synchronize
- reopened
- test/maccGradedCase

jobs:
run-tests:
@@ -57,11 +52,8 @@ jobs:
sudo apt-get install git-all -y

# Getting current branch
REPO="${{ github.repository }}"
CURRENT_BRANCH="${{ github.head_ref }}"
echo "Repository*^*: $REPO"
echo "Branch name*^*: $CURRENT_BRANCH"
git clone -b "${CURRENT_BRANCH}" "https://github.com/${REPO}.git"
CURRENT_BRANCH="${{ github.ref_name }}"
git clone -b "$CURRENT_BRANCH" https://github.com/stacs-cp/AutoIG.git

# Install Necessary Dependencies into AutoIG Bin
bash bin/install-mininzinc.sh
@@ -78,20 +70,18 @@ jobs:
cd scripts/testScripts

# Run the two test scripts associated with PRs
echo "About to run pr_discrim tests"
bash check_pr_discrim.sh
echo "About to run pr_graded tests"
bash check_pr_graded.sh
bash check_pr.sh

# if script fails reject PR
- name: Fail
if: ${{ failure() }}
run: |
echo "PR tests failed, rejecting."
echo "These tests failed, rejecting PR."
exit 1
# if script passes approve PR
- name: Pass
if: ${{ success() }}
run: |
echo "PR tests passed! allowing PR."
echo "These tests passed! Allowing PR."
exit 0
10 changes: 3 additions & 7 deletions .github/workflows/run_push_tests.yml
@@ -4,7 +4,7 @@ name: Run push tests, graded and discrim
on:
push:
branches:
- "**"
- "*"

jobs:
run-tests:
@@ -52,13 +52,9 @@ jobs:
sudo apt-get install r-base -y
sudo apt-get install git-all -y

# Getting current branch
REPO="${{ github.repository }}"

# Getting current branch
CURRENT_BRANCH="${{ github.ref_name }}"
echo "Repository*^*: $REPO"

git clone -b "$CURRENT_BRANCH" https://github.com/${REPO}.git
git clone -b "$CURRENT_BRANCH" https://github.com/stacs-cp/AutoIG.git

# Install Necessary Dependencies into AutoIG Bin
bash bin/install-mininzinc.sh
2 changes: 0 additions & 2 deletions .gitignore

This file was deleted.

26 changes: 2 additions & 24 deletions DEV_README.md
@@ -1,4 +1,4 @@
 # Docker Build And Run Commands in a Container
# Docker Build And Run Commands in a Container

### Builds an image of <container-name> using Docker

@@ -33,39 +33,17 @@ At this point, AutoIG is fully configured and ready for use as normal.

## Example sequence of commands for setting up an experiment:

### To set up for MiniZinc

### Graded Example
`mkdir -p experiments/macc-graded/`

`cd experiments/macc-graded/`

`python $AUTOIG/scripts/setup.py --generatorModel $AUTOIG/data/models/macc/generator-small.essence --problemModel $AUTOIG/data/models/macc/problem.mzn --instanceSetting graded --minSolverTime 0 --maxSolverTime 5 --solver chuffed --solverFlags="-f" --maxEvaluations 180 --genSolverTimeLimit 5`

### Discriminating Example
`mkdir -p experiments/macc-discriminating/`

`cd experiments/macc-discriminating/`

`python $AUTOIG/scripts/setup.py --generatorModel $AUTOIG/data/models/macc/generator-small.essence --problemModel $AUTOIG/data/models/macc/problem.mzn --instanceSetting discriminating --minSolverTime 1 --maxSolverTime 3 --baseSolver chuffed --solverFlags="-f" --favouredSolver or-tools --favouredSolverFlags="-f" --maxEvaluations 180 --genSolverTimeLimit 5`


### To set up for essence

#### Graded Example
For "vessel_loading" essence problem
`python $AUTOIG/scripts/setup.py --generatorModel $AUTOIG/data/models/vessel-loading/generator.essence --problemModel $AUTOIG/data/models/vessel-loading/problem.essence --instanceSetting graded --minSolverTime 0 --maxSolverTime 5 --solver chuffed --solverFlags="-f" --maxEvaluations 180 --genSolverTimeLimit 5`

For "car-sequencing" essence problem
`python $AUTOIG/scripts/setup.py --generatorModel $AUTOIG/data/models/car-sequencing/generator.essence --problemModel $AUTOIG/data/models/car-sequencing/problem.essence --instanceSetting graded --minSolverTime 0 --maxSolverTime 5 --solver chuffed --solverFlags="-f" --maxEvaluations 300 --genSolverTimeLimit 5`

#### Discriminating Example

`python $AUTOIG/scripts/setup.py --generatorModel $AUTOIG/data/models/vessel-loading/generator.essence --problemModel $AUTOIG/data/models/vessel-loading/problem.essence --instanceSetting discriminating --minSolverTime 1 --maxSolverTime 3 --baseSolver chuffed --solverFlags="-f" --favouredSolver ortools --favouredSolverFlags="-f" --maxEvaluations 180 --genSolverTimeLimit 5`
### To Run The Generated Bash Script

bash run.sh

# Considerations for Use of Dockerfile

The built Docker image allows the program to be run in a container. Note, however, that the container can take up more storage than running AutoIG directly on Linux, because it downloads dependencies such as Python and R inside the container. If a user's system already has these, it may be more efficient to run AutoIG directly on the system. In addition, data does not persist within the container, so it is important to save the results of AutoIG runs, for example with a Docker volume. Instructions for setting up the Docker volume are listed above.
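The volume approach mentioned above can be sketched as follows. This is illustrative only: the image tag `autoig`, the volume name `autoig-results`, and the mount path `/AutoIG/experiments` are assumed names, not taken from this repository's documentation.

```shell
# Create a named volume to hold experiment results (hypothetical names).
docker volume create autoig-results

# Run the container with the volume mounted over the experiments directory,
# so anything written there persists after the container exits.
docker run -it -v autoig-results:/AutoIG/experiments autoig
```

Because the volume lives outside the container's writable layer, results from `run.sh` survive container removal and can be inspected from later runs that mount the same volume.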

4 changes: 1 addition & 3 deletions Dockerfile
@@ -39,7 +39,7 @@ RUN echo "$CACHEBUST"

# Clone into AutoIG directory on Vincent fork
# Not incorrect, but will need to be changed later to clone from stacs-cp/AutoIG instead
RUN git clone --depth 1 -b main https://github.com/vincepick/AutoIG.git
RUN git clone -b build/update-docker https://github.com/vincepick/AutoIG.git

WORKDIR /AutoIG

@@ -59,8 +59,6 @@ RUN bash bin/update-or-path.sh
RUN bash bin/update-conjure-paths.sh

# For use during development

RUN apt-get update
RUN apt-get install -y \
vim \
file
2 changes: 1 addition & 1 deletion bin/R-packages.R
@@ -5,4 +5,4 @@ for (p in c("R6","data.table")){
install.packages(paste(binDir,paths[[p]],sep='/'), lib=binDir)
library(p,character.only = TRUE, lib.loc=binDir)
}
}
}
Binary file added bin/data.table_1.14.2.tar.gz
Binary file not shown.
4 changes: 2 additions & 2 deletions bin/install-irace.sh
@@ -5,7 +5,7 @@ echo ""
echo "============= INSTALLING $name ==================="
echo "$name version: $version"

BIN_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" &>/dev/null && pwd)"
BIN_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"

# check if R is installed
if ! [ -x "$(command -v R)" ]; then
@@ -17,7 +17,7 @@ url="https://cran.r-project.org/src/contrib/irace_3.4.1.tar.gz"

pushd $BIN_DIR

mkdir -p $name
mkdir -p $name

SOURCE_DIR="$name-source"
mkdir -p $SOURCE_DIR
Binary file modified scripts/.DS_Store
Binary file not shown.
135 changes: 38 additions & 97 deletions scripts/collect_results.py
@@ -6,13 +6,11 @@

import sys
import os
import re

scriptDir = os.path.dirname(os.path.realpath(__file__))
sys.path.append(scriptDir)

from minizinc_utils import calculate_minizinc_borda_scores, get_minizinc_problem_type
from essence_pipeline_utils import get_essence_problem_type, calculate_essence_borda_scores

pd.options.mode.chained_assignment = None

@@ -80,8 +78,8 @@ def rename_status_dis(status, score):
else:
return status
tRs.loc[:,"status"] = [rename_status_dis(s[0],s[1]) for s in zip(tRs.status,tRs.score)]


#display(tRs[tRs.status.str.contains("Wins")])
# rename some columns and re-order the columns
tRs.rename(columns={"hashValue":"instanceHashValue","score":"iraceScore"}, inplace=True)
tRs = tRs[["genInstance","instance","genResults","instanceResults","status","iraceScore","totalTime","instanceHashValue"]]
@@ -103,7 +101,6 @@ def print_stats(config, tRs, tRsNoDup):
# number of instances generated
nInstances = len(tRsNoDup.instance.unique())


# number of runs for each run status
runStats = tRs.groupby('status').genResults.count().to_dict()
runStatsWithoutDuplicates = tRsNoDup.groupby('status').genResults.count().to_dict()
@@ -154,100 +151,44 @@ def extract_graded_and_discriminating_instances(runDir):
"""
extract information about graded/discriminating instances and save to a .csv file
"""


outFile = None
if re.search(r'\.mzn$', config["problemModel"]):
if config["instanceSetting"] == "graded":
# filter out non-graded instances
tInfo = tRsNoDup.loc[tRsNoDup.status=="graded",:]
# extract instance type
tInfo.loc[:,"instanceType"] = [x["results"]["main"]["runs"][0]["extra"]["instanceType"] for x in tInfo.instanceResults]
# calculate average solving time for each instance
tInfo.loc[:,"avgSolvingTime"] = [np.mean([rs["time"] for rs in x["results"]["main"]["runs"]]) for x in tInfo.instanceResults]
# re-order columns
tInfo = tInfo[["instance","instanceType","avgSolvingTime","instanceResults","genInstance","genResults","status","iraceScore","totalTime","instanceHashValue"]]
# save to a .csv file
outFile = f"{runDir}/graded-instances-info.csv"
print(f"\nInfo of graded instances is saved to {os.path.abspath(outFile)}")
tInfo.to_csv(outFile, index=False)
else:
# filter out non-discriminating instances or instances where the favoured solver lost
tInfo = tRsNoDup.loc[tRsNoDup.status.isin(["favouredSolverWins"]),:]
# extract instance type
tInfo.loc[:,"instanceType"] = [x["results"]["favoured"]["runs"][0]["extra"]["instanceType"] for x in tInfo.instanceResults]
# extract MiniZinc Borda score of the favoured and the base solvers
print("about to try to get problem type", config["problemModel"])

problemType = get_minizinc_problem_type(config["problemModel"])



def extract_minizinc_score(r):
results = calculate_minizinc_borda_scores(r['base']['runs'][0]['status'], r['favoured']['runs'][0]['status'],
r['base']['runs'][0]['time'], r['favoured']['runs'][0]['time'],
problemType,
r['base']['runs'][0]['extra']['objs'], r['favoured']['runs'][0]['extra']['objs'],
True)
scores = results["complete"] # first element: base solver's score, second element: favoured solver's score
return scores[1]
tInfo.loc[:,"favouredSolverMiniZincScore"] = [extract_minizinc_score(x["results"]) for x in tInfo.instanceResults]
tInfo.loc[:,"baseSolverMiniZincScore"] = [1 - x for x in tInfo.favouredSolverMiniZincScore]
tInfo.loc[:,"discriminatingPower"] = tInfo["favouredSolverMiniZincScore"] / tInfo["baseSolverMiniZincScore"]
# re-order columns
tInfo = tInfo[["instance","discriminatingPower","favouredSolverMiniZincScore","baseSolverMiniZincScore","instanceType","instanceResults","genInstance","genResults","status","iraceScore","totalTime","instanceHashValue"]]
# save to a .csv file
outFile = f"{runDir}/discriminating-instances-info.csv"
print(f"\nInfo of discriminating instances is saved to {os.path.abspath(outFile)}")
tInfo.to_csv(outFile, index=False)

elif re.search(r'\.essence$', config["problemModel"]):
if config["instanceSetting"] == "graded":
# filter out non-graded instances
tInfo = tRsNoDup.loc[tRsNoDup.status=="graded",:]

# extract instance type
tInfo.loc[:,"instanceType"] = [x["results"]["main"]["runs"][0]["status"] for x in tInfo.instanceResults]
# calculate average solving time for each instance
tInfo.loc[:,"avgSolvingTime"] = [np.mean([rs["time"] for rs in x["results"]["main"]["runs"]]) for x in tInfo.instanceResults]
# re-order columns
tInfo = tInfo[["instance","instanceType","avgSolvingTime","instanceResults","genInstance","genResults","status","iraceScore","totalTime","instanceHashValue"]]
# save to a .csv file
outFile = f"{runDir}/graded-instances-info.csv"
print(f"\nInfo of graded instances is saved to {os.path.abspath(outFile)}")
tInfo.to_csv(outFile, index=False)
else:
# filter out non-discriminating instances or instances where the favoured solver lost
tInfo = tRsNoDup.loc[tRsNoDup.status.isin(["favouredSolverWins"]),:]
# The instance type
tInfo.loc[:,"instanceType"] = [x["results"]["favoured"]["runs"][0]["status"] for x in tInfo.instanceResults]
# extract Essence Borda score of the favoured and the base solvers

problemType = get_essence_problem_type(config["problemModel"])


def extract_essence_score(r):
# Calculated using only solver time rather than total time of SR + Solver Time
results = calculate_essence_borda_scores(r['base']['runs'][0]['status'], r['favoured']['runs'][0]['status'],
r['base']['runs'][0]['solverTime'], r['favoured']['runs'][0]['solverTime'],
problemType,
True)
# scores = results # first element: base solver's score, second element: favoured solver's score
# Different from the essence pipeline; instead, calculate_essence_borda_scores calculates the score directly
return results[1]
tInfo.loc[:,"favouredSolverEssenceScore"] = [extract_essence_score(x["results"]) for x in tInfo.instanceResults]
tInfo.loc[:,"baseSolverEssenceScore"] = [1 - x for x in tInfo.favouredSolverEssenceScore]
tInfo.loc[:,"discriminatingPower"] = tInfo["favouredSolverEssenceScore"] / tInfo["baseSolverEssenceScore"]
# re-order columns
tInfo = tInfo[["instance","discriminatingPower","favouredSolverEssenceScore","baseSolverEssenceScore","instanceType","instanceResults","genInstance","genResults","status","iraceScore","totalTime","instanceHashValue"]]
# save to a .csv file
outFile = f"{runDir}/discriminating-instances-info.csv"
print(f"\nInfo of discriminating instances is saved to {os.path.abspath(outFile)}")
tInfo.to_csv(outFile, index=False)
if config["instanceSetting"] == "graded":
# filter out non-graded instances
tInfo = tRsNoDup.loc[tRsNoDup.status=="graded",:]
# extract instance type
tInfo.loc[:,"instanceType"] = [x["results"]["main"]["runs"][0]["extra"]["instanceType"] for x in tInfo.instanceResults]
# calculate average solving time for each instance
tInfo.loc[:,"avgSolvingTime"] = [np.mean([rs["time"] for rs in x["results"]["main"]["runs"]]) for x in tInfo.instanceResults]
# re-order columns
tInfo = tInfo[["instance","instanceType","avgSolvingTime","instanceResults","genInstance","genResults","status","iraceScore","totalTime","instanceHashValue"]]
# save to a .csv file
outFile = f"{runDir}/graded-instances-info.csv"
print(f"\nInfo of graded instances is saved to {os.path.abspath(outFile)}")
tInfo.to_csv(outFile, index=False)
else:
# there are no other supported model types for now
print("Unsupported model type, please try again with Essence or Mzn problem model")

# filter out non-discriminating instances or instances where the favoured solver lost
tInfo = tRsNoDup.loc[tRsNoDup.status.isin(["favouredSolverWins"]),:]
# extract instance type
tInfo.loc[:,"instanceType"] = [x["results"]["favoured"]["runs"][0]["extra"]["instanceType"] for x in tInfo.instanceResults]
# extract MiniZinc Borda score of the favoured and the base solvers
problemType = get_minizinc_problem_type(config["problemModel"])
def extract_minizinc_score(r):
results = calculate_minizinc_borda_scores(r['base']['runs'][0]['status'], r['favoured']['runs'][0]['status'],
r['base']['runs'][0]['time'], r['favoured']['runs'][0]['time'],
problemType,
r['base']['runs'][0]['extra']['objs'], r['favoured']['runs'][0]['extra']['objs'],
True)
scores = results["complete"] # first element: base solver's score, second element: favoured solver's score
return scores[1]
tInfo.loc[:,"favouredSolverMiniZincScore"] = [extract_minizinc_score(x["results"]) for x in tInfo.instanceResults]
tInfo.loc[:,"baseSolverMiniZincScore"] = [1 - x for x in tInfo.favouredSolverMiniZincScore]
tInfo.loc[:,"discriminatingPower"] = tInfo["favouredSolverMiniZincScore"] / tInfo["baseSolverMiniZincScore"]
# re-order columns
tInfo = tInfo[["instance","discriminatingPower","favouredSolverMiniZincScore","baseSolverMiniZincScore","instanceType","instanceResults","genInstance","genResults","status","iraceScore","totalTime","instanceHashValue"]]
# save to a .csv file
outFile = f"{runDir}/discriminating-instances-info.csv"
print(f"\nInfo of discriminating instances is saved to {os.path.abspath(outFile)}")
tInfo.to_csv(outFile, index=False)

return tInfo

25 changes: 0 additions & 25 deletions scripts/conf.py
@@ -1,26 +1 @@
problemType = None

# Define constants for scoring

# General
SCORE_UNWANTED_TYPE = 0
SCORE_TOO_EASY = 0
SCORE_INCORRECT_ANSWER = 0
SCORE_TOO_DIFFICULT = 0

# Graded
SCORE_GRADED = -1

# Discriminating
SCORE_BASE_TOO_EASY = 0
SCORE_FAVOURED_TOO_DIFFICULT = 0
# Best when one can do it but the other can't
SCORE_BEST = -9999



# Define constants for outputs
detailedOutputDir = "./detailed-output"

# for minizinc experiments only: solvers where -r doesn't work when being called via minizinc
deterministicSolvers = ["ortools"]