API Reference
A quick reference for the API calls below, along with example usage, can be found at /api on the JudgeLite admin interface.
Most of the API calls return JSON data. Some require the secret key to be used (like submitting code); this secret key is defined in the environment variables (see Setup Instructions). Others do not need this key (like getting the list of problems).
Returns the list of problems that have a status of "up" on JudgeLite. This list will be in the following format:
```json
{
  "groups": [
    {
      "id": "test_group",
      "name": "A Test Group",
      "problems": [
        {
          "id": "test1",
          "name": "Test Problem #1",
          "difficulty": "Beginner",
          "blurb": "The first test problem"
        }
      ]
    }
  ]
}
```

There may be multiple problems in a group, and there can be multiple groups as well.
Here is a list of the fields in a group:
- id - The ID of the group.
- name - The name of the group.
- problems - An array of problems that are in the current group.
Here is a list of the fields in a problem:
- id - The ID of the problem.
- name - The name of the problem.
- difficulty - The difficulty of the problem. If the problem doesn't have a difficulty value, this field will be an empty string.
- blurb - A short description of the problem. If the problem doesn't have a blurb, this field will be an empty string.
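As an illustration, here is a minimal Python sketch (using the requests library) that fetches and prints this list. The base URL and the /api/get_problem_list path are assumptions made for the example; the real routes are listed at /api on the admin interface:

```python
import requests

BASE_URL = "http://localhost:5000"  # placeholder; point at your JudgeLite instance

# NOTE: the endpoint path below is an assumption; check /api for the real route.
data = requests.get(f"{BASE_URL}/api/get_problem_list").json()

for group in data["groups"]:
    print(f"Group: {group['name']} ({group['id']})")
    for problem in group["problems"]:
        # difficulty and blurb may be empty strings if the problem doesn't set them
        difficulty = problem["difficulty"] or "unrated"
        print(f"  {problem['id']}: {problem['name']} [{difficulty}]")
```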
Gets the info of the problem with ID <problem_id> (this is a parameter you'll have to fill in). For example, a GET request sent to /api/get_problem_info/test1 could return:
```json
{
  "id": "test1",
  "name": "Test Problem #1",
  "difficulty": "Beginner",
  "max_score": 100,
  "memory_limit": 64,
  "time_limit": 2,
  "statement": "<p>The problem statement should go here.</p>",
  "bonus": "<p>The bonus problem statement should go here (if there is a bonus).</p>",
  "hints": "<p>A helpful hint should go here.</p>"
}
```

Here is a list of the fields returned:
- id - The ID of the problem.
- name - The name of the problem.
- difficulty - The difficulty of the problem. If the problem doesn't have a difficulty value, this field will be an empty string.
- max_score - The max score that could be achieved by solving this problem (not including bonus).
- memory_limit - The memory limit for this problem, in megabytes.
- time_limit - The time limit for this problem, in seconds (multiplied by 1.5 for Java and by 2 for Python).
- statement - The problem statement, formatted in HTML and MathJax. If no problem statement is found, this field will be an empty string.
- bonus - The bonus problem statement, formatted in HTML and MathJax. If no bonus problem statement is found, this field will be an empty string.
- hints - The hints for this problem, formatted in HTML and MathJax. If no hints are found, this field will be an empty string.
Warning: If you try to get the problem info of a problem that doesn't exist (or the problem does not have a status of "up"), this API request will instead return the following:
```json
{
  "error": "Invalid problem ID!"
}
```

Make sure to account for this in your code.
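For instance, here is a minimal Python sketch that fetches problem info and handles the error case (the base URL is a placeholder, and get_problem_info is a hypothetical helper):

```python
import requests

BASE_URL = "http://localhost:5000"  # placeholder; point at your JudgeLite instance

def get_problem_info(problem_id):
    info = requests.get(f"{BASE_URL}/api/get_problem_info/{problem_id}").json()
    # A missing (or non-"up") problem returns {"error": "Invalid problem ID!"}
    if "error" in info:
        raise ValueError(info["error"])
    return info

info = get_problem_info("test1")
print(f"{info['name']} ({info['difficulty']}): {info['max_score']} points")
```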
Gets the status of the submission with the ID of <job_id> (you'll have to fill in this parameter). For example, a GET request might be sent to something like /api/get_status/b757deae-318b-4cdb-b8c8-81e2732ae65c.
Depending on the value of the status field, several different responses could be returned.
If the status is queued, the submission is still in the queue and has not been evaluated yet, so no real data is returned:

```json
{
  "status": "queued"
}
```

If the status is judging, the submission is currently being judged, and the JSON data reflects the program's results in real time. The returned data will look like this:
```json
{
  "status": "judging",
  "max_score": 100,
  "score": [20, 0],
  "is_bonus": [0, 1],
  "subtasks": [
    [
      ["AC", 10, 3.4],
      ["WA", 183, 11.7],
      ["TLE", 2000, 59.5],
      ["--", 0, 0],
      ["--", 0, 0]
    ],
    [
      ["--", 0, 0],
      ["--", 0, 0]
    ]
  ]
}
```

Here is a list of the fields returned:
- max_score - The max score that could be achieved by solving this problem (not including bonus).
- score - An array representing the score that the program currently has for each subtask of the problem.
- is_bonus - An array representing whether or not each subtask is a bonus subtask (0 for no, 1 for yes). This can be used to display bonus test results differently, like hiding bonus results if the submission has not gotten any bonus tests correct.
- subtasks - An array of subtasks. See more detail about this field below.
The subtasks field contains an array of subtasks; each subtask contains an array of test results. Every test result is an array of length 3 ([verdict, time, memory]). The first element is a string representing that test's verdict. Here is a list of valid verdicts:
- AC - Accepted (correct answer)
- WA - Wrong answer
- TLE - Time limit exceeded
- MLE - Memory limit exceeded
- RE - Runtime error
- SK - Test skipped (may appear based on the grading method used for this problem)
- -- - Test has not been run yet
The 2nd element represents the time (in milliseconds) that the program took to run (if applicable). The 3rd element represents the max amount of memory (in megabytes) that the program used.
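For example, here is a small Python sketch showing one way to unpack the score, is_bonus, and subtasks fields for display (print_results is a hypothetical helper):

```python
# 'result' is the parsed JSON from a "judging" or "done" status response
def print_results(result):
    for i, subtask in enumerate(result["subtasks"]):
        bonus = " (bonus)" if result["is_bonus"][i] else ""
        print(f"Subtask {i + 1}{bonus}: {result['score'][i]} points")
        for verdict, time_ms, memory_mb in subtask:
            print(f"  {verdict:>3}  {time_ms} ms  {memory_mb} MB")
```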
If the status is done, the submission has been fully evaluated by JudgeLite, and the returned JSON data has been finalized (it will not change if you get the status again). The returned data will look like this:
```json
{
  "status": "done",
  "final_score": 20,
  "max_score": 100,
  "score": [20, 0],
  "is_bonus": [0, 1],
  "subtasks": [
    [
      ["AC", 10, 3.4],
      ["WA", 183, 11.7],
      ["TLE", 2000, 59.5],
      ["MLE", 505, 256.0],
      ["RE", 293, 121.9]
    ],
    [
      ["WA", 10, 4.2],
      ["SK", 0, 0]
    ]
  ]
}
```

The format is very similar to the judging status, with one small difference: the final_score field contains the final score of the submission.
If the status is compile_error, the submission failed to compile, so it was not evaluated. The returned data will look like this:
```json
{
  "status": "compile_error",
  "error": "Error description goes here.",
  "final_score": 0,
  "max_score": 100
}
```

The error field gives the exact compiler error that was generated (it is safe to show this to the user).
Finally, if the status is internal_error, then some sort of internal error occurred. The returned data will look like this:
```json
{
  "status": "internal_error",
  "error": "NO_SUCH_JOB",
  "job_id": "189705c3-6d50-42d5-3be2-162ab20975aa"
}
```

The error field gives an error code that can be used to trace down why the internal error occurred (or for filing an issue on GitHub), while the job_id field simply echoes back the job_id that you queried for. Here is a list of error codes:
- NO_SUCH_JOB - No submission with the requested job_id exists.
- INIT_FAIL - Isolate could not initialize its sandbox. Possible fix: make sure the Docker container is running in privileged mode (use the `--privileged` flag when starting it).
- WEBHOOK_FAIL - The judge could not send a POST request to the specified webhook URL (see the webhook documentation for more info about this feature). More detailed error info can be found in JudgeLite's log.
- JOB_FAILED - The submission could not be evaluated (this is a catch-all). More detailed error info can be found in JudgeLite's log.
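Tying the statuses together, here is a Python sketch that polls until a submission reaches a final state. The base URL and polling interval are placeholders, and wait_for_result is a hypothetical helper:

```python
import time
import requests

BASE_URL = "http://localhost:5000"  # placeholder; point at your JudgeLite instance

def wait_for_result(job_id, poll_interval=1.0):
    """Poll /api/get_status/<job_id> until the submission reaches a final state."""
    while True:
        status = requests.get(f"{BASE_URL}/api/get_status/{job_id}").json()
        # "queued" and "judging" are transient; everything else is final
        if status["status"] not in ("queued", "judging"):
            return status
        time.sleep(poll_interval)

result = wait_for_result("b757deae-318b-4cdb-b8c8-81e2732ae65c")
if result["status"] == "done":
    print(f"Final score: {result['final_score']}/{result['max_score']}")
elif result["status"] == "compile_error":
    print("Compile error:\n" + result["error"])
else:  # internal_error
    print("Internal error:", result["error"])
```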
Gets the source code of the submission with ID of <job_id> (you'll need to fill in this parameter). For example, a GET request to /api/get_submission_source/b757deae-318b-4cdb-b8c8-81e2732ae65c would return something like this:
```python
A = int(input())
B = int(input())
print(A + B)
```

Note that this is not JSON data! This is the only API call that does not return JSON data; it simply returns the submission's source code, without any extra HTML tags / formatting.
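Since the body is plain text, reading it is a one-liner; a minimal Python sketch (the base URL is a placeholder):

```python
import requests

BASE_URL = "http://localhost:5000"  # placeholder; point at your JudgeLite instance
job_id = "b757deae-318b-4cdb-b8c8-81e2732ae65c"

# The response body is raw source code, so read .text instead of calling .json()
source = requests.get(f"{BASE_URL}/api/get_submission_source/{job_id}").text
print(source)
```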
This API call will submit code to JudgeLite. In your POST request, make sure the content type is multipart/form-data. Then, include the following fields:
- problem_id - The ID of the problem that this submission is targeted at.
- username - The username of the person who is submitting the code. If you don't want JudgeLite to record this submission (it also won't send a webhook), use DO_NOT_TRACK as the username.
- type - The language that the code is written in. Should be one of "java", "cpp", or "python".
- code - The file containing the submission's source code. Note that this is an actual file, not just a string representing the source code.
- secret_key - The secret key that is set using JudgeLite's SECRET_KEY environment variable. This makes sure that only servers you control can actually submit code.
- run_bonus - (Optional, defaults to "on") Whether or not JudgeLite should run bonus test cases for this submission. This should be one of "on" or "off".
An example POST request could contain the following fields:
problem_id = "test1"
username = "bob"
type = "python"
code = <File code.py>
secret_key = <Value of the SECRET_KEY environment variable>If the submission was successful, the returned response will look like this:
```json
{
  "status": "success",
  "job_id": "b757deae-318b-4cdb-b8c8-81e2732ae65c"
}
```

The job_id field contains the submission ID. It can be used to query for a submission's status.
If the submission failed, the returned response will look like this:
```json
{
  "error": "Error message goes here."
}
```

The error field will have a user-friendly error message as to why the submission failed. It could be something like "Missing .java file extension!", or "No username!".
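Here is a minimal Python sketch of a complete submission using the requests library. The base URL and the /api/submit route are assumptions (check /api on your admin interface for the actual path); requests encodes the request as multipart/form-data automatically when a files argument is passed:

```python
import os
import requests

BASE_URL = "http://localhost:5000"  # placeholder; point at your JudgeLite instance

# NOTE: the submit route below is an assumption; check /api for the real path.
with open("code.py", "rb") as f:
    result = requests.post(
        f"{BASE_URL}/api/submit",
        data={
            "problem_id": "test1",
            "username": "bob",
            "type": "python",
            "secret_key": os.environ["SECRET_KEY"],
            "run_bonus": "on",
        },
        files={"code": f},  # sent as an actual file part, not a string field
    ).json()

if "error" in result:
    print("Submission failed:", result["error"])
else:
    print("Submitted! Job ID:", result["job_id"])
```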
Returns a list of submissions, sorted most recent first, with at most PAGE_SIZE (defaults to 50) submissions per page (use the <page_number> parameter to select the page). This API call requires the secret key as a GET parameter. For example, a GET request to /api/get_submissions/1?secret_key=FAKE_SECRET_KEY could return:
```json
[
  {
    "problem_id": "test1",
    "username": "mary",
    "score": 0,
    "verdict": "TLE",
    "timestamp": "06/09/2020 04:20:00 PM",
    "job_id": "96c2dddb-41fe-4ce7-ad08-a63a4f5b1f3e"
  },
  {
    "problem_id": "test1",
    "username": "bob",
    "score": 100,
    "verdict": "AC",
    "timestamp": "06/09/2020 01:33:37 PM",
    "job_id": "b757deae-318b-4cdb-b8c8-81e2732ae65c"
  }
]
```

This response contains an array of submissions. Each submission has the following fields:
- problem_id - The ID of the problem that this submission was for.
- username - The username of the person who made this submission.
- score - The score that this submission got.
- verdict - The verdict that this submission got. This will be the first verdict that was not AC. If there were no non-AC verdicts, this will be AC, or AC* if the submission's score exceeds the maximum score.
- timestamp - The time when this submission finished being evaluated (based on the server's local time).
- job_id - The job_id of this submission (can be used to get more details via /api/get_status/<job_id>).
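For example, a minimal Python sketch that fetches and prints a page of submissions (the base URL and secret key are placeholders):

```python
import requests

BASE_URL = "http://localhost:5000"  # placeholder; point at your JudgeLite instance
SECRET_KEY = "FAKE_SECRET_KEY"      # placeholder; use your real SECRET_KEY value

page = 1
submissions = requests.get(
    f"{BASE_URL}/api/get_submissions/{page}",
    params={"secret_key": SECRET_KEY},
).json()

for sub in submissions:
    print(f"{sub['timestamp']}  {sub['username']:<10} {sub['problem_id']:<10} "
          f"{sub['verdict']:>3} ({sub['score']})")
```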
Whew! I'm amazed you actually read all of this. If anything seemed unclear, please file a short issue on GitHub; this reference is meant to be as easy to understand as possible, so anything that would make this page clearer would be super helpful!