Michael Wisely edited this page Dec 3, 2015 · 1 revision

Automated grading is accomplished by running scripts within a Docker container. Since assignments vary so drastically from course to course and from assignment to assignment, we need a flexible approach for evaluating them.

Each assignment will have its own dedicated configuration repository (as described in Folder Structure). In that repo, configuration files (assignment.yaml and Dockerfile) specify to the grader how to properly set up Docker images and containers. The config/ repo also contains the scripts/programs for evaluating student assignments. Whether the automated grading script is running unit tests or just diffing output, the result from the script should be predictable so that we can generate reports.

To this end, all automated grading scripts should print a report in JSON format over standard out. Any errors in the script itself may be reported over standard error.
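As a concrete illustration, a grading script following this convention might look like the sketch below. It is a hypothetical example, not part of any assignment's config repo: the `grade` helper and the `echo` command (standing in for the real student program) are assumptions, while the report fields match the format described on this page.

```python
import json
import subprocess
import sys
import time

def grade(command):
    """Run a command and build a report dict in the format described above."""
    start = time.time()
    proc = subprocess.run(command, capture_output=True, text=True, timeout=30)
    return {
        "RuntimeData": {
            "WallTime": "%.3f" % (time.time() - start),
            "StandardOut": proc.stdout,
            "StandardErr": proc.stderr,
        }
    }

try:
    # "echo" is a stand-in for running the actual student program.
    report = grade(["echo", "hello"])
    # The JSON report goes to standard out...
    print(json.dumps(report))
except Exception as exc:
    # ...while errors in the grading script itself go to standard error.
    print("grading script error: %s" % exc, file=sys.stderr)
    sys.exit(1)
```

Printing a single JSON document on stdout keeps the interface language-agnostic: a grading script written in any language can follow the same contract.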

The JSON report should contain the following fields (eventually enforced by jsonschema):

* `RuntimeData` (optional dict)
  * `WallTime` (optional string)
  * `StandardOut` (string)
  * `StandardErr` (string)
* `StyleData` (optional dict)
  * `Report` (string)
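Putting the fields together, a complete report might look like the following (all values are illustrative, not taken from a real grading run):

```json
{
  "RuntimeData": {
    "WallTime": "1.234",
    "StandardOut": "All tests passed\n",
    "StandardErr": ""
  },
  "StyleData": {
    "Report": "2 lines exceed 80 characters"
  }
}
```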
