diff --git a/.ci/check_style.sh b/.ci/check_style.sh index b40e62d0..b35f061c 100755 --- a/.ci/check_style.sh +++ b/.ci/check_style.sh @@ -15,7 +15,7 @@ function main() { local error_count=0 - python3 -m autograder.cli.style "${BASE_DIR}/autograder" "${BASE_DIR}/tests" + python3 -m autograder.cli.util.style "${BASE_DIR}/autograder" "${BASE_DIR}/tests" ((error_count += $?)) if [[ ${error_count} -gt 0 ]] ; then diff --git a/README.md b/README.md index b576c3b2..4d205390 100644 --- a/README.md +++ b/README.md @@ -4,33 +4,35 @@ The Python interface for the autograding server. ## The CLI -This project contains several tools for interacting with an autograding server and testing -via the `autograder.cli` package. +This project contains several tools for interacting with an autograding server and +working with autograder assignments via the `autograder.cli` package. -All tools will show their usage if given the `--help` options. +All tools will show their usage if given the `--help` option. -You can get al list of tools by invoking `autograder.cli` directly: +You can get a list of the packages for each set of tools by invoking `autograder.cli` directly: ```sh python3 -m autograder.cli ``` +There are many available tools; instead of discussing each one here, +this document highlights the tools each type of user (student, TA, course developer) will generally use. -### Server API Tools +### Configuration -These set of tools interact with an autograding server's API and typically revolve around working with assignments. - -#### Configuration +Before discussing specific tools, you should know some general information about +configuring and passing options to each tool. -To know who you are and what you are working on the autograder needs a few configuration options: +To know who you are and what you are working on, the autograder needs a few configuration options: - `server` -- The autograding server to connect to. - `course` -- The ID for the course you are enrolled in. - - `assignment` -- The current assignment you are working on. + - `assignment` -- The current assignment you are working on (does not always apply).
- `user` -- Your username (which is also your email). - - `pass` -- You password (probably sent to you by a TA or the autograding server in an email). + - `pass` -- Your password (probably sent to you by a TA or the autograding server in an email). -All these options can be set on the command line when invoking on of these tools, e.g.,: +All these options can be set on the command line when invoking one of these tools, e.g.,: ```sh -python3 -m autograder.cli.submit --user sammy@ucsc.edu --pass pass123 my_file.py +python3 -m autograder.cli.submission.submit --user sammy@ucsc.edu --pass pass123 my_file.py ``` +However, it will generally be more convenient to keep these common options in a reusable location. There are several other places that config options can be specified, with each later location overriding any earlier options. @@ -46,38 +48,44 @@ You can modify this config to include your settings and use that for setting all Using the default config file (`config.json`): ```sh # `./config.json` will be looked for and loaded if it exists. -python3 -m autograder.cli.submit my_file.py +python3 -m autograder.cli.submission.submit my_file.py ``` Using a custom config file (`my_config.json`): ```sh # `./my_config.json` will be used. -python3 -m autograder.cli.submit --config my_config.json my_file.py +python3 -m autograder.cli.submission.submit --config my_config.json my_file.py ``` You can also use multiple config files (latter files will override settings from previous ones). This is useful if you want to use the config files provided with assignments, but keep your user credentials in a more secure location: ```sh # Use the default config file (config.json), but then override any settings in there with another config file: -python3 -m autograder.cli.submit --config config.json --config ~/.secrets/autograder.json my_file.py +python3 -m autograder.cli.submission.submit --config config.json --config ~/.secrets/autograder.json my_file.py ``` For brevity, all future commands in this document will assume that all standard config options are in the default config files (and thus will not need to be specified).
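+For reference, such a default `config.json` might look like the following (the `server`, `course`, and `assignment` values here are made-up placeholders; use the values for your actual course): + +```json +{ +    "server": "http://some-autograding-server.edu", +    "course": "my-course-id", +    "assignment": "hw0", +    "user": "sammy@ucsc.edu", +    "pass": "pass123" +} +``` +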
+### Commands for Students + +Students will mainly be concerned with submitting assignments and checking on the status of their submission. +Therefore, the `autograder.cli.submission` package will be their primary resource. +This package contains tools for making, managing, and querying submissions. + #### Submitting an Assignment -Submitting an assignment to an autograder is done using the `autograder.cli.submit` command. +Submitting an assignment to an autograder is done using the `autograder.cli.submission.submit` command. This command takes the standard config options as well as an optional message to attach to the submission (like a commit message) as well as all files to be included in the submission. ```sh -python3 -m autograder.cli.submit --message "This is my submit message!" my_file.py +python3 -m autograder.cli.submission.submit --message "This is my submit message!" my_file.py ``` As many files as you need can be submitted (directories cannot be submitted): ```sh -python3 -m autograder.cli.submit my_first_file.py my_second_file.java some_dir/* +python3 -m autograder.cli.submission.submit my_first_file.py my_second_file.java some_dir/* ``` The autograder will attempt to grade your assignment and will return some message about the result of grading. @@ -106,10 +114,10 @@ Message from the autograder: Request could not be authenticated. Ensure that you #### Checking Your Last Submission You can ask the autograder to show you the grade report for your last submission using the -`autograder.cli.peek` command. +`autograder.cli.submission.peek` command. ```sh -python3 -m autograder.cli.peek +python3 -m autograder.cli.submission.peek ``` The output may look like: @@ -133,10 +141,10 @@ No past submission found for this assignment. #### Getting a History of All Past Submissions -You can use the `autograder.cli.history` command to get a summary of all your past submissions for an assignment. 
+You can use the `autograder.cli.submission.history` command to get a summary of all your past submissions for an assignment. ```sh -python3 -m autograder.cli.history +python3 -m autograder.cli.submission.history ``` The output may look like: @@ -151,146 +159,32 @@ If you have made no past (successful) submissions, then your output may look lik No past submission found for this assignment. ``` +### Commands for TAs and Instructors -### General Tools - -This project also provides tools to use on local code and does not interact with an autograding server. - -#### Checking Style - -You can invoke the default style checker (often used in assignments) with the `autograader.cli.style` command. -The style checker works with Python files (.py) and iPython Notebooks (.ipynb). - +For those who are managing a course and its students, +most of the commands will be useful to you. +You should have a look through all of the commands via: ```sh -python3 -m autograder.cli.style my_file.py -``` - -Multiple files and directories can be specified (directories will be recursively descended): ```sh -python3 -m autograder.cli.style my_first_file.py my_second_file.ipynb some_dir/* -``` - - -### Utilities - -These commands are general utilities that students and graders may both find useful. - -#### Extracting Sanitized Code - -It may be difficult to view a student's code on the command line. -Whether it is inside an iPython notebook or just messy, -it can be hard to tell what code the autograder it actually going to look at. -You can use `autograder.cli.util.extract-code` to pull Python code out of -a vanilla Python (.py) file or iPython notebook (.ipynb) and either output it to the screen or write it out to another file. - -```sh -python3 -m autograder.cli.util.extract-code path/to/some/code.ipynb -``` - -Note that the code that is output is post-sanitization, so loose code and comments may be discarded. -Additionally the code is rebuilt from an AST, so all style is ignored.
- - -### TA Tools - -This project also provides administrative tools for interacting with an autograding server. -Test tools live in the `autograder.cli.ta` package. -These tools typically require more permissions than a student has (so students can ignore this section). - -#### Fetch Grades for an Assignment - -TAs can fetch the grades for an assignment in TSV format using the `autograder.cli.ta.fetch-grades` command. - -#### Fetch Most Recent Submission for a User - -TAs can fetch the most recent submission for a user/assignment using the `autograder.cli.ta.fetch-submission` command. - -For example, you can fetch Sammy Slug's most recent submission for "HW1" using: -``` -python3 -m autograder.cli.ta.fetch-submission sslug@ucsc.edu --assignment HW1 +python3 -m autograder.cli ``` -On success, this will write out a file called submission.zip (you can change where it writes out with `-o`) that contains a user's full submission and result. - - -### Testing Tools - -This project also provides several tools that are useful for testing assignments. -These are generally used by those developing courses rather than students (so students can ignore this section). -Only an overview of these tools will be provided in this document, see the specific usage for each tool for details. - -#### Grading an Assignment - -This project can perform a full grading of a submission like the autograder would, -but in a Python (rather than docker) environment. -The output of a docker-based grading should exactly match the output of a Python-based grading. -`autograder.cli.grade-assignment` can be used to perform a full grading of an assignment without any additional setup. -Note that this command will also perform static steps (which are usually performed when creating the docker image) -in a temp directory. - -When developing/debugging assignments, this should be be your go-to command. 
- -#### Grading a Partially Formed Submission - -`autograder.cli.grade-submission` can be used instead of `autograder.cli.grade-assignment` when the static steps -of preparing a submission have already been completed. -This command pairs with `autograder.cli.setup-grading-dir`. - -#### Prepare a Grading Directory - -`autograder.cli.setup-grading-dir` takes in an assignment and submission and prepares a set of directories as if it was being graded, -but does not actually perform the grading. -This command pairs with `autograder.cli.grade-submission`. - -#### Preparing for Docker-Based Grading - -Docker-based grading is typically only performed on an autograding server, -but when debugging that it can be helpful to see the input that a docker-based grading system would be using. -The `autograder.cli.pre-docker-grading` command can be used to emulate the steps done in-preparation for docker-based grading. - -#### Test with an Example/Test Submission - -Test submissions are example submissions that include a `test-submission.json` file containing the expected grading output. -The `autograder.cli.test-submissions` command can be used to run one or more test submissions using the local Python grader -and ensure that the output matches the expected result (from `test-submission.json`). - -This command is especially useful as part of a CI to ensure that all assignments are getting the right answer. - -#### Test Against a Remote Server - -The `autograder.cli.test-remote-submissions` command works like the `autograder.cli.test-submissions` command, -except it sends the test submissions off to be graded by an autograding server (instead of local Python-based grading). -The server can be local (e.g. `127.0.0.1`), but just needs to be accepting connections. -It is recommended, but not required to run the server with the following options when testing: - - `-c web.noauth=true` -- Do not authenticate API requests. 
- - `-c grader.nostore=true` -- Do not store the result of grading. - - -### User Management - -The tools for user management all start live in the `autograder.cli.user` package. -These tools generally require `admin` privileges. - -#### Add a User - -Use the `autograder.cli.user.add` command to add users to the course. -If a password is not specified, it will be generated on the server side and must be sent out to the user in an email -(therefore `--send-email` is required when no password is specified). -The `--force` parameter can also be used to update an existing user. - -#### Auth as a User - -You can use the `autograder.cli.user.auth` command to authenticate as the given user. -This command is useful for checking a user's password. - -#### Get a User - -The `autograder.cli.user.get` command can be used to get basic information about a user. - -#### List all Users +Below is a list of commands you may want to look into. +The help text of each command (accessible using the `--help` option) +will give a more in-depth description of the command and available options. -The `autograder.cli.user.list` command can be used to list all the users in the course. + - `autograder.cli.lms.sync-users` -- Get information about users from your LMS (e.g. Canvas). + - `autograder.cli.lms.upload-scores` -- Upload scores for any LMS assignment straight to your LMS. Very useful for avoiding a clunky LMS interface. + - `autograder.cli.submission.fetch-scores` -- Get all the most recent scores for an assignment. + - `autograder.cli.submission.fetch-submission` -- Get a student's submission (code) and grading output. + - `autograder.cli.submission.fetch-submissions` -- Get all the most recent submissions (code and grading output) for an assignment. + - `autograder.cli.user.*` -- Several different tools for managing users (adding, removing, changing passwords, etc.). -#### Remove a User +### Commands for Course Builders -To remove a user, use the `autograder.cli.user.remove` command.
+Users who are building courses should generally be aware of all the available tools, +but will probably spend most of their time in the +`autograder.cli.testing` and `autograder.cli.grading` packages. +`autograder.cli.testing` is for running tests and checks (usually locally) on assignments. +`autograder.cli.grading` lets you grade assignments locally (without using an autograding server). +The autograding server runs this same grading package inside a Docker container, +so it is often much faster and more convenient to build and debug assignments fully locally before moving to an autograding server. diff --git a/autograder/cli/grading/__init__.py b/autograder/cli/grading/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/autograder/cli/grading/__main__.py b/autograder/cli/grading/__main__.py new file mode 100644 index 00000000..41452712 --- /dev/null +++ b/autograder/cli/grading/__main__.py @@ -0,0 +1,25 @@ +""" +Commands to grade submissions or help in the grading procedure. +""" + +import os +import sys + +import autograder.util.cli + +THIS_DIR = os.path.abspath(os.path.dirname(os.path.realpath(__file__))) +ROOT_DIR = os.path.join(THIS_DIR, '..', '..', '..') + +def run(): + relpath = os.path.relpath(THIS_DIR, start = ROOT_DIR) + package = '.'.join(relpath.split(os.sep)) + + print(__doc__.strip()) + autograder.util.cli.list_dir(THIS_DIR, package) + return 0 + +def main(): + return run() + +if (__name__ == '__main__'): + sys.exit(main()) diff --git a/autograder/cli/testing/grade-grading-dir.py b/autograder/cli/grading/grade-dir.py similarity index 96% rename from autograder/cli/testing/grade-grading-dir.py rename to autograder/cli/grading/grade-dir.py index adb56c6b..b6fdeb5f 100644 --- a/autograder/cli/testing/grade-grading-dir.py +++ b/autograder/cli/grading/grade-dir.py @@ -28,7 +28,7 @@ def _get_parser(): 'Grade a submission given an already prepared grading directory' + ' (see autograder.cli.testing.setup-grading-dir)' + ' and a grader file.'
- + ' Use autograder.cli.testing.grade if you have not already prepared' + + ' Use autograder.cli.grading.grade if you have not already prepared' + ' your grading directory.') parser.add_argument('-g', '--grader', diff --git a/autograder/cli/testing/grade.py b/autograder/cli/grading/grade.py similarity index 100% rename from autograder/cli/testing/grade.py rename to autograder/cli/grading/grade.py diff --git a/autograder/cli/pre-docker-grading.py b/autograder/cli/grading/pre-docker.py similarity index 95% rename from autograder/cli/pre-docker-grading.py rename to autograder/cli/grading/pre-docker.py index 5cf36397..4d5d55ae 100644 --- a/autograder/cli/pre-docker-grading.py +++ b/autograder/cli/grading/pre-docker.py @@ -36,7 +36,8 @@ def run(args): def _get_parser(): parser = argparse.ArgumentParser(description = - 'Take any necessary steps before performing a standard docker-based grading.') + 'Take any necessary steps before performing a standard docker-based grading.' + + ' This script is used as part of the standard docker grading procedure for Python.') parser.add_argument('-c', '--config', action = 'store', type = str, default = DEFAULT_CONFIG_PATH, diff --git a/autograder/cli/testing/__main__.py b/autograder/cli/testing/__main__.py index 683811bc..2498789e 100644 --- a/autograder/cli/testing/__main__.py +++ b/autograder/cli/testing/__main__.py @@ -1,5 +1,5 @@ """ -Commands to test, debug, and deelop courses and assignments. +Commands to test, debug, and develop courses and assignments. 
""" import os diff --git a/autograder/cli/test-remote-submissions.py b/autograder/cli/testing/test-remote-submissions.py similarity index 71% rename from autograder/cli/test-remote-submissions.py rename to autograder/cli/testing/test-remote-submissions.py index 3f4491a2..7a28e99a 100644 --- a/autograder/cli/test-remote-submissions.py +++ b/autograder/cli/testing/test-remote-submissions.py @@ -2,27 +2,11 @@ import sys import traceback -import autograder.api.common -import autograder.api.submit +import autograder.api.submission.submit +import autograder.assignment import autograder.submission -def _get_files(test_submission): - paths = [] - - test_submission = os.path.abspath(test_submission) - - submission_dir = os.path.dirname(test_submission) - for dirent in os.listdir(submission_dir): - path = os.path.join(submission_dir, dirent) - if (not os.path.samefile(test_submission, path)): - paths.append(path) - - return paths - def run(arguments): - config_data = autograder.api.common.parse_config(arguments) - config_data['message'] = '' - try: test_submissions = autograder.submission.fetch_test_submissions(arguments.submissions) except Exception as ex: @@ -35,17 +19,19 @@ def run(arguments): paths = _get_files(test_submission) try: - success, result = autograder.api.submit.send(config_data.get("server"), - config_data, paths) + result = autograder.api.submission.submit.send(arguments, paths) except Exception as ex: print("Failed to run submission '%s': '%s'."
% (test_submission, ex)) traceback.print_exc() + errors += 1 + continue - if (not success): - print("Autograder failed to grade the submission: " + result) + if (not result['grading-success']): + print("Autograder failed to grade the submission.") errors += 1 continue + result = autograder.assignment.GradedAssignment.from_dict(result['result']) if (not autograder.submission.compare_test_submission(test_submission, result)): errors += 1 @@ -58,12 +43,27 @@ def run(arguments): return errors +def _get_files(test_submission): + paths = [] + + test_submission = os.path.abspath(test_submission) + + submission_dir = os.path.dirname(test_submission) + for dirent in os.listdir(submission_dir): + path = os.path.join(submission_dir, dirent) + if (not os.path.samefile(test_submission, path)): + paths.append(path) + + return paths + def _get_parser(): - parser = autograder.api.common.get_argument_parser(description = - 'Submit multiple assignments to an autograder and ensure the output is expected.') + parser = autograder.api.submission.submit._get_parser() + + parser.description = ('Submit multiple assignments to an autograder' + + ' and ensure the output is as expected.') - parser.add_argument('-s', '--submissions', - action = 'store', type = str, required = True, + parser.add_argument('submissions', metavar = 'SUBMISSIONS_DIR', + action = 'store', type = str, help = 'The path to a dir containing one or more test submissions.') return parser diff --git a/autograder/cli/test-submissions.py b/autograder/cli/testing/test-submissions.py similarity index 100% rename from autograder/cli/test-submissions.py rename to autograder/cli/testing/test-submissions.py diff --git a/autograder/cli/style.py b/autograder/cli/util/style.py similarity index 100% rename from autograder/cli/style.py rename to autograder/cli/util/style.py