Figuring out how to efficiently grade students' assignments is a non-trivial task. Automatic output checking can make grading more efficient, but it leaves no room for quality assessment and feedback. To handle the logistics of grading, a set of scripts and grading practices have been developed. They are outlined here.
Each assignment file follows the naming convention
LastName_AssignmentName.txt
and is saved in the student's repository directory. The text file with student info has columns for Section #, Last Name, First Name, email alias, Github user account, repository name, major, and class standing. For example:
In [1]:
!cat scripts/student-info.txt
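A minimal sketch of how a grading script might read this file, assuming one tab-delimited record per student with the columns listed above (the delimiter, column order, and absence of a header line are all assumptions):

    import csv

    # Column names follow the description above; order and delimiter are assumptions.
    FIELDS = ["section", "last_name", "first_name", "email_alias",
              "github_user", "repo_name", "major", "class_standing"]

    def load_student_info(path="scripts/student-info.txt"):
        """Return one dict per student, keyed by the column names above."""
        with open(path, newline="") as f:
            reader = csv.DictReader(f, fieldnames=FIELDS, delimiter="\t")
            return list(reader)

    students = load_student_info()
    print(students[0]["github_user"])   # e.g., look up one student's GitHub account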
Students are required to "turn in" assignments by committing their changes to their local git repository and then pushing them to Github by the assignment deadline.
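As a concrete sketch of that turn-in sequence, written notebook-style with a leading ! like the cell above (drop the ! to run the same commands in a terminal); the file name is a placeholder and the default branch may be main or master depending on the repository:

    !git add Smith_Euler1.txt                      # stage the assignment file (placeholder name)
    !git commit -m "Turn in Euler assignment 1"    # record the change locally
    !git push origin main                          # publish to Github; branch name may differ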
The rubrics for grading can be found in the syllabus.
These are some general instructions on the workflow for grading.
For Project Euler assignments, the grading is fairly straightforward. For full credit (5/5), they must have all of the following:
Some example grades for specific cases, which can be indicated in the comments to the student:
Include any comments about why their code might not be working. For example, if they abandoned it because they hit an error they couldn't figure out, offer a suggestion for what they did wrong. If their code runs but returns the wrong solution, try to figure out why and indicate what the problem is. If it is too much work to figure out what they did wrong, just say so in the comments.
Guidelines provided to a grader for the NumPy exercises - a basic grading rubric.
There are four questions, but some of them have subparts. To reinforce Learn by Doing, they get full credit for doing the parts that don't require additional input beyond correctly typing in the code and executing it.
Question 1.
Give 5 points for array 1 and 5 points for array 2. Maximum credit if they were able to achieve the result in just a few lines, similar to my examples.
Question 2.
There are a couple of ways to accomplish this one. I gave two examples in my solutions. Full credit if their solution uses the NumPy tile function.
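The exercise itself is not reproduced here, so purely as an illustration of what np.tile does (repeating a small block to build a larger array), a minimal example with a stand-in 2x2 block:

    import numpy as np

    block = np.array([[0, 1],
                      [1, 0]])          # stand-in 2x2 block, not the assigned array
    big = np.tile(block, (4, 4))        # repeat 4 times along each axis
    print(big.shape)                    # (8, 8)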
Question 3.
(d) 3/3 for correct implementation of np.nonzero (illustrated below)
(e) This one is where they had to do some work:
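For part (d) above, a neutral reminder of what np.nonzero returns (this is only an illustration, not the exercise solution):

    import numpy as np

    a = np.array([0, 3, 0, 5, 0, 7])
    idx = np.nonzero(a)     # tuple of index arrays, one per dimension
    print(idx)              # (array([1, 3, 5]),)
    print(a[idx])           # the nonzero values themselves: array([3, 5, 7])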
Question 4.
(d) 2/2 for executing the code I gave them
(e) This one is where they had to do some work:
4/5 A common mistake is using integers in the mask expression, which produces very blocky output instead of a smooth curve (illustrated after this rubric).
Same approximate rubric as question 3(e) for other types of incorrect answers.
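To make the blocky-output mistake noted in 4(e) concrete (the actual mask expression from the exercise is not reproduced here), a minimal sketch of how integer-valued coordinates coarsen what should be a smooth curve:

    import numpy as np

    # Float grid: many sample points, so sin(x) traces a smooth curve.
    x = np.linspace(0.0, 10.0, 200)
    y_smooth = np.sin(x)

    # Integer grid over the same range: only the whole numbers 0..10,
    # so the same expression yields a handful of samples and looks blocky.
    xi = np.arange(0, 11)
    y_blocky = np.sin(xi)

    print(y_smooth.size, y_blocky.size)   # 200 vs 11 samples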
These are "available by request" to authorized educators with a valid .edu
email address. Contact the repository owner to request them.
I purposely choose projects that I have not personally coded myself and that form a basis for answering real research questions. There are several reasons I do this:
For these reasons, the project assessment is built less on "correctness" than on their formulation of the solution, documentation of the results, and their attempt to answer the assigned "research question". The rubric is set up so that they can get most of the points for developing an organized, complete project with documentation, even if their results are incorrect.
See the syllabus for project grading guidelines.