Graded assignments with Jupyter notebooks
Jupyter notebooks can be used for a wide range of assignment types, from problem-solving exercises to data-based scientific investigations. You can ask students to write text and/or code, and also to include other types of media such as images or video in their notebook submission. With tools that make automatic grading possible, Jupyter notebooks also make assignment grading scalable to large classes.
Below you will find information on how to:
- Ensure the fairness and validity of your notebook-based assignments
- Provide students with useful feedback on their work
- Manage students’ submissions and grade notebooks automatically
Learn how Pol del Aguila Pla developed automatically graded labs with Jupyter Notebooks for an image processing course
How can you ensure the fairness and validity of your notebook-based assignments?
Below we review two good practices to apply when designing your notebook-based assignments.
For your notebook-based assignment to fairly assess students’ learning, you need to minimize challenges unrelated to what you want to assess, such as technical issues. In particular, making sure all students have access to a proper Jupyter environment with all the necessary libraries is essential to the fairness of your assignment. At EPFL, noto offers a stable and easy-to-use JupyterLab environment for all. We advise you to thoroughly test your assignment on noto before distributing it to students. For more information, see our quick start guide here, and don’t hesitate to contact us if you need specific libraries installed.
Another helpful measure is to offer technical support for the assignment, for instance with a teaching assistant available to answer questions, which can be done effectively on an online forum (e.g. a Piazza forum on Moodle).
Finally, this might sound obvious, but students should be given the opportunity to practice manipulating notebooks and using the Jupyter environment and its associated tools through non-graded activities prior to the assignment (for instance tutorials or exercise worksheets).
In a notebook-based assignment, as in any assignment in general, it is judicious to include a description of how students’ work will be assessed, not only in terms of the criteria used but also in terms of what success means based on these criteria (J. A. C. Hattie & Donoghue, 2016).
For instance, if you evaluate students on writing a function in Python for a total of 2 points, what type of work will get 2 points, 1 point, or 0 points? Is it sufficient to implement the function correctly, or do you also expect students to optimize their code in terms of complexity, or to follow coding best practices? In other words, what do you consider excellent work versus insufficient work?
Clarifying the success criteria in your assessment allows your students to meet your expectations. If students have to guess what excellent work means to you, then what you assess is their ability to guess rather than their ability to do excellent work. Therefore, making success criteria explicit will increase the validity of your assessment, i.e. the likelihood that you assess what you actually want to assess.
Making notebook-based assignments a learning opportunity
It is hard for students to make sense of a grade if they are not provided with information that links the grade to characteristics of their work. This will, in turn, make it harder for them to improve. Providing feedback information alongside a grade transforms an assignment into a learning opportunity (J. Hattie & Timperley, 2007). There are different ways of implementing feedback on notebook-based assignments, which also depend on the size of your class.
A first option consists in preparing a set of written comments while you grade students’ work. You can submit such textual comments alongside the grade in platforms such as Moodle for instance. Students will benefit more from your comments if they include: a) a short description of what they were expected to achieve, b) a factual description of both the positive and the negative sides of what the student has done and c) ideas as to how the student could improve, in particular in methodological terms.
One issue with the previous option is that it makes it difficult to link precisely what the student has done in the notebook to the feedback. An alternative is to record a screencast when you go through the notebook of the student, commenting on the work as you go. In addition to being more detailed, this type of video-feedback is also more personal and can give more insights to students as to how you see their work.
Of course, the two previous options do not scale well for large classes, in which case automated grading might be considered. You can find out more about this option in the section below. While the messages generated by the automated tests run by the grader (assertion errors) do contain some feedback information, they are usually in a form that most students find very hard to comprehend. Providing students with feedback in this context means anticipating the mistakes students are likely to make and integrating meaningful feedback messages into your test functions. This requires some development but is worth it, as Pol del Aguila Pla explains in his story about the image processing labs he developed in Jupyter notebooks.
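As a minimal sketch of this idea, the test below grades a hypothetical student function that computes a mean (the function name and the anticipated mistakes are illustrative, not taken from any particular course). Each check replaces a bare assertion error with a message pointing at the likely misconception:

```python
# Hypothetical test cell for a student-written mean function.
# Each assertion carries a feedback message targeting a common mistake,
# instead of an opaque "AssertionError".

def check_mean(student_mean):
    data = [2, 4, 6]
    result = student_mean(data)
    assert result is not None, (
        "Your function returned None. Did you forget the return statement?")
    assert result != sum(data), (
        "It looks like you summed the values but did not divide by their "
        "number. Remember: mean = sum / count.")
    assert abs(result - 4) < 1e-9, (
        f"Expected 4 for the input {data}, but got {result}. "
        "Check how you compute the total and the count.")
    print("All checks passed, well done!")

# A correct answer passes silently except for the success message:
check_mean(lambda values: sum(values) / len(values))
```

A student who forgot the division now sees an explanation of the error rather than a raw assertion failure, which is the kind of anticipated feedback discussed above.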
Tools for notebook-based assignment management and grading
Scaling up your notebook-based assignments can quickly become a pressing need if you have more than 30 to 50 students and/or if you want to organize several assignments during the semester. We review the tools that can help you with this task.
A first important question to address with notebook-based assignments is that of distributing the assignment and collecting students’ submissions. Jupyter notebooks are plain text files (in JSON format), which can therefore be exchanged using standard communication and file exchange tools. Although this solution is neither very scalable nor very practical, it is perfectly possible to manage a notebook-based assignment by email, for instance.
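To illustrate that a notebook really is just a text file, the sketch below parses a minimal (invented) notebook as JSON with only the standard library and counts its code cells, as you might when sorting through submissions received by email:

```python
# A Jupyter notebook is a JSON text file. This minimal (made-up) notebook
# contains one markdown cell and one code cell; we parse it with the
# standard json module and count the code cells.
import json

notebook_text = '''{
 "cells": [
  {"cell_type": "markdown", "source": ["# Exercise 1"]},
  {"cell_type": "code", "source": ["print('hello')"], "outputs": []}
 ],
 "metadata": {},
 "nbformat": 4,
 "nbformat_minor": 5
}'''

nb = json.loads(notebook_text)
code_cells = [c for c in nb["cells"] if c["cell_type"] == "code"]
print(f"{len(code_cells)} code cell(s) out of {len(nb['cells'])} cells")
```

Because the format is plain text, any file exchange channel works in principle; the dedicated tools described next simply make the process far less error-prone at scale.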
Fortunately, assignment management tools exist, such as the assignment manager in Moodle. We have developed a version of the Moodle assignment manager that interacts with noto, the EPFL JupyterLab platform for education, so that a) you can easily distribute a notebook-based assignment to students directly from your noto workspace, and b) students can attach notebooks from their noto workspace when they submit their work to you.
You can find the documentation on how to use the notebook assignment plugin in Moodle here:
Don’t hesitate to contact us if you have questions or need more information.
Notebooks support automated grading through tools such as nbgrader or Otter-Grader. These tools implement a workflow in which you first design your assignment together with a set of tests that assess whether students’ code is correct, specifying the points to award accordingly. These tests are automatically removed from the version of the assignment distributed to students. In the last step of the process, the tests are automatically run on students’ submissions, which produces a grade for each submission.
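As a sketch of this workflow, here is what an instructor-version exercise cell can look like in nbgrader (the exercise itself is invented for illustration). The `### BEGIN SOLUTION` / `### END SOLUTION` and `### BEGIN HIDDEN TESTS` / `### END HIDDEN TESTS` markers are nbgrader conventions: the tool strips the solution and the hidden tests from the version released to students, then runs all tests at grading time:

```python
# Instructor version of an nbgrader exercise cell (illustrative example).
# nbgrader removes the solution block from the student release.

def fahrenheit_to_celsius(f):
    """Convert a temperature from Fahrenheit to Celsius."""
    ### BEGIN SOLUTION
    return (f - 32) * 5 / 9
    ### END SOLUTION

# Autograded test cell (points are set in the cell's nbgrader metadata).
# A visible test gives students a quick sanity check:
assert fahrenheit_to_celsius(32) == 0

### BEGIN HIDDEN TESTS
# Hidden tests are stripped from the student version and only run at grading:
assert abs(fahrenheit_to_celsius(212) - 100) < 1e-9
assert abs(fahrenheit_to_celsius(-40) - (-40)) < 1e-9
### END HIDDEN TESTS
```

Keeping some tests visible and some hidden lets students check their progress while discouraging hard-coded answers.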
For automated grading to be effective, you need to anticipate the different ways in which students can solve the exercises and/or write the code. It therefore requires a notable investment in terms of software development. A major advantage is that once the tests are written, the number of submissions to assess hardly matters: the solution scales up very well. As discussed above, a drawback of this solution is that providing feedback to students is not straightforward and might require additional software development. Hear Pol del Aguila Pla explain how he tackled this issue in his image processing labs.
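One concrete way to anticipate different correct solutions is to test behaviour rather than implementation details. The sketch below (the exercise and function name are hypothetical) compares floating-point results with a tolerance and accepts any sequence type the student might return, so that several valid implementations all receive full points:

```python
# Illustrative grading function for a hypothetical "normalize to sum 1"
# exercise. It tests behaviour, not implementation: results are compared
# with a floating-point tolerance, and any sequence type (list, tuple,
# generator) returned by the student is accepted.
import math

def grade_normalize(normalize):
    values = [1.0, 3.0]
    result = list(normalize(values))  # accept list, tuple, generator, ...
    expected = [0.25, 0.75]
    assert len(result) == len(expected), "Wrong number of elements."
    assert all(math.isclose(r, e, rel_tol=1e-9)
               for r, e in zip(result, expected)), (
        f"Expected {expected}, got {result}.")
    return 2  # points awarded

# Two differently written but equally correct solutions earn the same score:
points = grade_normalize(lambda vs: [v / sum(vs) for v in vs])
```

Exact equality checks on floats, or checks tied to one particular data structure, are a common source of unfairly failed tests; tolerant, behaviour-based checks like these avoid penalizing legitimate variation.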
Author: Cécile Hardebolle
Discover other ways of using Jupyter Notebooks in your teaching
Hattie, J. A. C., & Donoghue, G. M. (2016). Learning strategies: A synthesis and conceptual model. Npj Science of Learning, 1(1), 1–13. https://doi.org/10.1038/npjscilearn.2016.13
Hattie, J., & Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487