Maker Project Rubric

[Image: Maker Project Rubric header categories]

It has finally begun. The tipping point is here. I have finally started to hear the terms “makers” and “makerspace” from educators outside of those deeply entrenched in the traditional maker movement. These are exciting times!

So you are starting a makerspace this year?
-An English teacher who had no knowledge of what a makerspace was prior to this year

What does this mean for makers looking to bring making seamlessly into the classroom environment?

It means we as makers need to show traditional educators the value that making can bring into education. And to do that, we need to meet traditional educators halfway — we need a reliable tool to measure progress, measure growth, and measure making in a way that was previously not necessary.

These are the reasons Digital Harbor Foundation and its partners have developed the first Maker Project Rubric. It is a simple way to assess the progress and growth of a student maker. The rubric was specifically developed to measure any maker project a student completes, whether it’s 3D printing, claymation, soldering, sewing, puppetry… whatever. It is also a way, if necessary (depending on how your school system values grades), to assign a numerical percentage to a maker project.

I would like to first share the rubric with you, then discuss the following:

  • How the rubric was developed
  • The multiple iterations made as it was tested and refined
  • Suggestions on how to use the rubric

Here is the Maker Project Rubric:

[Image: Maker Project Rubric overview]

[Image: Maker Project Rubric Scoring Equivalency Tool]


It is also available to download as a PDF for you to print, or you can purchase the Maker Project Rubric Poster. We suggest printing the rubric by itself to distribute to students, then printing the Scoring Equivalency Tool and Explanation double-sided for your own reference as a teacher.

Rubric Development

A number of considerations went into developing the categories, and a wide variety of stakeholders were at the table during this important conversation. We had leaders in the field of Maker Education, including folks from Digital Harbor Foundation. We had general education teachers involved for their skills in rubric and content development aligned with standards-based education. We had Level II Google Certified Educators adept at developing content for 21st Century Education. We had Program Facilitators with experience leading youth in Maker projects and activities. We also had input from students and young makers to ensure all the appropriate stakeholders were present in the development of this project.

After a number of brainstorming sessions and iterations, we decided on the categories we believed to be most important, not only for assessing a maker project but also for improving a young maker’s skills, attitudes, and dispositions. We settled on the following categories:

  • Creativity
  • Iteration
  • Initiative
  • Learning
  • Community

The language in the rubric was developed to look for specific outcomes and behaviors in each of the above categories so the rubric would maintain a high level of reliability. However, we also allowed the language a certain level of subjectivity so the teacher (who always knows his or her students best) can decide on the appropriate measurement for the student and project in question.

Testing and Refining the Rubric

In order to deliver the best possible product, and to make the rubric as reliable and valid as possible, we went through a number of tests and iterations of the rubric.

In one test, I visited DHF during a Stop-Motion animation class and selected a number of students with varied backgrounds and abilities. Before they began the project, I conferenced with them, explained the rubric, then observed them as they completed their projects.

After students finished, both the students and I filled out the rubric. We met and held a conference where we discussed why we each gave the score we did in each category. In some situations, the student convinced me he or she should have earned a higher level than I originally thought, giving me a full explanation and rationale about something I may have missed. In other situations, students gave themselves a lower score than I did, and I explained, based on my observations, why they actually should have scored higher.

Through this initial test, we made a number of interesting findings:

Not Finishing a Project

In one case, a student was unable to complete his project. This brought up a few interesting points:

  • In making, is a project ever finished? Not necessarily. Improvements can always be made.
    • However, the philosophy we decided on was this: in a makerspace environment, not finishing would be fine. In a school environment, though, deadlines are a real thing. If a teacher assigns a making project with what they believe to be a fair deadline, there needs to be some level of accountability.
    • The original rubric had no language to address this issue. We decided the best place to address it was the Iteration category. We changed the language so that a student who finishes can earn a 4 in that category, while a student who still iterates and improves a design without finishing can earn a 3. This means not finishing is NOT detrimental to their grade — in fact, they can still do quite well (even still earn an A), while students who do complete the project are rewarded by being able to score a 4 in that category.

Sharing with a Community

Through our testing and refining with a real group of students, we also noticed some issues with the sharing part of the rubric. All true Makers understand the importance of sharing what they have made in some fashion.

  • We originally had the sharing category focused on the strength of explanation and reflection in sharing. However, we realized that different organizations or school systems might have very different access and/or ability to share with the real world. Many school systems still do not allow smartphones, or even still have YouTube blocked (gasp! the horror).
    • We decided to leave the type of sharing unspecified in this part of the rubric.
    • We changed the language in Level 3 to informal peer-to-peer sharing. A simple, “hey everyone, look at how cool this is and how I did it” works.
    • We changed the language in Level 4 to “an authentic community in a formal manner”. That authentic community could be the classroom itself, in a science-fair fashion, or it could be YouTube, another social network, a blogging site, or anything else — the point is that the teacher decides what the authentic community is.

Why Share All these Steps with You?

Why go into so much depth about how the rubric was developed and changed along the way before we released the final product? We want to SHOW YOU we practice what we preach. We didn’t just rush this out. We completed the entire making process.



Suggestions on How to Use the Rubric

The suggestions are written out in full on page three of the document. Please take the time to read through them before using the rubric as a tool. Here are some of the highlights:

  • Strongly consider using the Maker Project Rubric as a conferencing point. Don’t just fill out the rubric and give a grade. Allow the student to fill out the rubric as well, then use it to conference with the student, taking their feedback into consideration and agreeing on a grade rather than assigning one. Ultimately, you as the teacher have the final say, of course, but taking input from the student is vital and more in line with the maker spirit. More direction on how to do this effectively is on page 3 of the Maker Project Rubric PDF document.
  • We also encourage you to consider not using the rubric as a scoring tool at all, but simply as a conferencing tool to improve the attitudes and dispositions of your makers (also on page 3).
  • A note on the Scoring Equivalency Tool and how it was developed is also included on page 3. It is not done by straight 5% differentials; the reasoning for this is again on page 3. Although we feel we have done a lot of work on this tool to ensure the fairest grade for both the student and the accuracy/expectations of a teacher, please feel free to alter the Equivalency Tool to better reflect the scoring philosophy of your school system.
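To make the idea of a non-linear equivalency concrete, here is a minimal sketch of how such a lookup could work. The percentage values and the `project_grade` helper below are hypothetical placeholders for illustration only; the official values live in the Scoring Equivalency Tool on page 3 of the PDF.

```python
# Hypothetical sketch of a scoring equivalency lookup. Each of the five
# rubric categories is scored 1-4, giving a total between 5 and 20.
# The percentages below are ILLUSTRATIVE, not the official tool's values.

CATEGORIES = ["Creativity", "Iteration", "Initiative", "Learning", "Community"]

# Illustrative total-score -> percentage table. Note the steps are not a
# uniform 5%: lower totals drop faster, so a project of mostly 3s still
# lands in a healthy grade range.
EQUIVALENCY = {
    20: 100, 19: 97, 18: 94, 17: 91, 16: 88,
    15: 85, 14: 81, 13: 77, 12: 73, 11: 68,
    10: 63, 9: 58, 8: 52, 7: 45, 6: 38, 5: 30,
}

def project_grade(scores: dict) -> int:
    """Convert per-category rubric scores (1-4 each) to a percentage."""
    if set(scores) != set(CATEGORIES):
        raise ValueError(f"expected scores for exactly: {CATEGORIES}")
    if any(not 1 <= s <= 4 for s in scores.values()):
        raise ValueError("each category score must be between 1 and 4")
    return EQUIVALENCY[sum(scores.values())]
```

Swapping in your own table is all that is needed to match your school system’s scoring philosophy, which is exactly the kind of alteration the suggestions above encourage.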

Growth Mindset

As always, we want to continue to grow and improve our practices. We would love to hear what you think about the rubric and your experiences using it. If you have any suggestions or thoughts, or would like to share your experience, please contact us! You can also reach out to Scott or Digital Harbor Foundation directly on Twitter, or check out #makerrubric.

Again, you can download the Maker Project Rubric PDF or purchase a Maker Project Rubric Poster.