CopernicanRevolution.org
Transforming Our Lives through Self Reflection and Psychology
A psychology professor's collection of lessons fostering self-discovery through online activities, hands-on classroom experiences, engaging lectures, and effective discussion prompts.
Science Fairs

Organizing a Science Fair

A guide for science fair organizers, including a rubric that sharpens scientists' feedback and a spreadsheet that automatically ranks projects.

I vividly remember watching Carl Sagan's Cosmos as a little kid and feeling so much awe: we live upon a tiny planet, circling a humdrum star, lost in a galaxy tucked away in a forgotten corner of a universe of billions and billions of galaxies. Yet we're star stuff, and we can contemplate the cosmos with science. Sagan inspired me to major in physics and, while I eventually found exploring the human condition with science to be my awe-inspiring path, I'm grateful to have learned so many sciences and to love how they fit together. It's what led me to bring psychology to understanding science fairs.

Nicolaus Copernicus didn't just discover that the Earth isn't the center of the universe; he fundamentally revolutionized humanity's understanding of ourselves.
De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres), p.21
In the field of thinking, the whole history of science from geocentrism to the Copernican revolution, from the false absolutes of Aristotle’s physics to the relativity of Galileo’s principle of inertia and to Einstein’s theory of relativity, shows that it has taken centuries to liberate us from the systematic errors, from the illusions caused by the immediate point of view as opposed to 'decentered' systematic thinking.
Jean Piaget (1965), Structure and Direction in Thinking, p.232

Organizing Your Science Fair

Science fairs are wonderful opportunities for students to authentically engage in science. But organizing a fair, at any level, is quite difficult. Based on interviews and focus groups with scientists across disciplines, I synthesized a rubric that is intuitive to them. Our research shows it leads to greater agreement between judges (inter-rater reliability) and higher rates of success for students competing at the next level (predictive validity), even if the next level uses an older rubric.

We would like to judge science fair projects accurately, consistently, and precisely so students can get the best possible feedback. We would also like the process of organizing judge information to be as intuitive as possible. My website, CopernicanRevolution.org, contains guides and handouts for judges and students. It includes an Excel spreadsheet for you, so calculations happen automatically while you enter judges’ ratings!

Assigning Project Numbers

Give each of your science fair projects a unique project number. It could be as simple as "3" or a more elaborate code like "mp-03." Whatever you choose, please fill in the "project list" tab in the Excel file with each project number and each student's name (e.g., "Katie Grobman"). This helps you keep track of data. I prepared the Excel file to handle 18 categories with 300 projects in each. Copy a category tab if you need more categories. If you have more than 300 projects (wow!), highlight some blank rows in the middle and insert new rows. If you simply add rows to the bottom, they won't be part of the formulas I wrote to automatically find averages.

Assigning Judge Numbers

For each of your science fair judges, please collect some basic information and assign a judge number. Within a category (e.g., chemistry, biology), number the judges starting from 1. It's an arbitrary number, so the order you put judges in does not matter.

If you consider awards separately by grade (e.g., middle school awards and high school awards), you have two choices. You can keep a separate copy of the Excel sheet for each grade; in this case, a judge for the middle school division may also be a judge, with a completely different number, in the high school division. Alternatively, you can put middle and high school together; in this case, judges should get multiple judge numbers if they judge both grade levels. For example, I might be "judge 10" when evaluating the middle school social science projects and "judge 11" when evaluating the high school social science projects. The Excel formulas are already set up to adjust for judge bias (z-scores), and when you give somebody multiple judge numbers, you are saying their evaluations should be scaled separately. For example, you would not want to rate 2 high school students on the same scale as 10 middle school students evaluated by the same judge.

Please fill in the "judge list" tab in the Excel file with each judge number, category, and grade (each combination should appear only once), each judge's name (which may appear multiple times), the judge's prior years as a judge (e.g., 3), and the judge's highest degree (e.g., Ph.D., Physics). The Excel file is prepared to handle up to 28 judges per category.

Entering Judgesʼ Ratings

On the left side of each judge worksheet (e.g., ctg01) is a place to enter the project number (the arbitrary code you used), category (e.g., chemistry), and grade (e.g., middle, high). Once you type in a project number, notice how it automatically appears in the gray columns across the entire spreadsheet; this helps you match projects and judges. Judges will provide you with 4 numbers for each project: Project Idea, Present & Frame, Rigorous Method, and Interpret Results. Please see the judge and student guides for details.

Notice how each judge has a big column (incorporating 4 little columns) on a judge worksheet. The little columns, I, F, M, and R, match the judge's ratings in the same order as the judge's sheet. Find a project's row and judge's column (just like finding a product in a multiplication table) and enter the 4 numbers. The worksheet "ctg_sample" illustrates what this looks like. Each worksheet can hold 28 judges. In my experience, three judges who take their role seriously is plenty, as long as every judge gets to rate every project. To score science fair projects correctly, you need to put all judges for each particular project on the same worksheet. For example, you could not have judge 28 (on sheet "ctg01") and judge 29 (on sheet "ctg02") evaluating projects 102-113. Instead, leave judge 28 blank and make these two people judges 29 and 30.

Collating Judges’ Ratings

On each judge sheet, scroll all the way to the right (past all the columns in gray) to the 7 columns with light purple headings. Excel has automatically tabulated results for each science fair project! Highlight all of this information (excluding the purple headings) and copy it. Do not cut it. Go to the last worksheet, titled "scoring sheet." Notice how it has the same purple headings. Paste the values into the corresponding place. Do not simply press paste, because that would copy formulas. To paste just the numbers, click the Excel menu "Edit," choose "Paste Special...," click the dot for "Values," and press "OK."

Go to the next judge page and repeat the same process, now pasting the values in the rows following the previous ones. If you had to create another Excel spreadsheet, open it up and paste those values into the single scoring sheet. You can now sort the information in all sorts of ways to help you choose science fair winners. For example, if you are going to give awards to the top 3 projects in each category and grade level, sort by "category," then "grade," and finally by your chosen scoring variable (descending, so the top performers are on top). To sort, choose the Excel menu "Data," choose "Sort," and you can sort by three things in a specific order. Below I'll explain what each composite score means so you can decide how you would like to judge science fair projects.

Important Cautions when using Excel!

Do not enter anything in cells when you have no information for them. Do not, for example, type "0" or "n/a" when a student did not show up for the fair. Excel's calculations are finicky and will interpret whatever you type as a real judge rating. Just leave cells with missing data blank.

Never cut and paste in the Excel spreadsheet. You can copy and paste. When you cut and paste, Excel updates the underlying formula references, which breaks the calculations in bad ways.

What are the Composite Scores?

Notice how each science fair project has 5 composite scores. Regardless of which you prefer, 100 is the best and 0 is the worst possible score. Every score gives equal weight to each judge who rated the project. In most cases, the typical project will score around 50. If you would like to skip the nitty-gritty, I recommend using the "overall" composite score for your awards. Below are some issues to consider when choosing a scoring method.

Should all Judge Rating Dimensions be Treated Equally?

Judges gave 4 ratings: Project Idea, Present & Frame, Rigorous Method, and Interpret Results. One way to score is to treat each of these ratings as equally important. Judges will probably come to think so simply because they make four ratings. Another possibility is to make the first two ratings worth 1/3 less than the last two: Project Idea and Present & Frame are each worth 20%, while Rigorous Method and Interpret Results are each worth 30%. That would be more consistent with the feedback from my interviews with scientists, who feel the latter two are more important. If you would like to weight the 4 ratings equally, choose either EQ composite score; if you would like to weight them like the feedback we received, choose either WE composite score.
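To make the two weighting schemes concrete, here is a small sketch in Python (my own illustration, not the actual Excel formulas) showing how an EQ-style and a WE-style composite can differ for the same four ratings:

```python
# Ratings are ordered: [Project Idea, Present & Frame,
#                       Rigorous Method, Interpret Results], each 0-10.
EQUAL_WEIGHTS = [0.25, 0.25, 0.25, 0.25]   # EQ: all dimensions equal
EXPERT_WEIGHTS = [0.20, 0.20, 0.30, 0.30]  # WE: method & interpretation count more

def composite(ratings, weights):
    """Weighted average of one judge's four 0-10 ratings, scaled to 0-100."""
    return sum(r * w for r, w in zip(ratings, weights)) * 10

# A project strong on idea and presentation but weaker on method:
ratings = [8, 7, 5, 6]
print(composite(ratings, EQUAL_WEIGHTS))   # equal weighting
print(composite(ratings, EXPERT_WEIGHTS))  # expert weighting scores it lower
```

With equal weights this project scores 65; with the expert weighting it drops to 63, because its weakest dimensions are the ones scientists weighted more heavily.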

Should Judge Scores be Adjusted for Strictness & Leniency?

Ideally, judges follow benchmarks, so they should all be thinking, for example, that a “7” means exactly the same thing. The judge guides try to get judges to follow benchmarks.

However, I find that no matter how I explain the judging process, judges use the scales to match what they feel a "7" represents. To the extent judges take the benchmarks seriously, you can use their "raw" ratings (numbers without adjustment). But I recommend using adjusted values. Some judges might be very strict (giving lots of low ratings and few high ratings) and others might be lenient (vice versa). You can especially see this when many judges rate the same projects. If you feel judges took the benchmarks seriously and that one judge gave higher ratings than another primarily because they judged better projects, choose either R composite score. If you feel each judge rated projects of similar quality and the differences between judges reflect their strictness, choose either Z composite score.

How does Z Adjust for Strictness & Leniency?

We can statistically adjust for strictness by assuming each judge rated projects of equal caliber and an equal range of caliber. That means each judge's ratings should have the same average and the same standard deviation. Excel shifts, stretches, and crunches each judge's scores so they have an average rating of 5 and a standard deviation of 2.5. Adjusted ratings that fall outside 0 to 10 are held at 0 or 10 to avoid one judge overly influencing the results. If you like statistics, what's happening is each rating is z-scored and then linearly transformed to an intuitive 0-to-10 rating (thus Z).
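The adjustment described above can be sketched in a few lines of Python. This is my own rendering of the idea rather than the spreadsheet's exact formula (for instance, I assume a population standard deviation; Excel may use the sample version):

```python
import statistics

def z_adjust(ratings, target_mean=5.0, target_sd=2.5):
    """Rescale one judge's ratings to mean 5, SD 2.5, clamped to 0-10.

    A sketch of the Z adjustment: z-score each rating, then linearly
    transform back onto the intuitive 0-to-10 scale.
    """
    mean = statistics.mean(ratings)
    sd = statistics.pstdev(ratings)  # assumption: population SD
    if sd == 0:
        # A judge who gave identical ratings carries no ranking information.
        return [target_mean] * len(ratings)
    adjusted = []
    for r in ratings:
        z = (r - mean) / sd
        a = target_mean + target_sd * z
        adjusted.append(min(10.0, max(0.0, a)))  # clamp to the 0-10 scale
    return adjusted

strict_judge = [2, 3, 3, 4, 5]  # a strict judge: lots of low ratings
print(z_adjust(strict_judge))   # same ordering, recentered around 5
```

Notice the adjustment preserves each judge's ranking of the projects; it only removes the judge's overall strictness and their tendency to use a narrow or wide range.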

What is the overall composite score?

The overall composite score is the average of the 4 other scores. If you answer the 2 previous questions with "yes and no," you should use this score. It says there is reason to consider the 4 dimensions equally but also reason to weight them; the compromise makes the first 2 dimensions each worth 22.5% and the last 2 dimensions each worth 27.5%. It also says some judges really are stricter, but judges also got projects of different caliber; so half of each rating comes from the 'raw' score and half from the z-score.
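Here is a sketch (again my own illustration, with hypothetical helper names) showing why averaging the four composites is equivalent to the compromise weights of 22.5%/22.5%/27.5%/27.5% applied to a 50/50 blend of raw and z-adjusted ratings:

```python
EQ = [0.25, 0.25, 0.25, 0.25]   # equal dimension weights
WE = [0.20, 0.20, 0.30, 0.30]   # expert dimension weights

def score(ratings, weights):
    """Weighted average of four 0-10 ratings, scaled to 0-100."""
    return sum(r * w for r, w in zip(ratings, weights)) * 10

def overall(raw, z_adj):
    """Average of the four composites: EQ-R, EQ-Z, WE-R, WE-Z."""
    parts = [score(raw, EQ), score(z_adj, EQ),
             score(raw, WE), score(z_adj, WE)]
    return sum(parts) / 4

raw = [6, 7, 5, 8]            # example raw average-judge ratings
z_adj = [5.2, 6.4, 4.1, 7.6]  # example z-adjusted ratings

# Equivalent form: compromise weights on a 50/50 raw/z blend.
compromise = [0.225, 0.225, 0.275, 0.275]
blended = [(r + z) / 2 for r, z in zip(raw, z_adj)]
assert abs(overall(raw, z_adj) - score(blended, compromise)) < 1e-9
print(overall(raw, z_adj))
```

Because the scoring formula is linear, averaging four weighted averages is the same as averaging the weights and the inputs, which is exactly the compromise the overall score describes.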

Giving Feedback to Students

Students might like to know why they missed getting an award and how they can improve in the future. Some may like to know what set their project apart and earned them among the best scores. To give them feedback, find their row in the original judge sheet and scroll all the way to the right. Just before the purple headings are two "average judge" headings: one for raw scores and one for average z-scores. Look at their lowest number across the 4 dimensions under "average judge." If it's below about 3.5, you can share that this is the dimension with the most room to improve; their score is below what judges felt was the minimum standard. If their lowest score is above about 4.5, tell them they did well! It's just that other high-quality projects did better. Nevertheless, tell them that the place they can improve the most is their lowest dimension. If a student scores above about 6.5 on any dimension, highlight it as a place where judges were especially impressed and felt they did better than they would have expected from a science fair project.
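The rough cutoffs above can be collected into a small helper. This is a hypothetical sketch of my own (the guide itself gives approximate thresholds of about 3.5, 4.5, and 6.5, and does not explicitly address the gap between 3.5 and 4.5):

```python
def feedback(avg_rating):
    """Map a dimension's average-judge rating (0-10) to a feedback message,
    using the rough cutoffs described in the text."""
    if avg_rating < 3.5:
        return "most room to improve: below the judges' minimum standard"
    if avg_rating > 6.5:
        return "judges were especially impressed by this dimension"
    if avg_rating > 4.5:
        return "did well; other high-quality projects simply scored higher"
    # Assumption: the 3.5-4.5 gap reads as near the minimum standard.
    return "near the minimum standard; worth strengthening"

print(feedback(3.0))
print(feedback(7.1))
```

In practice you would apply this to a student's lowest dimension for improvement advice and to any dimension above about 6.5 for praise.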

Generally, qualitative feedback is much more helpful for students and teachers. The judge's guide emphasizes how to give feedback, and another handout provides sample feedback. It helps students realize what to focus on, and if judges begin their comments with the general principles on the handout, teachers can see patterns more easily and help their students next year!

I hope some of the materials at CopernicanRevolution.org are helpful. Thank you for putting in so much time and effort to help children learn and grow with science.
About the Author
Katie Hope Grobman actively contributes to science fairs, including serving as a Lead Judge at county and state science fairs since 2006 and as Co-Director of the Monterey County Science Fair from 2020 to 2023. She has mentored numerous students; some have competed at the international science fair. Her research with scientists across disciplines led to the development of a refined rubric, which has increased inter-rater reliability among judges and predictive validity for awards at subsequent competition levels.

Preferred APA Style Citation
Grobman, K. H. (2010). Organizing your science fair. CopernicanRevolution.org.