Fixing Traditional Report Cards

Report cards stink. Think about it from a parent's perspective. What does an 86% or a B tell the parent? Does it mean your son or daughter knows only 86% of the material? If everything is graded reliably, it might mean your child accumulated 86% of the total points over a grading period. But even if the grading is reliable and valid (which is a stretch), the information is not worthwhile. How, as a parent, do you help your child improve from an 86%? Traditional report cards tell you nothing. Parents deserve a more robust account of their child's progress.

Here are my problems with traditional report cards:

  • The number or letter grade assumes the grading was reliable and valid. It also assumes the assessments and grading scheme were similar enough (or identical) to those of a teacher teaching another section of the same course. I would assert this isn't true in public education.
  • Letter or numeric grades do not inform parents or students how to improve performance on the skills required in a subject or at a grade level.
  • Why are grades reported in quarters? If we have electronic grade reporting, why do we report individual terms at all? Why can't grades accumulate over the entire year?

My solution to the traditional report card:

  • Teachers should only grade SKILLS and CONTENT, not behavior. You should not grade a student on how pretty his or her work is. Grading should be based on clearly defined skills and content, not subjective or ambiguous ideas.
  • Grading should last all year, not quarters, trimesters, or semesters. Accumulate a grade for the entire year.
  • Grades should not be reported as bare numbers or letters; on their own, these are meaningless. Yes, a conversion is important for colleges, but let it be a simple 4 (advanced), 3 (proficient), 2 (basic), 1 (below basic), or 0 for each course. This conversion ONLY communicates a GPA to colleges and serves no other purpose.
  • Teachers should provide parents with a robust portfolio of skill and content assessments linked to work turned in throughout the year. The portfolio should be available throughout the year, with a formal report written at least quarterly.

How can this be accomplished? If teachers create clear assessments with skills to master (i.e., linked to the Common Core standards), then assessments could be tracked by skill. Each assessment would be graded on a common rubric. Rather than assigning a grade, the teacher would report a text summary of the student's progress on each particular skill. If connected electronically, this would be quite easy to do. A parent could then map progress on one skill throughout the year, or get an overall snapshot of a child's skills across every course. At given intervals, teachers could report additional qualitative data (in text or video) to parents about individual student progress.
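
To make that concrete, here is a minimal sketch, in Python, of what the underlying skill records might look like. Everything in it (the class names, the fields, the sample standard code) is a hypothetical illustration of the idea, not a finished design:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class SkillAssessment:
        """One rubric-scored assessment of one skill, with a narrative summary."""
        skill_id: str           # e.g., a Common Core code such as "CCSS.ELA-LITERACY.W.8.1"
        assessed_on: date
        rubric_level: int       # 4 advanced, 3 proficient, 2 basic, 1 below basic, 0
        summary: str            # text summary of progress, in place of a letter grade
        artifact_url: str = ""  # link to the submitted work (photo or file)

    @dataclass
    class StudentRecord:
        name: str
        assessments: list[SkillAssessment] = field(default_factory=list)

        def skill_history(self, skill_id: str) -> list[SkillAssessment]:
            """Map progress on one skill throughout the year."""
            return sorted((a for a in self.assessments if a.skill_id == skill_id),
                          key=lambda a: a.assessed_on)

        def snapshot(self) -> dict[str, int]:
            """Overall snapshot: the most recent rubric level for each skill."""
            latest: dict[str, SkillAssessment] = {}
            for a in self.assessments:
                if a.skill_id not in latest or a.assessed_on > latest[a.skill_id].assessed_on:
                    latest[a.skill_id] = a
            return {sid: a.rubric_level for sid, a in latest.items()}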

Impossible? Not at all. If education moves toward electronic reporting, this becomes straightforward. If a database of common rubrics existed, one could merge class lists with the rubrics. Teachers could grade student work on a tablet, marking skills on the rubric. They could upload a picture of the submitted assignment (if it were not electronic) and add other comments to the report as needed.
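
As a small illustration of that merge step, the sketch below joins a class roster with a shared rubric to produce one blank marking sheet per student; the rubric name, skills, and student names are invented for the example:

    # A hypothetical shared rubric database: rubric id -> the skills it scores.
    RUBRIC_DB = {
        "persuasive-essay-v1": ["claim", "evidence", "organization", "conventions"],
    }

    def build_grading_sheets(roster, rubric_id):
        """Merge a class list with a rubric: one blank sheet per student."""
        skills = RUBRIC_DB[rubric_id]
        return {student: {skill: None for skill in skills} for student in roster}

    sheets = build_grading_sheets(["Ava", "Ben", "Cora"], "persuasive-essay-v1")
    sheets["Ava"]["evidence"] = 3  # the teacher taps a rubric level on the tablet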

If common rubrics were used for assignments, this all works fairly smoothly. Yes, the structure is important to get right, but an interactive, easy-to-use mobile app would carry most of the load.

I think it's clear that our current reporting system is not informative for parents or students. An electronic reporting system could improve this process.

Perhaps, if the rubrics were designed and benchmarked, these same classroom assessments and progress reports could replace standardized tests!


6 thoughts on “Fixing Traditional Report Cards”

  1. bobhhoffmann December 4, 2012 / 8:22 pm

    To deal with the issue of priority and focus of instruction and learning in the STEM curriculum, I extended my “(C, B, A) Lattice Framework” to describe the assignments and grading used for my classes. Students would complete a progression of assignments, first earning a “C” grade, then a “B”, and finally, an “A”, if they so desired.

    A unit assignment sheet was delivered at the beginning of each unit, containing all the assignments for that unit, much like a “job task list” to check off as items were completed. Students appreciated this because they could work ahead, or catch up later, depending on their home and work schedules. Except for the first unit, all the homework was to be presented in “workplace report” form. The assignments “packet” was due when the student took the test, and no “extra credit” or “make-up” was given.

    Each reading and practice assignment was labeled with a “C”, “B”, or “A” code, indicating what portion of the unit grade it applied to. The “C = Core Content Competency” work indicated readings, questions, and problems that the students would NEED TO KNOW for the class. My agreement with the students was that ONLY the “C-work” would be covered during class time, and assessed on the unit test. Everyone appreciated that there were no “gimmick” or “trick” questions on the tests, and that the content objectives and the assessment matched and were clearly spelled out.

    The “B = Basic Workplace Skills” labeled questions and problems that required additional related information, or critical thinking and problem-solving steps. These included items that the students SHOULD KNOW when they enter their chosen occupations.

    Finally, the “A = Applications and Achievement” items were generally occupational vignettes or news articles connecting the unit topics to various careers. Students would write a “book report/abstract” about their reading, and how they might apply it to their future career. These were things that were NICE TO KNOW about a student’s future job.

    Once my students caught on to this method, they used it in various ways. Some “A-students” decided to do just the “C-work” so they could “just pass the class” as they dealt with other family and work priorities. Students who worked in small study groups would tell me that they tended to each spend the same amount of total study time on the course, with the “B” and “A” students completing the “C-work” faster, but then doing more work. In terms of the total homework, I made sure that about 80% of the time was “C-work”, with each higher grade adding a 10% increment of time, so the students would not feel overwhelmed by the extra effort.

    Over the two decades that I used this grading method, I found that “the Completers were the Succeeders”, meaning that virtually everyone who finished the final exam passed the course. Students were able to set their personal goals for the course, and then apply the effort and ability to accomplish their desired results. The only complaints I got about this method were from other instructors and administration, who preferred the traditional “elitist” percentage-scoring methods.

    Note that this “(C, B, A)” grading method fits in with my proposal for a lattice framework for a comprehensive STEM curriculum. If we can get ALL learners to acquire the Core Content Competencies, plus develop additional Basic Workplace Skills, and achieve in Applications related to various career pathways, we will have accomplished a major educational reform for STEM. The “mediocre” and the “average” will become the “expected” and the “essential” in preparation for the 21st Century workplace.

    • Justin Staub, Ed.D. December 5, 2012 / 2:03 pm

      Bob, thanks for your feedback. I think much of what you added to my ideas is crucial for remaking the grading system in public education.

    • bobhhoffmann December 5, 2012 / 9:02 pm

      When setting up my course assessment plan using a “CBA” approach, I clearly identified that a “C” for the overall course grade meant that the student had met 100% of the core content standards and objectives. The intention was to identify all passing students as “Succeeders” by demonstrating competency in the course requirements. Such competency could be shown by a combination of participation in daily class activities, completion of assignment homework, work on a semester-long application project, as well as the unit and term tests. The weighting for these components was provided in the course syllabus.

      This grading “formula” obviously does not match the traditional percentage methods of grading, in which an “A+” grade indicates a 100% performance level of the content objectives. One problem with such a scale is that anything less than “perfect” means a “defect” or “failure” to some degree. The emphasis in school should be on building success, not on emphasizing failure, to build self-confidence and motivation.

      While some situations may require an absolute 100% performance level (safety and security, for example), children and young adults should not be expected to achieve such high goals while they are learning and practicing new content material and skills. So if a 100% score is not achievable for ALL students, what should be the percentage for a “C = Competency” grade?

      Some of my colleagues used 80% as a “C – D” passing cutoff score, saying “We have tougher standards”, ignoring the fact that such scores could be achieved with easier tests. Students also became confused when different instructors used differing percentage intervals on the test scores. Rather than argue opinions about what scale to use, I looked for some mathematical rationale for establishing the grade interval percentages for test results.

      If all the students taking a particular course are considered to be a “population” that follows a “normal distribution curve” of test results, “grading on the curve” would give about 10% “A” and “F” grades, 20% “B” and “D” grades, and 40% “C” grades, giving intervals roughly one standard deviation per grade away from the median.
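
      As a quick numerical check of those proportions, the short sketch below computes them from the normal curve. The cutoff values are an assumption chosen to reproduce the 10-20-40-20-10 split, since “roughly one standard deviation per grade” admits more than one reading:

        from scipy.stats import norm

        # Assumed grade boundaries, in standard deviations from the median.
        cuts = [-1.28, -0.52, 0.52, 1.28]   # F|D, D|C, C|B, B|A
        p = norm.cdf(cuts)
        f, d, c, b, a = p[0], p[1] - p[0], p[2] - p[1], p[3] - p[2], 1 - p[3]
        print(f"F {f:.0%}, D {d:.0%}, C {c:.0%}, B {b:.0%}, A {a:.0%}")
        # -> F 10%, D 20%, C 40%, B 20%, A 10%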

      Considering the test scores as a “cumulative distribution function” (ogive) of the normal curve, we find an inflection point at approximately 82% of the maximum. This becomes a “point of diminishing returns” in the number of scores in the grade intervals above, indicating that more “effort” is needed to achieve those higher percentages. This, I think, should be the upper limit of the “C” grade interval. To be consistent with common practice, I used 80% as this transition point, giving the traditional 10% grade intervals with the 60-70-80-90-100 grade boundaries.

      One further note with regard to the difficulty of the tests themselves. I did item analysis on each of my tests to be sure that the results were neither too easy nor too difficult. I used 1.5 times the square root of the number of students taking the test as the limit of acceptable incorrect answers for a particular question. A greater number of errors indicated that the question was confusing, that the topic had not been presented properly, or that too many students actually had incorrect understanding, in which case I provided the correction during the review of the test. This item analysis would allow 15 incorrect results for a given question with a sample size of 100 test-takers, again getting close to that inflection point of 82%.
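
      That flagging rule is simple to encode; here is a one-function sketch (the function name is an invention for illustration):

        import math

        def max_acceptable_errors(num_students: int) -> int:
            """Flag a question when more than 1.5 * sqrt(N) students miss it."""
            return math.floor(1.5 * math.sqrt(num_students))

        print(max_acceptable_errors(100))  # 15, matching the example above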

      The expectation with this scale, then, is that 70% of the students would have earned a “C” score or better on 70% of the test questions. Pushing the percentage cutoffs higher would reduce the number of passing students, skewing the grade distribution. These assumptions, of course, relate to a population of “normal” students, which does not exist. But we have to begin our justification somewhere, and statistics is a better place to start than with just vague opinions.

      So, on the whole, I found the traditional 10% grade intervals on test scores to be useful and fair, while adjusting the difficulty of individual test items to be near my limit of acceptable errors. Over the two decades that I used this method in physics and math classes, the “D” grades were generally lower than expected, and usually only one or two students failed a test per class. Whether these results were because “all the students are better than average” is a matter for conjecture.

  2. Paul Solarz December 5, 2012 / 12:55 am

    I really like this Justin – I’m glad I read it! This might guide my thinking as I work towards redesigning my units according to Common Core. Thanks!

    • Bob Hoffmann December 5, 2012 / 12:07 pm

      The issue of grading was hotly discussed some years ago in the statewide science and math content standards committees in which I participated. Various rationales and methods of grading were proposed, all with distinct advantages and difficulties. I attempted to integrate them into a course assessment plan that cross-referenced the percentage, itemized, and holistic approaches.

      My key assumption was that ALL students should be able to BE SUCCESSFUL by showing COMPETENCY of the CORE CONTENT STANDARDS with a “C” grade, or 75% equivalent. This is the target score, meaning that ALL students should show 100% proficiency of the core content information with a “C” course grade. Note that this says nothing about students achieving a goal “according to their abilities”. It means that they have completed a “check-off list” of tasks, quizzes, and tests demonstrating their knowledge and understanding of the essential content of the course. They have successfully “crossed the bar”, and should pass the course.

      So the grading rubric indicated that “C = Core Content Competency”, “B = Basic Workplace Skills”, and “A = Application and Achievement”. The higher grades indicated that the students performed additional tasks above and beyond the required core content assignments. These items were not covered in class, and were not included on the unit tests. Students could stretch their learning beyond the core, if they demonstrated ability and effort by doing the work according to the assignment sheet.

      Generally, the “C students” did just the C-work, scored C-grades on the tests, but were successful with all the core content competencies in doing so. The “B students” generally did the extra B-work as well, while scoring above 80% on the tests. Likewise, the “A students” were able to do more work in the same amount of study time, with test scores above 90%. The student abilities, efforts, and grade results seemed to correlate well. Students told me they thought the reward matched the work.

      What about those who missed the bar, for whatever reason? The grade of “D = Delayed Completion” meant that late work was accepted at any time, but the lower grade was given, even if all the other work was completed. Since many of my students at a technical college had various life, home, work, and military service issues to work around, they appreciated the opportunity to get this partial credit for their work, even when it was not submitted exactly according to the class schedule. No “extra-credit” work was ever given.

      Obviously, the grade of “F = Failure to Complete/ Try Again” is available for those circumstances. Note that it identifies that the student’s WORK was not completed, not that the STUDENT is a failure.

      In two decades of using such a “CBA” rubric, I generally found that the “average” students were proud that they could succeed with subjects that they had “failed at” before. Only a couple of students would usually fail in each class (until the arrival of the “Millennials” in my last teaching years). So I feel such a rubric will work well in a comprehensive STEM curriculum framework for the 21st Century.

    • Justin Staub, Ed.D. December 5, 2012 / 2:05 pm

      Paul, we all have a long way to go with our grading system. I feel that grading the way it's always been done will not meet the needs of a digital learner, or of the problem-solver/inquirer we demand our students to be. As teachers, we need to start reforming our practices to better inform our students so they can master their skills and content.
