Opinion
Fallacies of finals
It is time to forgo our yardsticks of quality assessment of campuses based on marks secured by students
Barsha Paudel
Four years ago, during my undergraduate studies at Pulchowk Campus, the semester finals overlapped with the World Cup football finals. Given the comprehensive and cumbersome nature of exams prevalent in engineering studies, and the desire to watch the once-in-four-years event, it was very taxing for us students to maintain a balance between the two finals. In the end, the results of our semester finals appalled us far more than the trouble the exams had caused.
More than 50 percent of students failed in a single subject, and even the ones who managed to pass scored far less than they normally did. Such results from the class of a top engineering institute are not an incident to be effaced from the records, for many reasons. But apart from becoming a subject of discussion between teachers and students for a short while—much of which involved blaming students for their insincerity and inadequacy—the issue never made it beyond classroom walls. The root cause of that incident was never examined. Many of us were left with the only available recourse: retaking the exams.
Orthodox system
Our exam systems are orthodox in every aspect. While many universities across the world use diversified, periodic and creative methods of evaluating students throughout the year, we are stuck in an age-old system that does nothing more than punish and put pressure on students. This method of recognising students’ performance, based on a mere three-hour test, with occasional recycled questions, has not been rewarding for students.
A student who has performed brilliantly in class throughout the year, but unfortunately falls ill during the exam period should not be at a disadvantage. Likewise, a student who is creative and has a tendency to think outside the box should not be penalised for not sticking to the traditional means of answering run-of-the-mill questions.
In our country, university education is more skewed towards endorsing a culture of mugging up and forging theses than towards creating independent thinkers and problem-solvers. In practical terms, a university education (or any education, for that matter) should bolster not just the knowledge and memory capacity of students, but also their aptitude, oratory skills, competence and confidence. Only this way can education be a real enabler of performance. Universities should be collaborative spaces where students can learn, test and refine their knowledge. They are also the first milestone towards professional life. Hence, students should be honed to develop aptitudes for problem-solving in real life.
Exams, on the other hand, should be able to evaluate all these dimensions of learning and not just confine students to notes, slides and guess-books. Instead of teaching students to solve the same problems or their derivatives again and again, they should be trained on the methodology and the approach of solving a problem.
Continuous evaluation that involves measuring students’ abilities to find practical solutions, conducting research, analysing articles/papers, looking up solutions to new problems, performing in regular subject-based term projects and other appropriate skills is imperative. Periodic assessment in an inventive and stimulating way not just improves the well-rounded performance of students but also maintains their interest in the subject. It is for these reasons that we need to overhaul our existing exam system and incorporate the above-mentioned metrics to evaluate students.
Assessing the assessors
The flaw in the education system extends far beyond the evaluation practice. Considering all the negligence that goes into assessing exam papers, the lack of provisions to request re-assessment upon dissatisfaction is another bump in the system. This gives unquestioned authority to examiners, encouraging recklessness. The system further protects examiners, more so reckless ones, by maintaining their anonymity and forbidding students from seeing how their answer sheets were graded, even upon a formal request. If assessors were scrupulous and fair in checking the answer sheets, there would be no need for this kind of protection.
Given the volume of students examined under this one examination system, it is understandable that facilitating these provisions is hard. But these odds only underscore the fact that the prevalent system is unfair, flawed and outdated. Instead of plastering over flaws with stringent regulations, it is high time we first evaluated the education system itself.
The umbrella system
What is fairly widely accepted is that this entirely question-based, one-time central exam system puts a majority of students at a disadvantage. True evaluation of students cannot be made through one umbrella exam system when there is heterogeneity in the milieu of campuses across the country. Hence, the attempt to evaluate all these students, who have access to different facilities, resources and guidance, under the same criteria cannot be justified. If we are to introduce a more just, practical and realistic evaluation system, the concept of a single, long test at the end of the academic year is not the answer. Students should be evaluated by their own professors, who will be in a better position to assess them continuously throughout the year. Furthermore, this practice will also account for the difference in settings the students are subject to.
The culture of competition among campuses (and schools), solely based on the grades of their students in board exams, gives them enough incentive to guide their students towards exam-centric learning. It is time to forgo this yardstick of quality assessment of campuses based on marks secured by students. If campuses were to be regularly evaluated for their professors’ qualities, resources, open-learning initiatives, graduates’ placement in job markets, student involvement/performance in extracurricular activities etc, then the concentrated effort, spent only to secure more marks in exams, would be utilised more productively in disseminating practical knowledge and fostering creativity.
Paudel is an economics graduate with an interest in public policy ([email protected])