Provincial exams are very useful

2009-05-04

in Canada, Geek stuff, Rants

Shoes and map

Speaking with people involved in the Ontario school system, I was surprised to learn that they do not have provincial exams for courses in the last years of high school. To me, this seems like a mistake. Provincial exams provide vital information to university admissions offices: specifically, they let you gauge how much grade inflation is happening at any particular school. A school where the mean class mark is 90%, but where the mean provincial mark is 60%, is clearly inflating grades. Conversely, a school where the mean class mark is 60% but where students get 90% on the provincial exam clearly has high standards.

Given that individual schools and teachers have a strong incentive to inflate the grades of their students, provincial (or federal) exams seem to be a key mechanism for keeping them honest. Otherwise, there is simply far too much opportunity to call mediocrity excellence, without anyone else being the wiser.

I very much hope B.C. retains the provincial exam system, and that it becomes universal across Canada.



Alex May 4, 2009 at 8:07 am

We have a similar problem in Germany. Wouldn’t your line of argument force you to argue for a national exam system? Provinces also might be interested in inflating the grades of their students. In the German context, federalism prevents such a move.

Milan May 4, 2009 at 8:38 am

Logic does not guide the Canadian federation, except perhaps the logic of the provinces holding on to every scrap of autonomy they have.

Milan May 4, 2009 at 8:53 am

Another advantage of provincial exams is that they create a common base of knowledge for people entering university.

For instance, when a university sees that a student has taken Chemistry 12 or Literature 12, it can assume that knowledge to be present when planning first year courses.

. May 4, 2009 at 8:55 am

BC Provincial Exams: SFU, what do you think?

When I graduated high school (in 2005), we had to write exams in most grade 12 courses—I had to write them in English 12, English Literature 12, French 12, History 12, Geography 12, and probably a few others.

Nowadays, the requirements are a little different. High school students only have to write provincials in English 12, and the other exams are optional. Most Ontario universities have discontinued the requirement for BC students, so BC students don’t have to write the provincial exams and can still be offered admission.

Tristan May 4, 2009 at 9:17 am

The discontinuation of provincial exams has to do with moving away from the merit model of university, towards the market demand/supply model. It used to be that there would be X number of spots and Y number of applications, and Y exceeded X by a great margin, so there was a need for fair criteria to decide who from Y would get one of the X spots. Now, if Y exceeds X we call that over-demand, and increase the number of spots (and tuition, to help fund the extra spots) until Y is hardly greater than X. Universities sell qualifications, not education, now. This was very clear during the York strike, when the admin refused to reduce tuition for students who were receiving less education because of the strike, on the grounds that they would still receive the same number of credits.

R.K. May 4, 2009 at 9:33 am

Such exams also make teachers more accountable. By comparing the results of their students against the overall results, their performance can be evaluated by school administrators and parents.

That will lead to them ‘teaching to the test,’ but I don’t see that as a big problem, as long as the knowledge being tested is appropriate.

Anonymous May 4, 2009 at 10:03 am

In reality, most people write the B.C. provincial exams just for pride (and perhaps a scholarship or two). With many students being accepted by guaranteed early admission, there was little practical incentive to do well on them – even when they were mandatory. A student with an 85% school mark (not particularly high for those entering university) would have already passed their course – generally the only condition placed upon those who are given a guaranteed offer.

My preference would be to hold provincial exams in Grade 11, when it actually does matter for university admissions. Not in every subject, but in the critical ones of English (for all students) and Math (for science and business students). To hold exams after admissions have been decided seems a little unfair to those who put in the effort to do well on them.

Tristan May 4, 2009 at 10:09 am

First term exam grades always counted towards admissions?

Milan May 4, 2009 at 10:14 am

Provincial exams are also useful preparation for university, since they follow the same basic model as most Canadian university courses. By giving students a glimpse into how university works, they may help the students to make a more informed choice about whether to go.

alena May 4, 2009 at 12:00 pm

It seems that provincial exams are history in B.C., and only those students who hope to get a scholarship sometimes write them. Most of my grade 12 students this year were accepted to the university of their choice by February, and hence there is no reason for them to write the exams. Maybe the SAT system in the States is better and more universal. It has unfortunately also created a whole sub-system of tutoring and preparation courses to beat the competition.

Sasha May 4, 2009 at 12:23 pm

I couldn’t possibly disagree with you more, Milan, alas – and this is as someone who deals with them daily.

First, on grade inflation, provincial exams cause far more of it than they prevent. Students and parents hear rumours that exam scores often drag down marks and consequently put even more pressure on teachers for higher marks. I have had parents come in waving a copy of their child’s test and trying to argue for more marks, and I know of senior classes in our district where class averages are 80%+ because demographics are such that the whole group is expected to go to university (think affluent areas). Teachers I know cite exams as creating pressure to increase marks, not the other way around.

Moreover, the tests themselves are not well-designed and they distort curriculum. Students are taught fewer actual skills because the lists of terms students are expected to know for exams continue to expand. I know my head aches every time I have to teach the difference between metonymy and synecdoche to someone who still struggles to write a coherent paragraph – but they’re supposed to know it for the exam…

But even all of that misses the point. I completely object to our high schools, and the graduation standards for them, being manipulated to cater to universities. In BC, some 49% of high school students will go to university. That we let their final year in high school focus on something most of them will not be doing is an absolute outrage. They need reading, writing, numeracy, and analysis skills far more than they need a list of 100-300 terms per exam. If universities want entrance tests, they can run them themselves, as is done in most other progressive jurisdictions.

The icing on the cake for me is that these tend to be quite bad tests. Experts have shown them to have a Western bias and a gender bias, and I’ve been shocked by how many discriminate against immigrant students by implicitly requiring out-dated Canadian cultural knowledge – for example, short stories where understanding the theme requires a person to know something about rural Canada in the 1930s, etc. It’s really quite ridiculous.

Milan May 4, 2009 at 12:56 pm

First, on grade inflation, provincial exams cause far more of it than they prevent. Students and parents hear rumours that exam scores often drag down marks and consequently put even more pressure on teachers for higher marks.

This is why the class marks versus provincial marks comparison is so useful. If schools are inflating the former to compensate for the latter, it should be easy to detect by comparing the two distributions.

Milan May 4, 2009 at 1:15 pm

Sasha,

It seems that most of your objections are about the curriculum, rather than about provincial exams themselves. I agree that the curriculum could be improved in many areas. For instance, a lot of the material in science courses is rather dated, and I think the approach taken for teaching mathematics is a train wreck.

To briefly list the advantages of provincial exams described above:

  1. Allowing the identification of grade inflation.
  2. Providing a common base of knowledge for everyone who takes a certain course.
  3. Making it easier to hold teachers and schools to account.
  4. Getting people used to the structure of university courses.

I agree with the argument that teaching to the test is only really a problem when you are testing the wrong things. Remedying that is about changing the content of courses, not getting rid of a useful and relatively objective benchmark of school, student, and teacher performance.

Matt May 4, 2009 at 2:01 pm

“I think the approach taken for teaching mathematics is a train wreck.”

Funny, I found math to be one of my most enjoyable high school subjects. I don’t dispute that everyone has a different experience, though.

“With many students being accepted by guaranteed early admission, there was little practical incentive to do well on them.”

In my grade 12 class there was a guy who was granted early admission to UBC in Science only to have it revoked following a poor showing during his exams. I remember this because of a little friendly competition we had regarding grades. I was disappointed to find I hadn’t been granted early admission into Applied Science (which had a similar requirement to Science) but I killed myself studying to bring my grades up with good final exam scores. He didn’t have the same incentive to do well (so he thought) and got a nasty surprise.

. May 4, 2009 at 2:07 pm

A Mathematician’s Lament
by Paul Lockhart

Sadly, our present system of mathematics education is precisely this kind of nightmare. In fact, if I had to design a mechanism for the express purpose of destroying a child’s natural curiosity and love of pattern-making, I couldn’t possibly do as good a job as is currently being done – I simply wouldn’t have the imagination to come up with the kind of senseless, soul-crushing ideas that constitute contemporary mathematics education. Everyone knows that something is wrong. The politicians say, “we need higher standards.”

The schools say, “we need more money and equipment.” Educators say one thing, and teachers say another. They are all wrong. The only people who understand what is going on are the ones most often blamed and least often heard: the students. They say, “math class is stupid and boring,” and they are right.

Tristan May 4, 2009 at 2:23 pm

If you actually look at the annual evaluation of high schools, it’s obvious that the whole evaluative project is meaningless without provincial exam grades. The most important numbers on that sheet are the average provincial exam grades in the different subjects for each school – and even more important – the mean differential between class grades and exam grades. Bad schools have a high differential. Sasha’s school, I’m absolutely sure, has a low differential – this is required for a school to be considered a good school. I don’t doubt that provincial exams create an incentive for parents to pressure teachers into inflating grades – but that incentive needs to be weighed against the incentive of having your grade inflation be reported to the entire province.

Without provincial examinations, it would be frankly impossible to compare schools with each other.
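The mean differential Tristan describes is simple enough to compute. Here is a minimal sketch in Python; the school names, the marks, and the `inflation_differential` helper are all invented purely for illustration, not drawn from any real reporting system:

```python
# Sketch of the class-mark vs. provincial-exam-mark comparison discussed
# above. All data below is hypothetical.

def mean(xs):
    """Arithmetic mean of a list of marks."""
    return sum(xs) / len(xs)

def inflation_differential(class_marks, exam_marks):
    """Mean class mark minus mean provincial exam mark, in percentage points.
    A large positive differential suggests inflated class grades."""
    return mean(class_marks) - mean(exam_marks)

# Invented example data: (class marks, provincial exam marks) per school.
schools = {
    "School A": ([90, 88, 92, 91], [62, 58, 65, 60]),
    "School B": ([61, 59, 63, 62], [88, 90, 87, 91]),
}

for name, (class_marks, exam_marks) in sorted(schools.items()):
    d = inflation_differential(class_marks, exam_marks)
    print(f"{name}: differential {d:+.1f} points")
```

A large positive differential (class marks well above exam marks) is the pattern the original post describes as clear grade inflation; a negative one suggests a school that grades harder than the province.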

Milan May 4, 2009 at 2:36 pm

It also seems likely that teachers at expensive private schools will be more willing to cave to demands from parents. After all, the schools are getting tens of thousands of dollars per pupil.

While private schools probably do a better job of preparing their students for provincials (because they have more resources, etc), having the standardized exams does at least offer one form of protection against them inflating their grades, as well as tolerating activities like plagiarism.

Actually, the fact that they are hard to cheat at may be another significant point in favour of standardized tests. It is certainly an argument for in-class rather than take-home essays.

Peter May 5, 2009 at 1:08 am

There are certainly holes in our current model of education, but I think the presentation of provincial testing as a panacea is somewhat naïve. Any system of evaluation we choose to implement is going to create classes of winners and losers on the basis of dubious merits and, as such, can be contested by some party within the system. To prevent a quick reduction of my position, I believe there are clear standards and that it is possible to differentiate educations in terms of quality, so “dubious” doesn’t convey the sentiment that we must always lack clear, strong, and desirable standards of evaluation, but that even perfected versions of evaluation systems rest on value judgments about the respective importance of attributes that aren’t universally shared, and certainly aren’t irrefutably grounded. The dichotomy between Japanese testing and American innovation springs to mind.
The example brings me to the first of two core failings of standardized testing: There is a conception of education that differs from the memorization of facts, and standardized testing doesn’t necessarily prevent grade inflation, especially when those test results are used to evaluate teaching ability.
“That will lead to them ‘teaching to the test,’ but I don’t see that as a big problem, as long as the knowledge being tested is appropriate.” (R.K.)
There are strong objections to be made against the form of standardized testing. Significant details of my second criticism are included in my reply to Milan, so I will just note that provincial testing doesn’t necessarily prevent grade inflation. That function is determined by a specific property of the test. If provincial testing (as it has in the past) only provides a universal check on content, so that essay questions are marked locally, grade inflation and social pressure on teachers aren’t addressed by simply implementing standardized testing. We could simply address these issues directly. It becomes theoretically possible to have varied curriculums with limits on grade inflation by implementing these features. Ex. Rather than forcing everyone in Ontario to read The Lord of the Flies, external evaluators could come mark a test given on varied class content. It is the external status of the grader that prevents pressure and grade inflation, rather than testing on a specific book. When standardized grades are used to evaluate teachers’ abilities there is a strong impetus to cheat, and important social factors and inequities are ignored. (To be expanded on later)
My primary objection is that there is a notion of education that is different from memorizing facts. I’m shocked that Tristan has not only failed to defend this position, but appears to argue against it in favour of recitation of fact. Teaching to the test risks reducing education to trivia. This isn’t desirable, and I shall defend it even if I have to sound like Tristan – that isn’t real philosophy, it’s not engaged with the question of Being (joke), there isn’t any thinking going on there (not joke). I think Sasha makes excellent points about core abilities. Skills tend to run deeper than facts.
I’m not completely unsympathetic, because we have lost some educational rigor compared to historic levels where the memorization of facts was emphasized, but I think that is a general decrease in expectation. Learning some facts and having a shared body of knowledge is important, but only if they serve developed skills – the ability to research, the ability to put things in context, the ability to evaluate and organize facts into coherent narratives, the ability to expand on ideas, creativity, the ability to innovate. Standardized testing doesn’t eliminate these skills, but it compromises them in direct proportion to how important we make the test. Teaching to the test certainly undermines the development of those skills.
Varied curriculums also afford the potential for self-direction. This allows for both really good and really bad teachers. Just as poor teachers do little work and inflate the grades of their students, good teachers will alter the curriculum to teach neat things, reflect their areas of interest and competence, and track student interest. The whole notion of trying to re-engage students’ interest in education just doesn’t seem plausible when reducing education to cold, mechanical cramming is the best strategy for success. I guess I’m defending the good old days of cap on the floor professors.
To some degree it depends on whether you want to see universities reduced to trade schools. It has been subtly suggested by those in favour of standardized testing that this is a negative, however this isn’t a consistent position. Either the memorization of specific content, and the possession of mechanical abilities is the desired outcome, or it isn’t. Setting up a test with those properties, while trying to oppose those properties in universities and corresponding results in society (like emphasis on conformity) isn’t a coherent approach. It’s not that black and white, since any specialization requires memorization of specific knowledge, but that happens much later than high school (read – hopefully after the students have their core abilities) and doesn’t happen across all subjects.
I’m aware that I sound a little idealistic, but I truly believe that education provides an auxiliary benefit to society beyond economic gains. I think better education makes for better citizens, but this only works if you emphasize abilities. I’m operating under the premise that you want a population that is engaged, civic-minded, can think critically, knows how to research topics, knows how to formulate opinions, etc. The indefinite “you” was used, since I am aware some (eye on Tristan) are likely to protest that this isn’t what social elites want, need, and so on, but the “you” refers to the class of people who like to debate things on blogs, especially if you are seeking to inform and energize people against the problem of climate change by appealing to their civic and rational nature.
Milan,
I think your response to Sasha’s comment is terribly uncharitable.
“It seems that most of your objections are about the curriculum, rather than about provincial exams themselves.” (Milan)
The following appear to be criticisms of the structure of provincial exams, rather than the curriculum:
“…on grade inflation, provincial exams cause far more of it than they prevent.” (Sasha)

“Students are taught fewer actual skills…” (Sasha)

“Experts have shown them to have a Western bias, a gender bias, and … requiring out-dated Canadian cultural knowledge.”(Sasha)

I believe the synecdoche and metonymy example was stressing the lack of core skills. (Sasha paraphrase) We are not only missing the point when we substitute trivia (memorization of facts) for education (critical abilities, understanding, research strategies); it is also impossible to convey some of the advanced material without a sufficient core.

In terms of bias, I’ll be charitable and assume you meant the “content” of the tests rather than “curriculum”, since improved tests could theoretically reduce or eliminate the biases. The structural problem is that any test set by a relatively small number of individuals is going to reflect some set of values. The evaluative criteria of provincial tests are static, so there is no allowance for alternative demonstrations of ability. This necessary feature, since it is the lack of teachers’ discretion that secures standard evaluation, means it is very likely that the test will be biased in some way. Discretion, by contrast, allows alternative strategies to be rewarded, which more accurately tracks the real world. This is not to say that teachers are not biased as individuals, but they avoid the institutional, uni-directional bias reflected in standardized tests, and very rarely does one teacher possess as much influence over your future as evaluative standardized testing does (such as the SATs, LSATs, MCATs). (Disclosure – I was raised in a small town with a single high school and a student population of fewer than 400, so I’ve seen close to the most extreme examples of the influence a teacher can have over students. It can be sizable. But I still don’t think it compares to the SATs.)

Improving the scope of the tests might affect how much is taught, but it depends on how one interprets “skills”. You’re right to allege that the basis of facts could be expanded to some acceptable level, but you risk ending up producing students whose only skill is taking tests.

In response to your summary:
“To briefly list the advantages of provincial exams described above:
1. Allowing the identification of grade inflation.
2. Providing a common base of knowledge for everyone who takes a certain course.
3. Making it easier to hold teachers and schools to account.
4. Getting people used to the structure of university courses” (Milan)
1. This is questionable. You’ve framed the benefits of provincial tests by imagining the least sophisticated methods that would be employed to cheat standardized tests and the most idealized form of testing. We had provincial testing in Ontario at one point, and we decided to get rid of it, due to a reform that suggested skills and abilities were more important than content. The idea was that it was more important for students to be able to understand and think critically about novels they have read than it was for them to read any specific novel. Since then the trend has somewhat reversed itself in your favour. However, the form is of particular importance: those tests merely applied a universal check on content, and were marked by your teacher. Many standardized tests are still graded locally, so this will do nothing to address favouritism, social pressure, and most of the other reasons for grade inflation. It will establish a minimum, so there is benefit, but it does so at the cost of a great deal of diversity and potential student interest. Off-site tests (ex. SATs) solve this problem, but limit the things that can be tested (see below).
2. A common base of knowledge doesn’t say anything about the ability of students to apply that knowledge. I’ve already admitted that a common base and specialized knowledge are important, but I stand by the general thrust of my criticism. You risk twisting education into something shallow, where developing skills and abilities, which convey the greatest benefits, isn’t a priority.
3. This is a potentially dangerous idea.
“This is why the class marks versus provincial marks comparison is so useful. If schools are inflating the former to compensate for the latter, it should be easy to detect by comparing the two distributions.” (Milan)
This only works if you assume the provincial testing is accurate. Read the first chapter of Steven D. Levitt’s Freakonomics. When the grades from standardized testing are fully accepted as accurate and used to evaluate teachers’ performance, the first proposition is even more unlikely, because teachers now have an incentive to cheat. Levitt’s book isn’t great, but it does serve as a practical example of how standardized tests first reduced the core abilities of students, as teachers learned to teach to the test, and then led to outright cheating and inflation when jobs and bonuses were dependent on test results. Additionally, there has been considerable debate over how to fund schools, since merit suggests rewarding high scores, where the teachers supposedly make the best use of resources, while the right to a quality education suggests the underperforming schools require the funds. Simply ordering underperforming schools to increase performance while withholding additional funding because they aren’t effectively teaching their students is punitive, as students in those schools are likely to start with less knowledge (lower scores), “inferior teachers” (according to the testing logic), and less funding. Once one becomes convinced that test scores are evaluative of teacher ability, then normal statistical variations (a class of bad students) and a whole host of social factors (wealth, parental education, community programs, race) simply disappear from view, and only alleged “poor” teaching remains. The social factors are the things we need to be considering, and it is best to simply approach them directly. In similar fashion, it is important to assess the specific properties of evaluation systems that convey benefits and weaknesses, rather than present provincial testing as a panacea.
4. In what way are students of a university course subjected to provincial testing? They write an end-of-term exam, just as high school students write an end-of-term exam, and the value of their degrees rests largely on institutional reputation. Additionally, professors select the course material to reflect their interests, areas of specialization and competency, and potentially their students’ interests, and then trade their reputations as currency when it comes to grades and letters of recommendation. Tests are still marked locally, so favouritism and pressure to inflate still occur, and professors are given an incredible amount of discretion in setting requirements, changing the requirements, and evaluation.

Tristan May 5, 2009 at 9:34 am

Peter, I’d agree with all of your criticisms if BC provincial exams looked like the kind of standardized test you’re criticizing.

“My primary objection is that there is a notion of education that is different from memorizing facts. I’m shocked that, not only has Tristan failed to defend this position, but appears to argue against it in favour of recitation of fact”

And – whoa – “Tests are still marked locally, so favouritism and pressure to inflate still occur, and professors are given an incredible amount of discretion in requirements, changing the requirements and evaluation.”

No. The tests are not marked locally, they are marked by people you’ll never meet, in Victoria.

The simple fact is, the provincial exams are not simple regurgitation of facts. For example, the History 12 exam is only about half multiple choice, the rest analytic short answer, a document question (primary source analysis), and an essay. There was very little “standardized test” about the English 12 final, other than the fact everyone in the province wrote it. Even the physics exam required a fair bit of thinking on the fly.

The truth is, these tests are put together by a team of teachers who teach the given class. They actually take a semester off their normal teaching duties to be part of this team – and coming up with the exam, and grading it, is more than a full-time job for half a year. They do a good job.

Milan May 5, 2009 at 12:12 pm

Peter,

To respond briefly to your very detailed comment:

1) I agree that provincial exams are not a panacea. I simply think they do a lot more good than harm, for the reasons listed above.

2) I agree with Tristan that provincials are not exclusively (perhaps not even primarily) about memorization. They certainly differ significantly from subject to subject (French, for instance, requires a lot of memorization of irregular verbs), but I think the degree to which they require memorization generally matches the degree to which it is actually important in that subject. While creativity is important, it’s also important to know your multiplication tables.

3) I agree that provincials are not a 100% defence against grade inflation, but I can’t see how they could do more harm than good on this front. As Tristan indicates, they are actually graded quite rigorously and consistently, in a double-blind manner, by teachers with no knowledge of the students whose work they are marking.

“It is the external status of the grader that prevents pressure and grade inflation, rather than testing on a specific book.”

I agree, and I think having impartial and double-blind marking is a vital part of the system. Students should just be a number to the examiners. With computerized tests, there isn’t even handwriting to give hints about gender, etc.

4) “My primary objection is that there is a notion of education that is different from memorizing facts.”

Provincial exams are a component of education, not its entirety. They are legitimately counterbalanced by other elements, and never make up the majority of a student’s overall mark. Skills and abilities are very important, but that doesn’t mean we shouldn’t rigorously test knowledge and the ability to express it in those cases where it is possible to do so.

I agree that education provides benefits well beyond the economic, but I think it does so best when standards are high. Provincial exams are one way to help maintain those. Within my high school, the ability of teachers to prepare students for provincials was certainly considered a key aspect of their competence. Both students and parents were keenly aware of it, and the classes taught by such teachers were desirable for reasons beyond good test prep. Rather than being dull, ‘teaching to the test’ exercises, they were the classes preferred by serious students for the overall quality of education and engagement.

5) On the matter of biases, I recognize that they exist with standardized tests, and steps should be taken to mitigate them. That being said, bias also exists (possibly more strongly) in other forms of evaluation. This is an argument for good test design, not for scrapping the tests.

6) On university exams, it is very common for university courses to be graded as a combination of essays and a final exam worth 40-60% of the mark. While these are likely to be less objectively marked than provincials (not double-blind), they do require similar skills to succeed at. As such, preparing people earlier for them seems to have some value. Having university exams be more like provincials may also be desirable.

Peter May 5, 2009 at 6:08 pm

Tristan,

I apologize for the appearance of my post. It was nicely formatted in Word, but the post is a disaster, so I’ll have to forgive confusions like misquoting. My quote was actually, “Many standardized tests…”, and I had specific ones in mind when I wrote the sentence. I deliberately stayed away from commenting on the BC provincials for a simple reason – I don’t have first-hand experience of them. I was directly referring to the historical incarnations of Ontario’s provincial tests, which were mentioned immediately before the statement. I was also thinking of Levitt’s book, which I realize is structurally confusing since I cited it after. However, I fear that you might have missed the point. I did address standardized tests that aren’t locally graded, like the SATs and LSATs. The main point was that it is the external marker that secures the benefit, as opposed to the standard curriculum.

The SAT example can be contested because the form emphasizes memorization; however, I did deliberately reference the LSAT because of the essay section on the test. You are correct that testing with essay questions resists a strong reduction to memorization; however, I will point out that teaching to the test in essay form still exists, and I would contend it disrupts core skills and student interest because it enforces a rigid curriculum. I’m happy to hear about dedicated testers, but this kind of proves my point: we could have external testers come into a classroom, familiarize themselves with the material the teacher or students selected, and produce the same counter-inflationary effects. My point is that these benefits aren’t the result of a standardized curriculum, where teaching to the test undermines the core abilities that students could pursue with any book. As opposed to critical engagement with, say, Nietzsche, I slammed Coles Notes – insert curriculum-mandated book here – and memorized the required phraseology, like LSAT takers do. Essay questions do counter the reduction to memorization considerably, but not completely. The very existence of an industry dedicated to summarizing the “important texts” of the educational system reveals the danger.

Milan,

1. I agree that there are good aspects and prefaced my comment appropriately. The main observation still stands. I prefer to approach problems directly: we can analyze the properties of those tests that convey the benefits, determine whether they are incidental, and then very likely apply them differently to achieve the same desired results.

2. Agreed – somewhat. There are interesting debates over whether being able to do calculations in your head is desirable in the age of calculators. I’m not going to weigh in, but I do think skills, and to some degree application, should hold the emphasis. I’ve already admitted the need for specialized knowledge and a body of common knowledge, and I am happy to add core knowledge, but it still has to remain in service of skills, which don’t need a universal curriculum and are not emphasized by teaching to the test. (See above.)

3. Once again, I wasn’t dealing specifically with the BC provincials. But Levitt’s book serves as a practical example where grade inflation occurs. I was specifically reacting to your third feature. When test scores are taken to be evaluative, teachers are usually given an incentive to cheat for their students. Levitt’s examples take it to the extreme, like the great teacher all the parents loved until a kid came home and said, ‘She is such a nice lady. She taught us by writing the answers on the board during the test.’ That is one example, which actually happened, of how standardized testing combined with your third premise can lead to more harm than good.

3.5. We agree, but the main point was that it is the independent evaluation, and not provincial testing as such, that secures the benefits. By now it should be apparent that I like to address things directly: if a specific property conveys the benefits, accurately report it, and then push the property, not necessarily the test.

4. I accept and value high standards. I’ve provided numerous examples of how high standards aren’t necessarily congruent with standardized testing. We might employ other means, or we might employ certain properties of the tests that directly convey the benefits, in a form other than the tests themselves. The additional posited benefit is that a more interactive education might engage the students. However, I can only refer back to the preface of my post: any method we choose will have problems, and there are some real benefits conveyed by standardized tests; I simply reject the wholesale presentation of the concept and the benefits. Careful analysis can show properties that are detrimental, as well as properties that convey benefits but are incidental to provincial testing. Those properties could be extracted.

5. Individual teachers can be even more biased than certain tests, but since you are dealing with different people you aren’t subject to the same one-way bias; I was suggesting that the testing groups have come under social and corporate pressure to highlight certain attributes. Additionally, it is very rare that one teacher (even in my school) was able to influence your future as heavily as a good performance on the SATs or LSATs could. Previously I meant bias in fairly innocent terms, as each teacher has their own hot topics, ways of phrasing things, value judgments, etc. However, the ability to remove any specific teacher is a nice feature that counteracts the stronger biases individuals may possess. There is precedent for demanding another evaluator when confronted with blatant racism or sexism, whereas the bias in standardized tests is below the surface. That said, we should strive for good tests, and progress is being made in this field.

6. I don’t understand how this pertains to provincial testing. It sounds like you want the form, weight, and possibly content of high school exams to more closely resemble university exams. It isn’t as if students who don’t take provincials don’t have exams – or at least I hope that hasn’t become the case. Anecdotally, most of my OAC classes mirrored university expectations pretty well: we weren’t required to show up, there was less moralizing as it was assumed we were all serious students, proficiency was key, and exams were heavily weighted. So I’m not against moving to a university model for late high school students, but I don’t see what that has to do with provincials.

I gave a long list of similarities between high schools and universities to show that neither is a standardized system. In fact, reputation and specialization are often defended as virtues of the university system. Considering the exams in particular, they don’t have the features you’re promoting with provincial testing. We could make any end-of-term exam worth more, which has nothing to do with a common body of knowledge. University exams are like high school exams: specific to a course, marked by familiars, with the same pressures to inflate. When I was a TA, I marked the exams of the students in my tutorial. New professors are warned about the career repercussions of deviating from the mean and regularly handicap the second exam to secure the 71% class average. I think I am completely missing your point, because the similarities you want to establish (like weight) have absolutely nothing to do with the benefits of standardized testing that you are lobbying for.

Tristan May 5, 2009 at 9:05 pm

There are “Coles Notes” versions for some of the provincial exams. However, they are just as long and comprehensive as textbooks.

Peter May 6, 2009 at 1:46 am

So the length of a test prep book ensures some minimum quality of thought and understanding over memorization or habituation?

Tristan May 6, 2009 at 1:49 am

The test prep book is the coursebook. The entire course is required knowledge – including, as I pointed out, the analytic skills needed to deal with the document question and the argumentative skills to write the essay.

Milan May 6, 2009 at 11:30 am

One point that hasn’t gotten much consideration is this one Sasha made:

“But even all of that misses the point. I completely object to our high schools and graduation standards for them being manipulated to cater to universities. In BC, some 49% of high school students will go to university.”

The question of how to orient the last few years of high school is a tricky one, given that some people will need university preparation and others will not. What is the best way to deal with this? Separate courses for those intending to pursue university studies? (My high school, for instance, had a split between math courses for the university-bound and other courses for those without this intention.) Separate schools for academic and vocational tracks?

Nick Graves June 18, 2009 at 6:05 pm

No. The tests are not marked locally, they are marked by people you’ll never meet, in Victoria. <– This is not true: only grade 12 English and some higher-level courses are; otherwise they are marked by the teacher.

Furthermore, as Peter said, provincials are so flawed that even the majority of universities in BC no longer use them for admissions.

It is good to discuss education and the way it is delivered, but it is bad to spout off with little to no factual evidence backing it.

. June 19, 2009 at 3:56 pm

A-levels ‘too much like sat-nav’
By Katherine Sellgren
BBC News education reporter

The A-level exam has become “hollow preparation” for university, by undermining independent study and original thought, says a think tank.

The Reform group claims exam modules have created a “learn and forget culture” – which it likens to using a sat-nav rather than map-reading skills.

It says universities should ensure the quality of A-levels, taken by pupils in England, Wales and Northern Ireland.

Ministers said the extended project and the new A* would address concerns.

peter October 14, 2009 at 7:32 pm

dont tell that shit anymore ok?
u know that it is pressure for people who wanna get into university or college?

Emily October 15, 2009 at 12:39 am

Peter,

Very persuasive!

Jerkstore October 15, 2009 at 8:46 am

Looks like someone could have used some provincial exams ;)

Milan October 15, 2009 at 12:54 pm

Joking aside, there ought to be pressure for people wanting to get into universities or colleges. Having people just drift automatically from high school to post-secondary education seems likely to sap the quality and enthusiasm of those who turn up in university classrooms.

. February 16, 2010 at 10:52 am

Detecting Cheating by Analyzing Erased Answers

I had no idea this was being done, but erased answers are now analyzed on standardized tests. Schools with a high number of wrong-to-right changes across multiple tests are presumed to have cheated: teachers changing the answers after the students are done.
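Out of curiosity, a minimal sketch of what such an erasure screen might look like. Everything here is hypothetical (the function name, classroom IDs, counts, and the norm are made up for illustration); the real analyses are more sophisticated, comparing each classroom against a state norm across multiple tests:

```python
def flag_anomalous_classrooms(erasures, norm_mean, norm_sd, threshold=4.0):
    """Return classroom IDs whose wrong-to-right erasure counts fall
    more than `threshold` standard deviations above the state norm."""
    return sorted(
        room
        for room, count in erasures.items()
        if (count - norm_mean) / norm_sd > threshold
    )

# Hypothetical per-classroom counts of wrong-to-right erasures, against a
# hypothetical state norm of 4 changes per classroom (sd = 2).
data = {"room_a": 3, "room_b": 5, "room_c": 2, "room_d": 4, "room_e": 48}
print(flag_anomalous_classrooms(data, norm_mean=4.0, norm_sd=2.0))  # -> ['room_e']
```

A z-score threshold like this only flags statistical outliers; it is evidence for an investigation, not proof of cheating on its own.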

. March 18, 2010 at 4:55 pm

Schools and testing
The finger of suspicion
Is too much weight given to testing?

Feb 25th 2010 | ATLANTA | From The Economist print edition

But in 2009 13 teachers in Georgia were punished for cheating, including the principal and assistant principal at one elementary school. They changed answers on completed tests for fear that otherwise their school would not make “adequate yearly progress”, as required by the federal No Child Left Behind (NCLB) law. A school that fails to do so for two years running must offer pupils the opportunity to transfer to better schools. Teachers and administrators can be fired, and the school can be taken over by the state.

And therein, say many education specialists, lies the problem: the immense weight that NCLB places on a single test. Teachers spend an increasing amount of time “teaching to the test”, because they know the results may determine their futures. A study of the Chicago school system conducted for Harvard’s Kennedy School found that the more weight given to tests, the more likely alteration becomes. Verdaillia Turner, who heads Georgia’s and Atlanta’s teachers’ unions, complains that the tests have turned teachers into “little robots. The best and brightest do not go into teaching any more.”

. May 2, 2010 at 9:11 pm

Half of English schools ‘boycotting Sats’

By Angela Harrison BBC News education correspondent in Liverpool

Half of England’s primary schools will boycott national tests due to be taken by 11-year-olds in just over a week, teaching unions claim.

National Union of Teachers general secretary Christine Blower was addressing members of the National Association of Head Teachers.

She said 50% of England’s 17,000 schools would take part.

Results from the national tests – in English and maths – are used to make up the primary school league tables.

They also underpin reports by Ofsted inspectors.

Ms Blower said the unions could succeed in their boycott and that the two unions should find ways to boycott Ofsted inspections next.

The two unions have pitted themselves against the government in opposing what they say is the damaging effect of the national tests on children’s education and schools.

They are particularly angry about the publication of the results in league tables, which they say humiliates and demoralises schools and does not reflect their true achievements.

. May 10, 2010 at 9:48 am

“Meanwhile secondary schools switched pupils from harder subjects to easier ones in the chase for good exam results. The number in state schools studying the core subjects of history, geography, languages and the sciences to age 16 has fallen dramatically since 1997, with a rise in easier-to-pass subjects such as media studies and psychology. Teacher-assessed courses in subjects like sport or “travel and tourism” are given a spurious equality with traditional exams in government figures, and hardly anyone fails them.

Grade inflation has occurred across the board. Officially, 80% of children leave primary school now at the expected standard in reading and 79% in mathematics, up from 63% and 62% respectively in 1997. About 70% of 16-year-olds get five good GCSEs or the vocational equivalent, up from 46%. More 18-year-olds take A-levels, the university entrance exams, and they get far higher grades: 26.7% of all entries receive the highest grade, up from 16.3%.

The government takes these soaring results as evidence of ever-rising standards. Independent experts disagree. One group of academics in Durham, who test random samples of pupils leaving primary school each year, find only a modest rise in English and mathematics before 2000, and none since. Its analysis of GCSEs and A-levels is no more encouraging: the tests have become so much easier that a student of the same ability could expect to get half a grade higher now than in 1997.”

. May 10, 2010 at 9:49 am

“The Conservatives also intend to tame grade inflation by giving control over exam-setting and -marking to universities, who have a natural interest in keeping results informative. And they say they would insist on having the results of different types of exams reported separately, so that less demanding qualifications do not drive out better ones.”

. November 3, 2010 at 12:01 pm

In 1900 the presidents of twelve leading northeastern universities had set up an organization called the College Entrance Examination Board, which administered admissions tests. These were lengthy essay examinations in specific subjects, which students took over a period of days and which were then shipped to the College Board office in New York and laboriously hand-graded by professional readers. The purpose for which the College Board had been invented was not really selection, which was almost a non-issue. It was to standardize the admissions process administratively and to force New England boarding schools to adopt a uniform curriculum — they would have to fill their students with the information required to pass the exams — so that undergraduates would arrive well prepared.

The College Boards were of little use to Henry Chauncey’s new project. They weren’t administered until June (too late to select students for scholarships), weren’t administered at all in most of the Midwest, and most boys who hadn’t studied the boarding-school curriculum couldn’t pass them anyway. What Chauncey needed was a uniform means of comparing students from all across the highly localized American education system — an academic equivalent of the standard gauge that the railroad industry had adopted after the Civil War. The United States had already become a national society in most ways, having generated, in addition to the standard gauge, a bureaucratized federal government, big corporations, and national communications media. But education — an enormous field with importance beyond its size, because of its role as a handler of people — remained a local matter.

john chan January 23, 2011 at 4:32 am

At richmond high school, all the IB students get less than 90% in regular classes, yet regularly score 99-100 on provincials.

. August 5, 2011 at 3:59 pm

Atlanta’s public schools
Low marks all round
The city’s school system has cheated its pupils. Now it must clean up the mess

AT ITS heart—as in so many scandals—lay a simple thing: the friction of rubber on paper. Too many wrong answers were erased, too many right ones inserted. Questions about dramatic improvements in standardised-test scores taken by children in Atlanta’s public schools (APS) were first raised a decade ago. They were thoroughly answered last week when Governor Nathan Deal released a report that found cheating throughout Atlanta’s school system, not by pupils but by teachers, with the superintendent and her administration either encouraging it or turning a blind eye.

Cheating occurred in 44 of the 56 Atlanta elementary and middle schools examined, and with the collusion of at least 178 teachers, including 38 principals. (And the report cautions that “there were far more educators involved in cheating, and other improper conduct, than we were able to establish sufficiently to identify by name in this report”). Answer-sheets in some classrooms found wrong-to-right erasures on test sheets that had standard deviations 20 to 50 times above the state norm. According to Gregory Cizek, who analysed test scores for the special report, the chance of this occurring without deliberate intervention is roughly the same as that of the Georgia Dome, a 70,000-seat football stadium, being filled to capacity with spectators who all happened to be over seven feet tall.

Some teachers gave pupils answers. Some filled in answers themselves. Some pointed to answers while standing over pupils’ desks. Others let low-scoring children sit near—and copy from—higher-scoring ones. One group of teachers had a test-changing party over the weekend.

Peer reviewed science October 7, 2011 at 6:18 pm

Easy A
Melissa McCartney

Most students enter college aiming for a 4.0 GPA. Given that grading in American educational institutions is unregulated, how meaningful is a 4.0? Rojstaczer and Healy examined grade distributions from 200 American colleges and universities over the past 70 years. They report that movement away from the traditional bell-shaped grading curve began in the 1960s and 1970s in order to help students avoid the military draft. A continual rising of grades followed, without the accompaniment of increased student achievement. Graduation rates have remained largely static for decades, the literacy of graduates has declined, and college entrance exam scores of applicants have fallen. America’s educational institutions have gradually created an illusion where excellence is widespread and failure is rare. In fact, “A” is now the most common grade. Efforts at grade regulation are controversial, but without grading oversight, either on a school-by-school or national basis, it is unlikely that meaningful grades will return to American education.

Teach. Coll. Rec. 114 (2012).

http://www.sciencemag.org/content/333/6048/twil.full

. January 31, 2012 at 8:55 pm

Exams in South Korea
The one-shot society
The system that has helped South Korea prosper is beginning to break down

Dec 17th 2011 | SEOUL | from the print edition

ON NOVEMBER 10th South Korea went silent. Aircraft were grounded. Offices opened late. Commuters stayed off the roads. The police stood by to deal with emergencies among the students who were taking their university entrance exams that day.

Every year the country comes to a halt on the day of the exams, for it is the most important day in most South Koreans’ lives. The single set of multiple-choice tests that students take that day determines their future. Those who score well can enter one of Korea’s best universities, which has traditionally guaranteed them a job-for-life as a high-flying bureaucrat or desk warrior at a chaebol (conglomerate). Those who score poorly are doomed to attend a lesser university, or no university at all. They will then have to join a less prestigious firm and, since switching employers is frowned upon, may be stuck there for the rest of their lives. Ticking a few wrong boxes, then, may mean that they are permanently locked out of the upper tier of Korean society.

. July 3, 2012 at 3:52 pm

Opposition to the present orgy of testing is now wrongly interpreted as unwillingness to be held accountable.

For those who buy that fiction, a list of some of the real reasons for educator opposition may be helpful.

Teachers (at least the ones the public should hope their taxes are supporting) oppose the tests because they focus so narrowly on reading and math that the young are learning to hate reading, math, and school; because they measure only “low level” thinking processes; because they put the wrong people — test manufacturers — in charge of American education; because they allow pass-fail rates to be manipulated by officials for political purposes; because test items simplify and trivialize learning.

Teachers oppose the tests because they provide minimal to no useful feedback; are keyed to a deeply flawed curriculum adopted in 1893; lead to neglect of physical conditioning, music, art, and other, non-verbal ways of learning; unfairly advantage those who can afford test prep; hide problems created by margin-of-error computations in scoring; penalize test-takers who think in non-standard ways.

Teachers oppose the tests because they radically limit their ability to adapt to learner differences; encourage use of threats, bribes, and other extrinsic motivators; wrongly assume that what the young will need to know in the future is already known; emphasize minimum achievement to the neglect of maximum performance; create unreasonable pressures to cheat.

http://www.washingtonpost.com/blogs/answer-sheet/post/the-complete-list-of-problems-with-high-stakes-standardized-tests/2011/10/31/gIQA7fNyaM_blog.html

. July 3, 2012 at 3:54 pm

But the problem with my alma mater is that over time, the mechanisms of meritocracy have broken down. In 1995, when I was a student at Hunter, the student body was 12 percent black and 6 percent Hispanic. Not coincidentally, there was no test-prep industry for the Hunter entrance exam. That’s no longer the case. Now, so-called cram schools like Elite Academy in Queens can charge thousands of dollars for after-school and weekend courses where sixth graders memorize vocabulary words and learn advanced math. Meanwhile, in the wealthier precincts of Manhattan, parents can hire $90-an-hour private tutors for one-on-one sessions with their children.

By 2009, Hunter’s demographics were radically different—just 3 percent black and 1 percent Hispanic, according to the New York Times. With the rise of a sophisticated and expensive test-preparation industry, the means of selecting entrants to Hunter has grown less independent of the social and economic hierarchies in New York at large. The pyramid of merit has come to mirror the pyramid of wealth and cultural capital.

http://www.thenation.com/article/168265/why-elites-fail#

Sara January 16, 2014 at 12:17 pm

I realize this was posted years ago, but I would like to add two things.
1. Canadian teachers have no incentive to inflate marks. There is no funding, no extra money, for how many of your students get into university. This is different at a private school though as they use these numbers to market their school. At the public level, your grad rates are your grad rates…
2. The provincial exam for English Language Arts marks a VERY limited aspect of the course. As such, class marks may differ vastly depending on how much of the remainder of the curriculum a teacher assesses. Sadly, in Alberta, where exams are 50% of the course mark, teachers really only assess what the exam does. That leaves a majority of the curriculum, what people have said they want students to learn, under-assessed, as many students excel in those areas (presenting, creating text, etc.) whereas they are weaker in the areas of essay writing and reading comprehension, and assessing them would create a greater discrepancy between the class mark and the provincial exam mark. (The reading comprehension in Alberta is not your basic reading comprehension. It is all inference and analysis. No recall. It is difficult.)
Anyway, I do believe testing is important. I also believe holding teachers accountable and making sure that all teachers are assessing students in a similar fashion is important. But there are better ways than standardized tests: objective-based assessment, portfolio presentations to an objective panel, as well as global assessments like final exams.

. February 10, 2014 at 3:58 pm

Adjusting GPAs: A Statistician’s Effort to Tackle Grade Inflation

A recent analysis of 200 colleges and universities published in the Teachers College Record found that 43 percent of all letter grades awarded in 2008 were A’s, compared to 16 percent in 1960. And Harvard’s student paper recently reported that the median grade awarded to undergraduates at the elite school is now an A-.
