General Critique of the Regents Math A Exam
Level of Difficulty of the June, 2003, Exam
Other Analyses of the Regents Math A Exam
Recommendations to the Commissioner
The main body of the article, up to the Recommendations, was written at the end of June, 2003. The Addenda are later, as indicated there.
This Web article offers an analysis of the Regents Math A exam based on the instances of August, 2002, and of January and June, 2003. I look at the quality of the exam in general, and at the level of difficulty of the June 2003 exam relative to the two previous instances. I conclude with some recommendations for the New York State Education Commissioner.
Related Web page: Detailed Critique of the New York State Regents Mathematics A Exam.
The Regents Math A exam that was given on June 17, 2003, has received much negative press attention, as seen in the following headlines. (Links may break over time.) New York Daily News: Test Mess Threatens Diplomas. New York Newsday: Math Test Too Tough?. New York Post: Testy Teachers Blast 'Too Hard' Math Exam. New York Times: This Year's Math Regents Exam Is Too Difficult, Educators Say. Rochester Democrat and Chronicle: Huge Numbers Fail Math Test. Buffalo News: Many Seniors Fail Crucial Test.
The year 2003 is the third year in which students were required to obtain a passing score on the Regents Math A in order to receive a New York State graduation diploma. However, the flak over the June 2003 instance of the exam led the State Commissioner of Education, Richard P. Mills, to waive the Regents Math A requirement for this year's graduating seniors.
Past instances of the exam and the associated scoring keys and conversion tables are posted on the Regents Examinations Web site, under the link to Mathematics A. Procedural information related to the exam is posted at the State Assessment site under High School General Information. I have read with care and will discuss here the Math A exams of August, 2002, and of January and June, 2003.
The format of the Mathematics A exam is identical over the three instances. There are 35 questions. Questions 1-20 are four-way multiple choice questions worth 2 points each, questions 21-25 are open response questions worth 2 points each, questions 26-30 are open response worth 3 points each, and questions 31-35 are open response worth 4 points each. Partial credit is possible on the open response questions. When scores on the Regents are discussed one must be careful: sometimes the scores refer to the number of points out of the 85-point maximum, and at other times they are expressed on a scale with a maximum of 100. For the year 2003, before the Math A requirement was waived, New York State required a scaled score of 65 for a Regents diploma and a scaled score of 55-64 (at the discretion of local districts) for a local diploma. On the June 2003 instance a raw score of 51 points out of the 85 maximum corresponded to the scaled score of 64, and a raw score of 43 points to the scaled score of 55.
Students must be supplied with a calculator and with a straightedge and compass. The straightedge and compass are required on only one or two questions per exam, but the calculator is used extensively.
The following critical remarks are substantiated in a Detailed Critique of the New York State Regents Mathematics A Exam, which looks at individual items from the three recent instances of the exam from August, 2002, through June, 2003. It is apparent that mathematicians have had at most a peripheral role in the writing of the Regents Math A exams.
The companion detailed critique shows several instances of questions in which students are forced to guess what is meant, for example by looking at the candidate answers in the multiple choice section. (Aug 2002, Q4 and Q14; Jan 2003, Q8 and Q18; Jun 2003, Q4 and Q14.) On their own these are not major errors, because students can indeed figure out what is meant. (The most serious blunder in this category is June 2003, Q14, for which the scoring rubric had to be adjusted.) This also explains why these errors were not caught in the pre-test field trials. However, these ambiguous formulations are damaging in an indirect way, in addition to being just plain unprofessional. The frequent use of imprecise language teaches students and teachers that the authors of the Regents exam do not necessarily mean what they write or write what they mean. Students may then find themselves second-guessing the authors' intent even when a question means exactly what it says.
Besides the imprecise and ambiguous questions there are many instances of poor mathematical language; I do not even list all of them in the more detailed critique. In particular, the test authors persistently avoid the words "equals" and "is" and use instead the weightier "is equivalent to".
One cannot function mathematically without knowing the names of some important concepts. I do not object to questions on the Regents Math A that require students to know what the distributive property is in arithmetic, or what the difference is between similarity and congruence in geometry. In many cases, however, the Regents Math A places undue emphasis on purely linguistic matters of little import.
In geometry one constantly uses properties such as that a certain pair of angles in a figure adds up to 90 degrees or to 180 degrees. The concept is important and it is proper that students have to use it on the exam, but I do not think it particularly important that they know which pairs are called complementary and which supplementary angles. Many mathematicians would have to guess. Certainly very few would know or care what is meant by a pair of "alternate interior angles" in a figure of a line crossing a pair of parallel lines, but students taking the Regents Math A must have memorized this concept.
The key topic in elementary logic is the manipulation of predicates. Only one kind of manipulation seems ever to be tested on the Regents Math A, and that is the transformation of p=>q to ~q=>~p, which students must be able to identify as the replacement of an implication by its contrapositive. In addition, it seems that every instance of the Regents Math A contains at least one question in which students are asked to identify the inverse or the converse of an implication or to distinguish between the two. Many mathematicians would have to guess what is meant.
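For reference, the logical facts behind these questions can be checked mechanically. The sketch below is my own illustration, not exam material; it verifies over all truth assignments that an implication is equivalent to its contrapositive, but not in general to its converse.

```python
from itertools import product

def implies(p, q):
    # Material implication: p => q is false only when p is true and q is false.
    return (not p) or q

# The contrapositive ~q => ~p agrees with p => q on every truth assignment.
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == implies(not q, not p)

# The converse q => p does not: with p false and q true, the implication
# holds while the converse fails.
assert implies(False, True) and not implies(True, False)
```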
The "mode" of a data set is another concept that receives disproportionate attention in the Regents Math A exams. Mathematicians certainly care for the mean and the median, but the concept of the mode of a data set will be unfamiliar to most working mathematicians. It is a very poor measure for characterizing a data set.
The most serious source of pollution of the Regents Math A exam is the policy of relying on calculators.
The faulty question 14 on the June exam, for which the scoring rubric had to be changed, is just one manifestation of this fault. The question is: "If the expression 3-4^2+6/2 is evaluated, what would be done last?" Of course the authors could have been clear and asked "If ... is evaluated without any regrouping ...", but it is a reflection of the presence of calculators that they saw fit to ask such a question at all. Without calculators they might simply have asked for the value of 3-4^2+6/2 (never mind here the grade school level of the question).
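A quick check of the arithmetic, in Python (my own illustration), shows exactly where the ambiguity lies:

```python
# Evaluated strictly left to right, the exponentiation comes first (4**2 = 16),
# then the division (6/2 = 3), then the subtraction, and the addition last:
assert 3 - 4**2 + 6/2 == (3 - 16) + 3.0 == -10.0

# But regrouping the terms gives the same value with the subtraction done
# last, which is precisely why the question needed the "without any
# regrouping" qualifier:
assert (3 + 6/2) - 4**2 == -10.0
```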
A perfectly fine elementary question for the decimal equivalent of 8.375*10^(-3) becomes a calculator question in the present regime. A multiple choice question that involves simplifying a square root becomes a guess-and-check calculator question. The combinatorial questions become an exercise in when to use the "P" function and when to use the "C" function on the calculator, without students having to know what these functions are.
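To illustrate the point (my own sketch, not exam material): each of these reduces to a single keystroke, although the underlying facts are elementary.

```python
import math

# Scientific notation: 8.375 * 10^(-3) is just a shift of the decimal point.
assert 8.375e-3 == 0.008375

# The calculator's "P" and "C" keys compute permutations and combinations;
# Python exposes the same operations as math.perm and math.comb.
assert math.perm(5, 2) == 20   # 5P2: ordered selections, 5 * 4
assert math.comb(5, 2) == 10   # 5C2: unordered selections, 20 / 2!
```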
The June 2003 exam was widely criticized for being more difficult than earlier instances. State Education Commissioner Richard P. Mills affirmed as much in his decision of June 24 to allow districts to use local course grades in place of the results of the Regents exam. However, at the time of this writing (end of June, 2003) no published analysis is available that clearly shows that the exam was in fact more difficult than earlier instances. Commissioner Mills's announcement refers only to preliminary indications.
My reading of the three exams from August, 2002, to June, 2003, does suggest to me that the open response section of the June exam was more difficult than that of the two previous instances, but I do not view it as an entirely clear matter. Some press reports have given the impression that the June 2003 exam was entirely different from earlier instances and an utter failure; that impression is certainly not correct. The exams, once again, are posted at the Regents Mathematics A Web site. There they are in PDF format, but for the convenience of some readers and for my own email needs I have created a plain-text HTML translation of the June 2003 exam here.
I would say that Part I, the multiple choice section, of June is absolutely not more difficult than it was in the two earlier instances (taking into account the scoring correction for question 14 on the June instance). Part II (questions 21-25, 2-point open response) also looks straightforward to me, with level of difficulty completely in line with expectations based on earlier instances.
In Part III (questions 26-30, 3-point open response) there are problems with questions 26 and 29, as discussed in the companion detailed critique. However, only one point is at stake for students who interpret question 26 differently than was intended and then provide a correct treatment. Question 29 is a rather difficult and verbose counting question about license plates with a specified number of letters and digits. It is somewhat less difficult than it would appear to the uninitiated, because the students are expected to key in the appropriate "P" function on their calculator. (See the detailed critique for more.) Among the three tests reviewed here, the most difficult Part III question that I found occurs in the August 2002 exam, question 27.
The most difficult questions are in Part IV (questions 31-35, 4-point open response), and here it does appear to me that the June exam is more difficult than the previous two instances. However, the reader should look at Parts IV of the earlier exams as well, and I would suggest taking special note of the relatively difficult questions 31 and 35 on the August 2002 exam, and question 33 of January 2003.
Question 31 on the June exam has a very strict scoring guideline, as explained in the accompanying detailed critique, and the examiners would have done well to correct that at the same time that they decided to revise the scoring guideline for the faulty question 14.
In connection with question 32 of June, note as well question 3 of the August 2002 exam. Both questions require the student to know the concept of the locus of points equidistant from two intersecting lines or from the sides of a triangle, but in June 2003 four points are at stake and in August 2002 only two.
Question 33 of June is completely in line with questions on earlier exams, but the scoring guideline is rather strict in requiring, for full credit, a table of values, which is not explicitly asked for in the question. One point is at stake for students who fail to provide such a table, and a change in the scoring guideline may have been in order.
Question 34 of June in effect asks students to find on the spot the three-dimensional version of the theorem of Pythagoras (which is not in the curriculum). It is an unreasonable question for this exam.
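The fact students were expected to discover on the spot is that a rectangular box with edges a, b, c has space diagonal sqrt(a^2 + b^2 + c^2), obtained by applying the planar theorem twice. A small check, my own illustration:

```python
import math

def space_diagonal(a, b, c):
    # Apply the planar Pythagorean theorem twice: first across the base,
    # giving a diagonal of sqrt(a^2 + b^2), then up the height c.
    base = math.hypot(a, b)
    return math.hypot(base, c)

# A 1 x 2 x 2 box has space diagonal sqrt(1 + 4 + 4) = 3.
assert abs(space_diagonal(1, 2, 2) - 3.0) < 1e-9
```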
Question 35 is the following. "The senior class is sponsoring a dance. The cost of a student disk jockey is $40, and tickets sell for $2 each. Write a linear equation and, on the accompanying grid, graph the equation to represent the relationship between the number of tickets sold and the profit from the dance. Then find how many tickets must be sold to break even."
If students have seen any economic modelling at all then this question is easy. We are given the fixed cost and the unit price, and there is a single commodity. The question can be difficult only if this very elementary situation has not been covered in the curriculum. Apparently a similar question has not previously appeared on the Regents Math A, and many students were not prepared for this one. The scoring key says to subtract two points if a student graphs y=2x (revenue instead of profit) and intersects this with y=40 to obtain the correct break-even point.
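In symbols, the intended model gives a profit of p(n) = 2n - 40 after selling n tickets, with break-even where p(n) = 0, that is, at n = 20. A minimal sketch of the computation:

```python
# Intended model for question 35: fixed cost $40, ticket price $2,
# so the profit after selling n tickets is p(n) = 2n - 40.
def profit(n):
    return 2 * n - 40

# Break even where the profit is zero: n = 40 / 2 = 20 tickets.
break_even = next(n for n in range(100) if profit(n) == 0)
assert break_even == 20
```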
Overall, Parts III and IV (questions 26-35) of the June, 2003, exam have a somewhat less "standard" flavor than the corresponding questions in the previous two instances. In the earlier exams there were always a few questions in which the students had to set up two linear equations in two unknowns, and that category of question is missing in the June 2003 exam. Relatedly, the June, 2003, exam tilts a bit more towards a test of aptitude rather than a test of school learning, relative to the previous two instances. This is not to say, however, that all the questions in August 2002 and January 2003 were of a standard and predictable form, nor that the June 2003 exam is plainly a test of aptitude and the earlier ones plainly a test of school learning.
The New York State Council of School Supervisors, NYSCOSS, offers an analysis of the June, 2003, Administration of Physics and Math A Regents. Concerning Mathematics A they criticize in detail the level of difficulty of the majority of questions in Parts III and IV of the exam, and their assessment of the level of difficulty of the exam is more severe than mine. However, they do not attempt a comparison with previous instances of the Math A exam.
The nysmathab email discussion group of the Association of Math Teachers of New York State is used by teachers and others to vent their opinions.
First of all it is appropriate to commend Commissioner Mills and the New York State Board of Regents (or the legislature if they are responsible) for the policy to provide full public transparency to the Regents exams. This is important for the integrity of the exam and must be maintained.
Second it is important to stress that the Regents Math A exam does not require a complete overhaul. The faults that I identify can all be corrected without fundamental change in the format and general structure of the exam.
Third, it is essential to have professional mathematicians involved at all stages of designing the Regents Math A. All questions currently in the pipeline for future use should be reviewed for mathematical style and content. The many flaws that I have noted in the last three Math A exams suggest the presence of many more flaws in the preliminary exams, to be uncovered in field testing. This is in itself an intolerable situation. The expensive and time-consuming process of field testing should be used to calibrate the difficulty of the exam and as a safeguard against an isolated poorly worded question. It should not have to serve as well to weed out an abundance of mathematical flaws and errors that could have been uncovered by competent professional reading.
Fourth, related to the previous recommendation, the observed categories of trivia questions must be removed from the pipeline. Students must not be quizzed again on what is the inverse and what is the converse of an implication or on what are complementary and what are supplementary angles.
Fifth, the calculators have to go. This is a change that will not be implemented for the next instance of the exam, but the sooner it happens, the better. I discussed here how the presence of calculators degrades the exam, and one can imagine how much more it degrades the process of exam prepping and of mathematics teaching in general.
Sixth, the Commissioner should have in place a better mechanism to deal with emergencies such as the controversy over the June 2003 exam. Nullifying the exam does not strike me as a desirable outcome. It should be routine practice that after every instance of the Regents a proper sampling of 5-10% of the papers is very quickly subjected to a preliminary grading in order to uncover any needed adjustments to the correction key. In the case of the present exam this would probably have led to more adjustments than only for question 14, and perhaps to an adjustment of the scaled score calibration as well, and it would then not have been found necessary to toss the results entirely.
Seventh, as a longer term project the Commissioner and Regents should arrange to rewrite the state mathematics content standards. The 1998 review and the 2000 review of State mathematics content standards, performed by Ralph Raimi and Lawrence Braden for the Fordham foundation, may serve as a guide for this revision.
Finally, the Commissioner and the Regents should stand by the integrity of a New York State high school graduation certificate as certified by the Regents exams. I don't think that they have adequately expressed that there is a real cost to tossing out the results of an exam or to lowering the standards. A high school diploma serves as the gatekeeper to a great many jobs. The cost of lowering the standards or tossing an exam is that the certificate is devalued, and prospective employers will ask for a two-year college degree instead.
Education Week has an article on the exam: N.Y. State Seniors Flunk Exit Exam, But Get Diplomas, by David J. Hoff. The article contains some further preliminary statistics. According to Hoff's reporting, quoting deputy commissioner Kadamus, 5,800 seniors, or about 4% of the graduating class of 2003, participated in the exam. At least 70% of the juniors and seniors who took the exam did not pass; on the other hand, an estimated 80% of the freshmen who took the exam passed it.
Hoff also quotes Kadamus as saying that the June, 2003, exam had more problem-solving questions than previous exams, because the state is gradually raising its expectations. This is a remarkable statement, because all previous reports indicated that the added difficulty of the June exam was unintended and had taken the Department entirely by surprise. There seems to have been a problem of communication somewhere within the department, and the result was a serious setback for New York State's testing program.
The views and opinions expressed in this page are strictly those of the page author. The contents of this page have not been reviewed or approved by New York University.