Part One: What Got Us into this Mess?
According to Webster's Dictionary, data is factual information (as measurements or statistics) used as a basis for reasoning, discussion, or calculation. Data has become the latest focus for teachers, administrators, and superintendents alike. Where did this all start?
Over the past thirty years, America's educational system has come under scrutiny because of comparisons of students' test scores. In those comparisons, American students ranked below those of most other countries, including some developing nations. Many felt there was no excuse for this disparity, given American children's access to a plethora of resources unavailable to children elsewhere in the world. These reports, beginning with A Nation at Risk, prompted Ronald Reagan and every president after him to get involved and make education a national issue that needed to be "fixed." Education became a popular campaign topic, broad enough to hook parents who wanted their children to receive a quality education while convincing educators that candidates who spoke about improving education might care enough to create positive change in the schools. That impression quickly faded as politicians focused more on holding schools accountable (No Child Left Behind, or "NCLB," and other accountability laws), shifting the field toward a government-controlled, top-down model. Whatever form the changes took, the underlying motive was the same: the reputation of the United States was threatened by the fact that American students were simply not making the cut.
Citizens of the United States feel that "superpower" is the title that best describes their country's status in the world, a confidence that often extends beyond what reality supports. The pressure of maintaining the image that the United States is the best at everything made education, the apparent exception, a target of scrutiny, as the sudden spotlight exposed the problems throughout the field.
It started with a simple premise: there needed to be accountability, and a process of checks and balances had to be created. As the years passed, the ideas turned to measurement tools, and those tools became tests. Those tests, or assessments, came into wider use even though the purpose behind them remained unclear. Not surprisingly, the tests did not improve students' overall education, which led to the question of whether a certain type of test would at least improve students' test scores.
Tests became more focused and were broken into two main classifications: norm-referenced and criterion-referenced testing. Norm-referenced tests examined the norms common to groups of students, defined by factors such as grade or race, and compared those groups in an attempt to establish a baseline norm, so that parents could understand their child's ranking relative to other students. Criterion-referenced tests, by contrast, examined the specific criteria a given group of students was expected to know and measured how much of that required information the students had actually learned. In theory, the criterion-referenced test seemed more sensible; the educational system's hurdle, however, was determining exactly what criteria should be measured.
From the mid-1980s to the mid-1990s, the country looked to states like California and Texas to develop norm-referenced tests that would set a standard of norms for students across different age groups. These were only the first steps toward seeing the big picture. The tests were given toward the end of the year, as they still are for many primary students across the country, and they provided a snapshot of the skills students in different grades had grasped. The pressure was nothing like it is today, because teachers' jobs were not being measured by this data; instead, the results simply showed parents how their children compared to others in their age group. For a long time, this was the only standardized assessment given before students reached high school. Otherwise, students were taught material and were expected to retain it from year to year.
Still, some critics felt this was not enough to gauge students' actual knowledge. E. D. Hirsch was adamant that students were graduating without the necessary foundational skills. He claimed students lacked knowledge of the classics and needed to be more "culturally literate," coining the idea of cultural literacy and resisting the progressive educational movement. He identified specific skills that "every child needed to know" at each grade level and fought into the twenty-first century to maintain that this approach would strengthen the education of students as a whole.
When George H. W. Bush took office, he revived the topic of education and the need for its reform. Although his reform efforts were subtle, the move toward criterion-referenced tests became apparent. States began to work in committees to develop sets of state standards that would establish a common thread among schools within the same state. These standards took years to establish, but by the mid-1990s they were introduced as a preliminary benchmark so that all teachers had a basic idea of what they should be teaching. Still, the push was focused not on what the lower grades were doing, but on what high schools were doing to prepare students to become high school and college graduates.
The desire for students not only to graduate from high school but also to attend college was the main cause of internal strife among teachers. High school teachers voiced concerns that students did not arrive prepared with the skills needed to succeed, forcing them to reteach the fundamentals. Middle school teachers complained that many students should have been retained and that many arrived reading two to three grade levels below, and therefore were unable to grasp grade-appropriate material. Primary teachers protested that teachers in previous years had not equipped students with the skills they needed, and kindergarten teachers replied that parents did not send children to preschool with the reading readiness and social skills required for school. As the blame game became common in schools across America, excusing the problem away passed for a remedy.
The only true measures used to see whether students had grasped the information required for college-level work were the Scholastic Assessment Test (SAT) and the American College Test (ACT). All students interested in going on to college were required to take the SAT, formerly the Scholastic Aptitude Test, or the ACT. The push to improve America's professional workforce by producing more doctors, lawyers, and white-collar professionals led to a drive to have more students accepted into, and graduate from, four-year colleges. This caused SAT scores to take precedence, which became the start of the idea of teaching to the test.
Many companies, such as the Princeton Review and Kaplan, were established to help students learn test-taking strategies. While the Princeton Review did not "teach to test," it gave participating students the skills to achieve higher scores regardless of their knowledge base, creating a system of "pay to test well." These companies were able to capitalize on students' need to do well on the SAT, boasting that they would improve scores by at least one hundred points or let students retake the course for free. Despite these efforts, none of them really impacted students in the classroom. Teachers were left to teach what they felt was right using the few state standards they had, hoping they covered all of the necessary topics. As groups of students began to graduate lacking the foundational skills needed for college or even everyday life, colleges were forced to adjust.
Community college attendance began to rise, and the caliber of students attending these two-year schools changed. Community colleges, although created in the early 1900s, had become a path not only for military and trade-skills students but also for students hoping eventually to earn a four-year degree. This provided a second chance for students who did not have a strong enough academic foundation to be "successful" at a four-year college. During the 1980s, there was an emphasis on collaboration between high schools and community colleges to prepare students for vocational and technical trades. These partnerships offered alternatives for students who could not yet handle the rigor of a four-year college. They also led to more students testing into remedial classes that were prerequisites for college-level math and English. As parents realized they were paying for classes their children would not even receive credit for, the need to fix schools at the K-12 level became more urgent, so that all students would have the choice to pursue college or not, rather than having that choice made for them.
As the No Child Left Behind Act was implemented in schools across the nation, a set of new terms became key in education: highly qualified teachers, Adequate Yearly Progress (AYP), focus schools, priority ...