How colleges can measure up in teaching ‘critical thinking’
After he became president of Purdue University in 2013, Mitch Daniels asked the faculty to prove that their students have actually achieved one of higher education’s most important goals: critical thinking skills. Two years before, a nationwide study of college graduates had shown that more than a third had made no significant gains in such mental abilities during their four years in school.
Mr. Daniels, a former governor of Indiana, needed to justify the high cost of attending Purdue to its students and their families. After all, the percentage of Americans who say a college degree is “very important” has fallen from 75 percent in 2010 to 44 percent today.
Purdue now has a pilot test to assess the critical thinking skills of students as they progress. Yet like many college teachers around the United States, the Purdue faculty remain doubtful that their work as educators can be measured by a “learning outcome” such as a graduate’s ability to investigate, reason, and put facts in context. Many still prefer the traditional system of course grades in specific fields or overall grade averages, despite serious concerns by employers about “grade inflation.”
The professors need not worry so much. This week, the results of a nationwide experiment were released showing for the first time that professors can use standardized metrics to assess the actual coursework of students – across many schools and disciplines – and measure how well they do in three key areas: critical thinking, written communication, and quantitative literacy.
The project involved more than 125 professors judging 7,000 samples of students’ class work from 59 institutions in nine states. It was initiated by the Association of American Colleges & Universities (AACU) and the State Higher Education Executive Officers.
The idea partly derives from the frustration among colleges over the many attempts by “outsiders” – from U.S. News & World Report to the Obama administration’s new “College Scorecard” – to rank or rate schools for consumers of higher education. Rather than continue to have professors remain on the defensive, the AACU took the offensive to show that faculty can define the generalized “rubrics” of what students should be learning.
Despite the success of the project in showing that teachers can design such assessments, the actual results are worrisome, and mostly confirm earlier studies. Among students at four-year institutions who had completed most of their coursework, fewer than a third of finished assignments earned a high score for a critical thinking skill (“using evidence to investigate a point of view or reach a conclusion”). Fewer than half drew “appropriate conclusions based on quantitative analysis of data.”
The project organizers summed it up this way: “Of the three outcomes evaluated, far fewer students were achieving at high levels on a variety of dimensions of critical thinking than did so for written communication or quantitative literacy.” And that conclusion is based only on students nearing graduation. The project did not measure the work of students who had completed less than 75 percent of their coursework.
American universities, despite their global reputation for excellence in teaching, have only begun to demonstrate what they can produce in real-world learning. Knowledge-based degrees are still important. But given the pace of discoveries in many fields, employers are demanding advanced thinking skills from college grads.
If faculty can now work with college presidents, government leaders, and others to accurately measure the intellectual worth of a college degree, more people will seek higher education – and come out better thinkers.