Friday, November 19, 2010

Questionable Science Behind Academic Rankings

LONDON — For institutions that regularly make the Top 10, the autumn announcement of university rankings is an occasion for quiet self-congratulation.
When Cambridge beat Harvard for the No. 1 spot in the QS World University Rankings this September, Cambridge put out a press release. When Harvard topped the Times Higher Education list two weeks later, it was Harvard’s turn to gloat.
But the news that Alexandria University in Egypt had placed 147th on the list — just below the University of Birmingham and ahead of such academic powerhouses as Delft University of Technology in the Netherlands (151st) and Georgetown in the United States (164th) — was cause for both celebration and puzzlement. Alexandria’s Web site was quick to boast of its newfound status as the only Arab university among the top 200.
Ann Mroz, editor of Times Higher Education magazine, issued a statement congratulating the Egyptian university, adding “any institution that makes it into this table is truly world class.”
But researchers who looked behind the headlines noticed that the list also ranked Alexandria fourth in the world in a subcategory that weighed the impact of a university’s research — behind only Caltech, M.I.T. and Princeton, and ahead of both Harvard and Stanford.
Like most university rankings, the list is built from several different indicators, each given a weighted score, which are then combined into a final number or ranking. As Richard Holmes, who teaches at the Universiti Teknologi MARA in Malaysia, noted on his University Ranking Watch blog, the Webometrics Ranking of World Universities, published by the Spanish Ministry of Education, finds that Alexandria University is “not even the best university in Alexandria.”
The overall result, he wrote, was skewed by “one indicator, citations, which accounted for 32.5% of the total weighting.”
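To make the arithmetic concrete: a composite ranking of this kind is essentially a weighted sum of indicator scores, so an extreme value on one heavily weighted indicator can outweigh mediocrity everywhere else. The sketch below is only an illustration; the 32.5 percent citation weight comes from the article, but the remaining weights and all of the indicator scores are invented, not the Times Higher Education methodology.

```python
# Illustrative weighted-composite score. Only the 32.5% citation weight
# is taken from the article; every other number here is an assumption.
weights = {
    "citations": 0.325,  # citation-impact indicator, as reported
    "teaching": 0.300,   # assumed
    "research": 0.300,   # assumed
    "other": 0.075,      # assumed
}

def composite(scores):
    """Weighted sum of per-indicator scores, each on a 0-100 scale."""
    return sum(weights[k] * scores[k] for k in weights)

# A university with middling scores across the board...
typical = {"citations": 55, "teaching": 55, "research": 55, "other": 55}
# ...versus one that is ordinary except for a near-perfect citation score.
outlier = {"citations": 99, "teaching": 40, "research": 40, "other": 40}

print(composite(typical))  # ~55.0
print(composite(outlier))  # ~59.2 -- the single indicator lifts the whole total
```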
Phil Baty, deputy editor of Times Higher Education, acknowledged that Alexandria’s surprising prominence was actually due to “the high output from one scholar in one journal” — soon identified on various blogs as Mohamed El Naschie, an Egyptian academic who published over 320 of his own articles in a scientific journal of which he was also the editor. In November 2009, Dr. El Naschie sued the British journal Nature for libel over an article alleging his “apparent misuse of editorial privileges.” The case is still in court.
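The mechanism is easy to see in miniature. With wholly invented numbers, the sketch below shows how a citations-per-paper average over a modest publication record can be inflated by a dense cluster of papers that cite one another heavily; it is not a reconstruction of the actual bibliometric calculation behind the rankings.

```python
# Invented illustration: average citations per paper for a small institution.
# A tight cluster of heavily inter-cited papers can dominate the mean even
# when the rest of the output is rarely cited. All counts are made up.

ordinary_papers = [2, 0, 1, 3, 1, 0, 2]   # assumed citation counts
clustered_papers = [120] * 5              # assumed: a few papers citing one another

def mean_citations(papers):
    return sum(papers) / len(papers)

print(round(mean_citations(ordinary_papers), 1))                     # 1.3
print(round(mean_citations(ordinary_papers + clustered_papers), 1))  # 50.8
```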
One swallow may not make a summer, but the revelation that one scholar can make a world-class university comes at a particularly embarrassing time for the rapidly burgeoning business of rating academic excellence.
“The problem is we don’t know what we’re trying to measure,” said Ellen Hazelkorn, dean of the Graduate Research School at the Dublin Institute of Technology and author of “Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence,” coming out this March. “We need cross-national comparative data that is meaningful. But we also need to know whether the way the data are collected makes it more useful — or easier to game the system.”
Dr. Hazelkorn also questioned whether the widespread emphasis on bibliometrics — using figures for academic publications or how often faculty members are cited in scholarly journals as proxies for measuring the quality or influence of a university department — made any sense. “I understand that bibliometrics is attractive because it looks objective. But as Einstein used to say, ‘Not everything that can be counted counts, and not everything that counts can be counted.’”
Unlike the Times Higher Education rankings, where surveys of academic reputation make up 34.5 percent of the total, Shanghai Jiao Tong University relies heavily on faculty publication rates for its rankings; weight is also given to the number of Nobel Prizes or Fields Medals won by alumni or current faculty. The results, say critics, tip toward science and mathematics rather than the arts or humanities, while the tally of prizewinners favors rich institutions able to hire faculty members whose best work may be long behind them.
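How much the choice of weights alone can matter is easy to demonstrate. In the sketch below, the same three fictional universities, with made-up indicator scores, come out in a different order under a reputation-heavy scheme than under a publication-and-prize-heavy one; neither weighting reproduces the actual Times Higher Education or Shanghai formulas.

```python
# Made-up indicator scores (0-100) for three fictional universities.
scores = {
    "Univ A": {"reputation": 90, "publications": 55, "prizes": 30},
    "Univ B": {"reputation": 60, "publications": 85, "prizes": 80},
    "Univ C": {"reputation": 75, "publications": 70, "prizes": 50},
}

# Two hypothetical weighting schemes (not the real methodologies).
reputation_heavy = {"reputation": 0.6, "publications": 0.3, "prizes": 0.1}
publication_heavy = {"reputation": 0.1, "publications": 0.5, "prizes": 0.4}

def rank(weights):
    """Order the universities by their weighted composite score."""
    totals = {u: sum(weights[k] * s[k] for k in weights) for u, s in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

print(rank(reputation_heavy))   # ['Univ A', 'Univ C', 'Univ B']
print(rank(publication_heavy))  # ['Univ B', 'Univ C', 'Univ A']
```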
“The big rap on rankings, which has a great deal of truth to it, is that they’re excessively focused on inputs,” said Ben Wildavsky, author of “The Great Brain Race.” Measuring faculty size or publications, or counting the books in the university library, as some rankings do, tells you more about a university’s resources than about what those resources do for students, he said. Nevertheless, Mr. Wildavsky, who edited U.S. News and World Report’s Best Colleges list from 2006 to 2008, described himself as “a qualified defender” of the process.
“Just because you can’t measure everything doesn’t mean you shouldn’t measure anything,” said Mr. Wildavsky, adding that when U.S. News published its first college guide in 1987, a delegation of college presidents met with the magazine’s editors to ask that the whole exercise be stopped.
Today there are more than 40 different rankings — some, like U.S. News, focused on a single country or a single academic field such as business administration, medicine or law, others attempting to compare universities on a global scale.
Mr. Wildavsky freely admits the system is subject to all kinds of bias. “A lot of ratings use graduation rates as a measure of student success,” he said. “An urban-setting university is probably not going to have the same graduation rate as Dartmouth.”
“But there’s a real need for a globalized comparison on the part of students, academic policymakers, and governments,” he said.
The difficulty, Dr. Hazelkorn said, “is that there is no such thing as an objective ranking.”
Mr. Baty said that when Times Higher Education magazine first set up its rankings in 2004, “it was a relatively crude exercise” aimed mainly at prospective graduate students and academics. Yet today those rankings have an impact on governments as well as on faculties.
Dr. Hazelkorn pointed out that a recent Dutch immigration law explicitly targets foreigners who received their degree “from a university in the top 150” of the Shanghai or Times Higher Education rankings.
According to Mr. Baty, it was precisely the editors’ awareness that the Times Higher Education rankings “had become a global news event” that prompted them to overhaul their methodology for 2010. So it is particularly ironic that the new, improved model should prove so vulnerable. “When you’re looking at 25 million individual citations, there’s no way to examine each one,” he said. “We have to rely on the data.”
That may not convince the critics, who apparently include Dr. El Naschie. “I do not believe at all in this ranking business and do not consider it anyway indicatory of any merit of the corresponding university,” he said in an e-mail.
But if rankings can’t always be relied on, they have become an indispensable part of the educational landscape. “For all their methodological shortcomings, rankings aren’t going to disappear,” said Jamil Salmi, an education expert at the World Bank. Mr. Salmi said that the first step in using rankings wisely is to be clear about what is actually measured. He also called for policy makers to move “beyond rankings” to compare entire education systems. He offered the model of Finland, “a country that has achieved remarkable progress as an emerging knowledge economy, and yet does not boast any university among the top 50 in the world, but has excellent technology-focused institutions.”
