On April 4, 2016, the Honourable Minister of Human Resource Development announced the top educational institutions in the country with much fanfare. Rankings were announced for Universities, Engineering, Management and Pharmacy. For a nation where credible information on institutions is hard to come by, a ranking exercise by the state is a welcome move. But the outcome has been miserable, to say the least.
Here is a set of issues for the ranking agency to consider.
1. In the final ranking sheet, across domains, a few institutions are outstanding, about 20% are good, 30-35% are average, and downright odd entries make up the rest. In the case of B-Schools, see the article by Maheshwer Peri, which illustrates this point (http://www.outlookindia.com/website/story/open-letter-to-smriti-irani/296812). The NIRF needs to answer this question: why is it ranking any institute that came its way?
2. Take the university listing. We have NAAC, which also produces a comprehensive score, after flying in a team of experts who spend days examining data and engaging with faculty. And we have NIRF, with its faceless bureaucrats and mechanical data processing. Both finally produce a score. Among the top 100 universities listed by NIRF, ten have NAAC scores below 3; five are below 2.6. And several top universities with good NAAC scores do not find a mention in the NIRF list at all. Both sets are captured in the tables further below.
3. Assuming those missing schools simply did not participate, why did the NIRF end up ranking anybody and everybody who did? Bear in mind that both NAAC and NIRF allocate 30% each to teaching and research. It stands to reason that there should be at least a good correlation between the two systems, even after accounting for the remaining 40%. Can this wide variance at least be explained?
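To make that sanity check concrete, here is a minimal sketch in Python. The scores are entirely hypothetical stand-ins for the published NAAC CGPAs and NIRF totals, and the rough 0.8 threshold in the final comment is an illustrative choice of mine, not a statistical rule.

```python
# A toy version of the check suggested above: with 60% of the weightage
# overlapping in category (teaching + research), NAAC and NIRF scores for
# the same set of schools should correlate strongly. All numbers below
# are hypothetical, not actual NAAC or NIRF data.
import statistics

naac = [3.5, 3.2, 2.9, 3.6, 2.5]        # illustrative NAAC CGPAs (out of 4)
nirf = [78.0, 70.5, 61.0, 80.2, 52.3]   # illustrative NIRF scores (out of 100)

r = statistics.correlation(naac, nirf)  # Pearson's r; needs Python 3.10+
print(f"r = {r:.2f}")  # anything far below ~0.8 would demand an explanation
```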
4. Why are student preferences missing from this national-level ranking? Cut-offs determine a student's future in most admissions, and they are a telling indicator of an institution's standing in the minds of students. SRCC B.Com (Hons) admission demands a 100 percentile, as does admission to the College of Engineering, Anna University, while an average college in Delhi University will admit you even at a 65 percentile. One would have assumed that the state, with all its powers, would have used this occasion to give students an indicator of an institution's standing based on its best and worst cut-offs.
5. Most recruiters use the cut-off as a proxy for the quality of a cohort at a school, and even take recruiting decisions based on it. This is primarily why a new IIM, housed in a rented two-storey building on an engineering college campus, still gets recruiters: they are assured of a 90-percentile cohort there. The brand does the rest of the magic. But for some inexplicable reason this most crucial parameter is left out of the reckoning. The result is that odd institutions that cannot even fill 15% of their seats find a place in the NATIONAL RANK SHEET!
6. Let us take the first component of the NIRF methodology, "Teaching and Learning", which accounts for 30% of the total score. In addition to inane items like 'investments in the library' and the like, the core factors considered are the student/faculty ratio, the PhD/non-PhD ratio and faculty experience. And how does NIRF count experience? As the age of the teacher minus 15 years, with 15 years treated as the ideal experience. The net result is that any university stacked with senior professors will get a full score on that count!
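As described, the rule is trivially easy to game. The toy sketch below implements it as characterized above; the offset, the cap and the point scale are my illustrative assumptions, not NIRF's published constants.

```python
# Toy version of the experience rule described above: "experience" is proxied
# as (age - 15), and full marks arrive once the average proxy reaches the
# 15-year ideal. All constants here are illustrative assumptions.
def experience_score(ages, offset=15, ideal=15, max_points=10):
    proxy = [max(0, age - offset) for age in ages]  # age-based proxy for experience
    avg = sum(proxy) / len(proxy)
    return max_points * min(avg, ideal) / ideal     # capped at the ideal

print(experience_score([55, 58, 60]))  # 10.0 -- a greying faculty maxes out
print(experience_score([26, 28, 30]))  # ~8.7 -- a young faculty is penalised
```

Nothing in the rule looks at what those professors actually teach.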
7. What is the result of this amazing methodology? Of the top 15 universities, 4 are agriculture/veterinary universities (no offence meant to them). And in management, IMI Kolkata ranks slightly above IMI Delhi, the parent school. IMI must withdraw from the ranking forthwith!
8. And 5 of the 15 are very specialized small schools, either INIs or Deemed Universities. Both categories will naturally have good student/faculty and PhD/non-PhD ratios.
[Table: Universities listed by NIRF with low NAAC scores]
[Table: Institutions with better NAAC scores missing from the NIRF listing]
9. The point is, as a ranking agency NIRF must move beyond investments and the area of buildings in square feet; it is reminiscent of the "License Raj". Teaching innovation finds no mention here, and even curriculum development has not been ascribed any score. You, as the government, have the ability and the power to go beyond these numbers. But you didn't. You are the government. Get better and smarter data. Or simply move out of the ranking exercise.
10. Let us get to the next important rating component: research output, a tool used the world over to rate good schools. But we are a great nation, so what do we do? Instead of one index we use three: Scopus (I will discount the typo), Web of Science and Google Scholar. Never mind that journals are duplicated across the three, and that it would be ridiculous to average them out; we did it anyway, with complicated weights that vary for citations and publications, and vary again by domain. Who can argue with the government? So publish in a journal indexed in Scopus, Web of Science and Google Scholar, and one paper will get you three points!
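A minimal sketch of the triple counting being described, with made-up paper identifiers: summing per-database counts credits a paper indexed in all three databases three times, where a deduplicated union would count it once.

```python
# Toy illustration: the same paper indexed in all three databases is
# credited three times unless the union is deduplicated first.
scopus         = {"paper-A", "paper-B"}
web_of_science = {"paper-A"}
google_scholar = {"paper-A", "paper-C"}

naive = len(scopus) + len(web_of_science) + len(google_scholar)  # 5
dedup = len(scopus | web_of_science | google_scholar)            # 3
print(naive, dedup)  # paper-A alone accounts for 3 of the naive 5
```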
11. This has resulted in odd names like Goa University, North Maharashtra and Guwahati University scoring above 75% in research, when they simply do not publish enough to be among even the top 50 institutions. It gets much worse with B-Schools: beyond the top few, most have published papers in the single digits and yet ended up with scores in the double digits. Either NIRF must explain why it awarded such ridiculous scores, or have the courtesy to apologize to the nation!
12. Let us now move on to the next parameter: outcomes. Schools that place only 10% of their total strength, at average salaries of Rs. 20,000 to 25,000 per month, appear in the top 50 listings. But then it is difficult to blame NIRF; outcomes are notoriously hard to measure. So, like the drunkard searching for his lost needle under the streetlight rather than where he dropped it, NIRF decides: "We will rank based on what we can measure." Thus, for universities, the measure is the pass percentage (50%) and the pass percentage in public examinations (50%). So what if most private universities and half the public universities in the country would be out of the reckoning on this parameter?
13. And we have not even begun to talk about rating affiliating universities like DU, with colleges as diverse as St. Stephen's and ARSD. For Delhi University, will NIRF consider the constituent schools, the affiliated colleges, the autonomous colleges, or all of them?
14. Or take VTU or Anna University, each with some 300-500 affiliated engineering colleges. How will their outcomes be measured? Can we at least start a discussion, NIRF?
15. Now comes the best part: peer perception. This is the tool most ranking agencies use to push or pull a difficult customer. The database is opaque (the composition, level, geography and educational qualifications of the pollsters); nothing is available in the public domain, even for globally respected rankings like THE and QS. Which is why ARWU, whose methodology is completely objective, sometimes throws up very different names vis-à-vis the other two.
16. NIRF attempts something different: it uses peer perception, and for B-Schools it also provides data on public perception, but not for the others. So much for data transparency. More interestingly, 36 of the top 100 universities get a score of 0 in peer perception. Should they be in the top 100 listing at all?
17. And the scoring gets more complicated still. Universities with a 0 peer score get 0 marks on the parameter. But among B-Schools, 34 of the top 50 have a 0 peer score, yet all of them receive marks between 24 and 46. Is it that so many schools having no standing amongst their peers was too much for the rankers to stomach, so they ended up handing some score to every one of them?
18. Lastly, if quality improvement is NIRF's goal, why create this artificial distinction between teaching-plus-research schools (Type A) and teaching-only schools (Type B)? This is all the more laughable since all that differs between the two is the weightage allocated to research in the scoring. Whom are you kidding, and why?
This is a sample list of issues; as we dig further into the data, we may well find more. I do sympathize with NIRF. Most of the issues I have identified are ones any serious ranking organization, including Careers360, would face. But we make the effort to keep the list representative by going beyond mechanical processing of the data supplied. We identify rogues and remove them. We apologize when there is an error. We work closely with stakeholders. And we definitely do not go by perception. All that takes a bit of humility and the willingness to look beyond the IITs and IIMs. The country needs it. Will the real NIRF stand up and be counted?
And in the meantime, please withdraw the current list. It is a disgrace.
Stay tuned to university.careers360.com for more news and updates on University Rankings