Rankings of Medical Schools: Do they tell us anything?
Often it appears that Americans are obsessed with "rankings". I am not talking about rankings of which is the best car, TV, stereo, video game, or any of the other consumer products we buy, which are evaluated and often ranked by organizations such as Consumers Union based upon explicitly stated criteria. I am talking about the more subtle and subjective rankings of organizations and providers of services, particularly universities. More specifically, I will address the rankings of schools of medicine, using as examples those in primary care and family medicine.
The US News and World Report rankings of colleges and graduate schools in a wide variety of areas, including medicine, are the best known and most "respected" (in the sense of "paid attention to"[1]) of the national rankers. The question is, what do the rankings mean? How are they derived? What do they reflect about the "product" being evaluated? Are they using criteria that accurately assess what I am looking for in a school? Are these down-to-earth, utilitarian, "Consumer Reports"-type evaluations, or are they more James Bond-like brand-name dropping[2]? Of course, if what I am looking for in a school is indeed cachet – its status, fame, and brand-name recognition – then there is no difference. If, however, I am looking for outcomes – the success of that school in educating people in the area in which I wish to be educated – then it is important to look at the criteria being used and the degree to which they accurately predict those outcomes.
In general, most educators do not feel that US News rankings accurately reflect what they purport to rank – the quality of a school in a particular area. These criticisms are probably more vocal from those who believe they are ranked lower than they should be, but even those ranked highly will usually acknowledge, sotto voce, that the rankings are not completely accurate – although they are pleased to be ranked highly. Recently, probably in response to ongoing criticism from the higher education community, US News has begun to publish the criteria it uses for ranking, the weight it gives to each criterion, and the method it uses to gather the information. This helps us assess the validity of those criteria. (Validity is a concept used in research to evaluate the quality of a measurement tool – how well does it actually measure what I am using it to measure?)[3]
Medical schools are comprehensively ranked by US News in Research and in Primary Care. For Research, the criteria include "peer assessment" (by other deans, chairs, and residency directors), selectivity (how high its students' pre-admission grades and Medical College Admission Test scores were, and the percent of applicants accepted – low is "good"), faculty:student ratio, and the number and dollar amount of research grants. For Primary Care, peer assessment and selectivity are again considered, but rather than measuring research grants, US News looks at the total number and percentage of graduates entering primary care residency training. In addition, US News reports top-ranked schools in a variety of program areas (AIDS, Family Medicine, Geriatrics, Internal Medicine, Pediatrics, Rural Health, Women's Health); in these areas the rankings are done entirely by peer assessment.
The peer assessment counts for about 40% of the weight of the rankings for primary care (and 100% for the program areas listed above). Deans of medical schools, department chairs in the "primary care" specialties, and directors of residencies in those specialties are asked to list the top schools, in their opinion; these responses are then cumulated and weighted. Selectivity accounts for about 15% and the faculty:student ratio for another 15%, both measured the same way as for Research. The final 30% is the schools' self-reported percent of graduating students who enter the primary care specialties, defined by US News as family medicine, general internal medicine, and general pediatrics. Let us deconstruct those three sets of criteria.
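Before taking each criterion in turn, it may help to see how such a weighted composite works in practice. The sketch below is illustrative only: the weights are the ones reported above, but the 0-to-1 normalization of each criterion and all of the names are my own assumptions, not US News's actual method.

```python
# Illustrative sketch of a weighted composite "primary care" score.
# The weights are those reported above; the 0-to-1 normalization of
# each criterion, and all names here, are assumptions for illustration
# only, not US News's actual method.

WEIGHTS = {
    "peer_assessment": 0.40,       # survey of deans, chairs, residency directors
    "selectivity": 0.15,           # grades, MCAT scores, acceptance rate
    "faculty_student_ratio": 0.15,
    "primary_care_entry": 0.30,    # self-reported % entering FM/GIM/general peds
}

def composite_score(measures):
    """Combine criterion scores (each assumed normalized to 0..1)
    into a single weighted score between 0 and 1."""
    return sum(WEIGHTS[name] * value for name, value in measures.items())

# A hypothetical school: strong reputation and selectivity,
# weak actual primary care output.
school = {
    "peer_assessment": 0.90,
    "selectivity": 0.95,
    "faculty_student_ratio": 0.80,
    "primary_care_entry": 0.20,
}

print(f"{composite_score(school):.2f}")  # 0.68: a high score despite low output
```

Note that with these weights a school can score well overall even when the one outcome criterion – actual primary care entry – is low; that is precisely the disconnect the data below will show.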
Percent of students actually entering the primary care specialties might seem to be the most objective, outcome-based criterion, and thus the most important. However, there are some problems with the data. What, for example, is the definition of entering a "general internal medicine" residency? Virtually all schools count everyone entering an internal medicine residency because, after all, the first three years – the residency the student matched into – are indeed general medicine. The problem, of course, is that after completing that residency a percentage of graduates will enter medicine subspecialty training (to become cardiologists, gastroenterologists, endocrinologists, etc.) and not practice primary care. And, as detailed in previous entries ("A Quality Health System Needs More Primary Care Physicians", December 11, 2008; "Ten Biggest Myths Regarding Primary Care in the Future" by Dr. Robert Bowman, January 15, 2009; "More Primary Care Doctors or Just More Doctors?", April 3, 2009; and others), in recent years the percent entering subspecialty fellowships on completing their residencies has increased so much that the number of students entering internal medicine residencies who actually become primary care/general internists is becoming vanishingly small.[4] [5] So counting those entering internal medicine residencies dramatically overstates the actual production of primary care doctors. But at least everyone does it.
Arguably, the most sensitive indicator is entry into family medicine, because virtually all family medicine residents become primary care doctors. When the number of students entering family medicine is up, it means that interest in primary care is up, and it is likely that the percent of students entering internal medicine who will become general internists is also up. When, as now, the number entering family medicine is down, so is the number of internal medicine residents entering general internal medicine.[4] [5]
Peer assessment may be useful, but it also has flaws. People's memories are dated (they may remember that a place was good and assume it still is); they may assume that a place that is good at many things is good at everything (Harvard, for example, gets votes for great family medicine even though there is no family medicine at Harvard!); and the ratings (especially from deans and chairs) may reflect the prominence of the faculty in primary care rather than the school's success in producing primary care physicians. This is not to minimize the latter; "best" primary care school does not simply mean "most students entering primary care", and the scholarship and prominence of the faculty on the national and international stage count for something. Finally, because the chairs and residency directors surveyed come from all three specialties, the degree to which one specialty is particularly strong or weak (or perceived as such) can color the assessment.
Selectivity is an ironic criterion. The simple fact is that the more selective a school is, the lower its primary care production. This is explained in many previous posts; in brief, students from medical families in upper-class suburbs, who attended great schools and thus had the best chance of earning the highest grades, are the least likely to enter primary care, while those from rural, inner-city, minority, and lower-income backgrounds are more likely to do so. A high faculty:student ratio sounds good, but it probably doesn't matter to students unless those faculty are teaching. In fact, schools with higher faculty:student ratios don't usually have more teachers; the additional faculty are either doing research (good for the Research criterion, less obviously so for primary care) or providing clinical care in a variety of settings that have little or nothing to do with educating students.
So what is the correlation between high US News primary care rankings and entry of students into primary care? I have data only for family medicine but, since, as noted above, it is the most sensitive indicator of primary care, it is worth using. Here it is:
Of the US News "Top 50" schools in Primary Care:
- Only 10 were among the top 15 in either percent or total number of students entering family medicine.
- More than half (26) of these "Top 50" primary care schools were below the national average of 8.2% of students entering family medicine. Thirteen had 5 or fewer students entering family medicine, and 7 had 2, 1, or 0!
Conversely, only 6 of the top 15 schools ranked by percent of students entering family medicine, and only 9 of the top 16 (four-way tie for 13th) ranked by number of students entering FM, made US News' "Top 50" for primary care.
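For readers who want to check such overlaps themselves, here is a minimal sketch of the set arithmetic involved. All school names and list sizes are hypothetical placeholders, not the actual US News or residency match lists.

```python
# Sketch of the overlap check used above: how many schools honored by the
# ranking also lead in actual family medicine (FM) entry? All school names
# and list sizes here are hypothetical placeholders, not the real lists.

us_news_top50 = {"School A", "School B", "School C", "School D"}  # ...50 in reality
top15_fm_percent = {"School C", "School E", "School F"}           # ...15 in reality
top16_fm_number = {"School B", "School C", "School G"}            # ...16 (ties) in reality

# Ranked schools that are also top FM producers by either measure:
overlap = us_news_top50 & (top15_fm_percent | top16_fm_number)
print(len(overlap), sorted(overlap))  # 2 ['School B', 'School C']

# The converse: top FM producers the ranking leaves out entirely.
missed = (top15_fm_percent | top16_fm_number) - us_news_top50
print(len(missed), sorted(missed))    # 3 ['School E', 'School F', 'School G']
```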
What about US News' "Top 10" for Family Medicine (remember, these are ranked only by "peer assessment")?
Only 3 of these medical schools were in the top 15 for percent of students entering FM, and 3 were in the top 15 for total number of students entering FM residencies. Two schools were on both lists, so a total of 4 of US News' "Top 10" family medicine schools were in the top 15 in either category. And of these 4, the highest rank for percent of students was #11, and the highest rank for total number was #4.
Among that group of “Top 10 Family Medicine” schools, 3 (30%) were below the national average for percent of students entering FM, and 3 of them were quite low: 7 students (4.1%); 6 students (4%); and 2 students (2.2%)!
Again, conversely, only 3 of the top 15 schools by number of students entering family medicine residencies, and only 4 of the top 15 by percent, made US News' Family Medicine "Top 10".
So how valuable are these rankings? The answer is: it depends. If you want high status, they are “it”. If you want a school that is actually successful at producing graduates who enter primary care, don’t count on them.
[1] Also as in “you’re not respecting me – but you will now that I’m pointing this gun at you!”
[2] I presume there is some newer name-brand dropper, but Ian Fleming was the master at one time.
[3] Not always so obvious; I could ask people if they smoke, but the answers might have limited validity if people don’t tell the truth. A blood test for a nicotine breakdown product might, e.g., be a more valid test.
[4] Garibaldi RA, Popkave C, Bylsma W. Career plans for trainees in internal medicine residency programs. Acad Med. 2005;80(5):507-12.
[5] Hauer KE, Durning SJ, Kernan WN, et al. Factors associated with medical students' career choices regarding internal medicine. JAMA. 2008;300(10):1154-64.