Executive Summary
Education research is a vast, multi-disciplinary field. In trying to understand it or make judgments about importance, influence, or where the action is, it can be helpful to see the big picture and not be swayed by where we happen to sit in the field. A map of education research derived from citation data can help us see the big picture. Responses to one of my recent Evidence Speaks postings, and that posting's relation to the annual Education Week Edu-Scholar Influence Rankings, serve as an example of how such a map might help.
Introduction
Education research is a large field involving, among others, psychologists, economists, sociologists, anthropologists, and political scientists, working on diverse problems and issues. When we make judgments about the field, we are likely to do so from the vantage point of our own research community. A map of education research can help us appreciate other perspectives, allowing us to moderate and refine our judgments.
Responses to one of my own Evidence Speaks postings and the recent Education Week RHSU Edu-Scholar Influence Rankings provide examples of how a map showing the big picture can broaden our understanding.[i]
In my earlier posting, based on a bibliometric analysis of the education literature, I reported that the three largest research communities within the field were Instructional Design (called Cognitive Load Theory in that posting), Motivation, and Science Education. Among the most highly cited, influential, and currently active scholars in these communities were John Sweller, Richard Mayer, Albert Bandura, Herbert Marsh, and Michelene Chi. This made perfect sense to me because for the past 35 years, I have been viewing education research from my own perspective grounded in psychology and cognitive science.
Some readers pointed out that many education researchers might never have heard of cognitive load theory or how mental representations affect solving physics problems. This is a valid criticism.
My claims looked particularly dubious when compared to the RHSU Edu-Scholar Public Influence Rankings. Each year Rick Hess ranks the 200 scholars (hereafter, the public influence scholars) who have most influenced the national discourse on education during the previous year. Instructional design and science education do not leap out from the names on that list. Heading the list for 2016 were Diane Ravitch, Linda Darling-Hammond, Howard Gardner, and Gary Orfield. None of the scholars I mentioned even appear on it.
A map of education research can help us understand this divergence of views. It also generates some interesting questions about the Influence Rankings. The rankings tend to over-represent scholars working in some research communities and under-represent those working in others. Interestingly, they under-represent scholars working on instruction and subject-matter learning. Why might this be the case?
Analysis
A document co-citation map represents the semantic and intellectual structure of a research field by depicting co-citation relations among cited documents. The map used in this analysis is based on 2,327 articles published during 2014 in the top quartile (by impact factor) of Web of Science Education and Education Research journals (52 journals). These articles cite 90,000 other articles, i.e., there are 90,000 nodes in the co-citation map. Note that these cited articles are not necessarily published in education research journals; they appear in journals from numerous disciplines besides education: psychology, sociology, anthropology, policy studies, economics, linguistics, and neuroscience. The map contains 2,775 citations authored by 195 of the 200 public influence authors. A community detection algorithm applied to the co-citation network identifies over 110 research communities within it.[ii] The largest communities contain around 5–6 percent of the citations.
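For readers curious about the mechanics, the sketch below shows, in simplified form, how a co-citation network can be built from reference lists and partitioned into communities. The reference lists shown are hypothetical stand-ins, and the Louvain method (via the python-louvain package) is used here purely for illustration; the actual analysis used the local moving algorithm cited in note [ii], applied to the full 90,000-node network.

```python
# Minimal sketch: build a document co-citation network and detect communities.
# The data and the Louvain method are illustrative assumptions, not the
# pipeline used in the analysis reported here.
from itertools import combinations

import networkx as nx
import community as community_louvain  # the python-louvain package

def build_cocitation_graph(citing_articles):
    """citing_articles: an iterable of reference lists, one per citing article."""
    g = nx.Graph()
    for refs in citing_articles:
        # Any two references cited together in the same article are co-cited once.
        for a, b in combinations(sorted(set(refs)), 2):
            if g.has_edge(a, b):
                g[a][b]["weight"] += 1
            else:
                g.add_edge(a, b, weight=1)
    return g

# Hypothetical reference lists standing in for the 2,327 citing articles.
articles = [
    ["Sweller 1988", "Mayer 2001", "Chi 1981"],
    ["Mayer 2001", "Chi 1981"],
    ["Orfield 2001", "Raudenbush 2002"],
]

g = build_cocitation_graph(articles)
partition = community_louvain.best_partition(g, weight="weight")
print(partition)  # maps each cited document to a community id
```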
One can compare the distribution of citations to the work of public influence authors across the research communities to the distribution of all citations across the communities. This reveals communities within which public influence authors are over, under, or comparably represented compared to the field as a whole. One research community contains over 16 percent of the 2,775 public influence author citations.
Note that the citation data is used to map the field, to identify research communities, and to determine which scholars appear in which community. The analysis does not assume that highly cited researchers are necessarily influential outside of the academic literature. The number of times a scholar is cited or co-cited is not important in generating the map. No citation threshold is employed. We want to see where scholars appear on the map, even those cited only once. Identifying scholars who influence public discourse is a different task than identifying highly cited or co-cited scholars within the education research community; although, one might hope, or expect, that many of the public influence scholars are also among the academically prominent.
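In schematic terms, the comparison described above can be computed as follows. This is a minimal sketch; the function and the data structures are hypothetical stand-ins for the actual citation records.

```python
# Sketch of the over/under-representation comparison, assuming every cited
# document already carries a community label and we know which cited documents
# are authored by public influence scholars.
from collections import Counter

def representation_table(community_of, all_cited_docs, influence_cited_docs):
    """community_of: dict mapping each cited document to its community label.
    all_cited_docs / influence_cited_docs: cited documents overall and the
    subset authored by public influence scholars."""
    all_counts = Counter(community_of[d] for d in all_cited_docs)
    inf_counts = Counter(community_of[d] for d in influence_cited_docs)
    n_all, n_inf = sum(all_counts.values()), sum(inf_counts.values())
    rows = []
    for comm, count in all_counts.most_common():
        pct_all = 100.0 * count / n_all
        pct_inf = 100.0 * inf_counts.get(comm, 0) / n_inf
        # A ratio above 1 means public influence authors are over-represented
        # in that community; below 1, under-represented.
        rows.append((comm, pct_all, pct_inf, pct_inf / pct_all))
    return rows
```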
Table 1 shows the results. Community labels are derived from inspection of the cited references (including book titles and reports) and the journals in which they were published. The labels are intended to capture the main theme of each community. The research foci of the communities are evident from the labels, with possibly two exceptions. Instructional Design focuses on research that attempts to design instructional interventions sensitive to the limitations of human working memory (cognitive load). Politics and Education contains work on the political implications and political commitments of education and education policies, e.g., the Common Core or state involvement in early childcare and parenting.
Table 1.
Top 10 Education Research Communities

| Community | % of All Citations | % of Public Influence Author Citations |
|---|---|---|
| Instructional Design | 6.2 | 2.0 |
| Early Childhood | 6.1 | 7.1 |
| Science Education | 6.0 | 1.4 |
| Reading | 5.8 | 1.7 |
| Teaching, Teacher Education | 4.8 | 9.6 |
| School Organization, Management | 4.7 | 13.6 |
| Motivation | 3.7 | 2.6 |
| Second Language Education | 3.6 | 0.1 |
| Learning in Social Contexts | 3.5 | 1.7 |
| Politics and Education | 3.5 | 6.0 |

Top 10 Public Influence Scholar Communities

| Community | % of Public Influence Author Citations | % of All Citations |
|---|---|---|
| School Effectiveness | 16.5 | 3.0 |
| School Organization, Management | 13.6 | 4.7 |
| Teaching, Teacher Education | 9.6 | 4.8 |
| Race and Ethnicity | 7.6 | 1.6 |
| School Desegregation | 7.5 | 1.9 |
| Early Childhood | 7.1 | 6.1 |
| Politics and Education | 6.0 | 3.5 |
| Educational Technology | 3.0 | 3.9 |
| Motivation | 2.6 | 3.7 |
| Instructional Design | 2.0 | 6.2 |
The top panel lists the ten largest research communities in the education research literature, along with the percentage of all citations and the percentage of public influence author citations that fall within each. Seven of the 10 communities contain research directly relevant to student learning and motivation, particularly research on learning in specific subject matters (science, reading, and second language). Three communities (Teaching and Teacher Education; School Organization and Management; Politics and Education) address issues and research questions about education as an institution or profession.
For the Early Childhood and Motivation communities, the percentage of all citations and the percentage of public influence author citations are comparable. For Instructional Design and most subject matter communities, however, public influence authors are markedly under-represented. Instructional Design, the single largest community, contains 6.2 percent of all citations but only 2 percent of the public influence author citations. Public influence scholars are also under-represented in Science Education, Reading, and Second Language Education.
The table’s lower panel shows the ten largest public influence scholar communities. School Effectiveness is the largest such community. Race and Ethnicity is the fourth largest, followed by School Desegregation. In these top five communities, public influence scholars are highly over-represented, for School Effectiveness by a factor of five.
Where are the top four public influence scholars? Diane Ravitch’s work appears within Politics and Education, although her work is not highly cited in the article set. Ravitch probably has a larger presence in the education and general media than in the strictly academic literature. Darling-Hammond is among the most cited authors in the article set. Her work appears in several communities, although her primary community is Teaching and Teacher Education. Howard Gardner is moderately cited within the article set and his work appears primarily within a research community on creativity. The creativity community contains around 2 percent of the 90,000 citations. Gary Orfield is the highest cited author within School Desegregation and the author of one of its most central papers.
Implications and Speculations
These results explain the apparent discrepancies between my earlier findings and the somewhat different picture implicit in the Influence Rankings. Instructional Design and Science Education, research communities dominated by psychology, again emerge as two of the largest research communities. However, in the big picture, the citations contained in these communities represent only 12 percent of all the citations found in the literature. My critics were right. It may well be the case that scholars working in the communities that contain the other 88 percent of the citations are unfamiliar with the work and with the scholars that appear in Instructional Design and Science Education. My conclusions were obvious and made perfect sense to me because for the last 35 years I have been observing education research from a vantage point of psychology and cognitive science. Likewise, one could not readily infer the prominence of Instructional Design and Science Education within education research from an examination of the public influence scholar rankings. Those research communities tend to be under-represented in the rankings of public influence.
The map and the distribution of citations among research communities raise an interesting question: Why are some research communities over- and under-represented among the public influence scholars compared to the education research literature as a whole? Of course, there is no obvious reason why a list of 200 education scholars with high public influence should be representative of the field of education as a whole, but there is no obvious reason why they should not be, either.
One might speculate that the work of some research communities is perceived as having broader policy implications or more general interest than that of others. School Desegregation, Race and Ethnicity, and Politics and Education are likely candidates. School Effectiveness research might also fall into this category. If this is the case, it is likely that the education and general media not only pick up on this work, but also amplify its visibility through their coverage. It might also be the case that scholars working on socially and politically significant issues, such as School Desegregation, feel an obligation, or have an inclination, to engage in public discourse. Scholars working on how misconceptions affect science learning or the importance of phonological awareness in learning to read, while eager to share their findings, might not feel the same obligation or inclination. If so, scholars from what are perceived to be the more academic research communities will either have to accept their secondary status in the public arena or consider changing their collective mindset and making a greater effort at public engagement.
One might also speculate that scholars working in Instructional Design or subject matter learning simply have less to contribute to the public dialog than do scholars from what are perceived to be the more socially and politically relevant research communities. I think this is unlikely, given the emphasis on instruction and subject matter learning that does appear in the policy literature. For example, in a recent Evidence Speaks posting, Brian Jacob (a public influence scholar whose work most often appears within the School Effectiveness community) laid out several challenges that must be met to improve computer-assisted instruction and e-learning environments. Instructional Design is the source of the basic and applied science needed to address these challenges. Also, there is a national emphasis on improving STEM education. Among the most cited and influential publications within the Science Education community are the National Research Council reports of 1996, 2000, and 2011.[iii] These reports are intended to influence policy. It is hard to believe that the authors of and contributors to these reports have little to contribute to the national dialogue. One could make a similar claim for literacy and second-language education, perennial policy issues.
A third possibility is that there are features in the public influence scholar selection process that affect the balance among research communities. Each year the top 150 ranked scholars are retained from the previous year; 75 percent of scholars carry over from one year to the next. Any initial imbalance in choosing public influence scholars is likely to persist. A selection committee, composed of other ranked scholars, nominates new candidates and votes on them. Those nominees voted into the top 50 are added to the retained 150 and ranked using criteria such as h-index, book publications, and media presence. All scholars on the list were selected at one time or another.
Notice that scholars are selected for their public influence by other public influence scholars and then ranked using the criteria. The ranking criteria appear to play no explicit role in the selection process.
This year’s selection committee members come overwhelmingly from just three research communities: School Effectiveness, School Organization and Management, and Teaching and Teacher Education.[iv] Nearly 50 percent of the selectors’ citations appear within these communities. (For the record, contributors to Evidence Speaks congregate in School Effectiveness (34.5 percent of their citations) and Early Childhood (24.3 percent).) If selectors tend to nominate new candidates from the vantage point of their own research community, even with the best intentions, the selection process could lead to a list of public influence scholars that under-represents some research communities and over-represents others, and the under-represented communities could well be ones that are actively engaged in our national discourse.
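To see how such a dynamic might play out, consider a toy simulation, emphatically not the actual RHSU procedure, in which 75 percent of a 200-person list carries over each year and selectors tend to nominate candidates from their own communities. The community names, the initial shares, and the nomination bias are assumed values, chosen only to illustrate that an initial imbalance persists under carryover.

```python
# Toy simulation of list carryover plus community-biased nominations.
# All parameters are assumptions for illustration; this is not the RHSU process.
import random

COMMUNITIES = ["School Effectiveness", "Instructional Design",
               "Science Education", "Other"]

def simulate(years=10, list_size=200, carryover=150, own_community_bias=0.8,
             initial_shares=(0.50, 0.05, 0.05, 0.40), seed=0):
    rng = random.Random(seed)
    # Each roster entry records only the scholar's research community.
    roster = rng.choices(COMMUNITIES, weights=initial_shares, k=list_size)
    for _ in range(years):
        kept = roster[:carryover]          # list order stands in for rank
        new = []
        while len(kept) + len(new) < list_size:
            nominator = rng.choice(kept)   # selectors come from the current list
            if rng.random() < own_community_bias:
                new.append(nominator)      # nominee from the nominator's community
            else:
                new.append(rng.choice(COMMUNITIES))
        roster = kept + new
    return {c: roster.count(c) / list_size for c in COMMUNITIES}

# After a decade, communities over-represented at the start remain over-represented.
print(simulate())
```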
The ranking criteria cannot correct flaws in the selection procedure. The criteria serve to rank-order scholars who have been pre-selected as being influential. Numerous scholars have very high scores on those criteria, but may or may not be considered influential. Table 2 lists the ten highest cited authors in the co-citation map. All of them have an h-index greater than 50, which would give them the maximum Google Scholar score of 50 in the rankings. This would place all these authors within the top 100 based on Google Scholar h-index alone. For 2015, Richard Mayer would receive a score of around 99 based on the criteria, putting him among the top 15 on the list were he ever selected to appear on it. The way to game the system is to get selected for inclusion in the first place, not to manipulate your score on the metrics.[v]
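To make the capping concrete, here is a one-function sketch of the Google Scholar component as described above: the component score equals the scholar's h-index, truncated at 50. The other rubric elements (book points, press and web mentions, and so on) are left out, and the h-index values shown are hypothetical.

```python
def google_scholar_component(h_index: int, cap: int = 50) -> int:
    # The component score is the h-index, truncated at the cap of 50.
    return min(h_index, cap)

# Any scholar with an h-index above 50 receives the maximum component score.
print(google_scholar_component(72))  # -> 50 (hypothetical h-index)
print(google_scholar_component(38))  # -> 38
```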
Table 2.
Highest Cited American Scholars

| Scholar | Primary Community | Citations |
|---|---|---|
| R.E. Mayer | Instructional Design | 259 |
| A. Bandura | School Organization | 216 |
| M.H.T. Chi | Instructional Design | 155 |
| S.W. Raudenbush | School Effectiveness | 127 |
| P.R. Pintrich | Motivation | 126 |
| L. Darling-Hammond | Teaching | 122 |
| R. Deci | Motivation | 100 |
| C.A. Perfetti | Reading | 93 |
| R. Pianta | Early Childhood | 84 |
| J. Lave | Teaching and Teacher Education | 83 |
Influential authors appear in boldface.
The presentation of the RHSU Edu-Scholar Influence Rankings scoring rubric acknowledges its weaknesses, grants that the rankings are far from perfect, and welcomes advice. The map of education research allows us to examine how the various research communities are represented in the rankings. This raises some questions about the perceived salience of some research communities to policy and about how effective research communities are at public engagement. As for the rankings themselves, an understanding of the research community structure suggests that the scholar selection process merits some attention. Maybe the map can help with that too.
[i] http://blogs.edweek.org/edweek/rick_hess_straight_up/2016/01/the_2016_rhsu_edu-scholar_public_influence_rankings.html
[ii] The smart local moving algorithm applied to the document co-citation network is used to identify the research communities.
[iii] See http://www.nap.edu/catalog/4962/national-science-education-standards, http://www.nap.edu/catalog/9596/inquiry-and-the-national-science-education-standards-a-guide-for, and http://www.nap.edu/catalog/13165/a-framework-for-k-12-science-education-practices-crosscutting-concepts.
[iv] http://blogs.edweek.org/edweek/rick_hess_straight_up/2016/01/the_rhsu_edu-scholar_public_influence_scoring_rubric_1.html
[v] The median scores on the influence metrics are: Google Scholar ≈ 30, Book points ≈ 4, Highest Amazon Ranking = 0, Education Press Mentions = 1, Web Mentions = 5–6, Newspaper Mentions = 3, Congressional Record Mentions = 0, Klout Score = 0.