Shared scientific instruments help address big challenges in research. The Hubble Space Telescope has enabled hundreds of astronomers to do studies that none of them could have done if each had also needed to build the instrument. Physicists benefit from shared particle accelerators that would be far too expensive for any single lab to build, using them to test their own ideas and advance our understanding of the universe. Shared scientific instruments make science more efficient because no single scientist needs to create and validate the instruments.
How social sciences can benefit from the shift to a digital world
The social sciences could benefit from such an instrument to sustain the push toward more randomized controlled experiments. Social science is reaping the benefits of our cultural shift to the digital world and the many ways people interact with online platforms. This shift will be explored at an upcoming conference at MIT that will bring together researchers across disciplines (e.g., economics, computer science, and psychology) who use digital platforms to run randomized controlled experiments. The conference materials say, “the ability to rapidly deploy micro-level randomized experiments at population scale is, in our view, one of the most significant innovations in modern social science.”
Google, Facebook, and other web-based companies run hundreds of experiments a day on all of their users to maximize things like revenue from ads and user experience. What if we harnessed that power for education? What would a shared scientific instrument for education look like that could run experiments to help the world maximize student learning?
ASSISTments: An example of a shared platform
I run one such platform, ASSISTments. Over 50,000 students use it in class and for homework that teachers assign. The assigned content comes from our library, which spans a wide range of topics including math, science, and foreign language, though the largest collection is middle and high school math. Teachers also create and share their own content. Thanks to funding from NSF, the Department of Education, and others, the whole system is free to students, teachers, and researchers.
ASSISTments blends ASSISTance with assessment. It gives students feedback as they go, while teachers get actionable, timely reports. I have used this tool to publish two dozen studies so far, and now, thanks to a new NSF grant, we have created ASSISTmentTestBed.org, which allows external researchers to propose better ways to help students learn with technology. Researchers receive anonymized data by email, along with preliminary automated analysis indicating whether there is an effect on their desired outcomes. A dozen researchers have submitted studies. All materials and data are made publicly available after giving the researcher time to publish the study.
The idea of archiving data in a repository is not new (e.g., ICPSR and PSLC DataShop); large data sets are readily available to researchers. But to my knowledge there is no other shared, open platform that lets researchers propose and carry out their own experiments and get their data into a repository, which is the real contribution of ASSISTments.
Shared instruments allow a type of collaboration that is otherwise hard to arrange. All good studies in education need a variety of contributors, including 1) measurement experts who care about measuring student learning with validated instruments, 2) researchers to design new ideas, 3) methodologists who understand how to design and analyze randomized controlled experiments, and 4) teachers who serve on these design teams to help come up with good ideas worth testing. Our platform brings the first and third to the table: quality instruments for measuring student learning embedded in a platform that supports experiments. What we need now is more ideas from researchers.
In my view, we as education researchers need to think more holistically about supporting shared education research platforms. Astronomers and physicists know they need to lobby the government to fund large-scale instruments, but education researchers don't see the forest for the trees, failing to ask for larger tools they could cooperatively share. Instead, education researchers remain Balkanized, competing for scarce research funds, while policymakers consequently wonder whether they are serious about doing science. I believe nonprofit, university-based platforms could help more researchers do better open science more efficiently, dramatically increasing our understanding of human learning while delivering better educational outcomes.
The most similar attempt has been the excellent PERTS Mindset Challenge, which supported researchers in proposing studies to be run in Khan Academy, Kaplan University, and Florida Virtual Schools to figure out how best to change students' mindsets. Carol Dweck and others have done a great deal of work on mindset, but we want a shared scientific instrument where researchers can pursue whatever research idea they have to improve student learning. PERTS focused on mindset, but there are many other ideas that can help students achieve.
The Brown Center Chalkboard launched in January 2013 as a weekly series of new analyses of policy, research, and practice relevant to U.S. education.
In July 2015, the Chalkboard was re-launched as a Brookings blog in order to offer more frequent, timely, and diverse content. Contributors to both the original paper series and current blog are committed to bringing evidence to bear on the debates around education policy in America.