Brown Center Chalkboard

# An unexpectedly positive result from arts-focused field trips

Most education research focuses on math and reading outcomes or educational attainment because those are the measures that states collect and make readily available to researchers. Less is known about how students are doing in other subjects and whether their progress in those areas has important benefits for them and society. So a number of current and former students and I have been examining what students get out of arts experiences during field trips. Both arts education and out-of-school field trips are long-standing educational practices, but we have relatively little evidence from randomized experiments about what those practices do for students.

## Jay P. Greene

### Distinguished Professor of Education Reform - University of Arkansas

In a new experiment we are conducting on the effects of arts-focused field trips, we have a positive result that we did not expect at all. The study is funded by the National Endowment for the Arts and examines the long-term effects of students receiving multiple field trips to the Woodruff Arts Center in Atlanta. The Woodruff Arts Center houses the High Museum of Art, the Alliance Theatre, and the Atlanta Symphony Orchestra, all on one campus. We randomly assigned 4th and 5th grade school groups either to receive three field trips per year, one to each of Woodruff's arts organizations, or to a control condition in which students received a single field trip. We administered surveys to collect a variety of outcomes from students at the beginning and end of the school year, and we also collected administrative data from the participating school district. We are currently examining the results after a single year, but some students will get a second treatment of three field trips, and we will continue tracking students over time.

The surprising result is that students who received multiple field trips experienced significantly greater gains on their standardized test scores after the first year than control students did. Combining math and ELA tests, we see a gain of 12.4 percent of a standard deviation (p < 0.01), which translates into roughly 87 additional days of learning. Breaking out the results shows gains of similar magnitude for both math and ELA, but the math result is significant only at p < 0.10, while the ELA result is statistically significant on its own. The treatment and control groups do not differ in their baseline test results and otherwise appear similar, so these changes seem to be the result of the treatment.
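The "days of learning" figure is a conventional translation of an effect size, not a separate measurement. The article does not state the conversion it used, but a common convention (assumed here, not taken from the study) treats a typical school year as 180 days in which students gain roughly 0.25 standard deviations, which yields a number close to the reported 87 days:

```python
# Convert an effect size (in standard deviations) into "days of learning".
# The annual-gain and school-year values below are common conventions,
# not parameters reported in the study.

def days_of_learning(effect_size_sd, annual_gain_sd=0.25, school_year_days=180):
    """Scale an effect size by an assumed typical annual gain in SD units."""
    return effect_size_sd / annual_gain_sd * school_year_days

print(round(days_of_learning(0.124)))  # about 89 under these assumptions
```

A slightly larger assumed annual gain (about 0.256 SD per year) reproduces the 87 days reported above; the broader point is that "days of learning" depends on these conversion assumptions.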

The reason these results are so surprising is that previous research had suggested that arts instruction tended not to “transfer” into gains in other subjects. Most famously, Ellen Winner and Lois Hetland conducted a systematic review of the literature and concluded, “We can see that studying the arts, and studying an academic curriculum in which the arts are somehow integrated, does not result in higher verbal and math achievement, at least as measured by test scores, grades, or winning academic awards.” Hetland and Winner make a convincing case that there is little to no rigorous evidence that art improves performance in math or reading, just as there is little evidence that math or reading improve performance in art. Each subject teaches its own particular content and skills, and there is relatively little transfer between them.

Hetland and Winner were so convincing that in our past research on the effects of field trips to visit art museums or see live theater, we didn’t even bother to examine the possible relationship between these activities and math or ELA test scores. Instead, we focused on how arts experiences affected student values and interest in future consumption of the arts, and found positive results that were consistent with previous findings and theory. Collecting test information would have been logistically complicated in those past studies.

In our new study, however, there was no extra burden associated with examining test scores because we were already collecting other administrative data. So when we conducted the analysis on the effects of treatment on test scores, we expected to find no statistically significant effects, just like almost all previous rigorous research.

Finding a positive effect of multiple arts-focused field trips on math and ELA test scores does not mean that Hetland and Winner were wrong. We still do not believe that arts instruction and experiences have a direct effect on math or ELA ability. We think this because the bulk of prior research tells us so, and because it is simply implausible that two extra field trips to an arts organization conveyed a significant amount of math and ELA knowledge.

Our best guess is that test scores may have risen because the extra arts activities increased student interest and engagement in school. Looking at two proxy measures of student conscientiousness (how often students failed to respond to survey items and how often they responded carelessly), we find that treatment students improved significantly on both, which may indicate greater school engagement. If students are trying harder to be responsive on our survey as a result of being exposed to interesting arts activities, perhaps they are also trying harder in school more generally. Maybe arts-focused field trips do not teach math or reading, but they do make students more interested in the school that does teach math and reading.

But this is just a guess. We don’t really understand why test scores went up. It’s always possible that this positive result was just a fluke, and there is really no relationship between field trips and test scores, directly or indirectly.

## The difficulties of unexpected results in academia

The odd thing about trying to write a paper presenting these results at conferences and submitting it to a journal is that there is strong pressure to pretend that we expected our findings all along. Discussants and reviewers generally don't want to hear that you found something you didn't expect and don't really know why. They want to hear a clean story about how your results make sense and follow from your theory and literature review. In short, social science favors the false appearance of confidence.

This is the inverse of the file-drawer problem, in which researchers bury studies with null findings so that the research literature is saturated with significant results that overstate confirmation of their hypotheses. In this case, we have a positive result but no real theory to explain it. Journal editors and reviewers hate unexplained results as much as they hate null results, but it is easy enough for researchers to avoid this problem: they simply adopt a theory post hoc and claim they expected the result all along.

How many times have you been at a paper presentation where the audience speculates about what “explains” the results? Of course, when you generate explanations after you know the results, all you are doing is rationalizing a result that is not actually explained. It’s fine to generate these hypotheses after you know results so that you can test them on new data, but you can’t really say that you have accounted for why you got the result you did. I wish more social scientists were comfortable admitting that they don’t really understand why they found what they did. I wish more journals were willing to publish these speculative results so that people did not feel pressured to pretend that they knew the explanation all along. Social science works best when we are candid about what we don’t know. And in the case of arts-focused field trips and test scores, we got really interesting results but we don’t truly understand why we got them.

Fortunately, this year we are adding students to the study from six new schools as well as a second cohort of students from our four original schools. We are also following the original cohort into a second year. Additional subjects and more time might help us test speculative explanations for our current results. In the meantime, we have a positive but mysterious result on our hands.

The Brown Center Chalkboard launched in January 2013 as a weekly series of new analyses of policy, research, and practice relevant to U.S. education.

In July 2015, the Chalkboard was re-launched as a Brookings blog in order to offer more frequent, timely, and diverse content. Contributors to both the original paper series and current blog are committed to bringing evidence to bear on the debates around education policy in America.
