Introduction
The COVID-19 pandemic upended standardized testing for college admissions. Most colleges went “test optional” due to the mass closures of testing centers during the pandemic. Recently, some colleges have reinstated testing requirements, and others continue to consider test scores for admissions, placement, and even merit-based aid. While the landscape shifted after COVID-19, standardized tests like the SAT and ACT remain highly relevant to prospective college students.
Although students, researchers, and policymakers may understand the continued importance of standardized testing, what’s less certain is the value of “test prep.” Debates over the fairness of the SAT and ACT in the college admissions process often center on concerns that the tests can be “gamed” with test preparation, undermining the value of the scores for predicting success in college and favoring students who have access to high-quality test preparation. Unfortunately, reliable research on how students prepare for the exams, how that preparation has changed over time, and whether it works is limited, and much of it is outdated.
In this report, we review the (somewhat limited) evidence on who uses test preparation (or “test prep”) and how effective it is at improving scores. We begin by showing how the use of test prep varies by race and socioeconomic status using nationally representative data on the high school class of 2013. Not surprisingly, high school juniors from low- and middle-class families are less likely than their economically better-off peers to have engaged in specific test prep activities or to have taken the SAT or ACT. Disaggregating by race, we find that Asian American and Pacific Islander (AAPI) students are by far the most likely to have prepared for and taken the SAT or ACT, followed by Black, white, and Hispanic students.
Overall, research suggests that students benefit from preparing for college admissions exams and that individual tutoring or coaching is likely to be more effective than other approaches such as classes, online programs, or self-study. However, the effects of test prep are generally modest, especially compared to the claims of some test prep programs. Notably, there are no rigorous studies showing the effectiveness of the high-priced, one-on-one tutoring or “guaranteed score” packages offered by commercial test prep companies that are the focus of media attention. (The share of students who use this kind of prep is likely small considering the high price.) While the expansion of free, online programs has the potential to increase access to test prep, engagement with these resources is often low, and rigorous studies of this type of prep are likewise lacking.
While the paucity of high-quality studies makes it difficult to draw strong conclusions, the evidence suggests that test prep can affect scores enough to matter for admissions. But test prep per se can only go so far for students who have not mastered the math, language, and other skills covered on the exam, and those growing up in higher-income families enjoy many educational advantages relative to their lower-income peers. Test prep likely accounts for a small share of the enormous class gaps in test scores, which are also present for in-school measures of academic preparation such as high school grades and course-taking. While ensuring access to test prep opportunities is important considering the continuing relevance of the test, it won’t be enough to meaningfully close the class gaps in test scores or enrollment in college.
Finding a middle ground on both the role of standardized tests in admission and test prep is essential. While all students should have opportunities to prepare for and retake the test if desired, extensive preparation focused narrowly on test-specific strategies and excessive retaking likely offer diminishing returns relative to the financial, time, and emotional costs involved.
The standardized testing landscape
The SAT and ACT have been widely used for college admissions since the mid-20th century, aiming to predict students’ success in college and simplify the admissions process. Some also hoped that tests could help level the playing field for college admissions by making it possible for students from lesser-known high schools to signal their achievement.
Disparities in SAT and ACT scores across socioeconomic status and race/ethnicity are longstanding, as are disparities in other measures of academic preparation. A recent study from Opportunity Insights finds that test takers in 2011, 2013, and 2015 from the top fifth of the family income distribution were eight times more likely to score at least 1300 out of 1600 on the SAT (about the 90th percentile and equivalent to about 29 out of 36 on the ACT) compared to their counterparts in the bottom fifth. Children in the bottom family income quintile were not only less likely to post a high score but were also less likely to even take the test, with just one quarter of students taking the SAT or ACT at all, compared to well over three-quarters of students in the top quintile.
In 2023, the mean SAT score of high school seniors was 1028, and the average scores of Black (908) and Hispanic (943) students were significantly lower than for Asian (1219) and white (1082) students. The ACT, too, reports substantial score gaps between Black and Hispanic students and their Asian and white peers.
These disparities reflect long-standing differences in access to educational opportunities and can have important implications for economic mobility. While information about how exactly colleges use test scores in admissions decisions—and what they do when scores are not provided—is scarce, a strong score can help a student gain admission to selective institutions where students are particularly likely to experience upward economic mobility.
Many colleges made submitting standardized test scores optional during the pandemic. Some adopted “test-blind” policies (meaning they will not consider the scores even if the student has taken an admissions test), including both public four-year college systems in California. The share of Common App member institutions requiring standardized test scores fell from 55% in 2019-20 to just 5% in 2021-22 and remained around that level through 2023-24. Today, most colleges remain test-optional, but how admissions committees interpret a missing score is unclear. And some have reinstated their test requirements, citing a variety of reasons, including the desire to recruit a diverse student body.
Evidence on the effect of test-optional policies on campus diversity is mixed: One paper finds that going test-optional is associated with a 3-4% increase in enrollment of Pell Grant recipients and a 10-12% increase in enrollment of students from underrepresented racial groups, while other research detects no change in racial or socioeconomic diversity. These studies rely on data from the pre-COVID era, when the dynamics around test-optional policies were different, so the results may not generalize to the current moment.
Test prep trends
In the context of college admissions exams, defining “test prep” is not completely straightforward. Test performance is determined by the quality of education that students receive over the course of many years, both in and out of the traditional classroom—arguably more so than the weeks or months spent specifically “prepping” for the SAT or ACT. Test prep that happens outside of school is part of a larger phenomenon some have labeled “shadow education,” which includes a range of “educational activities that are firmly rooted within the private sector” and can have meaningful impacts on student achievement.
Throughout this report, we focus on “test prep” defined as programming specifically marketed as preparing students for the SAT or ACT, while acknowledging that this is just one form of shadow education that can impact test performance. We first discuss trends in types of test prep, then turn to differences in the use of test prep by socioeconomic status and race/ethnicity. Unfortunately, data on who uses what type of test prep are spotty and, in many cases, out of date, but the existing evidence suggests that 1) test prep is growing in popularity, and 2) higher-income and AAPI families use prep more.
Today, college admissions test prep is a multibillion-dollar industry, even in the wake of test-optional admissions. Major industry players include Kaplan, The Princeton Review, and Khan Academy, each with a range of offerings designed to familiarize students with the structure of the exam and test-taking strategies. “Informal” prep includes student-directed use of workbooks and practice exams, while “formal” prep includes classroom-based courses, online programs, and one-on-one or small group tutoring.
Students’ use of test prep, both informal and formal, has evolved over time. A report from the National Association for College Admission Counseling shows an increase in the use of a range of test prep methods, especially online programs, and finds that self-guided workbooks (i.e., informal prep) were the most common method. Private tutoring, on the other hand, was the least common. The data from this report are outdated, covering 1992 to 2004, so they do not reflect the modern test prep environment. Nearly 4 in 10 students reported studying for the SAT/ACT with an online program in 2004; today, that share is likely higher.
Some online prep programs are expensive and likely cost-prohibitive for many students. The cost of self-guided commercial online programs ranges from $159 to $397, and adding one-on-one virtual tutoring costs nearly $1,000. In the last decade or so, the College Board and ACT, Inc. have attempted to make both informal and formal test prep more accessible to students from lower socioeconomic backgrounds. Both companies offer free practice exams, and the ACT provides a free, self-paced course to students who qualify for exam fee waivers. In 2015, the College Board partnered with Khan Academy to create Official SAT Practice (OSP), a free, personalized online test prep course available to all students.
Private tutoring has also grown in popularity but comes with an even higher price tag. For example, the Princeton Review’s SAT private tutoring costs between $175 and $364 per hour. Between 1997 and 2022, the number of private tutoring centers more than tripled, from 3,000 to 10,000. Private tutoring centers tend to be concentrated in areas with high income and parental education and high shares of Asian American families. College admissions test prep is not the only service these tutoring centers provide, but these results still point to differences in demand for and access to this form of intensive test prep across socioeconomic status and race/ethnicity.
We report the findings from our own analysis of the High School Longitudinal Study (HSLS) in Figures 1 and 2. HSLS is a nationally representative survey that follows students from 9th grade through high school graduation and beyond. We analyze test prep use by socioeconomic status and race/ethnicity. Specifically, we examine whether students have taken the SAT or ACT, whether they are taking a course to prepare for the exam (formal prep), and whether their school assists with test prep.
College admissions exams are typically taken during junior spring or senior fall, and the HSLS was administered during students’ junior spring (in 2012). Due to the timing of the survey wave, our analysis likely underestimates the true share of this cohort who eventually took the SAT/ACT, but the trends are nonetheless indicative of between-group differences. The data also do not allow us to distinguish between different types of formal test prep, such as a class offered during the school day versus private tutoring, but they still reveal differences in the use of any prep course.
Most students (60%) had not yet taken a college admissions exam at the time of the survey (junior spring). Among students who had never taken the exam, nearly a quarter had taken a prep course, suggesting that they planned to take the SAT/ACT at some point. Overall, 41% of students had completed a test prep course, and 90% reported that their high school helps with exam prep, although the survey offers no detail on what form that assistance takes.
Unsurprisingly, differences in test preparation and completion by socioeconomic status are substantial. We divide students into quintiles based on the index of socioeconomic status (SES) available in the survey (based on parents’/guardians’ education, occupation, and family income). Among students in the first, second, and third quintiles of SES, just over half had neither prepared for nor taken the SAT/ACT when surveyed in their junior spring. The share of juniors who had prepared for the exam, taken it at least once, or both is higher among students in the fourth (56%) and fifth (69%) quintiles of socioeconomic status. These findings are consistent with other research finding that students with higher parental income are significantly more likely to prep for the SAT through a private course or tutor than their lower-SES peers.
We show the same data disaggregated by race/ethnicity in Figure 2. Unfortunately, due to small sample sizes, we do not present results for American Indian and Alaska Native (AIAN) and multiracial students.
Differences in test prep use across racial/ethnic groups do not necessarily mirror racial/ethnic gaps in test performance. AAPI students are the most likely to have completed test prep, taken the exam, or both (65%), Black and white students have similar rates (56% and 55%, respectively), and Hispanic students have the lowest rates (47%) of having taken or prepared for an admissions test.1 Conversations about racial and ethnic disparities on the SAT have drawn attention to the need to mitigate test score inequalities. In turn, high schools, colleges, and community organizations have increased outreach and test preparation activities for minority students, which could explain why Black students have rates of test prep and test taking similar to those of white students. However, this outreach appears to have been less effective in raising test prep use among Hispanic students, who are the least likely to have taken the SAT/ACT, taken a prep course, or both.
Is test prep effective?
Despite the widespread adoption of test-optional policies, test prep companies are still encouraging students to submit scores as a way to stand out and remain competitive. Test prep providers make strong claims about the effectiveness of their programs: Kaplan and The Princeton Review offer money-back guarantees on their prep courses. Students can get a refund if they don’t earn a higher score on their next test as long as they complete all required homework and practice tests. But the published evidence on the effectiveness of test prep is more mixed than the money-back guarantees would suggest.
It is difficult to evaluate the effectiveness of test prep for a couple of reasons. Some studies look at how students’ scores change after they have done test prep (“before-and-after” studies). But test scores tend to increase between the first and second time a student takes a test, in part because they’re more familiar with the structure of the test and the types of questions. If this natural growth is not accounted for, the effects of test prep will be overstated. And test prep providers have little incentive to subject themselves to rigorous, independent evaluations that address this problem.
Another approach is to compare scores for students who used test prep to scores for those who didn’t. But students who choose to use test prep likely differ from students who do not prepare in ways that researchers cannot observe. Controlling statistically for a prior test score and other characteristics that may affect scores can help but does not necessarily solve the problem. All else equal, a student who chooses to study (or study more) differs meaningfully from a student who doesn’t study in ways that the researcher can’t easily account for. Experimental studies, where students are randomly assigned to participate in test prep, can provide better estimates of the true effects of prep, but most of the studies that use this approach are old or have other limitations.
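To make both pitfalls concrete, below is a minimal simulation sketch in Python. Every number in it is an assumption chosen for illustration, not an estimate from any study: we posit a natural retest gain of 20 points, a true prep effect of 10 points, and an unobserved “motivation” that drives both the decision to prep and score growth on the retake.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # simulated test takers

# Entirely assumed, illustrative parameters (not estimates from any study).
RETEST_GAIN = 20       # natural score growth on a retake, prep or not
TRUE_PREP_EFFECT = 10  # true causal effect of test prep

ability = rng.normal(1050, 150, n)
motivation = rng.normal(0, 1, n)  # unobserved by the researcher
first = ability + 20 * motivation + rng.normal(0, 30, n)

def retake(prep):
    """Second-test score: retest gain, motivation-driven growth, and prep."""
    return (first + RETEST_GAIN + 25 * motivation
            + TRUE_PREP_EFFECT * prep + rng.normal(0, 30, n))

# Self-selected prep: more motivated students are more likely to prep.
chose_prep = rng.random(n) < 1 / (1 + np.exp(-motivation))
gain = retake(chose_prep) - first

# 1) Before-and-after study: attributes the retest gain (and motivation-
#    driven growth) to prep, so it overstates the 10-point effect.
print(f"before-and-after estimate: {gain[chose_prep].mean():.1f}")

# 2) Preppers vs. non-preppers (a gain comparison, which implicitly
#    controls for the first score): still biased by unobserved motivation.
print(f"observational estimate: {gain[chose_prep].mean() - gain[~chose_prep].mean():.1f}")

# 3) RCT: prep assigned by coin flip, independent of motivation.
assigned = rng.random(n) < 0.5
rct_gain = retake(assigned) - first
print(f"RCT estimate: {rct_gain[assigned].mean() - rct_gain[~assigned].mean():.1f}")
```

Under these assumptions, the before-and-after and observational estimates come out several times larger than the true 10-point effect, while the randomized comparison recovers it, which is why we weight experimental evidence most heavily in what follows.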
With these caveats in mind, we summarize studies of test prep, starting with the programs from Kaplan and Khan Academy. These reports were sponsored by the testing companies themselves and may overstate the effect of test prep, but we discuss them here because they are commonly cited (notably, by the College Board itself).
ACT, Inc. has sponsored multiple reports on the effectiveness of three test prep curricula it offers. They find that among low-income students, ACT composite scores for participants in Kaplan Online Prep Live increased by 1 point more than for nonparticipants (comparable to around 20-30 SAT points). Students who used the self-guided ACT Online Prep (AOP) course also increased their composite score by around 1 point, and students who used the system intensively (for over 21 days or more than 55 lessons, for example) saw average composite score gains of 1.5 points. Put another way, the score improvement associated with extensive use of ACT Online Prep is roughly 30% of the unadjusted Black-white score gap and 40% of the Hispanic-white gap. None of these studies used experimental methods, though, so we cannot be sure that those who used test prep were otherwise similar to those who did not.
ACT, Inc. conducted a randomized controlled trial (RCT) of AOP, its self-guided online offering. That study found no difference in scores for students in the treatment group, who were given free access to AOP, compared to those in the control group. The study authors note that even though all participants expressed interest in using ACT Online Prep, students in the treatment group used it very little: Over half never used the program at all, and among those who did, the average time spent studying was just 2.1 hours (much of it right before the test date). Low engagement with online test prep was not particular to this sample: The same study found that about half of students who paid for AOP access ($39.95 at the time) never used it, though those who did engage spent more time than students in the RCT (about 8 hours on average).
The College Board has coauthored reports on OSP, the free SAT online prep program it created in partnership with Khan Academy. Its 2017 analysis claimed that six to eight hours of OSP use was associated with a 90-point score increase from the PSAT to the SAT, and that students who spent 20 hours on OSP saw average composite score gains of 115 points. Importantly, these findings do not account for the expected score growth between the PSAT and the last SAT, regardless of prep. A 2020 follow-up report offered a more modest estimate: Students who used OSP for at least six hours scored 21 points higher, compared to those who did not use OSP. Moreover, the small share of students who also used one of the OSP “best practices” scored 39 points higher, on average, than students who did not use OSP. The report found that Asian and Black students were more likely to study for at least six hours, but as in the study of ACT Online Prep, overall student engagement was low: Only 10% of students practiced for six or more hours.
Some test prep companies’ willingness to offer money-back guarantees on high-priced tutoring packages creates an impression that those services must be highly effective. That could be the case, but no rigorous studies have evaluated such programs, and the guarantees come with some fine print. In particular, students have to show they completed not only the tutoring sessions or classes, but also hours of out-of-class work and multiple practice tests. Given that engagement with online prep is often low, it is plausible that many students do not complete the extensive work required to get a refund, and students might achieve similar gains at lower cost with an equivalent amount of self-study (though it could be difficult to stay on track without guidance).2 This is consistent with other research documenting low rates of completion across free online learning platforms.
Beyond the study of AOP discussed above, there have been a few RCTs of test prep, but unfortunately they are outdated, having been conducted in the 1990s or early 2000s, and many of them rely on small samples.
One RCT found that a six-week computerized vocabulary building program increased scores on the verbal section of the SAT by about 40 points, a significant effect. However, not only is this study old, but the SAT verbal section, now called “reading and writing,” has been redesigned to place considerably less emphasis on vocabulary. Another intervention conducted in the early 1990s incorporated general test taking strategies and ACT practice items into a high school mathematics curriculum for 10 weeks, which improved math scores substantially. A third RCT conducted in the late 1990s found that a nine-week computerized coaching program did not have a statistically significant effect on SAT scores, but the sample size was small. Finally, a more recent RCT study of a college access intervention conducted in 2010, which lasted two years and had ACT prep as a major component, did not show effects on ACT test scores.
We have not addressed the effect of private tutoring on test scores because there are no studies evaluating this form of intensive test prep. One researcher argues that private tutoring does little to improve students’ chances of admission, instead serving mostly to reassure parents during the application process. But without data on which students use private tutors at what cost—or how effective private tutors are—we cannot say for sure. It is also worth noting that students from families wealthy enough to afford expensive tutors often have had many more opportunities to learn the material covered by these tests, perhaps including private tutoring on academic content. The line between those other advantages and expensive “test prep” is not always clear.
It’s possible that private tutoring would have important effects if offered to the students who need it the most: lower-performing students who may not have learned the content of the exams in their regular classes. Evidence that one-on-one tutoring is effective for learning academic content is strong. But since those students do not typically use test prep in practice, it’s difficult to say.
Implications
As some colleges return to requiring standardized tests and many others will consider scores if students provide them, high-quality research on the effectiveness of test prep is needed. Access to standardized admissions tests has increased, as many states now offer or require students to take either the ACT or SAT in school; indeed, requiring high schoolers to take a college entrance exam in school increases college enrollment. However, the opportunity to take the test is only one piece of the puzzle; students also need to be equipped to do well on it. The existing evidence suggests that test prep can raise scores measurably but not substantially, and the effects are likely much smaller than commercial providers claim. For students with weaker academic skills, high-quality prep, especially prep focused on improving content knowledge rather than test-taking strategy, could have benefits beyond admissions test scores. But standardized test prep alone won't be enough to close large race and class gaps in scores, and the investment of time and money in these activities must be balanced against other educational priorities.
Footnotes
1. The difference in the share of students who have completed test prep, taken the exam, or both is not statistically significant between Black and white students, but it is significantly different for all other racial/ethnic comparisons.
2. For example, The Princeton Review’s “SAT 1400+” guarantee requires that students 1) attend all scheduled class sessions (36+ hours in total), 2) complete all four required practice tests according to the syllabus schedule (observing official time limits for all sections and breaks), and 3) complete and enter all required homework according to the syllabus schedule.