Extra time in assessments: A review of the research literature on the effect of extra time on assessment outcomes for different students
Published 27 November 2025
Applies to England
Authors
- Benjamin M. P. Cuff
- Ellie Keys
- Darren Churchward
- Lauren Kennedy
- Stephen Holmes
How to cite this publication
Cuff, B.M.P., Keys, E., Churchward, D., Kennedy, L., & Holmes, S. (2025). Extra time in assessments: A review of the research literature on the effect of extra time on assessment outcomes for different types of student. (Ofqual research report 25/7267/1). Ofqual. Retrieved from https://www.gov.uk/government/publications/extra-time-in-assessments
Executive summary
Background
Extra time is one of several different adjustments provided in assessments in England. Under the Equality Act 2010, disabled people are entitled to reasonable adjustments: adaptations that address a disabled person’s substantial disadvantage so that they can access the same opportunities as non-disabled people. For exams and assessments, awarding organisations and schools and colleges are responsible for making reasonable adjustments for disabled students. Ofqual’s rules require awarding organisations to design assessments to be as accessible as possible and to have clear, published arrangements for making reasonable adjustments where necessary.
Awarding organisations can also provide adjustments as special consideration for non-disabled students who have an access need. Special consideration is given to a student who has temporarily experienced illness, injury or some other event outside of their control at the time of the assessment. For GCSE, AS and A level, and some vocational and technical qualifications (VTQs), these 2 types of provision – reasonable adjustments for disabled students and special consideration for non-disabled students – are administered by the Joint Council for Qualifications (JCQ) under the umbrella term ‘access arrangements’. Awarding organisations offering VTQs outside of the JCQ have their own systems and processes.
Evidence suggests that 25% extra time is the most common access arrangement granted by the exam boards for GCSE, AS and A level. This paper reviews the literature on the effectiveness of extra time in mitigating the impact of time pressure on those receiving the adjustment, without conferring any additional advantages.
Key findings
Most studies reported at least some benefits of extra time. In some studies, all students were found to benefit from more time, whether they would normally receive extra time or not, suggesting that the test in question may have been somewhat time-limited, or “speeded”, for most students. More frequently, disabled students benefited more than non-disabled students, while in a few studies it was only disabled students who benefited from extra time. The rare cases where no-one benefited from more time suggested that the test time limit was generous in these studies. Where this report uses the term ‘benefits’ it refers to any improvement in exam performance of students, irrespective of their level of need for extra time. ‘Benefit’ does not refer to other effects of extra time, such as decreases in exam stress.
The reviewed studies differed in a variety of ways, including the type and subject of the test, the demands of the test (including how tight the time limits were), the study design and the students involved. The nature and size of the benefit of extra time varied substantially. It is therefore difficult to draw general conclusions except to observe that effects are test- and cohort-specific.
Extra time works well when students without extra time are able to perform at or close to their best under standard time conditions, but students with access needs cannot, due to their slower speed of working. In this case, the provision of extra time to students with needs should allow them an equal opportunity to demonstrate their best performance.
Limitations
The existing literature is predominantly based on studies investigating the effects of extra time in the United States of America (USA) and in the context of university-level assessments of English and maths. This limits the extent to which these findings are directly applicable to the context of high-stakes general qualifications in England. However, it does suggest that there may be different benefits of extra time for high-stakes exams in England depending on the type of tasks set and the specific needs students have.
Introduction and context
Extra time is one of several different adjustments provided in assessments in England. Under the Equality Act 2010, disabled people are entitled to reasonable adjustments: adaptations that address a disabled person’s substantial disadvantage so that they can access the same opportunities as non-disabled people. For exams and assessments, awarding organisations and schools and colleges are responsible for facilitating reasonable adjustments for disabled students[footnote 1]. Ofqual’s rules require awarding organisations to provide reasonable adjustments in accordance with equalities law, and to have clear, published arrangements for making these adjustments. The rules also require awarding organisations to design assessments to be as accessible as possible as standard, thus minimising the need for reasonable adjustments.
Awarding organisations can also provide adjustments as special consideration for non-disabled students who have an access need. Special consideration is given to a student who has temporarily experienced illness, injury or some other event outside of their control at the time of the assessment. For GCSE, AS and A level and some vocational and technical qualifications (VTQs), these 2 types of provision – reasonable adjustments for disabled students, and other adjustments for non-disabled students – are administered by the Joint Council for Qualifications (JCQ) under the umbrella term “access arrangements”. Certain VTQs offered by JCQ members also follow this approach, while other awarding organisations offering VTQs have their own systems and processes.
Theoretically, any type of adjustment can be used to meet a student’s access needs, so long as it is proportionate and does not compromise the assessment’s fairness or validity (that is, what the assessment is trying to measure). For example, it would not be appropriate to give a student a calculator as an adjustment if an exam was intended to assess a student’s ability to make calculations themselves. In practice, the range of adjustments available are defined by the need to comply with the JCQ access arrangement regulations, which are used by awarding organisations as a means to achieve their legal obligations under equalities law. Evidence suggests that 25% extra time is the most common access arrangement granted by the exam boards for GCSE, AS and A level, with volumes in approved requests for this arrangement increasing over recent years.
Ensuring that assessments are equitable for all candidates has clear importance for fairness and for assessment validity. Given its relative prevalence, the appropriateness of extra time is particularly important. However, concerns have been raised in both the media and academic literature that extra time might inflate outcomes and give some candidates unfair advantage over those working under standard time conditions, ultimately affecting the reliability or validity of outcomes (see Fuchs & Fuchs, 2001; Lovett, 2010; Pardy, 2016; The Guardian, 2019; The Telegraph, 2019). Concerns have also been raised over the burden that the current system for applying for access arrangements (not just extra time) places on schools (Woods, James, & Hipkiss, 2018).
The purpose of this paper is to review the academic literature on experimental studies that attempt to measure the benefit of extra time, to determine what effects this type of accommodation appears to have on students’ assessment outcomes, and so to consider how effective the current provision might be. Part of this is the consideration of whether it is only those students who qualify for extra time who improve their outcomes with more time, or whether the benefit of extra time is more widespread. If the latter were true, it might suggest that other students also find that standard time-limits affect their performance in the assessments.
It is worth noting that different amounts of extra time may be applied for. Most granted applications for extra time in England are for 25%, even though there might be variation in need across the students who receive this amount. For applications for extra time over 25%, special educational needs coordinators (SENCos) must specify how much extra time each candidate needs (for example, 40%; JCQ, 2025). Different qualifying criteria based on standardised testing scores define the amount of extra time that an individual may be eligible for. It is noteworthy that in England 25% is used as the ‘default’ amount of extra time, compared to the 50% extra time that is more common in the USA (Lewandowski, Cohen, & Lovett, 2013).
Finally, when discussing implications for the English context, focus is primarily given to GCSEs, AS and A levels, but it is important to remember that reasonable adjustments can and should be available for any type of assessment[footnote 2]. Other, non-experimental studies and relevant materials are referenced for added context in the introduction and discussion.
Literature review
Articles for the review were primarily sourced through Google Scholar and ERIC[footnote 3] using the following search terms: (extra time OR additional time OR extended time OR untimed) AND (assessment OR exam OR test). Reference lists of identified articles were then also searched for further articles of interest. To reduce the number of articles for review, dissertations were excluded, along with articles published before 1990. Papers exploring the use of multiple adjustments in combination were excluded so as to focus only on the effects of extra time. Main findings are summarised in Table 1, which can be found in the Appendix to this report.
Some characteristics of the identified literature became immediately apparent during the review. For example, around half of the studies identified (17 of 37) explored the effects of extra time in a university-level assessment context. Most came from the USA (32 of 37), and most (27 of 37) focused only on extra time in assessments of English language proficiency (for example, writing, language or reading, with 10 of them using the Nelson-Denny Reading Test) or maths. Standard blocks of extra time tended to be used, most commonly in the region of 50% or 100% of the standard testing time (18 of 37), although some studies allowed unlimited or untimed conditions (9 of 37).
Most studies reported positive effects of extra time on outcomes for at least certain groups of candidates. However, the size of these effects, and the groups to which they applied, varied. Broadly speaking, most studies showed either that most or all candidates benefitted from extra time, or that only a certain sub-group of candidates showed this benefit (for example, those with a learning disability). A smaller number of studies suggested no effects of extra time for any group of candidates. Some studies found that individual effects outweighed any group effects (that is, groups were not homogeneous enough for meaningful group comparisons to be drawn), or that effects varied across different tests or subject areas. Each of these themes is discussed separately below.
Because the majority of the studies reviewed were based on tests requiring the use of language, the allocation of extra time in these was based upon reading and writing difficulties. The studies therefore compare groups of students with and without learning disabilities. We follow this terminology in this review, while noting that, more broadly, the question of the benefit of extra time is about comparing outcomes for students who normally receive extra time, for any identified access need, with outcomes for those who do not.
Positive effects for most candidates
Several studies reported positive effects of extra time for the majority of students, suggesting that benefits were not limited just to certain groups (for example candidates with a learning disability). For example, Huesman and Frisbie (2000) reported that reading test scores improved with extra time at a similar rate for candidates with or without learning disabilities. Miller, Lewandowski, and Antshel (2015) reported the same for candidates with or without attention deficit hyperactivity disorder (ADHD). Kellogg, Hopko, and Ashcraft (1999) reported comparable benefits of extra time in a maths test for candidates with low, medium, and high levels of maths anxiety, and Powers and Fowles (1997) reported positive effects in a writing test for (self-reported) slow, average, and fast writers.
Other studies reported that while most candidates seemed to benefit from extra time, those with lower abilities or diagnosed learning disabilities benefitted to a greater extent (Alster, 1997; Gilbertson Schulte, Elliott & Kratochwill[footnote 4], 2001; Bridgeman, Cline, & Hessinger, 2004; Lesaux, Pearson, & Siegel, 2006; Ofiesh, Mather, & Russell, 2005). This has been referred to as a ‘differential boost’ (for example, Tindal & Fuchs, 1999) or as the ‘interaction hypothesis’ (Sireci, Scarpati, & Li, 2005). The reason for this differential boost may be because non-disabled candidates are already working relatively close to their maximum potential under standard time conditions, meaning that extra time only allows a limited increase in scores.
Candidates with learning disabilities, however, may have improved to a greater degree because they were working at a lower level under standard time conditions, relative to their maximum potential (Zuriff, 2000), and thus could benefit more from the extra time given. Indeed, Lesaux et al. (2006) found only a small increase in the number of items attempted with extra time for higher-attaining candidates but a large difference for those with a learning disability (both groups approached a ceiling in terms of items attempted in the untimed condition). Ofiesh et al. (2005) also reported that most non-disabled candidates were able to finish their test under standard time conditions, whereas most of those with a learning disability were not. Bridgeman et al. (2004) reported similar findings.
Interestingly, some other studies reported that non-disabled candidates benefitted more from extra time than those with learning disabilities or ADHD. Examples include studies by Lewandowski and colleagues (2007; 2008; 2013). In these studies, strict time limits were deliberately set so that the test was still speeded[footnote 5] for all candidates even in the extra time condition. Abedi, Hofstetter, and Baker (2001) also reported that fluent English speakers benefitted from extra time in a maths test to a greater degree than those with limited English proficiency. Mandinach, Bridgeman, Cahalan-Laitusis, and Trapani (2005) similarly reported some benefits of extra time for middle and high attainment candidates, but almost no effect for lower attaining candidates. As argued by Mandinach et al., it is possible that these kinds of effects could be explained by a lack of the knowledge, understanding or skills required to answer the questions, such that extra time could bring no benefit. In other words, it is important to remember that extra time can only address speed deficits, not a lack of subject knowledge or understanding. However, another explanation, suggested by Lovett and Leja (2015), is that a specific candidate need or disability may preclude effective use of extra time in some cases. They found that students with ADHD symptoms were less able to benefit from extra time because of those symptoms; it therefore cannot be assumed that all SEND students benefit from extra time equally. In some cases, SEND students may not benefit at all.
The results of Fuchs, Fuchs, Eaton, Hamlett and Karns (2000) suggest that the direction of a differential boost may be dependent upon the content area being assessed. They found that the beneficial effect of extra time in a maths test was greater for non-disabled candidates in 2 areas (‘computations’ and ‘concepts’), but was greater for candidates with a learning disability in another (‘problem solving’). Similar to Mandinach et al. (2005), Fuchs et al. (2000) argued that candidates with a learning disability in their study may not have had the knowledge, skills and understanding required to answer all computations and concepts items, thus could not benefit as much from extra time because they were not disadvantaged for reasons of deficits in speed or time. However, for problem-solving items, which impose demands in terms of reading and writing speeds, the additional time given to candidates was able to accommodate for candidates’ reading or writing speed deficits, thus was found to be beneficial.
Positive effects only for certain groups of candidates
Some studies only reported effects of extra time for certain groups of students. For example, both Ofiesh (2000) and Runyan (1991) reported a benefit of extra time in a reading test for candidates with a learning disability, but not for those without a disability. Crawford, Hedwig and Tindal (2004) found a benefit of extra time in a writing test for Grade 5 (approximately age 10) students, but not Grade 8 (approximately age 13) students. Onwuegbuzie and Seaman (1995) reported an effect of extra time in a maths test for candidates with high test anxiety but not those with low test anxiety. The results of Ofiesh and Runyan in particular align with the Maximum Potential Thesis (MPT) described by Zuriff (2000). This posits that extra time should only bring benefit to candidates with learning disabilities because non-disabled candidates should already be working at their maximum potential under “timed conditions” (page 101). In other words, time limits may have been set in these tests such that most candidates were able to fully process and respond to the test under standard time conditions, but those with learning disabilities required more time to do so. Runyan (1991) concluded that the additional time levelled the playing field between candidates with and without learning disabilities.
No effects for any groups of candidates
A relatively small number of studies reported no statistically significant benefit of extra time for any group (although some individuals within those groups may still have experienced some benefit). These findings suggest that while some individual students may have increased their marks when receiving extra time compared to their performance in standard time, this is not a consistent or reliable effect across the cohort. Three studies reported no statistically significant differences in English language, writing, or maths test scores when working under standard versus extra time conditions, regardless of disability status (Elliott & Marquart, 2004; Goegan & Harrison[footnote 6], 2017; Munger & Loyd, 1991). Brooks, Case and Young (2003) and Lee, Osborne and Carpenter (2010) also reported no statistically significant effects of extra time, although these studies did not present separate effects for those with learning disabilities.
A possible explanation for these findings might be that the majority of candidates were given ample time to complete the tests under standard time conditions. Indeed, both Elliott and Marquart (2004) and Munger and Loyd (1991) reported that most candidates in their tests (including those with learning disabilities) completed all test items within the standard time limits, suggesting ample time. The test used by Brooks et al. (2003) – the “Stanford 10” – is also designed to be administered so that “all children have sufficient time to complete it” (Brooks et al., 2003, page 5). Thus, even for disabled candidates, there may have been no need for extra time to be provided.
Individual differences
Most studies largely focused on group effects, such as comparing effect sizes for candidates with or without learning disabilities. However, where studies have further explored individual differences within groups, effects have not been found to be homogeneous. For example, Elliott and Marquart (2004) found that effect sizes varied from “large negative” to “large positive” between candidates in a learning disability group. Lovett and Leja (2015) found that students reporting more learning difficulty symptoms benefited less from extended time. Cahalan-Laitusis et al. (2006) reported variations in the extent to which exam candidates with learning disabilities felt they needed more time in their reading and maths tests. Spenceley and Wheeler (2016) reported that many learning-disabled candidates were able to complete the test within standard time limits, but there was variation in time needed depending on specific diagnoses.
As Mandinach et al. (2005, page 2) noted, a candidate’s disability will vary in “form and severity”, meaning each will differ in their extra time needs. Of course, individual variation is not limited to those with a disability. Zuriff (2000) noted in his review that even where studies find no overall effect of extra time for non-disabled candidates, there are always some individuals who do benefit, and Ofiesh (2000) also reported variations in effects both within disabled and non-disabled groups.
Differences by test or subject area
Differential effects have also been noted for assessments from different subject areas (such as the sciences versus the humanities), because individuals’ needs may vary across different assessments. For example, Gregg and Nelson (2012) and Cahan, Nirel and Alkoby (2016) both cited several studies showing differential effects of extra time on maths versus reading tests. In a practical examination of human anatomy, Zhang, Fenderson, Schmidt and Veloski (2013) reported a negative effect of untimed assessment for some areas of anatomy, but not for others. Brown, Reichel, and Quinlan (2011) reported minor benefits of extra time for vocabulary test scores, but larger benefits for reading comprehension test scores.
Discussion
The findings reviewed suggest that while extra time can have positive effects on outcomes, the presence or size of these effects can vary. The benefit of extra time appears to depend upon the general impact of the test time limit on outcomes (in other words, the test speededness), and the interaction of individual needs with the demands of the assessment. Where candidates were given enough time to start with, extra time seemed to have little benefit. Candidates with learning disabilities seemed more likely to benefit from extra time than their peers without learning disabilities, so long as they had the knowledge and understanding to complete the necessary tasks. Individuals sometimes varied some way from the group mean effect, possibly because the experiences and needs of students who require access arrangements are so broad (Hipkiss et al., 2021).
Some evidence suggests that extra time may have greater effects in some tests or subject areas than others. Hipkiss et al. (2021) interviewed students awarded extra time and suggested that a student’s decision to use the extra time was dependent on the content of each individual examination paper on the day. As well as variations in the types of tests and their standard time limits, the differential effects of extra time found in the research may also be explained by significant variations in how much extra time was granted across the studies (Duncan & Purcell, 2020). Various studies have used unlimited (Runyan, 1991), 100% (Elliott & Marquart, 2004), 50% (Lewandowski, Cohen, & Lovett, 2013) and 25% extra time (Duncan & Purcell, 2017), which makes it harder to draw firm conclusions.
Several individual factors might also impact upon students’ time needs in an assessment. For example, candidates with learning disabilities may process information more slowly than non-disabled candidates, thus being affected to a greater degree by time restrictions than their non-disabled peers (for example, see Cahan et al., 2016). Other individual difference factors might cause similar variations in time needs, such as differences in time management skills, and levels of assessment anxiety, resilience, motivation and stamina.
Finally, a small number of studies reported some detrimental effects of extra time (Camara, Copeland, & Rothschild, 1998; Ofiesh, 2000), possibly because candidates given more time to think or to check their work second-guessed their answers, changing correct answers to incorrect ones (see Zoller, Ben-Chaim, & Kamm, 1997). While this may simply reflect a lack of secure understanding, rather than being an effect of the extra time per se, it demonstrates the complex ways in which time limits may affect test outcomes.
The number and range of factors that can interact with and affect the extent to which candidates benefit from extra time means it is difficult to draw conclusions about how well the adjustment of extra time meets its main purpose. In other words, it is difficult to judge the extent to which extra time levels the playing field in terms of assessment conditions. It would be extremely challenging to identify the specific amount of extra time that would be required for each individual student to have an equal opportunity to show what they know, understand and can do. The concern here is usually that candidates may be disadvantaged if not given enough extra time to offset their particular time needs, but some reported cases suggest that candidates may be given an unfair advantage in assessment outcomes if they are given too much extra time relative to their needs (see Cahalan, Mandinach, & Camara, 2002; Lewandowski et al., 2013; Thornton, Reese, Pashley, & Dalessandro, 2002).
To avoid any unfair advantage or disadvantage, Ofiesh, Hughes, and Scott (2004) suggested that extra time decisions should be informed by knowledge of the specific deficits a candidate may have, in combination with knowledge of the test the adjustment will be offered for. Tindal and Ketterlin-Geller (2004) went a step further, proposing that decisions may need to recognise the interaction between an individual candidate’s skills and the characteristics of individual items or item types, suggesting that variance in performance may be affected by features of specific items, not just features of the test as a whole.
These suggestions might well improve the validity of extra time provision, but would be extremely difficult to implement fairly, and would hugely increase the burden on the system. Accurately determining precise time needs for every individual, in the context of every test they take, would be a very difficult and time-consuming task, especially in paper-based assessments. For example, one would need to know how much time each candidate would need to fully process what they are being asked to do. However, this is dependent upon the multiple factors noted above (for example, knowledge, skills, understanding, access needs, time management, anxiety, resilience, motivation, stamina). One would then need to know how many minutes of extra time would be needed to mitigate any disadvantage, which may be almost impossible to determine, as the relationship may not be a simple one. Further complicating the matter is the fact that candidates might benefit from extra time even if they do not seem to use it; for example, simply having extra time available might help reduce the effects of anxiety (Elliott & Marquart, 2004).
There does not seem to be an empirical basis for why 25% extra time is typically approved in England over any other amount. Some authors have suggested that this amount may have been chosen simply for administrative purposes (Duncan & Purcell, 2017). However, the JCQ system in England allows different amounts of extra time to be applied for depending on an individual’s scores on standardised speed of working tests. Interestingly, similar concerns have been raised in the USA against the more common usage there of 50% or 100% extra time (for example, see Lewandowski et al., 2013; Miller et al., 2015). One study identified in this review did conclude that 25% would have been the appropriate amount of extra time to give to candidates with learning disabilities in their reading test, to allow them to attempt the same number of items as the control group (Lewandowski et al., 2013). However, this could well be test specific and may depend upon how candidates were classified into groups. For example, Lewandowski et al. (2013) focused only on learning disabilities, but learning difficulties make up only a proportion of those who are eligible for extra time. Different classifications of ‘disability’ might lead to different results, so this finding should not be relied upon without confirmation in other contexts.
One mitigating factor with any risk of over-allocating extra time is that if most or all students who do not qualify for extra time are able to fully show what they know and can do within the standard time, then this fixed allocation of extra time would not be a problem. ‘Too much’ extra time would not give any real advantage since everyone would be able to maximise their performance in the time they had. Therefore, the goal should be to make sure the assessments in question are not significantly speeded for those without extra time, where speededness is not part of the construct measured.
When considering the range of research reviewed here and the gaps that may exist in it, most of it focuses on extra time in tests of reading and maths, several (9 out of 35) also using the same single test (the Nelson-Denny Reading Test). This is likely because reading skills are used across many subject areas (Fuchs & Fuchs, 2001). However, research is still needed to assess how generalisable effects are in assessments of different subject areas. Due to differences in the educational systems and populations of the USA and England, more evidence from English-specific contexts may be worthwhile. There has also been a focus in the research literature on the effects of extra time for candidates with learning disabilities. More work could be done to explore effects for candidates with other types of disabilities (for example, a physical disability) and other access needs, for whom extra time is also available.
Methodological improvements might also be sought to draw more robust conclusions. For example, greater use of randomisation in allocating study participants to standard or extra time conditions is needed where possible to account for the multiple potential confounds present in these types of studies. Similarly, where study participants are tested under both standard and extra time conditions, counterbalancing groups should be randomised to account for order effects. These best practices have not always been followed in the research literature to date. Future studies should also be clear about the objectives of extra time. The purpose of extra time should be to achieve equal opportunity, not necessarily equal outcomes. However, some studies to date have nonetheless focused on the latter.
Several implications of this literature review are picked up in further work Ofqual has carried out.
Issues related to when the speed at which work is completed should be a part of an assessment, and ways to conceptualise and measure speededness of tests, are discussed in Holmes (2025), with a focus on tests in England. Holmes also gives some thoughts on what this means for the provision of extra time. An example of estimating the speededness of a set of written GCSE examinations based on test administration data is detailed in He and El Masri (2025). This used unanswered items at the end of the tests, which are assumed to be omitted because time ran out, to estimate the test speededness for the groups of students who sat the test.
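To illustrate the general logic of that kind of approach (not the actual method or data used by He and El Masri, 2025), the sketch below counts unanswered items at the end of each candidate's response string and reports the proportion of a cohort affected. The data, the use of `None` to mark unanswered items, and the simple proportion-based indicator are all assumptions made for illustration only.

```python
# Illustrative sketch of estimating test speededness from trailing
# unanswered items. Assumption: items left unanswered at the end of a
# candidate's response string were omitted because time ran out.

def trailing_omits(responses):
    """Count unanswered items (marked None) at the end of one
    candidate's response string."""
    count = 0
    for answer in reversed(responses):
        if answer is not None:
            break
        count += 1
    return count

def speededness_rate(cohort):
    """Proportion of candidates with at least one trailing omit: a
    crude cohort-level indicator of how speeded the test was."""
    affected = sum(1 for responses in cohort if trailing_omits(responses) > 0)
    return affected / len(cohort)

# Hypothetical cohort: 4 candidates, 5 items each (1 = correct,
# 0 = incorrect, None = unanswered).
cohort = [
    [1, 0, 1, 1, 1],        # finished the paper
    [1, 1, 0, None, None],  # ran out of time on the last 2 items
    [0, 1, 1, 1, None],     # ran out of time on the last item
    [1, 1, 1, 1, 0],        # finished the paper
]
print(speededness_rate(cohort))  # 0.5: half this cohort did not finish
```

A rate near zero would suggest a generously timed test for that cohort; a high rate would suggest the test was speeded, subject to the assumption that trailing omits reflect time pressure rather than, say, skipped hard items.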
In conclusion, the appropriate use of extra time has clear importance for assessment validity and fairness. While there seems to be little doubt that extra time can lead to improvements in assessment outcomes, the extent to which this occurs may depend on the interaction between individuals and the nature of the assessments they are taking. To conclude that extra time is or is not effective or appropriate on a wholesale basis would be to ignore the fact that different tests may be speeded to different degrees for different candidates.
References
Abedi, J., Hofstetter, C., & Baker, E. (2001). NAEP Math Performance and Test Accommodations: Interactions with Student Language Background. (CSE Technical Report 536). UCLA Center for Research on Evaluation, Standards, and Student Testing. Retrieved from https://eric.ed.gov/?id=ED466961
Alster, E. H. (1997). The Effects of Extended Time on Algebra Test Scores for College Students With and Without Learning Disabilities. Journal of Learning Disabilities, 30, 222–227. http://doi.org/10.1177/002221949703000210
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (Eds.). (2014). Standards for Educational and Psychological Testing. American Educational Research Association. Retrieved from https://www.testingstandards.net/uploads/7/6/6/4/76643089/standards_2014edition.pdf
Bridgeman, B., Cline, F., & Hessinger, J. (2004). Effect of Extra Time on Verbal and Quantitative GRE Scores. Applied Measurement in Education, 17, 25–37. http://doi.org/10.1207/s15324818ame1701_2
Brooks, T. E., Case, B. J., & Young, M. J. (2003). Timed Versus Untimed Testing Conditions and Student Performance. Pearson. Retrieved from https://images.pearsonassessments.com/images/tmrs/tmrs_rg/TimedUntimed.pdf?WT.mc_id=TMRS_Timed_Versus_Untimed_Testing
Brown, T. E., Reichel, P. C., & Quinlan, D. M. (2011). Extended time improves reading comprehension test scores for adolescents with ADHD. Open Journal of Psychiatry, 1, 79–87. http://doi.org/10.4236/ojpsych.2011.13012
Cahalan-Laitusis, C., King, T. C., Cline, F., & Bridgeman, B. (2006). Observational Timing Study on the SAT Reasoning Test for Test-Takers With Learning Disabilities and/or AD/HD. (College Board Research Report No. 2006-4). The College Board. Retrieved from https://www.ets.org/Media/Research/pdf/RR-06-23.pdf
Cahalan, C., Mandinach, E. B., & Camara, W. J. (2002). Predictive Validity of SAT I: Reasoning Test for Test-Takers with Learning Disabilities and Extended Time Accommodations. (College Board Research Report No. 2002-5). The College Board. Retrieved from https://files.eric.ed.gov/fulltext/ED562756.pdf
Cahan, S., Nirel, R., & Alkoby, M. (2016). The Extra-Examination Time Granting Policy. Journal of Psychoeducational Assessment, 34, 461–472. http://doi.org/10.1177/0734282915616537
Camara, W. J., Copeland, T., & Rothschild, B. (1998). Extended time on the SAT I: Reasoning test score growth for students with learning disabilities. (College Board Research Report No. 1998-7). The College Board. Retrieved from https://files.eric.ed.gov/fulltext/ED562679.pdf
Crawford, L., Helwig, R., & Tindal, G. (2004). Writing performance assessments: How important is extended time? Journal of Learning Disabilities, 37, 132–142. http://doi.org/10.1177/00222194040370020401
Duncan, H., & Purcell, C. (2017). Equity or Advantage? The effect of receiving access arrangements in university exams on Humanities students with Specific Learning Difficulties (SpLD). Widening Participation and Lifelong Learning, 19, 6–26. http://doi.org/10.5456/WPLL.19.2.6
Duncan, H., & Purcell, C. (2020). Consensus or contradiction? A review of the current research into the impact of granting extra time in exams to students with specific learning difficulties (SpLD). Journal of Further and Higher Education, 44(4), 439–453. http://doi.org/10.1080/0309877X.2019.1578341
Elliott, S. N., Kratochwill, T. R., McKevitt, B. C., & Malecki, C. K. (2009). The effects and perceived consequences of testing accommodations on math and science performance assessments. School Psychology Quarterly, 24, 224–239. http://doi.org/10.1037/a0018000
Elliott, S. N., & Marquart, A. M. (2004). Extended Time as a Testing Accommodation: Its Effects and Perceived Consequences. Exceptional Children, 70, 349–367. http://doi.org/10.1177/001440290407000306
Fuchs, L. S., & Fuchs, D. (2001). Helping Teachers Formulate Sound Test Accommodation Decisions for Students with Learning Disabilities. Learning Disabilities Research and Practice, 16, 174–181. http://doi.org/10.1111/0938-8982.00018
Fuchs, L. S., Fuchs, D., Eaton, S. B., Hamlett, C. L., & Karns, K. M. (2000). Supplementing Teacher Judgments of Mathematics Test Accommodations with Objective Data Sources. School Psychology Review, 29, 65–85. http://doi.org/10.1080/02796015.2000.12085998
Gibson, S., & Leinster, S. (2011). How do students with dyslexia perform in extended matching questions, short answer questions and observed structured clinical examinations? Advances in Health Sciences Education, 16, 395–404. http://doi.org/10.1007/s10459-011-9273-8
Gilbertson Schulte, A. A., Elliott, S. N., & Kratochwill, T. R. (2001). Effects of Testing Accommodations on Standardized Mathematics Test Scores: An Experimental Analysis of the Performances of Students with and Without Disabilities. School Psychology Review, 30(4), 527–547. https://doi.org/10.1080/02796015.2001.12086133
Goegan, L. D., & Harrison, G. L. (2017). The Effects of Extended Time on Writing Performance. Learning Disabilities: A Contemporary Journal, 15, 209–224. Retrieved from https://eric.ed.gov/?id=EJ1160642
Gregg, N., & Nelson, J. M. (2012). Meta-analysis on the Effectiveness of Extra time as a Test Accommodation for Transitioning Adolescents With Learning Disabilities. Journal of Learning Disabilities, 45, 128–138. http://doi.org/10.1177/0022219409355484
He, Q., & El Masri, Y. (2025). An exploration of the effect of speededness in a selection of GCSE examinations. (Ofqual research report 25/7267/3). Ofqual. Retrieved from https://www.gov.uk/government/publications/an-exploration-of-the-effect-of-speededness-in-a-selection-of-gcse-examinations
Hipkiss, A., Woods, K. A., & McCaldin, T. (2021). Students’ use of GCSE access arrangements. British Journal of Special Education, 48(1), 50–69. http://doi.org/10.1111/1467-8578.12347
Holmes, S. (2025). Time limits and speed of working in assessments: When, and to what extent, should speed of working be part of what is assessed? (Ofqual research report 25/7267/2). Ofqual. Retrieved from https://www.gov.uk/government/publications/time-limits-and-speed-of-working-in-assessments
Huesman, R. L., & Frisbie, D. A. (2000). The validity of ITBS reading comprehension test scores for learning disabled and non learning disabled students under extended-time conditions. Paper presented at the Annual Meeting of the National Council on Measurement in Education. Retrieved from https://eric.ed.gov/?id=ED442210
JCQ (2025). Adjustments for candidates with disabilities and learning difficulties: Access Arrangements and Reasonable Adjustments. Joint Council for Qualifications. Retrieved from https://www.jcq.org.uk/wp-content/uploads/2025/08/JCQ-AARA-2025_FINAL.pdf
Kellogg, J. S., Hopko, D. R., & Ashcraft, M. H. (1999). The Effects of Time Pressure on Arithmetic Performance. Journal of Anxiety Disorders, 13(6), 591–600. http://doi.org/10.1016/S0887-6185(99)00025-0
Lee, K. S., Osborne, R. E., & Carpenter, D. N. (2010). Testing Accommodations for University Students with AD/HD: Computerized vs. Paper-Pencil/Regular vs. Extended Time. Journal of Educational Computing Research, 42, 443–458. http://doi.org/10.2190/EC.42.4.e
Lesaux, N. K., Pearson, M. R., & Siegel, L. S. (2006). The Effects of Timed and Untimed Testing Conditions on the Reading Comprehension Performance of Adults with Reading Disabilities. Reading and Writing, 19, 21–48. http://doi.org/10.1007/s11145-005-4714-5
Lewandowski, L. J., Cohen, J., & Lovett, B. J. (2013). Effects of Extended Time Allotments on Reading Comprehension Performance of College Students With and Without Learning Disabilities. Journal of Psychoeducational Assessment, 31, 326–336. http://doi.org/10.1177/0734282912462693
Lewandowski, L. J., Lovett, B. J., Parolin, R., Gordon, M., & Codding, R. S. (2007). Extended Time Accommodations and the Mathematics Performance of Students With and Without ADHD. Journal of Psychoeducational Assessment, 25(1), 17–28. http://doi.org/10.1177/0734282906291961
Lewandowski, L. J., Lovett, B. J., & Rogers, C. L. (2008). Extended Time as a Testing Accommodation for Students With Reading Disabilities: Does a rising tide lift all ships? Journal of Psychoeducational Assessment, 26, 315–324. http://doi.org/10.1177/0734282908315757
Lovett, B. J. (2010). Extended Time Testing Accommodations for Students With Disabilities. Review of Educational Research, 80, 611–638. http://doi.org/10.3102/0034654310364063
Lovett, B. J., & Leja, A. M. (2015). ADHD Symptoms and Benefit From Extended Time Testing Accommodations. Journal of Attention Disorders, 19(2), 167–172. https://doi.org/10.1177/1087054713510560
Lovett, B. J., Lewandowski, L. J., & Potts, H. E. (2017). Test-Taking Speed: Predictors and Implications. Journal of Psychoeducational Assessment, 35, 351–360. http://doi.org/10.1177/0734282916639462
Mandinach, E. B., Bridgeman, B., Cahalan-Laitusis, C., & Trapani, C. (2005). The Impact of Extended Time on SAT Test Performance. (College Board Research Report No. 2005-8). The College Board. Retrieved from https://files.eric.ed.gov/fulltext/ED563027.pdf
Miller, L. A., Lewandowski, L. J., & Antshel, K. M. (2015). Effects of Extended Time for College Students With and Without ADHD. Journal of Attention Disorders, 19, 678–686. http://doi.org/10.1177/1087054713483308
Munger, G. F., & Loyd, B. H. (1991). Effect of Speededness on Test Performance of Handicapped and Nonhandicapped Examinees. The Journal of Educational Research, 85, 53–57. http://doi.org/10.1080/00220671.1991.10702812
Ofiesh, N. S. (2000). Using Processing Speed Tests to Predict the Benefit of Extended Test Time for University Students with Learning Disabilities. Journal of Postsecondary Education and Disability, 14, 39–56. Retrieved from https://eric.ed.gov/?id=EJ649045
Ofiesh, N. S., Hughes, C., & Scott, S. S. (2004). Extended Test Time and postsecondary Students with Learning Disabilities: A Model for Decision Making. Learning Disabilities Research and Practice, 19, 57–70. http://doi.org/10.1111/j.1540-5826.2004.00090.x
Ofiesh, N. S., Mather, N., & Russell, A. (2005). Using Speeded Cognitive, Reading, and Academic Measures to Determine the Need for Extended Test Time among University Students with Learning Disabilities. Journal of Psychoeducational Assessment, 23, 35–52. http://doi.org/10.1177/073428290502300103
Onwuegbuzie, A. J., & Seaman, M. A. (1995). The Effect of Time Constraints and Statistics Test Anxiety on Test Performance in a Statistics Course. The Journal of Experimental Education, 63, 115–124. http://doi.org/10.1080/00220973.1995.9943816
Pardy, B. (2016). Head Starts and Extra Time: Academic Accommodation on Post-Secondary Exams and Assignments for Cognitive and Mental Disabilities. Education and Law Journal, 25, 191–208. Retrieved from https://ssrn.com/abstract=2828420
Piper, B., & Zuilkowski, S. S. (2016). The role of timing in assessing oral reading fluency and comprehension in Kenya. Language Testing, 33, 75–98. http://doi.org/10.1177/0265532215579529
Portolese, L., Krause, J., & Bonner, J. (2016). Timed Online Tests: Do Students Perform Better With More Time? American Journal of Distance Education, 30, 264–271. http://doi.org/10.1080/08923647.2016.1234301
Powers, D. E., & Fowles, M. E. (1997). Effects of applying different time limits to a proposed GRE writing test. (GRE Board Research Report No. 93-26cR). Educational Testing Service. Retrieved from https://www.ets.org/Media/Research/pdf/GREB-93-26CR.pdf
Rodeiro, C. V., & Macinska, S. (2022). Equal opportunity or unfair advantage? The impact of test accommodations on performance in high-stakes assessments. Assessment in Education: Principles, Policy & Practice, 29(4), 462–481. https://doi.org/10.1080/0969594X.2022.2121680
Runyan, M. K. (1991). The Effect of Extra Time on Reading Comprehension Scores for University Students With and Without Learning Disabilities. Journal of Learning Disabilities, 24, 104–108. http://doi.org/10.1177/002221949102400207
Rutkowski, D., Rutkowski, L., Svetina, D. V., Canbolata, Y., & Underhill, S. (2023). A Census-Level, Multi-Grade Analysis of the Association Between Testing Time, Breaks, and Achievement. Applied Measurement in Education, 36(1), 14–30. https://doi.org/10.1080/08957347.2023.2172019
Sireci, S. G., Scarpati, S. E., & Li, S. (2005). Test Accommodations for Students With Disabilities: An Analysis of the Interaction Hypothesis. Review of Educational Research, 75, 457–490. http://doi.org/10.3102/00346543075004457
Spenceley, L. M., & Wheeler, S. (2016). The use of extended time by college students with disabilities. Journal of Postsecondary Education and Disability, 29, 141–150. Retrieved from https://eric.ed.gov/?id=EJ1113036
STA. (2025). 2026 Access Arrangements Guidance. Standards and Testing Agency. Retrieved from https://www.gov.uk/government/publications/key-stage-2-tests-access-arrangements
The Guardian. (2019). One in five GCSE and A-level pupils granted extra time for exams. Retrieved November 6, 2025, from https://www.theguardian.com/education/2019/nov/21/one-in-five-gcse-and-a-level-pupils-granted-extra-time-for-exams
The Telegraph. (2019). A fifth of students now get extra time in exams amid calls for rise to be investigated. Retrieved November 6, 2025, from https://www.telegraph.co.uk/news/2019/11/21/fifth-students-now-get-extra-time-exams-amid-calls-rise-investigated/
Thornton, A. E., Reese, L. M., Pashley, P. J., & Dalessandro, S. P. (2002). Predictive Validity of Accommodated LSAT Scores. Law School Admission Council. Retrieved from https://files.eric.ed.gov/fulltext/ED469183.pdf
Tindal, G., & Fuchs, L. (1999). A summary of research on test changes: An empirical basis for defining accommodations. University of Kentucky. Retrieved from https://eric.ed.gov/?id=ED442245
Tindal, G., & Ketterlin-Geller, L. R. (2004). Research on mathematics test accommodations relevant to NAEP testing. Commissioned paper synopsis for NAGB Conference on Increasing the Participation of SD and LEP Students in NAEP, Feb 26-27, 2004. Retrieved from https://eric.ed.gov/?id=ED500433
Tsui, J. M., & Mazzocco, M. M. M. (2006). Effects of math anxiety and perfectionism on timed versus untimed math testing in mathematically gifted sixth graders. Roeper Review, 29, 132–139. http://doi.org/10.1080/02783190709554397
Wei, X., & Zhang, S. (2023). Extended Time Accommodation and the Academic, Behavioral, and Psychological Outcomes of Students With Learning Disabilities. Journal of Learning Disabilities, 57(4), 242–254. https://doi.org/10.1177/00222194231195624
Woods, K., James, A., & Hipkiss, A. (2018). Best practice in access arrangements made for England’s General Certificates of Secondary Education (GCSEs): where are we 10 years on? British Journal of Special Education, 45, 236–255. http://doi.org/10.1111/1467-8578.12221
Zhang, G., Fenderson, B. A., Schmidt, R. R., & Veloski, J. J. (2013). Equivalence of students’ scores on timed and untimed anatomy practical examinations. Anatomical Sciences Education, 6, 281–285. http://doi.org/10.1002/ase.1357
Zoller, U., Ben-Chaim, D., & Kamm, S. D. (1997). Examination-Type Preferences of College Science Students and Their Faculty in Israel and USA: A Comparative Study. School Science and Mathematics, 97, 3–12. http://doi.org/10.1111/j.1949-8594.1997.tb17334.x
Zuriff, G. E. (2000). Extra Examination Time for Students With Learning Disabilities: An Examination of the Maximum Potential Thesis. Applied Measurement in Education, 13, 99–117. http://doi.org/10.1207/s15324818ame1301_5
Footnotes
- Note that the terms ‘student’ and ‘candidate’ are used interchangeably throughout this report for readability. ↩
- For example, guidance for Key Stage 2 assessments is given by STA (2025). While there is no centralised application system for all vocational and technical qualifications, all awarding organisations are required by equalities law as well as Ofqual’s conditions to provide these adjustments. Some awarding organisations offering these qualification types are part of JCQ and may use the same online system that is used for GCSEs, AS and A levels, while other organisations use their own systems to approve and manage reasonable adjustments. ↩
- Gilbertson Schulte, Elliott and Kratochwill (2001) used an experimental research design, but their study was not included in table 1 because it was not possible to isolate the effects of extra time on participants. ↩
- Speededness can be defined as “the extent to which test takers’ scores depend on the rate at which work is performed as well as on the correctness of the responses” (AERA, APA, & NCME, 2014). ↩
- Goegan and Harrison (2017) did report that extra time allowed candidates to write more words in their writing test, but there was no difference in scores. ↩